

CoffeeBeans
https://www.coffeebeans.io/About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, Infrastructure services, Application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
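The batch side of such a pipeline can be sketched in miniature with the standard library alone (illustrative only; a real engagement would use Spark or Flink as listed above, and all table and column names here are invented):

```python
import csv
import io
import sqlite3

# Extract: read raw order records from a CSV source (an in-memory sample here).
RAW_CSV = """order_id,amount,country
1,120.50,IN
2,80.00,US
3,,IN
"""

def extract(source: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(source)))

# Transform: reject incomplete records and normalize types.
def transform(rows: list[dict]) -> list[tuple]:
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # drop rows with a missing amount
        clean.append((int(row["order_id"]), float(row["amount"]), row["country"]))
    return clean

# Load: upsert cleaned rows into a warehouse table (SQLite stands in here).
# INSERT OR REPLACE keyed on the primary key makes re-runs idempotent.
def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # the row with a missing amount is filtered out
```

The idempotent load step is what keeps a pipeline like this safe to re-run after a failure, which is the production-debugging skill the list above asks for.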
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required


As an L3 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
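The retrieval half of an RAG workflow like the one described above can be illustrated with a toy example (a sketch under simplifying assumptions: bag-of-words vectors stand in for real embeddings, and the corpus and query are invented):

```python
import math
from collections import Counter

# Toy corpus; in a real RAG system these would be chunks from a document store.
DOCS = [
    "CoffeeBeans builds scalable data pipelines for analytics",
    "LLM applications include chatbots copilots and automation",
    "Retrieval augmented generation grounds answers in documents",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts as a stand-in for an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

# Augment: the retrieved context is prepended to the prompt sent to the LLM.
question = "what is retrieval augmented generation"
context = retrieve(question, DOCS)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
print(context)
```

Swapping the bag-of-words vectors for embeddings from a model provider and the list for a vector store gives the production shape of the same workflow.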
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 6–9 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or a strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.


As an L1/L2 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 2.5–5 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
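Orchestration of the kind Airflow provides boils down to running tasks in dependency order. A minimal sketch of that idea (the task names and wiring are invented; a real deployment would use Airflow's own DAG and operator API):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks and their upstream dependencies,
# mirroring how an Airflow DAG wires operators together.
DEPENDENCIES = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(deps: dict[str, set[str]]) -> list[str]:
    """Execute tasks in topological order, as a scheduler would."""
    order = []
    for task in TopologicalSorter(deps).static_order():
        order.append(task)  # a real runner would invoke the task callable here
    return order

executed = run_pipeline(DEPENDENCIES)
print(executed)  # upstream tasks always run before their dependents
```

A scheduler adds retries, backfills, and parallelism on top, but the dependency graph is the core abstraction shared by Airflow and its alternatives.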
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.
Job Description - Lead Java Developer
As a Backend Developer, you will play a crucial role in designing and developing the business logic and backend systems for our products. You will work closely with frontend developers to design and build functional, performant, and complete APIs. You will also work on deciphering existing enterprise software systems and connecting applications to the applicable data sources. Additionally, you will write unit, integration, and performance tests, and develop automation tools and continuous integration pipelines for daily tasks. Your work will be of high quality, well documented, and efficient, and you will challenge ideas and opinions to avoid errors and inefficient solutions.
What are we looking for?
- A bachelor's degree in Computer Science or a related field is a plus, but not mandatory.
- 7+ years of experience as a backend developer working with Java, microservices, Spring Boot, etc.
- Significant API expertise for large-scale apps and performance optimization.
- Deep knowledge of programming and object-oriented engineering (e.g., SOLID, clean architecture).
- Good knowledge of Java.
- Knowledge of distributed-systems tech stacks such as Kafka, ELK, in-memory databases, Cassandra, or other such databases.
- Strong communication skills with the ability to communicate complex technical concepts and align the organization on decisions.
- Strong problem-solving skills to quickly process complex information and present it clearly and effectively.
- Ability to collaborate with the team to create innovative solutions efficiently.
