

CoffeeBeans
https://www.coffeebeans.io/About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans
As an L3 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation (see the sketch after this list).
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
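To make the RAG item above concrete, here is a minimal retrieve-then-generate sketch using the OpenAI Python SDK (one of the ecosystems this posting names). The corpus, model names, and prompt are illustrative assumptions, not a prescribed stack:

```python
# Minimal RAG sketch (illustrative only): embed documents, retrieve by cosine
# similarity, and pass the top matches to a chat model as grounding context.
# Assumes the openai>=1.0 SDK and OPENAI_API_KEY set in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = ["Refunds are processed within 5 business days.",   # placeholder corpus
        "Support is available Monday through Friday."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question, k=1):
    q = embed([question])[0]
    # Cosine similarity between the query and each document vector.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```

Production workflows typically add chunking, a vector store, and evaluation on top of this loop; the sketch only shows the core retrieve-then-generate shape.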
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 6–9 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or a strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems (a minimal serving sketch follows this list).
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
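As a hedged illustration of the REST-API item above, the following sketch serves a trained scikit-learn model behind a FastAPI endpoint. The artifact name `model.joblib` and the flat feature vector are hypothetical:

```python
# Illustrative model-serving sketch with FastAPI; not a prescribed design.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact from training

class Features(BaseModel):
    values: list[float]  # flat feature vector; shape is an assumption

@app.post("/predict")
def predict(features: Features):
    x = np.array(features.values).reshape(1, -1)
    return {"prediction": float(model.predict(x)[0])}

# Run with: uvicorn app:app --reload
```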
As an L1/L2 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost (see the sketch after this list).
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
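For the model-building item above, a minimal train-and-evaluate sketch with scikit-learn might look like the following; the built-in breast-cancer dataset stands in for real project data:

```python
# Minimal classical-ML sketch: train a classifier and score it on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# ROC-AUC is a common choice for binary targets; swap in the metric
# appropriate to the business problem.
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_test, probs):.3f}")
```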
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 2.5–5 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
Focus Areas:
- Build applications and solutions that process and analyze large-scale data.
- Develop data-driven applications and analytical tools.
- Implement business logic, algorithms, and backend services.
- Design and build APIs for secure and efficient data exchange.
Key Responsibilities:
- Develop and maintain data processing applications using Apache Spark and Hadoop.
- Write MapReduce jobs and complex data transformation logic.
- Implement machine learning models and analytics solutions for business use cases.
- Optimize code for performance and scalability; perform debugging and troubleshooting.
- Work hands-on with Databricks for data engineering and analysis.
- Design and manage Airflow DAGs for orchestration and automation (see the sketch after this list).
- Integrate and maintain CI/CD pipelines (preferably using Jenkins).
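As an illustration of the Airflow item above, a minimal DAG with two dependent daily tasks could look like this sketch (assuming Airflow 2.4+; the DAG id, schedule, and task bodies are placeholders):

```python
# Minimal Airflow DAG sketch: extract then transform, once a day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")        # placeholder task logic

def transform():
    print("apply business logic") # placeholder task logic

with DAG(
    dag_id="daily_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```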
Primary Skills & Qualifications:
- Strong programming skills in Scala and Python.
- Expertise in Apache Spark for large-scale data processing.
- Solid understanding of data structures and algorithms.
- Proven experience in application development and software engineering best practices.
- Experience working in agile and collaborative environments.
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar); see the sketch after this list.
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
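To illustrate the Spark item above, here is a small batch-pipeline sketch that rolls raw events up to daily counts. The S3 paths and column names are hypothetical:

```python
# Illustrative PySpark batch transformation: read partitioned raw events,
# aggregate per day, and write the result back partitioned by day.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path

daily = (events
         .withColumn("day", F.to_date("event_ts"))   # assumed timestamp column
         .groupBy("day", "event_type")               # assumed type column
         .agg(F.count("*").alias("event_count")))

daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_counts/")
```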
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions (see the sketch after this list).
- Ensure 99.9% uptime for production workloads.
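As one hedged example of the automation item above, this sketch uses the official Kubernetes Python client to flag deployments whose ready replicas lag their desired count; credentials are assumed to come from a standard kubeconfig:

```python
# Small operational check: list deployments that are not fully ready.
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready")
```

A check like this would typically run as a CronJob or feed an alerting pipeline rather than print to stdout.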
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least two of the following:
  - AWS: EKS, EC2, VPC, IAM, CloudFormation
  - Azure: AKS, VNets, ARM templates
  - GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
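To make the Prometheus item above concrete, the sketch below exposes two custom metrics with the official prometheus_client library; the metric names and the simulated workload are illustrative:

```python
# Minimal metrics-exposition sketch: Prometheus scrapes :8000/metrics.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

start_http_server(8000)  # serves the /metrics endpoint

while True:
    with LATENCY.time():                     # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()
```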
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
  - Certified Kubernetes Administrator (CKA)
  - Certified Kubernetes Application Developer (CKAD)
  - Cloud provider certifications (AWS/Azure/GCP)
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing (see the streaming sketch after this list)
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
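As a sketch of the stream-processing skill above, the following Spark Structured Streaming job consumes a Kafka topic and maintains windowed counts; the broker, topic, and window sizes are assumptions, and running it requires the spark-sql-kafka connector package:

```python
# Illustrative streaming job: 5-minute event counts with a 10-minute watermark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_counts").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                      # hypothetical topic
          .load())

counts = (events
          .withWatermark("timestamp", "10 minutes")  # bound late-data state
          .groupBy(F.window("timestamp", "5 minutes"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")  # a real job would write to a sink, not the console
         .start())
query.awaitTermination()
```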
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required
Similar companies
About the company
Founded in 2014 by two passionate individuals during their second year at Christ College, Bangalore, Moshi Moshi is a young, creative, and committed communication company that encourages clients to always "Expect the EXTRA."
Our diverse team of 160+ people includes art directors, cinematographers, content writers, copywriters, marketers, developers, coders, and our beloved puppy, Momo. We offer a wide range of services, including strategy, brand design, communications, packaging, film and TVCs, PR, and more. At Moshi Moshi, we believe in creating experiences rather than just running a company.
We are amongst the fastest growing agencies in the country with a very strong value system.
Below are five of the nine principles we strongly believe in.
- Communicate clearly: Prioritize clear and open dialogue.
- Do things morally right: Uphold integrity in all endeavors.
- Dream it, do it: Always embrace optimism and a can-do attitude.
- Add logic to your life: Ensure that rationality guides our actions.
- Be that fool: Fearlessly challenge the impossible.
Come find yourself at Moshi Moshi.
Jobs: 121
About the company
We’ve achieved 300% growth in the cybersecurity field. We’ve partnered with the most impactful organisations, whether it’s the largest cybersecurity company or the fastest-growing unicorn. We’re building an amazing place to work, and we invite you to join us.
Engineering Centric Culture
At Metron, we strive to maintain and cultivate a developer-centric culture. Here’s what this means in action:
Meaningful and ever-evolving work
We’ve built integrations and custom solutions for 200+ security platforms and keep adding more to the list. You will be constantly learning about new platforms, acquiring new skills, and being exposed to cutting-edge security technology and other tools.
Work directly with clients and end-users
We encourage our developers to have a client-facing approach. At Metron, you will be able to demonstrate and discuss your code with the people who will be using it on a regular basis.
A balanced workspace
Everyone has a personal life outside of the office. We do not expect you to work weekends or overtime (and actively encourage you not to do so). We also aim to provide you with all the required perks and benefits to keep you in good health and spirits.
A level hierarchy
There are only two roles in the organisation: Development and QA. Every employee will either be writing code or testing for quality. We don’t believe in helicopter managers who hover above you and do not understand your work.
Hybrid work mode
Like to work at the office? We can accommodate you. Prefer to work from home? No problem! Need a combination of both? Whatever model works for you, works for us.
Jobs: 5
About the company
Sim Gems Group is a leading diamond manufacturer, miner, wholesaler, and distributor of natural diamonds. Established in 1993, the company is dedicated to providing customers with cut and polished stones of the highest quality and unmatched brilliance. It focuses on ethical sourcing, following the Kimberley Process Certification Scheme, and prioritizes sustainable practices in its operations.
Jobs: 2
About the company
Automate Accounts is a technology-driven company dedicated to building intelligent automation solutions that streamline business operations and boost efficiency. We leverage modern platforms and tools to help businesses transform their workflows with cutting-edge solutions.
Jobs: 2
About the company
ConvertLens is an AI-driven Marketing ROI & Lead Optimization Platform built for dental practices and DSOs. It brings together campaign data, call tracking, form submissions, and PMS insights into a unified dashboard, helping practices clearly measure what drives patient growth. Beyond reporting, ConvertLens improves results through AI-powered SMS, email, and voice workflows that integrate directly with practice management systems—so practices can engage new patients within minutes and automate follow-ups seamlessly.
ConvertLens is backed by its parent company, Remedo, founded by Richeek Arya, Ruchir Mehra, and Harsh Vardhan Bansal. Remedo is a leading health-tech platform offering practice management, patient communication, and engagement solutions, and is trusted by thousands of doctors and practices across India and abroad. With this strong foundation in healthcare technology, the ConvertLens team brings the same expertise and vision to transform dental growth through clarity, automation, and measurable impact.
Jobs: 2





