

CoffeeBeans
https://coffeebeans.io
About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP, NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications (see the sketch after this list).
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality and applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and improve existing processes and tools.
- Support and troubleshoot production incidents with a sense of urgency, understanding customer impact.
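To make the GraphQL bullet above concrete, here is a minimal, hedged sketch using Python's Strawberry library (Java and Node GraphQL servers follow the same schema-plus-resolver shape). The Product type, its fields, and the hard-coded resolver are illustrative assumptions, not this role's actual schema.

```python
# Minimal GraphQL sketch with strawberry-graphql (pip install strawberry-graphql).
# Product and its resolver are hypothetical examples, not a real schema.
import strawberry

@strawberry.type
class Product:
    id: int
    name: str

@strawberry.type
class Query:
    @strawberry.field
    def product(self, id: int) -> Product:
        # A real service would fetch this from Cassandra or another store.
        return Product(id=id, name="sample")

schema = strawberry.Schema(query=Query)

# Execute a query in-process; no HTTP server needed for a smoke test.
result = schema.execute_sync("{ product(id: 1) { id name } }")
print(result.data)  # {'product': {'id': 1, 'name': 'sample'}}
```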
Required Skills:
- Proficient in developing applications, web services, and cloud-native apps using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OTel or Prometheus is a plus.
- Experience working with Node.js is a plus.
Soft Skills Required:
- Able to work independently in highly cross-functional projects and environments.
- Team player who pays attention to detail and has a team-win mindset.
Role Overview
We are seeking strong Java Full Stack Engineers who can independently contribute across backend and frontend systems. The ideal candidate should take complete ownership of delivery, collaborate effectively with cross-functional teams, and build scalable, high-performance applications.
Key Responsibilities
- Build, enhance, and maintain full-stack applications using Java, Spring Boot, React.js/Next.js.
- Own end-to-end feature development — design, development, testing, and deployment.
- Develop scalable microservices and ensure system performance, reliability, and security.
- Collaborate with product, QA, and architecture teams to deliver high-quality software.
- Optimize applications for speed, responsiveness, and maintainability on both backend and frontend.
- Troubleshoot complex issues across the stack and drive solutions independently.
Technical Skills (Must-Have)
Backend
- Strong experience in Java, Spring Boot, and Microservices
- Solid understanding of Core Java, LLD, Design Patterns, and basic System Design
- Hands-on experience with Kafka, MongoDB, Redis, and distributed systems (a Kafka sketch follows this list)
- Experience with SQL or NoSQL databases
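As referenced above, a minimal Kafka produce/consume round trip looks like the following. The sketch uses the kafka-python client for brevity (the role itself is Java-first; the Java client mirrors these calls), and the broker address, topic name, and payload are assumptions.

```python
# Minimal Kafka round trip with kafka-python (pip install kafka-python).
# Broker, topic, and payload are illustrative assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()  # block until the broker acknowledges the message

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'order_id': 42, 'status': 'created'}
    break
```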
Frontend
- Strong experience in React.js or Next.js
- Proficiency in API integration, state management (Redux / Context API), and frontend optimization
- Strong knowledge of JavaScript (ES6+), HTML5, CSS3
Additional Expectations
- Ability to work with minimal supervision and deliver high-quality code on time
- Strong debugging, problem-solving, and ownership mindset
- Experience building scalable, resilient, and performant applications
- Excellent communication and collaboration skills
Role Overview
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architecture discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python (experience with Java is a plus).
- Advanced SQL expertise with the ability to work on complex queries and optimizations.
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed processing frameworks like Apache Spark, Flink, or similar.
- Experience with Snowflake (preferred).
- Hands-on experience building pipelines using orchestration tools such as Airflow or similar (see the DAG sketch after this list).
- Familiarity with CI/CD, version control (Git), and modern development practices.
- Ability to debug, optimize, and scale data pipelines in real-world environments.
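To make the orchestration bullet above concrete, here is a minimal Airflow DAG sketch: two PythonOperator tasks wired extract >> load on a daily schedule. The dag_id, task names, and stubbed callables are illustrative assumptions.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, tasks, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def load():
    print("write rows to the warehouse")

with DAG(
    dag_id="daily_etl",                # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task          # run extract before load
```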
Good to Have
- Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, data quality frameworks, and observability.
- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with excellent attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
Key Responsibilities
- Design, develop, and implement backend services using Java (latest version), Spring Boot, and Microservices architecture.
- Participate in the end-to-end development lifecycle, from requirement analysis to deployment and support.
- Collaborate with cross-functional teams (UI/UX, DevOps, Product) to deliver high-quality, scalable software solutions.
- Integrate APIs and manage data flow between services and front-end systems.
- Work on cloud-based deployment using AWS or GCP environments.
- Ensure performance, security, and scalability of services in production.
- Contribute to technical documentation, code reviews, and best practice implementations.
Required Skills:
- Strong hands-on experience with Core Java (latest versions), Spring Boot, and Microservices.
- Solid understanding of RESTful APIs, JSON, and distributed systems (a minimal REST sketch follows this list).
- Basic knowledge of Kubernetes (K8s) for containerization and orchestration.
- Working experience or strong conceptual understanding of cloud platforms (AWS / GCP).
- Exposure to CI/CD pipelines, version control (Git), and deployment automation.
- Familiarity with security best practices, logging, and monitoring tools.
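The REST bullet above, in miniature: a hedged sketch of a JSON endpoint using Python's Flask (the role itself centers on Spring Boot; the request/response shape is the same). The route, payload, and port are illustrative assumptions.

```python
# Minimal REST + JSON sketch with Flask (pip install flask).
# Endpoint path and payload are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/api/v1/payments/<payment_id>")
def get_payment(payment_id):
    # A real service would look this up in a database or downstream system.
    return jsonify({"id": payment_id, "status": "settled"})

if __name__ == "__main__":
    app.run(port=8080)
```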
Preferred Skills:
- Experience with end-to-end deployment on AWS or GCP.
- Familiarity with payment gateway integrations or fintech applications.
- Understanding of DevOps concepts and infrastructure-as-code tools (an added advantage).
Role Overview
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks such as Apache Spark or Flink (a streaming sketch follows this list).
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
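A stream-processing sketch, as referenced above: PySpark Structured Streaming reading from Kafka and counting events per five-minute window. The broker, topic, and checkpoint path are assumptions, and the job needs the spark-sql-kafka connector package at submit time.

```python
# Minimal PySpark Structured Streaming sketch; broker, topic, and paths are assumed.
# Submit with: spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<version> ...
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# The Kafka source exposes key, value, timestamp, and partition metadata columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Count events per 5-minute window per key, tolerating 10 minutes of lateness.
counts = (
    events.withWatermark("timestamp", "10 minutes")
    .groupBy(window(col("timestamp"), "5 minutes"), col("key"))
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/stream-demo")
    .start()
)
query.awaitTermination()
```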
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.
Focus Areas:
- Build applications and solutions that process and analyze large-scale data.
- Develop data-driven applications and analytical tools.
- Implement business logic, algorithms, and backend services.
- Design and build APIs for secure and efficient data exchange.
Key Responsibilities:
- Develop and maintain data processing applications using Apache Spark and Hadoop.
- Write MapReduce jobs and complex data transformation logic (see the word-count sketch after this list).
- Implement machine learning models and analytics solutions for business use cases.
- Optimize code for performance and scalability; perform debugging and troubleshooting.
- Work hands-on with Databricks for data engineering and analysis.
- Design and manage Airflow DAGs for orchestration and automation.
- Integrate and maintain CI/CD pipelines (preferably using Jenkins).
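The MapReduce bullet above in its classic form, as a hedged PySpark sketch: flatMap/map is the map phase, reduceByKey the reduce phase. The input path is an assumption.

```python
# Classic MapReduce-style word count in PySpark; input path is hypothetical.
from operator import add

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///data/input.txt")    # assumed input location
    .flatMap(lambda line: line.split())      # map phase: line -> words
    .map(lambda word: (word, 1))             # map phase: word -> (word, 1)
    .reduceByKey(add)                        # reduce phase: sum counts per word
)

for word, count in counts.take(10):
    print(word, count)
```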
Primary Skills & Qualifications:
- Strong programming skills in Scala and Python.
- Expertise in Apache Spark for large-scale data processing.
- Solid understanding of data structures and algorithms.
- Proven experience in application development and software engineering best practices.
- Experience working in agile and collaborative environments.
Role Overview
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions (see the sketch after this list).
- Ensure 99.9% uptime for production workloads.
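As referenced in the automation bullet above, a small sketch with the official kubernetes Python client: flag pods that are not Running or Succeeded, the kind of check that feeds automated remediation or alerting. Local kubeconfig access is assumed.

```python
# Minimal cluster health sweep with the official client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        # Candidates for automated restart, escalation, or alerting.
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```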
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability (a validation sketch follows this list)
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
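To ground the data-quality bullet above, here is a minimal validation-gate sketch in pandas; the column names, rules, and sample frame are illustrative assumptions, not a prescribed framework.

```python
# Minimal data-quality gate sketch with pandas; columns and rules are hypothetical.
import pandas as pd

def validate(df: pd.DataFrame) -> list:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["order_id"].isna().any():
        failures.append("null order_id values found")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
print(validate(batch))  # ['duplicate order_id values found', 'negative amounts found']
```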
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required
Similar companies
About the company
Since 2015, Kanerika has been helping businesses turn ideas into impactful software products. With over 75 years of collective expertise, our team brings together deep product engineering knowledge and a passion for solving complex challenges. We focus on building software that is not just functional but transformative—solutions that create lasting value for our clients.
At Kanerika, we believe great outcomes come from strong partnerships, proven experience, and creative thinking. That’s why we keep our team lean, our approach hands-on, and our commitment unwavering: to deliver products that exceed expectations every time.
About the company
Nouveau Labs Pvt. Ltd. is a product platform and software engineering services company headquartered in Bangalore, with a sales and marketing office in San Jose. Its executive leadership comprises industry veterans with deep backgrounds in software engineering and sales. The company aims to be the most innovative and trusted product development and support partner for global technology companies and start-ups working on digital products and platforms. Its core business is delivering world-class product engineering services and out-of-the-box solutions, leveraging expertise in Artificial Intelligence, Machine Learning, Analytics, Automation, Robotics, Cloud, Wireless, Unified Communications, Embedded Systems, and Mobility. Nouveau Labs also builds platforms and solutions that help clients accelerate product development. The company recruits strong technical and managerial talent and believes in long-term commitment to its stakeholders, fostering a family-like environment built on passion, transparency, and commitment.
About the company
Orbia is a purpose-driven global company with the mission: “to advance life around the world.”
Operating across more than 100 countries and over 50 production sites, Orbia serves customers through five key business groups.
Their offerings span advanced materials, specialty products and innovative solutions focused on:
- Health & well-being
- Food, water and sanitation security
- Connectivity & information access
- Reinventing cities & homes
- Energy transition and sustainable materials
Orbia offers global scale, meaningful impact, and a portfolio of diverse, growth-oriented business units.
About the company
OIP Insurtech streamlines insurance operations and optimizes workflows by combining deep industry knowledge with advanced technology. Established in 2012, OIP InsurTech partners with carriers, MGAs, program managers, and TPAs in the US, Canada, and Europe, especially the UK.
With 1,200 professionals serving over 100 clients, we deliver insurance process automation, custom software development, high-quality underwriting services, and skilled tech staff to augment our clients’ teams.
While saving time and money is the immediate win, the real game-changer is giving our clients the freedom to grow their books, run their businesses, and focus on what they love. We’re proud to support them on this journey and make a positive impact on the industry!
About the company
At Hello Trade, an IndiaMART company, we specialize in offering business loans suited to all your business needs. We provide options such as term loans and overdraft facilities under collateral-free business loans, and loans against property under secured business loans, to fuel your business growth. Our commitment is to empower your business with the most suitable loans at competitive rates. Let’s work together to scale your business to new heights.





