

CoffeeBeans
https://coffeebeans.io
About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP, NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications (a brief sketch follows this list).
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality and applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and improve existing processes and tools.
- Support and troubleshoot production incidents with a sense of urgency, understanding customer impact.
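The GraphQL responsibility above is stack-agnostic as written; purely as an illustration, a minimal resolver-backed schema sketched in Python with the strawberry-graphql library (an assumed library choice, with an illustrative Order type standing in for real domain data) might look like this:

```python
# Minimal GraphQL schema with one resolver, sketched with strawberry-graphql.
# Assumptions: `pip install strawberry-graphql`; the Order type and the
# in-memory lookup stand in for a real store such as Cassandra.
from typing import Optional

import strawberry

@strawberry.type
class Order:
    id: str
    status: str

_ORDERS = {"42": Order(id="42", status="SHIPPED")}  # illustrative data

@strawberry.type
class Query:
    @strawberry.field
    def order(self, id: str) -> Optional[Order]:
        # A production resolver would query the distributed database here.
        return _ORDERS.get(id)

schema = strawberry.Schema(query=Query)

if __name__ == "__main__":
    result = schema.execute_sync('{ order(id: "42") { status } }')
    print(result.data)  # {'order': {'status': 'SHIPPED'}}
```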
Requirements:
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker and Kubernetes.
- Strong background in working with cloud platforms, especially GCP.
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OTel or Prometheus is a plus.
- Experience working in Node.js is a plus.
Soft Skills Required:
- Able to work independently in highly cross-functional projects and environments.
- A team player who pays attention to detail and has a team-win mindset.
We are seeking strong Java Full Stack Engineers who can independently contribute across backend and frontend systems. The ideal candidate should take complete ownership of delivery, collaborate effectively with cross-functional teams, and build scalable, high-performance applications.
Key Responsibilities
- Build, enhance, and maintain full-stack applications using Java, Spring Boot, React.js/Next.js.
- Own end-to-end feature development — design, development, testing, and deployment.
- Develop scalable microservices and ensure system performance, reliability, and security.
- Collaborate with product, QA, and architecture teams to deliver high-quality software.
- Optimize applications for speed, responsiveness, and maintainability on both backend and frontend.
- Troubleshoot complex issues across the stack and drive solutions independently.
Technical Skills (Must-Have)
Backend
- Strong experience in Java, Spring Boot, and Microservices
- Solid understanding of Core Java, LLD, Design Patterns, and basic System Design
- Hands-on experience with Kafka, MongoDB, Redis, and distributed systems (see the caching sketch below)
- Experience with SQL or NoSQL databases
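Several of the backend items above center on caching and data access; the sketch below shows the common cache-aside pattern with Redis in Python (the redis client, the user:<id> key scheme, and fetch_user_from_db are illustrative assumptions, not this role's actual codebase):

```python
# Cache-aside pattern with Redis: check the cache, fall back to the primary
# store on a miss, then populate the cache with a TTL.
# Assumptions: `pip install redis`; fetch_user_from_db is a stand-in for a
# real MongoDB/SQL lookup.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: str) -> dict:
    # Placeholder for the real database query.
    return {"id": user_id, "name": "Ada"}

def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit
    user = fetch_user_from_db(user_id)   # cache miss: go to the source
    r.set(key, json.dumps(user), ex=ttl_seconds)
    return user
```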
Frontend
- Strong experience in React.js or Next.js
- Proficiency in API integration, state management (Redux / Context API), and frontend optimization
- Strong knowledge of JavaScript (ES6+), HTML5, CSS3
Additional Expectations
- Ability to work with minimal supervision and deliver high-quality code on time
- Strong debugging, problem-solving, and ownership mindset
- Experience building scalable, resilient, and performant applications
- Excellent communication and collaboration skills
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architecture discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python (experience with Java is a plus).
- Advanced SQL expertise, with the ability to work on complex queries and optimizations.
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed processing frameworks like Apache Spark, Flink, or similar (a PySpark sketch follows this list).
- Experience with Snowflake (preferred).
- Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
- Familiarity with CI/CD, version control (Git), and modern development practices.
- Ability to debug, optimize, and scale data pipelines in real-world environments.
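To make the ETL and Spark expectations concrete, here is a minimal PySpark batch job that reads raw events, aggregates them per day, and writes partitioned Parquet (the S3 paths and column names are illustrative assumptions):

```python
# Minimal batch ETL in PySpark: read raw CSV events, aggregate per day,
# write partitioned Parquet. Paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

events = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/events/")  # hypothetical input path
)

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("n_events"))
)

(
    daily.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")  # hypothetical output
)

spark.stop()
```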
Good to Have
- Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, data quality frameworks, and observability.
- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with excellent attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar); a streaming sketch follows this list.
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
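For the stream-processing expectation, a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands it in Parquet might look like the following (broker, topic, and paths are illustrative assumptions; the job also needs the spark-sql-kafka connector package on the classpath):

```python
# Minimal Spark Structured Streaming job: consume a Kafka topic and append
# the decoded messages to a Parquet sink with checkpointing.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "clickstream")                # hypothetical topic
    .load()
)

decoded = raw.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/stream/clickstream/")        # hypothetical
    .option("checkpointLocation", "s3://example-bucket/chk/clicks/")  # hypothetical
    .outputMode("append")
    .start()
)

query.awaitTermination()
```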
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.
Focus Areas:
- Build applications and solutions that process and analyze large-scale data.
- Develop data-driven applications and analytical tools.
- Implement business logic, algorithms, and backend services.
- Design and build APIs for secure and efficient data exchange.
Key Responsibilities:
- Develop and maintain data processing applications using Apache Spark and Hadoop.
- Write MapReduce jobs and complex data transformation logic.
- Implement machine learning models and analytics solutions for business use cases.
- Optimize code for performance and scalability; perform debugging and troubleshooting.
- Work hands-on with Databricks for data engineering and analysis.
- Design and manage Airflow DAGs for orchestration and automation (a minimal DAG sketch follows this list).
- Integrate and maintain CI/CD pipelines (preferably using Jenkins).
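As a sketch of the Airflow responsibility, a minimal Airflow 2.x DAG with two dependent tasks could look like this (the DAG id, schedule, and task bodies are illustrative placeholders):

```python
# Minimal Airflow 2.x DAG: extract then transform, daily, no backfill.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")  # placeholder for the real extract step

def transform():
    print("running Spark transformation")  # placeholder, e.g. submit a job

with DAG(
    dag_id="example_daily_pipeline",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform
```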
Primary Skills & Qualifications:
- Strong programming skills in Scala and Python.
- Expertise in Apache Spark for large-scale data processing.
- Solid understanding of data structures and algorithms.
- Proven experience in application development and software engineering best practices.
- Experience working in agile and collaborative environments.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning); a short RBAC sketch follows this list.
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
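The RBAC item above ultimately comes down to creating Kubernetes objects; as one illustration, the sketch below creates a read-only Role with the official Kubernetes Python client (namespace and role name are assumptions, and in practice the same object is usually applied as YAML through GitOps):

```python
# Create a namespaced read-only RBAC Role using the official kubernetes
# Python client. Equivalent to a small Role manifest applied with kubectl.
# Assumptions: `pip install kubernetes`; a reachable cluster in ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()  # in-cluster code would use load_incluster_config()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="dev"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],              # core API group
            resources=["pods", "pods/log"],
            verbs=["get", "list", "watch"],
        )
    ],
)

rbac = client.RbacAuthorizationV1Api()
rbac.create_namespaced_role(namespace="dev", body=role)
```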
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
  - AWS: EKS, EC2, VPC, IAM, CloudFormation
  - Azure: AKS, VNets, ARM templates
  - GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
  - Certified Kubernetes Administrator (CKA)
  - Certified Kubernetes Application Developer (CKAD)
  - Cloud provider certifications (AWS/Azure/GCP)
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Similar companies
About the company
Rocketium is an agile CreativeOps platform that helps enterprises take their communications to market faster and at lower costs. With the combined strength of creative automation, seamless collaboration, automated brand compliance, powerful creative analytics, and smart content management, our platform empowers enterprises to do more with their existing teams, processes, and tools.
Rocketium is funded by marquee investors like 021 Capital, 1Crowd, Blume Ventures, and Emergent Ventures.
We take pride in working with some of the best names in the business including Amazon, Home Depot, Walmart, Henry Schein, Roche, and many more.
To know more about our work, check this out:
https://blog.rocketium.com/about
Jobs: 1
About the company
Twinline is driving a financial revolution for NBFCs and MFIs through its flagship platform, Finpage, a secure, scalable SaaS solution built to simplify and transform microlending. From loan origination to regulatory reporting, Finpage streamlines operations, reduces turnaround time from 14 days to just one, and cuts errors to 0.5%.
Trusted by 40+ clients and managing ₹25,000 Cr+ in AUM, Twinline is reshaping inclusive finance—digitally, intelligently, and at scale.
Jobs: 1
About the company
Founded in 2016 by ex-Product & IT leaders from BCG, KPMG, RBS & Microsoft, Grey Chain is an AI and mobile-first product and services firm that focuses on design-led solutions.
We are trusted by global companies including Accenture, UNICEF, Bose, WHO, and many other Fortune 500 companies.
We offer end-to-end engineering and development services for digital journeys, including mobile apps, CRMs, ERPs, and enterprise-grade solutions. We also provide consulting services and emphasize our expertise in Generative AI.
Jobs: 8
About the company
OIP Insurtech streamlines insurance operations and optimizes workflows by combining deep industry knowledge with advanced technology. Established in 2012, OIP InsurTech partners with carriers, MGAs, program managers, and TPAs in the US, Canada, and Europe, especially the UK.
With 1,200 professionals serving over 100 clients, we deliver insurance process automation, custom software development, high-quality underwriting services, and skilled tech staff to augment our clients’ teams.
While saving time and money is the immediate win, the real game-changer is giving our clients the freedom to grow their books, run their businesses, and focus on what they love. We’re proud to support them on this journey and make a positive impact on the industry!
Jobs: 5
About the company
Founded with the mission to fix the broken outsourcing model, ThinkGrid Labs is a globally distributed technology company helping businesses design, build, and scale digital products. From AI-powered solutions to cloud-native applications and APIs, we partner with startups and enterprises to deliver software that is scalable, compliant, and future-ready.
Why ThinkGrid?
We’re not just another services company; we act as an extension of our clients’ teams, taking ownership from ideation to delivery. Our focus is on high-quality engineering, design excellence, and long-term partnerships, which is why we’re consistently rated 4.8/5 by clients on Clutch.
Milestones & Impact
- Built scalable platforms for fintech, healthtech, logistics, and SaaS companies across the globe.
- Grew into a globally distributed team across India, Australia, and beyond.
- Delivered solutions that handle millions of API calls and large-scale cloud workloads daily.
- Recognized by clients for our speed, collaboration, and reliability.
Culture & Careers
At ThinkGrid Labs, you’ll work in a remote-first, global team that values autonomy, curiosity, and impact. We give every team member the chance to own features end-to-end, experiment with new technologies, and see their work make a direct difference.
We thrive on:
- 🌍 Distributed collaboration
- 🚀 Continuous learning
- 🤝 People-first partnerships
👉 Join us to build technology that actually matters. Explore open roles on our LinkedIn Careers Page.
Jobs: 3





