

CoffeeBeans
https://coffeebeans.io
About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in cloud, data, MLOps, AI, infrastructure services, application modernization, blockchain, and big data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans
Role Overview:
- We are looking for strong React + NodeJS Developers with AI-Generated Code Experience who can independently deliver high-quality code, follow best development practices, and ensure scalable, maintainable solutions.
- The ideal candidate is someone who adapts quickly, understands modern engineering practices, and can work effectively with AI-assisted development workflows.
Key Expectations:
- Strong hands-on experience with React.js (hooks, state management, component design)
- Practical experience with Node.js and building backend APIs
- Strong understanding of standard development practices and commonly used design patterns, with the ability to apply them effectively in real-world scenarios.
- Comfortable working with AI-generated code and capable of reviewing, refining, and restructuring it to produce clean, composable, and maintainable components or modules.
Role Overview:
The ideal candidate will play a key role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and high-performance data and analytics solutions. This role requires strong expertise in Azure, Databricks, and cloud-native DevOps practices.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
- Implement best practices across resource groups, virtual networks, storage accounts, etc.
- Ensure cost optimization, high availability, and disaster recovery for business-critical systems.
2. Databricks Platform Management
- Set up, configure, and maintain Databricks workspaces for data engineering, ML, and analytics workloads.
- Automate cluster management, job scheduling, and performance monitoring.
- Integrate Databricks seamlessly with Azure data and analytics services.
3. CI/CD Pipeline Development
- Design and implement CI/CD pipelines for infrastructure, applications, and data workflows.
- Work with Azure DevOps / GitHub Actions (or similar) for automated testing and deployments.
- Drive continuous delivery, versioning, and monitoring best practices.
4. Monitoring & Incident Management
- Implement monitoring and alerting with Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
- Diagnose and resolve issues to ensure minimal downtime and smooth operations.
5. Security & Compliance
- Enforce IAM, encryption, network security, and secure development practices.
- Ensure compliance with organizational and regulatory cloud standards.
6. Collaboration & Documentation
- Work closely with data engineers, software developers, architects, and business teams to align infrastructure with business goals.
- Maintain thorough documentation for infrastructure, processes, and configurations.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
Must-Have Experience
- 6+ years in DevOps / Cloud Engineering roles.
- Proven expertise in:
- Microsoft Azure (Azure Data Lake, Databricks, ADF, Azure Functions, AKS, Azure AD)
- Databricks for data engineering / analytics workloads.
- Strong experience applying DevOps practices to cloud-based data and analytics platforms.
Technical Skills
- Infrastructure as Code (Terraform, ARM, Bicep).
- Scripting (Python / Bash).
- Containerization & orchestration (Docker, Kubernetes).
- CI/CD & version control (Git, Azure DevOps, GitHub Actions).
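The scripting item above (Python/Bash) usually means small automation helpers for deployments and cluster operations. As a hedged illustration only (all names here are invented, not this team's code), a minimal retry-with-exponential-backoff wrapper of the kind such scripts commonly need:

```python
import time

def retry(operation, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry a flaky zero-argument callable with exponential backoff.

    `sleep` is injectable so the helper can be exercised without real waiting.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Hypothetical usage: a deployment call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "deployed"

result = retry(flaky_deploy, sleep=lambda _: None)  # no real sleeping in the demo
```

Injecting the `sleep` function is a small design choice that keeps automation helpers like this unit-testable.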
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and collaboration abilities.
- Ability to operate in cross-functional and fast-paced environments.
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP or any other cloud platform, NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality for given requirements, addressing functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and improve existing processes and tools.
- Support and troubleshoot production incidents with a sense of urgency, understanding customer impact.
- Proficiency in developing applications and web services, including cloud-native apps, using MVC frameworks such as Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OpenTelemetry or Prometheus is a plus.
- Experience working in NodeJS is a plus.
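Several items above touch resilience in distributed services. As a hedged, language-agnostic sketch (shown here in Python with invented names, not this team's actual code), the circuit-breaker pattern commonly used to stop a microservice from hammering a failing downstream dependency:

```python
class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures,
    the circuit opens and subsequent calls fail fast until reset()."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # stop hammering a failing downstream service
            raise
        self.failures = 0  # any success resets the failure count
        return result

    def reset(self):
        self.failures, self.open = 0, False

# Illustrative use against a hypothetical downstream call that always fails.
breaker = CircuitBreaker(threshold=2)
def failing_service():
    raise IOError("downstream timeout")

errors = 0
for _ in range(2):
    try:
        breaker.call(failing_service)
    except IOError:
        errors += 1
# After two consecutive failures the breaker is open and fails fast.
```

Production implementations add a cool-down period and half-open probing; this sketch shows only the core state machine.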
Soft Skills Required:
- Should be able to work independently in highly cross-functional projects and environments.
- Team player who pays attention to detail and has a team-win mindset.
Role Overview
We are seeking strong Java Full Stack Engineers who can independently contribute across backend and frontend systems. The ideal candidate should take complete ownership of delivery, collaborate effectively with cross-functional teams, and build scalable, high-performance applications.
Key Responsibilities
- Build, enhance, and maintain full-stack applications using Java, Spring Boot, React.js/Next.js.
- Own end-to-end feature development — design, development, testing, and deployment.
- Develop scalable microservices and ensure system performance, reliability, and security.
- Collaborate with product, QA, and architecture teams to deliver high-quality software.
- Optimize applications for speed, responsiveness, and maintainability on both backend and frontend.
- Troubleshoot complex issues across the stack and drive solutions independently.
Technical Skills (Must-Have)
Backend
- Strong experience in Java, Spring Boot, and Microservices
- Solid understanding of Core Java, LLD, Design Patterns, and basic System Design
- Hands-on experience with Kafka, MongoDB, Redis, and distributed systems
- Experience with SQL or NoSQL databases
Frontend
- Strong experience in React.js or Next.js
- Proficiency in API integration, state management (Redux / Context API), and frontend optimization
- Strong knowledge of JavaScript (ES6+), HTML5, CSS3
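The state-management item above (Redux / Context API) centers on pure reducer functions: `(old state, action) -> new state`, with no in-place mutation. The contract is language-agnostic, so here is a hedged sketch in Python with an invented action shape:

```python
def counter_reducer(state, action):
    """Pure reducer in the Redux style: returns a new state dict,
    never mutates the old one. The action types here are invented."""
    if action["type"] == "increment":
        return {**state, "count": state["count"] + action.get("by", 1)}
    if action["type"] == "reset":
        return {**state, "count": 0}
    return state  # unknown actions leave state unchanged

# Dispatching a sequence of actions folds them through the reducer.
state = {"count": 0}
for action in ({"type": "increment"}, {"type": "increment", "by": 4}):
    state = counter_reducer(state, action)
```

Because the reducer is pure, any state can be reproduced by replaying its action history, which is what makes Redux-style debugging and time travel possible.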
Additional Expectations
- Ability to work with minimal supervision and deliver high-quality code on time
- Strong debugging, problem-solving, and ownership mindset
- Experience building scalable, resilient, and performant applications
- Excellent communication and collaboration skills
Role Overview
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architecture discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python (experience with Java is a plus).
- Advanced SQL expertise with ability to work on complex queries and optimizations.
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed processing frameworks like Apache Spark, Flink, or similar.
- Experience with Snowflake (preferred).
- Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
- Familiarity with CI/CD, version control (Git), and modern development practices.
- Ability to debug, optimize, and scale data pipelines in real-world environments.
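As a hedged illustration of the ETL/ELT item above, here is a minimal extract-transform-load pass in pure Python (the schema, field names, and sample data are invented; real pipelines would use Spark, Airflow, or similar as listed):

```python
import csv
import io

# Invented raw input: one malformed row to show the cleansing step.
RAW = """order_id,amount,country
1,10.50,IN
2,not_a_number,US
3,7.25,IN
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and drop rows with malformed amounts."""
    clean = []
    for r in rows:
        try:
            clean.append({"order_id": int(r["order_id"]),
                          "amount": float(r["amount"]),
                          "country": r["country"]})
        except ValueError:
            continue  # in this sketch, bad records are simply skipped
    return clean

def load(rows):
    """Load: aggregate into a per-country revenue 'table'."""
    table = {}
    for r in rows:
        table[r["country"]] = table.get(r["country"], 0.0) + r["amount"]
    return table

revenue = load(transform(extract(RAW)))
```

The same extract/transform/load boundaries scale up directly: in a distributed setting each stage becomes a Spark job or an Airflow task rather than a local function.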
Good to Have
- Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, data quality frameworks, and observability.
- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with excellent attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
Role Overview
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
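One concrete piece of the architecture knowledge listed above is how Kubernetes Services and controllers select Pods: with equality-based label selectors, a Pod matches when every selector key/value pair appears in its labels. A minimal sketch of that rule, with invented pod names and labels:

```python
def matches_selector(pod_labels, selector):
    """Equality-based label selection, as a Service's spec.selector uses:
    the pod matches when every selector key/value pair is present."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Invented pods and selector for illustration.
pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "backend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]
selector = {"app": "web", "tier": "frontend"}
selected = [p["name"] for p in pods if matches_selector(p["labels"], selector)]
```

Kubernetes also supports set-based selectors (`matchExpressions` with `In`, `NotIn`, `Exists`); this sketch covers only the equality form.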
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
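The Prometheus item above implies working with monotonically increasing counters, which dashboards graph as a per-second rate. A simplified sketch of what PromQL's `rate()` computes over a window (sample data invented; real Prometheus additionally handles counter resets and range extrapolation):

```python
def simple_rate(samples):
    """Per-second increase of a counter over (timestamp, value) samples.
    Simplified: assumes no counter resets and no extrapolation."""
    (t0, v0) = samples[0]
    (t1, v1) = samples[-1]
    return (v1 - v0) / (t1 - t0)

# Invented samples: an http_requests_total counter scraped every 15s.
samples = [(0, 100), (15, 130), (30, 160), (45, 190)]
rps = simple_rate(samples)  # (190 - 100) / 45 requests per second
```

This is why dashboards plot `rate(http_requests_total[5m])` rather than the raw counter: the counter only ever grows, while its rate shows current traffic.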
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Similar companies
About the company
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
About the company
Certa’s no-code platform makes it easy to digitize and manage the lifecycle of all your suppliers, partners, and customers. With automated onboarding, contract lifecycle management, and ESG management, Certa eliminates the procurement bottleneck and allows companies to onboard third-parties 3x faster.
About the company
Beyond Seek is a team of R.A.R.E individuals who are solving impactful problems using the best tools available today!
About the company
OneSpider Technologies LLP is a leading provider of software and mobile application solutions for the Pharma and FMCG sector. Our products help distributors and retailers streamline operations.
About the company
Founded with the mission to fix the broken outsourcing model, ThinkGrid Labs is a globally distributed technology company helping businesses design, build, and scale digital products. From AI-powered solutions to cloud-native applications and APIs, we partner with startups and enterprises to deliver software that is scalable, compliant, and future-ready.
Why ThinkGrid?
We’re not just another services company - we act as an extension of our clients’ teams, taking ownership from ideation to delivery. Our focus is on high-quality engineering, design excellence, and long-term partnerships, which is why we’re consistently rated 4.8/5 by clients on Clutch.
Milestones & Impact
- Built scalable platforms for fintech, healthtech, logistics, and SaaS companies across the globe.
- Grew into a globally distributed team across India, Australia, and beyond.
- Delivered solutions that handle millions of API calls and large-scale cloud workloads daily.
- Recognized by clients for our speed, collaboration, and reliability.
Culture & Careers
At ThinkGrid Labs, you’ll work in a remote-first, global team that values autonomy, curiosity, and impact. We give every team member the chance to own features end-to-end, experiment with new technologies, and see their work make a direct difference.
We thrive on:
- 🌍 Distributed collaboration
- 🚀 Continuous learning
- 🤝 People-first partnerships
👉 Join us to build technology that actually matters. Explore open roles on our LinkedIn Careers Page.
About the company
JobTwine is an Intelligent Interviewing Platform built to revolutionise the interviewing ecosystem within organisations.
Our innovative technology helps hiring managers effortlessly craft tailored interview playbooks for any role within minutes, while our Interviewer Copilot provides guidance, consistency, and transcriptions to streamline and automate interview feedback.
Our product offering includes:
- Playbook Builder - Our industry-first Smart Playbook Builder intelligently recommends how to conduct interviews for a particular round and competency level in seconds. Curated by experts and made smarter by usage across similar cohorts, it lets hiring managers say goodbye to hundreds of wasted hours and to unconscious bias.
- Interviewer Copilot - Brings full consistency to interviews and takes bias out completely. The friendly Copilot guides interviewers on the skills to assess, the questions to ask, and the scores to give. Every interview is recorded, transcribed, and objectively scored in minutes.
- Automated Scheduling - Schedule your interviews hassle-free at scale. Switch to smart scheduling with calendar integration, auto-matching of slots, and much more.
- Interview as a Service - Leverage 800+ global interview experts across 100+ technologies and hire the right talent effortlessly. Save hundreds of hours of engineering productivity and hire 3x faster.
Unlock unparalleled advantages by harnessing the power of our revolutionary SaaS platform:
1. Reduce wrong hires by 60%
2. Reduce churn by 30%
3. Increase productivity
4. Reduce time-to-offer by 3x
5. Achieve DEI targets more objectively





