

CoffeeBeans
https://coffeebeans.io
About
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we’re a catalyst for meaningful change.
Candid answers by the company
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Jobs at CoffeeBeans
Role Overview:
- We are looking for strong React + Node.js developers with experience working with AI-generated code, who can independently deliver high-quality code, follow best development practices, and ensure scalable, maintainable solutions.
- The ideal candidate is someone who adapts quickly, understands modern engineering practices, and can work effectively with AI-assisted development workflows.
Key Expectations:
- Strong hands-on experience with React.js (hooks, state management, component design)
- Practical experience with Node.js and building backend APIs
- Strong understanding of standard development practices and commonly used design patterns, with the ability to apply them effectively in real-world scenarios.
- Comfortable working with AI-generated code and capable of reviewing, refining, and restructuring it to produce clean, composable, and maintainable components or modules.
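Restructuring AI-generated code usually means collapsing near-duplicate generated functions into one composable, testable unit. A minimal illustration of that kind of refactor (sketched in Python for brevity, although the role itself is React/Node; the function names are hypothetical, not from the posting):

```python
# Typical AI-generated output: near-identical validators, one per field.
def validate_email(value):
    if not isinstance(value, str) or "@" not in value:
        raise ValueError("invalid email")
    return value.strip().lower()

# After review: one composable validator factory replaces the duplicates.
def make_validator(check, normalize=lambda v: v, message="invalid value"):
    """Build a reusable validator from a predicate and a normalizer."""
    def validate(value):
        if not check(value):
            raise ValueError(message)
        return normalize(value)
    return validate

# The refactored equivalent of validate_email, built from the factory.
validate_email2 = make_validator(
    check=lambda v: isinstance(v, str) and "@" in v,
    normalize=lambda v: v.strip().lower(),
    message="invalid email",
)
```

The point of the exercise is not the validator itself but the review skill: spotting repetition in generated code and extracting the reusable abstraction.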
Role Overview:
The ideal candidate will play a key role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and high-performance data and analytics solutions. This role requires strong expertise in Azure, Databricks, and cloud-native DevOps practices.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
- Implement best practices across resource groups, virtual networks, storage accounts, etc.
- Ensure cost optimization, high availability, and disaster recovery for business-critical systems.
2. Databricks Platform Management
- Set up, configure, and maintain Databricks workspaces for data engineering, ML, and analytics workloads.
- Automate cluster management, job scheduling, and performance monitoring.
- Integrate Databricks seamlessly with Azure data and analytics services.
3. CI/CD Pipeline Development
- Design and implement CI/CD pipelines for infrastructure, applications, and data workflows.
- Work with Azure DevOps / GitHub Actions (or similar) for automated testing and deployments.
- Drive continuous delivery, versioning, and monitoring best practices.
4. Monitoring & Incident Management
- Implement monitoring and alerting with Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
- Diagnose and resolve issues to ensure minimal downtime and smooth operations.
5. Security & Compliance
- Enforce IAM, encryption, network security, and secure development practices.
- Ensure compliance with organizational and regulatory cloud standards.
6. Collaboration & Documentation
- Work closely with data engineers, software developers, architects, and business teams to align infrastructure with business goals.
- Maintain thorough documentation for infrastructure, processes, and configurations.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
Must-Have Experience
- 6+ years in DevOps / Cloud Engineering roles.
- Proven expertise in:
- Microsoft Azure (Azure Data Lake, Databricks, ADF, Azure Functions, AKS, Azure AD)
- Databricks for data engineering / analytics workloads.
- Strong experience applying DevOps practices to cloud-based data and analytics platforms.
Technical Skills
- Infrastructure as Code (Terraform, ARM, Bicep).
- Scripting (Python / Bash).
- Containerization & orchestration (Docker, Kubernetes).
- CI/CD & version control (Git, Azure DevOps, GitHub Actions).
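Day-to-day “Scripting (Python / Bash)” in a role like this often means small governance helpers. As a hedged sketch (the tag names and resource shapes below are invented for illustration, not taken from any specific Azure policy), a check that cloud resources carry the tags required for cost attribution:

```python
# Hypothetical governance helper: flag resources missing required cost tags.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # assumed policy

def missing_tags(resource: dict) -> set:
    """Return the required tags absent from a resource definition."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

def audit(resources: list) -> dict:
    """Map resource name -> missing tags, for resources that fail the policy."""
    report = {}
    for res in resources:
        gaps = missing_tags(res)
        if gaps:
            report[res["name"]] = gaps
    return report

resources = [
    {"name": "stg-datalake", "tags": {"owner": "data", "environment": "prod",
                                      "cost-center": "42"}},
    {"name": "vm-adhoc", "tags": {"owner": "ops"}},
]
```

In practice the resource list would come from an inventory export or the cloud provider’s API rather than a literal, but the audit logic stays the same.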
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and collaboration abilities.
- Ability to operate in cross-functional and fast-paced environments.
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP (or any other cloud platform), NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and improve existing processes/tools.
- Support and troubleshoot production incidents with a sense of urgency, understanding customer impact.
Required Skills:
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker and Kubernetes.
- Strong background in working with cloud platforms, especially GCP.
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OTel or Prometheus is a plus.
- Experience working in Node.js is a plus.
Soft Skills Required:
- Able to work independently in highly cross-functional projects/environments.
- Team player who pays attention to detail and has a team-win mindset.
We are seeking strong Java Full Stack Engineers who can independently contribute across backend and frontend systems. The ideal candidate should take complete ownership of delivery, collaborate effectively with cross-functional teams, and build scalable, high-performance applications.
Key Responsibilities
- Build, enhance, and maintain full-stack applications using Java, Spring Boot, React.js/Next.js.
- Own end-to-end feature development — design, development, testing, and deployment.
- Develop scalable microservices and ensure system performance, reliability, and security.
- Collaborate with product, QA, and architecture teams to deliver high-quality software.
- Optimize applications for speed, responsiveness, and maintainability on both backend and frontend.
- Troubleshoot complex issues across the stack and drive solutions independently.
Technical Skills (Must-Have)
Backend
- Strong experience in Java, Spring Boot, and Microservices
- Solid understanding of Core Java, LLD, Design Patterns, and basic System Design
- Hands-on experience with Kafka, MongoDB, Redis, and distributed systems
- Experience with SQL or NoSQL databases
Frontend
- Strong experience in React.js or Next.js
- Proficiency in API integration, state management (Redux / Context API), and frontend optimization
- Strong knowledge of JavaScript (ES6+), HTML5, CSS3
Additional Expectations
- Ability to work with minimal supervision and deliver high-quality code on time
- Strong debugging, problem-solving, and ownership mindset
- Experience building scalable, resilient, and performant applications
- Excellent communication and collaboration skills
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architecture discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python (experience with Java is a nice-to-have).
- Advanced SQL expertise with ability to work on complex queries and optimizations.
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed processing frameworks like Apache Spark, Flink, or similar.
- Experience with Snowflake (preferred).
- Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
- Familiarity with CI/CD, version control (Git), and modern development practices.
- Ability to debug, optimize, and scale data pipelines in real-world environments.
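At their smallest, the ETL/ELT skills above reduce to extract → transform → load stages with an idempotent load. A toy sketch using only the standard library (table and column names are invented for illustration; real pipelines in this role would run on Spark, Airflow, or Snowflake rather than SQLite):

```python
import sqlite3

def extract(rows):
    """Extract stage: in practice a source query or file read."""
    return list(rows)

def transform(rows):
    """Transform stage: normalize fields and drop malformed records."""
    out = []
    for r in rows:
        if r.get("amount") is None:
            continue  # drop malformed records
        out.append({"user": r["user"].strip().lower(),
                    "amount": round(float(r["amount"]), 2)})
    return out

def load(conn, rows):
    """Load stage: idempotent upsert keyed on user, safe to re-run."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS spend (user TEXT PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT INTO spend VALUES (:user, :amount) "
        "ON CONFLICT(user) DO UPDATE SET amount = excluded.amount",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
raw = [{"user": " Ada ", "amount": "19.99"}, {"user": "Bob", "amount": None}]
load(conn, transform(extract(raw)))
```

The upsert makes the load idempotent, which is what lets an orchestrator such as Airflow retry a failed task without duplicating rows.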
Good to Have
- Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, data quality frameworks, and observability.
- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with excellent attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
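“Python/Go for automation” in a Kubernetes context frequently means generating or linting manifests before they reach the cluster. A hedged sketch (the policy rules and resource names are invented; a real setup might use the official Kubernetes client or OPA instead):

```python
def deployment(name, image, replicas=2, cpu_limit="500m"):
    """Build a minimal apps/v1 Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"cpu": cpu_limit}},
            }]}},
        },
    }

def lint(manifest):
    """Flag two assumed policy violations: floating image tags and single-replica deployments."""
    problems = []
    for c in manifest["spec"]["template"]["spec"]["containers"]:
        if c["image"].endswith(":latest") or ":" not in c["image"]:
            problems.append(f"{c['name']}: pin an image tag")
    if manifest["spec"]["replicas"] < 2:
        problems.append("replicas < 2 breaks HA")
    return problems
```

Running such a linter in CI, before `kubectl apply`, is one concrete way the RBAC/security and HA expectations above get enforced automatically.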
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Similar companies
About the company
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Jobs: 2
About the company
There's an old saying: "first you work, then you play." BlogVault is here to break that rule, because you won't feel the difference between work and leisure: it's a free-spirited working environment with plenty of encouragement to go the extra mile.
BlogVault is a place you'd look forward to working at. It's not just the culture, it's not just the friends you meet, it's also the work you do! You might be thinking: this is a SaaS company that's growing exponentially, and yet they're informal? You're right! We believe in a relaxed workplace where you can be you.
So be who you are, and let's show what we are!
Jobs: 1
About the company
Who we are
We are Software Craftspeople. We are proud of the way we work and the code we write. We embrace, and are evangelists of, eXtreme Programming practices. We firmly believe in being a DevOps organization, where developers own the entire release cycle and thus own quality. And most importantly, we never stop learning!
We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their ideas efficiently. We work with large established institutions to help them create internal applications that automate manual operations and achieve scale.
We design the software, the team, and the organizational strategy required to successfully release robust and scalable products. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments, with the aim of bringing a product mindset into services. More on our website: https://www.incubyte.co/
Join our team! We're always looking for like-minded people!
Jobs: 12
About the company
At Torero Softwares Ltd, we build next-gen ERP solutions that power businesses in healthcare, pharma, FMCG, distribution, and retail. With 25+ years of expertise and a 3,500+ client base, our flagship product, Medica Ultimate™, helps companies streamline operations, boost efficiency, and stay compliant.
Why Join Us?
🚀 Fast-Growing Tech Company – Work on industry-leading SaaS & ERP solutions
💡 Innovation-Driven – Be part of a team solving real-world business challenges
📈 Career Acceleration – Hands-on learning, mentorship & growth opportunities
📚 Collaborative Culture – Work alongside tech experts in a dynamic environment
Whether in sales, implementation, or customer success, you'll help transform businesses with technology.
📍 Location: Lower Parel, Mumbai
📩 Contact: Simran Jain | ✉ [email protected] | 📞 9702074236 (Call/WhatsApp)
Jobs: 2
About the company
Bits In Glass (BIG) is an award-winning software consulting firm that helps organizations improve operations and drive better customer experiences. They specialize in business process automation consulting, helping clients unlock the potential of their people, processes, and data.
Jobs: 3
About the company
At Zenius IT Services, we specialize in delivering top-tier Professional Services for industry-leading platforms such as Avaya, Cisco, Genesys, Amazon Connect, Five9, and NICE inContact.
Our expertise extends to Digital Engineering Solutions powered by AI and Machine Learning, helping businesses drive innovation and achieve excellence.
Jobs: 2
About the company
An all-in-one AI-based photo-sharing platform, enabling users to get their photos from public events and locations in just one tap, and to intelligently share pictures with friends and family in high quality without cluttering their gallery.
Jobs: 1





