11+ k-means clustering Jobs in Bangalore (Bengaluru) | k-means clustering Job openings in Bangalore (Bengaluru)
Apply to 11+ k-means clustering Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest k-means clustering Job opportunities across top companies like Google, Amazon & Adobe.
Senior Big Data Engineer
Note: Notice period: 45 days
Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, that specializes in comprehensive data solutions and services.
We are looking for a Senior Hadoop Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based in Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.
It's a once-in-a-lifetime opportunity to join our rocket-ship startup run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.
Key Qualifications
· 5+ years of experience working with Java and Spring technologies
· At least 3 years of programming experience with Spark on big data, including data profiling and building transformations
· Knowledge of microservices architecture is a plus
· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra
· Experience with Kafka or any streaming tools
· Knowledge of Scala would be preferable
· Experience with agile application development
· Exposure to cloud technologies, including containers and Kubernetes
· Demonstrated experience performing DevOps for platforms
· Strong grasp of data structures and algorithms, with the ability to write code of efficient complexity
· Exposure to Graph databases
· Passion for learning new technologies and the ability to do so quickly
· A Bachelor's degree in a computer-related field or equivalent professional experience is required
Key Responsibilities
· Scope and deliver solutions with the ability to design solutions independently based on high-level architecture
· Design and develop the big data-focused micro-Services
· Work on big data infrastructure, distributed systems, data modeling, and query processing
· Build software with cutting-edge technologies on the cloud
· Willingness to learn new technologies and take on research-oriented projects
· Proven interpersonal skills; contribute to team efforts by accomplishing related results as needed
Senior Project Manager
What We Expect:
We’re looking for an experienced and driven Senior Project Manager to independently lead strategic projects in the trading and capital markets domain. This role is designed for someone who thrives in client-facing environments, has a solid grip on software delivery in trading systems, and can bridge business requirements with technical execution — confidently and independently.
What You Will Do:
• Independently manage end-to-end delivery of FinTech projects — from initial scope definition to go-live.
• Evaluate feasibility of incoming client requirements — analyze scope, assess risks, and align with engineering leads for accurate estimations.
• Drive Agile delivery across multiple squads — ensuring quality, velocity, and adaptability.
• Collaborate with product managers, developers, QA, DevOps, and architects for seamless execution.
• Own project delivery from start to finish — manage scope, timelines, resources, and risk.
• Manage key client relationships, gather requirements, and ensure alignment throughout the project lifecycle.
• Drive the creation, review, and execution of Statements of Work (SOWs) — defining deliverables, dependencies, and budgets.
• Conduct feasibility analysis for incoming client requirements and coordinate with tech leads for impact estimation.
• Ensure on-time delivery of project milestones, proactively tracking progress and resolving blockers.
• Maintain high quality standards — enforce review processes, testing gates, and best practices throughout development.
• Oversee deployment planning and execution, ensuring production rollouts happen smoothly, securely, and with minimal downtime.
• Mitigate risks, manage dependencies, and escalate roadblocks when needed — always staying ahead of delivery curveballs.
• Apply Agile methodologies to drive iterative development and continuous improvement.
• Manage stakeholder expectations and keep communication channels active, transparent, and forward-looking.
• Mentor junior PMs and contribute to refining delivery frameworks within the organization.
Must-Have Skills:
• 8+ years of project management experience, with at least 5 years in a trading platform environment.
• Proven track record in client-facing roles, including requirement gathering, expectation management, and solution alignment.
• Strong understanding of SOW structuring, commercial terms, and delivery contracts.
• Ability to analyse client requirements for technical feasibility and prioritize features based on business value and effort.
• Familiarity with trading protocols (FIX), market data flows, OMS/EMS, and post-trade workflows.
• Excellent communication, stakeholder management, and leadership skills — you can manage up, down, and across.
Nice-to-Have Skills:
• Certifications like PMP, CSM, PMI-ACP, or Agile Coach credentials are a plus.
Job Title: MERN Stack Developer (Node.js & React.js Expert)
Experience: 5 – 7 Years
Location: Bangalore
Work Mode: Hybrid
Employment Type: Full Time
Job Summary:
We are seeking a highly skilled MERN Stack Developer with a strong background in Node.js and React.js to join our dynamic team. The ideal candidate should have 5–7 years of hands-on experience building scalable, secure, and high-performance web applications. You will play a critical role in the architecture, design, development, and deployment of end-to-end solutions using the MERN stack.
Key Responsibilities:
- Design, develop, and maintain full-stack applications using MongoDB, Express.js, React.js, and Node.js
- Build RESTful APIs and ensure integration with front-end components
- Optimize components for maximum performance across a vast array of web-capable devices and browsers
- Write clean, modular, and reusable code with proper documentation and testing
- Troubleshoot and debug issues across the stack
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products
- Participate in code reviews, architectural discussions, and continuous improvement of development processes
- Ensure secure coding practices and compliance with industry standards and best practices
- Manage deployments and maintain CI/CD pipelines
Required Skills and Qualifications:
- 5–7 years of professional experience in full-stack web development with a focus on Node.js and React.js
- Deep understanding of JavaScript, ES6+, and asynchronous programming
- Strong experience in building scalable backend services using Node.js and Express
- Proficient in building rich UI components using React.js, Redux/Context API, and modern front-end tooling (Webpack, Babel, etc.)
- Experience with MongoDB and Mongoose, and understanding of NoSQL database design
- Hands-on experience with API design, JWT/Auth, and third-party integrations
- Familiarity with Git, Docker, and modern DevOps practices
- Knowledge of performance optimization and security best practices
- Experience with testing frameworks (Jest, Mocha, Cypress, etc.)
Nice to Have:
- Exposure to TypeScript
- Experience with GraphQL
- Familiarity with cloud platforms like AWS, Azure, or GCP
- Knowledge of CI/CD tools like Jenkins, GitHub Actions, etc.
Note: Immediate joiners and candidates serving their notice period are preferred.
Primary Responsibilities
- Understand current state architecture, including pain points.
- Create and document future state architectural options to address specific issues or initiatives using Machine Learning.
- Innovate and scale architectural best practices around building and operating ML workloads by collaborating with stakeholders across the organization.
- Develop CI/CD and ML pipelines that cover the end-to-end ML model development lifecycle, from data preparation and feature engineering to model deployment and retraining (see the sketch after this list).
- Provide recommendations around security, cost, performance, reliability, and operational efficiency and implement them
- Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
- Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
- Make recommendations and assess proposals for optimization.
- Identify operational issues and recommend and implement strategies to resolve problems.
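As a rough illustration of one such pipeline stage, here is a minimal Python sketch of a training step that a CI/CD or ML pipeline could invoke: it prepares data, fits a model with a simple feature-engineering step, records an evaluation metric as a quality gate, and saves the artifact for deployment. The file paths, column names, and libraries (pandas, scikit-learn, joblib) are illustrative assumptions, not a description of this team's actual stack.

```python
"""Sketch: one training stage of an end-to-end ML pipeline (illustrative only)."""
import json

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def train(data_path: str = "data/prepared.csv", model_path: str = "model.joblib") -> float:
    # Data preparation: load the prepared feature table (hypothetical schema with a "label" column).
    df = pd.read_csv(data_path)
    X, y = df.drop(columns=["label"]), df["label"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Feature engineering + model in a single Pipeline so retraining stays reproducible.
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", RandomForestClassifier(n_estimators=200, random_state=42)),
    ])
    pipeline.fit(X_train, y_train)

    # Evaluation gate: a CI/CD stage can fail the build if accuracy regresses below a threshold.
    accuracy = accuracy_score(y_test, pipeline.predict(X_test))
    with open("metrics.json", "w") as fh:
        json.dump({"accuracy": accuracy}, fh)

    # Persist the trained model as the deployable artifact for the next pipeline stage.
    joblib.dump(pipeline, model_path)
    return accuracy


if __name__ == "__main__":
    print(f"test accuracy: {train():.3f}")
```

In a real setup, a script like this would typically be triggered by a tool such as Jenkins or AWS CodePipeline, with retraining scheduled whenever new prepared data lands.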
Must have:
- 3+ years of experience developing CI/CD and ML pipelines for end-to-end development of ML models and workloads
- Strong knowledge of ML operations and DevOps workflows and tools such as Git, AWS CodeBuild & CodePipeline, Jenkins, AWS CloudFormation, and others
- Background in ML algorithm development, AI/ML platforms, deep learning, and ML operations in a cloud environment
- Strong programming skillset with high proficiency in Python, R, etc.
- Strong knowledge of AWS cloud and its technologies such as S3, Redshift, Athena, Glue, SageMaker etc.
- Working knowledge of databases, data warehouses, data preparation and integration tools, along with big data parallel processing layers such as Apache Spark or Hadoop
- Knowledge of pure and applied math, ML and DL frameworks, and ML techniques, such as random forest and neural networks
- Ability to collaborate with Data Scientists, Data Engineers, leaders, and other IT teams
- Ability to work on multiple projects and work streams at once and deliver results against project deadlines
- Willingness to flex the daily work schedule to allow for time-zone differences in global team communications
- Strong interpersonal and communication skills
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting (see the sketch after this list)
· Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations and are familiar with working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
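For illustration, here is a minimal PySpark sketch of the kind of production batch pipeline referred to above: read raw data from distributed storage, apply a basic data-quality filter and a transformation, and write partitioned output for downstream consumers. The bucket paths, column names, and aggregation are hypothetical.

```python
"""Sketch: a small PySpark batch pipeline (illustrative paths and schema)."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue-aggregate").getOrCreate()

# Extract: raw order events from S3/HDFS (hypothetical location and schema).
orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

# Data quality: drop records that are missing mandatory fields before aggregating.
clean = orders.dropna(subset=["order_id", "customer_id", "amount"])

# Transform: daily revenue and order count per customer.
daily = (
    clean
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("order_id").alias("orders"))
)

# Load: partitioned output for downstream consumers (warehouse, BI, feature store).
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)

spark.stop()
```

In production, a job like this would usually be scheduled and monitored by an orchestrator such as Airflow or Oozie, as mentioned above.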
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
60 Decibels is a tech-powered impact measurement company that makes it easy to listen to the people who matter most. We've been in business as an independent entity since early 2019, when we spun out of the global Impact Investor Acumen.
We believe that the best way to understand social impact is by talking to the people experiencing that impact. It sounds obvious when you say it, but that is not the typical practice for many impact investors, corporations and foundations working to create social change.
We collect social impact data directly from beneficiaries (customers / employees / suppliers) using our network of 1000+ trained research assistants in 75+ countries. We do it quickly and without the fuss typically associated with measuring social impact. Our research assistants speak directly to customers to understand their lived experience; and our team turns all this data into benchmarked social performance reports, with accompanying insights, to help our clients demonstrate and improve social performance.
By making impact measurement simple, scalable, and comparable, we not only enable organizations to improve their products and services; we also help transform what it means to credibly measure impact, ensuring that the voices of those who matter most are always part of the story.
About the role:
You’ll be an early engineer in a newly formed engineering team. This is a unique opportunity for you to learn, build, demonstrate, and subsequently own diverse components of our tech stack and evolve as we scale our business. As a core engineer in a small team, the role demands a high degree of self-drive and ownership, on a path to achieving outsized impact.
Your primary responsibility will be to advance our integrated data capture and insights platform (Ruby/React/PostgreSQL) and associated tooling (Python).
Specifically, your responsibilities will include:
- Work with a diverse multidisciplinary team across Engineering, Product & Operations, to translate product specs into clean, functional, production-ready code.
- Participate actively in defining the systems architecture vision to better support our team's needs
- Grow our technical capacity by mentoring other engineers and interviewing candidates
- Collaborate with team members to identify systems, practices & technologies that suit our needs the best
- Seek, learn, adopt and advocate industry best practices. Contribute towards the engineering culture
- Troubleshoot coding problems quickly and efficiently to ensure a productive workplace
About You: First and foremost, you bring compassion and dedication to this work because it matters to you.
You are a pragmatic and product-driven engineer who is interested in solving user problems and delivering value while taking into account tradeoffs between business and tech. You have a bias towards action: you get your hands dirty and actively tackle problems in a way that leads to the best outcomes and brings teams together. You successfully balance flexibility and rigour, using informed judgment to make decisions. You model critical thinking and introspection, taking strategic risks and growing from mistakes. You are decisive and bold, have a growth mindset, are an excellent communicator, and know the value of being part of an effective team.
Minimum Qualification:
- 4+ years of experience in software engineering building SaaS platforms, products, and APIs
- 1+ years in your current role as a senior engineer
- Strong proficiency in Ruby on Rails (primary) and Python (additional)
- Strong proficiency in data modelling, RDBMS (Postgres preferred) and NoSQL databases.
- Proficient in software design, modularity, testability and software quality
- Strong problem-solving and decision-making skills
Additional desired qualifications:
- Preferred prior experience in the following stack: Python, Ruby On Rails, React, Postgres, AWS
- Experience working on natural language processing and ML for text and audio
- Fast self-learner, with the aptitude and interest to learn new technologies, languages & frameworks
- Experience working on data pipelines and data transformation systems
- Experience setting up, managing and ensuring best practices on public clouds like AWS, GCP, or Azure (we use AWS)
Working with 60 Decibels
We are a fun, international, and highly motivated team that believes team members should have the opportunity to expand their skills and careers in a supportive environment. We offer a competitive salary, flexible working, and a fun, supportive environment. If this sounds like the role for you, get in touch!
60 Decibels is deeply committed to having a workplace that is inclusive and anti-discriminatory. We believe that our team must embody the compassion, listening, and sense of shared humanity that is so central to our goal as an organization. We are proud to be an Equal Opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
As a growing company, we are building towards a more universally accessible workplace for our employees. At this time, we do use some cloud-based technologies that are not compatible with screen readers and other assistive devices. We would be happy to discuss accessibility at 60 Decibels in greater depth during the recruitment process.
About our team and our culture: we are a fun and hardworking global team that is full of smart, mission-driven folks who combine an entrepreneurial spirit with a commitment to make a positive change in the world.
We consistently hear from our clients that the best thing about 60 Decibels is the people. To get a feel for our slightly nerdy, not-take-ourselves-too-seriously vibe, check out our monthly newsletter, The Volume.
Compensation: 60 Decibels offers a competitive salary and benefits package and the opportunity to work in a flexible, fun and supportive environment. The salary range will be adjusted according to costs of living in our country offices.
Extra Perks: We have an unlimited leave policy and 12 monthly recharge days, on the first Friday of each month. We are a globally distributed team and we give team members opportunities to cross-pollinate and visit our different offices.
Want to get to know us a little better?
> Sign up to receive The Volume, our monthly collection of things worth reading.
> Visit our website at 60decibels.com.
> Read about our team values here.
Essential Requirements:
- 5+ years of Software development experience
- Hands-on development experience using the Java technology stack, with a focus on architecture and design.
- Hands-on in Java programming, J2EE, Spring or Spring Boot, Hibernate, REST APIs, data structures, design patterns, Oracle Database, PL/SQL
- Experience in application servers with prime focus on Tomcat.
- Experience in messaging systems such as RabbitMQ.
- Experience working in Linux/Unix environments. Must be hands-on with object-oriented concepts, along with a passion for design patterns and their applicability.
- Relevant experience in Java frameworks like Spring Microservices, Spring Boot, Hibernate, JPA, etc.
- Understanding of developer testing practices and the differences between unit, functional, and system tests.
- Should have working experience in a CI/CD environment where build and test are automated.
- Should have working experience in tools like Maven, Jenkins, Bamboo etc.
- Should have used testing frameworks such as JUnit and Selenium
- Ability to quickly learn and apply new concepts or principles
- Ability to work effectively as part of a global team
- Experience working in an agile environment.
- Experience in Azure and AWS Development and Deployment, Active Directory, Containerization.
About us:
Strata is a commercial real estate investment platform that offers investors the opportunity to invest in pre-leased commercial assets such as offices, warehouses, and retail spaces across India.
We are one of the fastest-growing PropTech platforms in India and are backed by Elevation Capital, Mayfield, Kotak Investment Advisors, Gruhas (a venture by Nikhil Kamath of Zerodha) and DLF Family Office.
Our headquarters is in Bangalore, India.
We are a small yet close-knit team of 35+ people.
About you:
You have a knack for product and are strongly equipped with the engineering skills to make it a reality. You don’t jump into implementation unless you have clearly understood the problem/requirements and have a written design. You don’t hesitate to ask questions and give critical reviews while respecting others’ opinions. You are a fearless engineer and not afraid to fail, though not on production ;) You have a strong sense of ownership. You have startup experience. You don’t feel annoyed if required to work on off days in case of production incidents (no development).
Your responsibilities:
You will primarily work on the backend maintaining the existing platform and writing new enhancements and features. Apart from your own features, as a senior member of the team you will be expected to be actively involved in overall design discussions and peer code reviews. You will factor in extensibility, maintainability, scalability and security in designs and code. You will ensure that best engineering practices are followed. You will support your team members when they are facing challenges, and mentor them when appropriate. You will strive for overall engineering excellence.
Must-have Skills:
- 4-8 years of total experience
- 2+ years of experience in Django and DRF. You must be a pro at it; the team will look to you to get the most out of DRF (see the sketch after this list).
- Designing and building scalable web applications
- Good understanding of REST principles
- SQL database design and queries
- Working knowledge of AWS
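As a small, hedged sketch of the Django REST Framework pattern this role centres on, here is a serializer/viewset/router example. The Property model, its fields, and the filter configuration are hypothetical, and the filtering line assumes django-filter is installed.

```python
"""Sketch: a DRF serializer + viewset + router (hypothetical model and fields)."""
from rest_framework import permissions, serializers, viewsets
from rest_framework.routers import DefaultRouter

from .models import Property  # hypothetical Django model for a pre-leased asset


class PropertySerializer(serializers.ModelSerializer):
    class Meta:
        model = Property
        fields = ["id", "name", "city", "asset_type", "expected_yield"]
        read_only_fields = ["id"]


class PropertyViewSet(viewsets.ModelViewSet):
    # DRF generates list/retrieve/create/update/destroy endpoints from this one class.
    queryset = Property.objects.all()
    serializer_class = PropertySerializer
    permission_classes = [permissions.IsAuthenticated]
    filterset_fields = ["city", "asset_type"]  # assumes django-filter is installed and configured


# urls.py (sketch): expose the viewset through a DRF router.
router = DefaultRouter()
router.register(r"properties", PropertyViewSet, basename="property")
urlpatterns = router.urls
```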
Good-To-Have Skills:
- Angular or any other JavaScript frontend framework
- Designing microservices
- DevOps experience
FAQs
1. Your technology ingredients?
Programming languages:
Our backend is written in Python using the Django framework, and the frontend is in AngularJS. Currently, there is no mobile app.
Infrastructure:
We are an AWS shop and use its services for most needs: Docker for containerization with ECS as the orchestrator, RDS as the database, S3 for storage, and SQS as the messaging backend (see the small sketch at the end of this answer).
Other tools:
Bitbucket for Git and the CI/CD pipeline, Trello for project management, and Google Meet and Slack for communication.
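For illustration only, here is a tiny boto3 sketch of the SQS producer/consumer pattern mentioned under Infrastructure; the queue URL, region, and payload are placeholders rather than our actual configuration.

```python
"""Sketch: sending and receiving an SQS message with boto3 (placeholder values)."""
import json

import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-tasks"  # placeholder

# Producer: enqueue a background task for a worker container running on ECS.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"task": "send_investor_report", "investment_id": 42}),
)

# Consumer: long-poll for messages and delete each one after it is processed.
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    payload = json.loads(message["Body"])
    print("processing", payload)
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```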
2. Your hiring plan?
Currently, the engineering team comprises 3 engineers, and we plan to grow it to 8 by the end of 2021.
3. Your Interview process?
We will try to finish the process within two weeks. In case we can’t go ahead with your candidature, we will clearly and immediately let you know. But in case of competing applications, we may request you to wait for a week or two.
- 15 minutes introductory call to discuss requirements and expectations
- 30 minutes online coding assignment on a screen sharing session
- 1 hour technical interview covering problem solving, code review and aptitude
- 15 minutes call with co-founder for final discussion
If either side is unable to come to a conclusion, we may go for an additional round.
4. Your WFH and WFO policy?
Due to Covid-19, the team is working from home and will continue to do so until March 2022. After that, we will work from the Bangalore or Pune office (yet to be finalised). You should be open to relocating to either city.
5. Your funding status?
Strata raised $1.5 million in seed funding in March 2020 and $6 million in a Series A round in July 2021.







