
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading technology consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that meet their performance and availability requirements. With strong capabilities across mobile, IoT, and cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging cutting-edge technologies.
Job Description
We are looking for a skilled MongoDB Database Administrator (DBA) to support and maintain client database systems within our services organization. The MongoDB DBA will be responsible for managing day-to-day database operations, ensuring optimal performance, security, and availability of database environments. This role requires working closely with clients, stakeholders, and teams to resolve issues, implement changes, and deliver database solutions that meet business and technical requirements.
Location: Navi Mumbai (onsite)
Employment Type: Full-time
Key Responsibilities:
- Install, configure, and manage on-premises MongoDB Enterprise Advanced (EA) databases for applications, ensuring high availability and optimal performance.
- Implement and maintain MongoDB database clusters, including replica sets and sharded clusters, ensuring data distribution and fault tolerance (see the sketch after this list).
- Perform regular database performance tuning and optimization to maximize query efficiency and reduce response time.
- Implement backup and recovery strategies to ensure data availability and minimize downtime.
- Monitor and troubleshoot MongoDB database issues, including performance bottlenecks, query optimization, and replication problems.
- Execute and support version upgrades and migrations.
- Document database configurations and troubleshooting steps to maintain a comprehensive knowledge base.
- Provide guidance and troubleshooting support on MongoDB.
- Provide regular updates to stakeholders on issue status.
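As a hedged illustration of the replica-set work described above, the sketch below initiates a three-member replica set and checks member state with PyMongo; the hostnames, port, and replica-set name "rs0" are placeholders, not details of any client environment.

```python
# Minimal sketch: initiating a three-member replica set and checking its
# status via PyMongo. Hostnames and the replica-set name are illustrative.
from pymongo import MongoClient

# Connect directly to one member before the replica set exists.
client = MongoClient("mongodb://db-node-1:27017/?directConnection=true")

# Initiate the replica set with three data-bearing members.
config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "db-node-1:27017"},
        {"_id": 1, "host": "db-node-2:27017"},
        {"_id": 2, "host": "db-node-3:27017"},
    ],
}
client.admin.command("replSetInitiate", config)

# Verify member states (PRIMARY/SECONDARY) once the election has settled.
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"])
```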
Requirements:
- Proven work experience as a MongoDB Database Administrator, with a focus on setting up and managing MongoDB databases.
- Strong understanding of MongoDB architecture, including replica sets, sharded clusters, and data distribution.
- Proficiency in MongoDB database performance tuning and optimization techniques.
- Experience with MongoDB backup and recovery strategies, including point-in-time recovery and disaster recovery procedures.
- Knowledge of MongoDB security features and best practices for securing MongoDB databases.
- Familiarity with MongoDB monitoring and diagnostic tools, such as MongoDB Cloud Manager, Ops Manager, or third-party solutions (see the monitoring sketch after this list).
- Strong problem-solving and troubleshooting skills, with the ability to identify and resolve complex database issues.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- MongoDB certifications (e.g., MongoDB Certified DBA) are highly desirable.
- Bachelor’s degree in Computer Science, Business, or related field with 5+ years of relevant experience.
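The monitoring and troubleshooting skills listed above can be illustrated with a small, hedged sketch (not part of the role description itself): it polls serverStatus and replSetGetStatus through PyMongo to surface connection counts, resident memory, and replication lag. The connection string, credentials, and hostnames are placeholders.

```python
# Minimal sketch: basic health metrics a MongoDB DBA typically watches.
from pymongo import MongoClient

# Placeholder credentials and hosts; any real deployment would differ.
client = MongoClient("mongodb://dba_user:secret@db-node-1:27017/?replicaSet=rs0")

# Server-level counters: connections and resident memory.
server_status = client.admin.command("serverStatus")
print("current connections:", server_status["connections"]["current"])
print("resident memory MB :", server_status["mem"]["resident"])

# Replication lag: primary optime minus each secondary's optime.
repl = client.admin.command("replSetGetStatus")
primary_optime = next(
    m["optimeDate"] for m in repl["members"] if m["stateStr"] == "PRIMARY"
)
for m in repl["members"]:
    if m["stateStr"] == "SECONDARY":
        lag_seconds = (primary_optime - m["optimeDate"]).total_seconds()
        print(f"{m['name']} replication lag: {lag_seconds:.0f}s")
```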


Job Description
We are looking for an experienced engineer to join our data science team to help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their controlled deployment into production, and provide the means to monitor their performance and stability after deployment.
What You’ll Do (including, but not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipating and building solutions for problems that affect the availability, performance, and stability of our systems, services, and products at scale.
- Defining and implementing metrics to evaluate the performance of the models, both for compute performance (such as CPU and memory usage) and for ML performance (such as precision, recall, and F1); a sketch follows this list
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining.
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based approaches and classical ML techniques to the latest deep learning systems.
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
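As a hedged sketch of the two kinds of metrics referenced above, the snippet below computes ML performance (precision, recall, F1) with scikit-learn and compute-side usage with psutil; the label arrays are illustrative placeholders, not real model output.

```python
# Minimal sketch: ML-performance metrics plus compute metrics for the process.
import psutil
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels from a validation set
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # labels produced by the model under test

print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))

# Compute-side metrics for the serving process itself.
process = psutil.Process()
print("CPU %    :", psutil.cpu_percent(interval=1.0))
print("RSS (MB) :", process.memory_info().rss / 1e6)
```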
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially in time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred (see the sketch after this list).
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
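As a hedged sketch of one of the time-series anomaly-detection approaches listed above, the snippet below fits an ARIMA model with statsmodels and flags points by residual threshold; the synthetic series, the (1, 1, 1) order, and the 3-sigma cutoff are assumptions made only for illustration.

```python
# Minimal sketch: residual-based anomaly detection on a univariate series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))   # synthetic random-walk series
series[250] += 15.0                        # injected anomaly

model = ARIMA(series, order=(1, 1, 1)).fit()
residuals = model.resid

# Flag points whose residual deviates more than 3 standard deviations.
threshold = 3 * residuals.std()
anomalies = np.where(np.abs(residuals - residuals.mean()) > threshold)[0]
print("anomalous indices:", anomalies)
```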
Due to the nature of the role, it would be nice if you also have:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala, or Go programming languages
- Familiarity with Kubeflow
- Experience with transformer models, for example via the Hugging Face libraries (see the sketch after this list)
- Experience with OpenCV
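As a hedged sketch of working with transformers through the Hugging Face libraries, the snippet below runs a pre-trained model over a small batch of documents with the pipeline API; the task, example texts, and reliance on the library's default model are illustrative assumptions only.

```python
# Minimal sketch: applying a pre-trained transformer via the pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model on first use
docs = [
    "The quarterly report was shared with the whole team on time.",
    "File sync failed again and nobody was notified.",
]
for doc, result in zip(docs, classifier(docs)):
    print(f"{result['label']} ({result['score']:.2f}) - {doc}")
```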
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.
#LI-Remote