About Pluto Seven Business Solutions Pvt Ltd
Pluto7 is a technology consulting company providing services on Google Cloud Platform. We are also a Google Cloud Top 5 Breakthrough Partner for Machine Learning and Artificial Intelligence.
Our core offerings are: a. Marketing ML, b. Demand ML, c. Supply ML, d. Value Chain ML, e. Pricing ML. We work magic with Google Cloud Platform, and that allows our customers to see what they could never see before.
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role focuses on the design, development, and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies, with working exposure to the related data services of at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data platform services on at least one cloud platform, including IAM and data security
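The end-to-end pipelines in item 3 generally share an ingest → transform → aggregate shape. As a rough, stdlib-only Python sketch of that flow (a real pipeline would use Spark, Flink, or similar engines; the record fields and function names here are illustrative, not from any specific framework):

```python
from collections import defaultdict

def ingest(records):
    """Simulate batch ingestion: yield raw event dicts (as if read from HDFS or Kafka)."""
    for rec in records:
        yield rec

def transform(events):
    """Cleanse step: drop malformed events and normalize field values."""
    for e in events:
        if "user" in e and "amount" in e:
            yield {"user": e["user"].strip().lower(), "amount": float(e["amount"])}

def aggregate(events):
    """Aggregate spend per user, as a groupBy().sum() would in Spark."""
    totals = defaultdict(float)
    for e in events:
        totals[e["user"]] += e["amount"]
    return dict(totals)

raw = [{"user": " Alice ", "amount": "10.5"},
       {"user": "bob", "amount": "4.5"},
       {"bad": "row"},                      # malformed event, dropped in transform
       {"user": "alice", "amount": "2.0"}]

print(aggregate(transform(ingest(raw))))  # {'alice': 12.5, 'bob': 4.5}
```

The same three stages map directly onto the distributed tools listed above; only the execution engine changes.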
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of, and hands-on experience with, traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD: infrastructure provisioning on the cloud, automated build and deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Role: Sr. Data Scientist / Tech Lead – Data Science
Number of positions: 8
Responsibilities
- Lead a team of data scientists, machine learning engineers and big data specialists
- Be the main point of contact for the customers
- Lead data mining and collection procedures
- Ensure data quality and integrity
- Interpret and analyze data problems
- Conceive, plan and prioritize data projects
- Build analytic systems and predictive models
- Test performance of data-driven products
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
Requirements (please read carefully)
- Very strong in statistics fundamentals. Not all data is Big Data. The candidate should be able to derive statistical insights from very few data points if required, using traditional statistical methods.
- Education: no bar, but preferably from a statistics academic background (e.g., MSc Statistics, MSc Econometrics, PhD Statistics), given the first point
- Strong expertise in Python (other statistical languages/tools such as R, SAS, SPSS, etc. are optional, but Python is absolutely essential). A candidate who is very strong in Python but has almost no knowledge of the other statistical tools will still be considered a good fit for this role.
- Proven experience as a Data Scientist or in a similar role, for about 7-8 years
- Solid understanding of machine learning and AI concepts, especially with respect to choosing apt candidate algorithms for a use case and evaluating models.
- Good expertise in writing SQL queries (should not depend on anyone else to pull data, join tables, wrangle data, etc.)
- Knowledge of data management and visualization techniques, more from a data science perspective.
- Should be able to grasp business problems, ask the right questions to understand the problem in breadth and depth, design apt solutions, and explain them to business stakeholders.
- The last point is extremely important: the candidate should be able to identify solutions that can be explained to stakeholders and, furthermore, present them in simple, direct language.
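As a toy illustration of the SQL self-sufficiency described above (pulling data, joining tables, and aggregating without help), here is a self-contained sketch using Python's built-in sqlite3; the tables and values are invented for the example:

```python
import sqlite3

# In-memory database with two small, invented tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Join, aggregate, and sort: the everyday wrangling the role expects.
rows = conn.execute("""
    SELECT c.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY total DESC
""").fetchall()

print(rows)  # [('alice', 2, 15.0), ('bob', 1, 7.5)]
```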
http://www.altimetrik.com
https://www.youtube.com/watch?v=3nUs4YxppNE&feature=emb_rel_end
https://www.youtube.com/watch?v=e40r6kJdC8c
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion and transformation, and for building, testing, and deploying machine learning models, as well as for monitoring and maintaining the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
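One concrete piece of such a pipeline is an automated promotion gate for candidate models. The sketch below is deliberately simplified and hypothetical (a real stack would use MLflow model registry stages or Kubeflow pipeline steps; the function name, metric key, and threshold are all illustrative):

```python
def promote_model(candidate_metrics, production_metrics, min_gain=0.01):
    """Toy CI gate: promote a candidate model only if it beats the current
    production model's accuracy by at least `min_gain`. This mimics the kind
    of automated check an MLOps pipeline runs before a deployment stage."""
    gain = candidate_metrics["accuracy"] - production_metrics["accuracy"]
    return gain >= min_gain

print(promote_model({"accuracy": 0.91}, {"accuracy": 0.89}))  # True
print(promote_model({"accuracy": 0.89}, {"accuracy": 0.89}))  # False
```

In practice the gate would compare several metrics (and data-drift checks), but the pass/fail decision point sits at the same place in the pipeline.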
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLFlow etc.
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
About Us
Mindtickle provides a comprehensive, data-driven solution for sales readiness and enablement that fuels revenue growth and brand value for dozens of Fortune 500 and Global 2000 companies and hundreds of the world’s most recognized companies across technology, life sciences, financial services, manufacturing, and service sectors.
With purpose-built applications, proven methodologies, and best practices designed to drive effective sales onboarding and ongoing readiness, Mindtickle enables company leaders and sellers to continually assess, diagnose, and develop the knowledge, skills, and behaviors required to engage customers and drive growth effectively. We are funded by great investors, including SoftBank, Canaan Partners, NEA, Accel Partners, and others.
Job Brief
We are looking for a rockstar researcher at the Center of Excellence for Machine Learning. You will be responsible for thinking outside the box, crafting new algorithms, developing end-to-end artificial intelligence-based solutions, and selecting the most appropriate architecture for the system(s) so that it suits the business needs and achieves the desired results under the given constraints.
Credibility:
- You must have a proven track record in research and development with adequate publication/patenting and/or academic credentials in data science.
- You have the ability to directly connect business problems to research problems along with the latest emerging technologies.
Strategic Responsibility:
- To understand problem statements, connect the dots between high-level business statements and deep technology algorithms, and craft new systems and methods in the space of structured data mining, natural language processing, computer vision, speech technologies, robotics, or the Internet of Things, etc.
- To be responsible for end-to-end production level coding with data science and machine learning algorithms, unit and integration testing, deployment, optimization and fine-tuning of models on cloud, desktop, mobile or edge etc.
- To learn in a continuous mode, upgrade and upskill along with publishing novel articles in journals and conference proceedings and/or filing patents, and be involved in evangelism activities and ecosystem development etc.
- To share knowledge, mentor colleagues, partners, and customers, take sessions on artificial intelligence topics both online or in-person, participate in workshops, conferences, seminars/webinars as a speaker, instructor, demonstrator or jury member etc.
- To design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance.
- To collaborate within the product streams and team to bring best practices and leverage world-class tech stack.
- To set up all essentials (tracking / alerting) to make sure the infrastructure / software built is working as expected.
- To search, collect, and clean data for analysis, and to set up efficient storage and retrieval pipelines.
Personality:
- Requires excellent communication skills – written, verbal, and presentation.
- You should be a team player.
- You should be positive towards problem-solving and have a very structured thought process to solve problems.
- You should be agile enough to learn new technology if needed.
Qualifications:
- B Tech / BS / BE / M Tech / MS / ME in CS or equivalent from Tier I / II or Top Tier Engineering Colleges and Universities.
- 6+ years of strong software (application or infrastructure) development experience and software engineering skills (Python, R, C, C++ / Java / Scala / Golang).
- Deep expertise and practical knowledge of operating systems, MySQL, and NoSQL databases (Redis / Couchbase / MongoDB / Elasticsearch, or any graph DB).
- Good understanding of Machine Learning Algorithms, Linear Algebra and Statistics.
- Working knowledge of Amazon Web Services (AWS).
- Experience with Docker and Kubernetes will be a plus.
- Experience with Natural Language Processing, Recommendation Systems, or Search Engines.
Our Culture
As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks, great learning opportunities & growth.
Our culture reflects the globally diverse backgrounds of our employees along with our commitment to our customers, each other, and a passion for excellence.
To know more about us, feel free to go through these videos:
1. Sales Readiness Explained: https://www.youtube.com/watch?v=XyMJj9AlNww&t=6s
2. What We Do: https://www.youtube.com/watch?v=jv3Q2XgnkBY
3. Ready to Close More Deals, Faster: https://www.youtube.com/watch?v=nB0exreVU-s
To view more videos, please access the below-mentioned link:
https://www.youtube.com/c/mindtickle/videos
Mindtickle is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.
Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers.
- Performing extensive analysis using SQL, Google Analytics, and Excel from a product standpoint to provide quick recommendations to the management
- Establishing scalable, efficient and automated processes to deploy data analytics on large data sets across platforms
What you need to have:
- B.Tech / B.E., or any graduation
- Strong background in statistical concepts & calculations to perform analysis/ modeling
- Proficient in SQL and other BI tools like Tableau, Power BI etc.
- Good knowledge of Google Analytics and any other web analytics platforms (preferred)
- Strong analytical and problem-solving skills to analyze large datasets
- Ability to work independently and bring innovative solutions to the team
- Experience of working with a start-up or a product organization (preferred)
- Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
- Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
- Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
- Build data pipelines that clean, transform, and aggregate data from disparate sources
- Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
- Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
- Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
- Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
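As a small illustration of the reconciliation activities mentioned above, a completed load can be checked against its source with simple control totals. This is a hedged, stdlib-only sketch; the field names and checks are invented for the example (real reconciliation would also compare keys, schemas, and row-level hashes):

```python
def reconcile(source_rows, target_rows, amount_key="amount"):
    """Toy reconciliation: verify that source and target agree on row count
    and on a control total (the sum of an amount column)."""
    source_sum = sum(r[amount_key] for r in source_rows)
    target_sum = sum(r[amount_key] for r in target_rows)
    return {
        "count_match": len(source_rows) == len(target_rows),
        "sum_match": abs(source_sum - target_sum) < 1e-9,
    }

src = [{"amount": 10.0}, {"amount": 5.0}]
tgt = [{"amount": 10.0}, {"amount": 5.0}]
print(reconcile(src, tgt))  # {'count_match': True, 'sum_match': True}
```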
Job Qualifications:
- Bachelor’s or master's degree in Computer Science, Information management, Statistics or related field
- 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
- Technical expertise with data models and data mining.
- Hands-on knowledge of programming languages such as Java, Python, R, and Scala.
- Strong knowledge of Big Data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
- Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
- Hands-on knowledge of SQL and NoSQL database design.
- Knowledge of CI/CD for building and hosting the solutions.
- An AWS certification is an added advantage.
- Strong knowledge of visualization tools like Tableau and QlikView is an added advantage.
- A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
This position is not for freshers. We are looking for candidates with at least 4 years of AI/ML/CV experience in the industry.
As a Data Science Lead, you will work on creating industry-first analytical and propensity models to help discover the information hidden in vast amounts of data, and make smarter decisions to deliver an even better customer experience. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
➢ Working with business and leadership teams to gather and analyse structured and unstructured data
➢ Data mining using state-of-the-art methods
➢ Enhancing data collection procedures to include information that is relevant for building analytic systems
➢ Processing, cleansing, and verifying the integrity of data used for analysis
➢ Doing ad-hoc analysis and presenting results in a clear manner
➢ Creating automated anomaly detection systems and constantly tracking their performance
➢ Creating and evolving an efficient BI pipeline into a multi-faceted pipeline that supports various modelling needs
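A common baseline behind the automated anomaly detection mentioned above is simple z-score thresholding. As a minimal sketch using only the Python standard library (the threshold and data values are illustrative; production systems would use rolling windows or more robust statistics):

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose absolute z-score exceeds the threshold: the
    simplest statistical baseline for anomaly detection."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 95]
print(zscore_anomalies(data, threshold=2.0))  # [95]
```

Constant performance tracking then amounts to monitoring the detector's precision/recall against labeled incidents over time.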
What we are looking for:
➢ 5-8 years of relevant experience, preferably in the financial services industry.
➢ A bachelor's / master's degree in Statistics, Mathematics, Computer Science, or Management from Tier 1 institutes.
➢ Data warehousing experience will be a plus.
➢ Good conceptual understanding of statistics and probability.
➢ Experience in developing dashboards and reports using BI tools.
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction for noisy and unstructured data, both text and images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
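A first step toward the document clustering and classification problems above is a bag-of-words similarity measure. The stdlib-only sketch below illustrates the idea on invented product titles (production systems would use TF-IDF weighting or learned embeddings rather than raw counts):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words representation: token counts of a lowercased document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = ["red cotton shirt", "cotton shirt red", "stainless steel pan"]
sims = [cosine(bow(docs[0]), bow(d)) for d in docs]
print([round(s, 2) for s in sims])  # [1.0, 1.0, 0.0]
```

Thresholding pairwise similarities like these is the seed of the clustering and attribute-tagging pipelines described above.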
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including NumPy, matplotlib, scikit-learn, Keras, PyTorch, and TensorFlow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.