Pluto Seven Business Solutions Pvt Ltd

Data Scientist

Posted by Sindhu Narayan
2 - 7 yrs
₹4L - ₹20L / yr
Bengaluru (Bangalore)
Skills
Statistical Modeling
Data Science
TensorFlow
Python
Machine Learning (ML)
Deep Learning
Data Analytics
Google Cloud Storage
Scikit-Learn
Regression analysis
Pluto7 is a services and solutions company focused on building ML, AI, analytics, and IoT solutions tailored to accelerate business transformation. We are a Premier Google Cloud Partner serving the Retail, Manufacturing, Healthcare, and Hi-Tech industries. As a Google premium partner in AI & ML, we can offer you the opportunity to work and collaborate with folks from Google.

Are you an innovator with a passion for working with data and finding insights, and an inquisitive mind that constantly yearns to learn new ideas? Then we are looking for you. As a Pluto7 Data Scientist, you will be one of the key members of our innovative artificial intelligence and machine learning team. You are expected to be unfazed by large volumes of data, love to apply various models, and use technology to process and filter data for analysis.

Responsibilities:
• Build and optimize machine learning models.
• Work with large, complex datasets to solve difficult and non-routine analysis problems, applying advanced analytical methods as needed.
• Build and prototype data pipelines for analysis at scale.
• Work cross-functionally with Business Analysts and Data Engineers to help develop cutting-edge and innovative artificial intelligence and machine learning models.
• Make recommendations on the selection of machine learning models.
• Drive the accuracy of the given ML models to the next level.
• Experience in developing visualisations.
• Good exposure to exploratory data analysis.
• Strong experience in statistics and ML algorithms.

Minimum qualifications:
• 2+ years of relevant work experience in ML and advanced data analytics (e.g., as a Machine Learning Specialist or Data Scientist).
• Strong experience using machine learning and artificial intelligence frameworks such as TensorFlow, scikit-learn, and Keras with Python.
• Proficient in Python, R, or SAS programming.
• Understanding of cloud platforms such as GCP, AWS, or others.

Preferred qualifications:
• Work experience in building data pipelines to ingest, cleanse, and transform data.
• Applied experience with machine learning on large datasets, and experience translating analysis results into business recommendations.
• Demonstrated skill in selecting the right statistical tools for a given data analysis problem.
• Demonstrated effective written and verbal communication skills.
• Demonstrated willingness both to teach others and to learn new techniques.

Work location: Bangalore
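For illustration only (not part of the original posting): a minimal sketch of the kind of workflow this role involves, training and evaluating a model in Python with scikit-learn, one of the frameworks named in the minimum qualifications above. The dataset file, feature columns, and target are hypothetical placeholders.

    # Hypothetical example: train and evaluate a regression model with scikit-learn.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    df = pd.read_csv("demand_history.csv")        # placeholder dataset
    X = df.drop(columns=["units_sold"])           # placeholder feature columns
    y = df["units_sold"]                          # placeholder target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
    model.fit(X_train, y_train)

    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

In practice the same pattern extends to TensorFlow or Keras models; the posting leaves the choice of framework open.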

About Pluto Seven Business Solutions Pvt Ltd

Founded: 2017
Type:
Size: 20-100
Stage: Raised funding
About

Pluto7 is a technology consulting company providing services on Google Cloud Platform. We are also one of Google Cloud's Top 5 Breakthrough Partners for Machine Learning and Artificial Intelligence.

 

Our core offerings are Marketing ML, Demand ML, Supply ML, Value Chain ML, and Pricing ML. We work magic with Google Cloud Platform, and that allows our customers to see what they could never see before.

Connect with the team
Sindhu Narayan
Jhanani Parthasarathy
Deepthi Yarlagadda
Nagalenoj H
Arkanil Dutta
Arsh .
Ankita Shukla
Akhilesh J

Similar jobs

Acuity Knowledge Partners
Posted by Gangadhar S
Bengaluru (Bangalore)
4 - 9 yrs
₹16L - ₹40L / yr
Python
Amazon Web Services (AWS)
CI/CD
MongoDB
MLOps
+1 more

Job Responsibilities:

1. Develop and debug applications using Python.

2. Improve code quality and code coverage for existing and new programs.

3. Deploy and integrate machine learning models.

4. Test and validate the deployments.

5. Support the MLOps function.


Technical Skills

1. Graduate in Engineering or Technology with strong academic credentials.

2. 4 to 8 years of experience as a Python developer.

3. Excellent understanding of SDLC processes.

4. Strong knowledge of unit testing and code quality improvement.

5. Cloud-based deployment and integration of applications/microservices.

6. Experience with NoSQL databases such as MongoDB and Cassandra.

7. Strong applied statistics skills.

8. Knowledge of creating CI/CD pipelines and touchless deployment.

9. Knowledge of APIs and data engineering techniques.

10. Experience with AWS.

11. Knowledge of Machine Learning and Large Language Models.
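The responsibilities above include deploying and integrating machine learning models. As a hedged, illustrative sketch (FastAPI, the model file, and the feature names are assumptions, not requirements from this listing), a pre-trained model could be exposed over HTTP like this:

    # Hypothetical sketch: serve a pre-trained model as a small HTTP API with FastAPI.
    import joblib
    import pandas as pd
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")   # assumed pre-trained artifact

    class Features(BaseModel):
        feature_a: float                  # placeholder feature names
        feature_b: float

    @app.post("/predict")
    def predict(payload: Features) -> dict:
        row = pd.DataFrame([{"feature_a": payload.feature_a,
                             "feature_b": payload.feature_b}])
        return {"prediction": float(model.predict(row)[0])}

In the spirit of points 8 and 10 above, such a service would typically be containerised and rolled out to AWS through a CI/CD pipeline.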


Nice to Have

1. Exposure to the financial research domain

2. Experience with JIRA and Confluence

3. Understanding of Scrum and Agile methodologies

4. Experience with data visualization tools such as Grafana, ggplot, etc.

Red.Health
Posted by Mayur Bellapu
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Job Description: Data Engineer

We are looking for a curious Data Engineer to join our extremely fast-growing Tech Team at StanPlus.

 

About RED.Health (Formerly Stanplus Technologies)

Get to know the team:

Join our team and help us build the world’s fastest and most reliable emergency response system using cutting-edge technology.

Because every second counts in an emergency, we are building systems and flows with four nines (99.99%) of reliability to ensure that our technology is always there when people need it the most. We are looking for distributed systems experts who can help us perfect the architecture behind our key design principles: scalability, reliability, programmability, and resiliency. Our system features a powerful dispatch engine that connects emergency service providers with patients in real time.

Key Responsibilities

●     Build Data ETL Pipelines

●     Develop data set processes

●     Strong analytic skills related to working with unstructured datasets

●     Evaluate business needs and objectives

●     Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery

●     Interpret trends and patterns

●     Work with data and analytics experts to strive for greater functionality in our data system

●     Build algorithms and prototypes

●     Explore ways to enhance data quality and reliability

●     Work with the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

●     Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

 

Key Requirements

●     At least 3 years of proven experience as a data engineer, software developer, or in a similar role.

●     Bachelor's / Master’s degree in data engineering, big data analytics, computer engineering, or related field.

●     Experience with big data tools: Hadoop, Spark, Kafka, etc.

●     Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

●     Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

●     Experience with Azure, AWS cloud services: EC2, EMR, RDS, Redshift

●     Experience with BigQuery

●     Experience with stream-processing systems: Storm, Spark-Streaming, etc.

●     Experience with languages: Python, Java, C++, Scala, SQL, R, etc.

●     Good hands-on experience with Hive and Presto.
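Purely as an illustrative sketch of the ETL pipeline work this role describes (the paths, column names, and aggregation below are assumptions, not details from the posting), a small PySpark batch job might look like the following:

    # Hypothetical batch ETL: read raw events, clean them, aggregate, write curated output.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dispatch_events_etl").getOrCreate()

    raw = spark.read.json("s3a://raw-bucket/dispatch_events/")   # placeholder source

    cleaned = (
        raw.dropDuplicates(["event_id"])                         # placeholder key column
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_ts").isNotNull())
    )

    daily_counts = cleaned.groupBy(F.to_date("event_ts").alias("event_date")).count()

    daily_counts.write.mode("overwrite").parquet("s3a://curated-bucket/daily_event_counts/")

In production, a job like this would typically be scheduled by a workflow manager such as Airflow, one of the tools listed above.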

 


A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

7 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, with orchestration using Airflow.


Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latencies.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python, Pyspark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams, so good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
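For orientation only, here is a minimal AWS Glue job in PySpark along the lines described above; the Glue Data Catalog database, table, and S3 paths are placeholders rather than details from this posting.

    # Hypothetical Glue ETL job: read a catalogued table, filter it, write Parquet to S3.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (placeholder database/table names).
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="orders"
    )

    # Use the underlying Spark DataFrame for the transformation.
    completed = orders.toDF().filter("order_status = 'COMPLETE'")

    completed.write.mode("overwrite").parquet("s3://curated-bucket/orders_complete/")
    job.commit()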


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Cloudera
Posted by Sushmitha Rengarajan
Bengaluru (Bangalore)
3 - 20 yrs
₹1L - ₹44L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
Data Structures
+7 more

 

The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group is targeting the biggest enterprises wanting to utilize Cloudera's services in private and public cloud environments. Our product is built on open-source technologies like Hive, Impala, Hadoop, Kudu, Spark, and many more, providing unlimited learning opportunities.

A Day in the Life

Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider of enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry, who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.

You will manage product development for our CDP components and develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.

Opportunity

Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects both upstream and downstream. If you are curious about the project and code quality, you can check out the Apache Hive project and its code; you can even start development before you join. This is one of the beauties of the OSS world.

 

Responsibilities:

•Build robust and scalable data infrastructure software

•Design and create services and system architecture for your projects

•Improve code quality through writing unit tests, automation, and code reviews

•The candidate would write Java code and/or build several services in the Cloudera Data Warehouse.

•Work with a team of engineers who review each other's code and designs and hold each other to an extremely high bar for the quality of code and designs.

•Understand the basics of Kubernetes.

•Build out the production and test infrastructure.

•Develop automation frameworks to reproduce issues and prevent regressions.

•Work closely with other developers providing services to our system.

•Help to analyze and to understand how customers use the product and improve it where necessary. 

Qualifications:

•Deep familiarity with the Java programming language.

•Hands-on experience with distributed systems.

•Knowledge of database concepts, RDBMS internals.

•Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus. 

•Has experience working in a distributed team.

•Has 3+ years of experience in software development.

 

Delhi, Gurugram, Noida
5 - 8 yrs
₹12L - ₹15L / yr
Data Analytics
Quantitative analyst
data intelligence
Analytical Skills

Supporting today's data-driven business world, our client acts as a full-stack data intelligence platform which leverages granular and deep data from various sources, thus helping decision-makers at the executive level. Their solutions include supply chain optimization, building footprints, tracking construction hotspots, real estate, and lots more.


Their work embeds geospatial analytics, location intelligence and predictive modelling in the foundations of economic modelling and evaluation theory to build data intelligence layers for their clients which include governments, multilateral institutions, and private organizations.

 

Headquartered in New Delhi, our client works as a team of economists, data scientists, geo-spatial analysts, etc. Their decision-support system includes Big Data, predictive modeling, forecasting, socio-economic datasets, and much more.


As a Senior Manager – Data Intelligence, you will be responsible for contributing to all stages of projects: conceptualizing, preparing work plans, overseeing analytical work, driving teams to meet targets, and ensuring quality standards.

What you will do:

  • Thoroughly understanding the data processing pipeline and troubleshooting/problem-solving with the technical team
  • Acting as the SPOC for client communications across the portfolio of projects undertaken by the organization
  • Contributing to different aspects of organizational growth: team building, process building, strategy, and business development

 

Desired Candidate Profile

What you need to have:

  • Post-graduate degree in a relevant subject such as Economics, Engineering, Quantitative Social Sciences, Management, etc.
  • At least 5 years of relevant work experience
  • Vast experience in managing multiple client projects
  • Strong data/quantitative analysis skills are a must
  • Prior experience working in data analytics teams
  • Credible experience with data platforms and languages
Bengaluru (Bangalore), Mumbai, Delhi
5 - 8 yrs
₹30L - ₹40L / yr
Data Science
Data Scientist
  • You hold an MS/Ph.D. degree in a STEM domain and 5+ years in a relevant position
  • You share your ideas and continuously improve yourself and the team around you.
  • Experienced in building and scaling data teams across multiple locations and domains.
  • You have a good understanding of evolving an organization’s culture based on analytics and data insights
  • Natural and comfortable leader, have excellent problem-solving, organizational, and analytical skills
  • Passionate about data improving business and engineering practices like continuous delivery, traceability, and observability
  • Strong communication skills, high integrity, and great attention to detail

You’ll get to work with:

  • Consumer-facing, as well as core platform, finance, and distribution business units
  • Marketing and product teams, through to our engineering teams
  • Modern infrastructure (Kubernetes, AWS, GCP)

What we offer

  • We offer you a chance to be part of a truly amazing journey in a company that sets very high targets and works hard to achieve them. You will be able to work with smart, motivated, and engaged co-workers from all over the world, in an intense and very energetic environment. This leads to you having a tangible impact on the way that we operate and expand our business.

Some of the highlights of the package include:

  • Strong technical culture of continuous innovation and improvement
  • Chance to become a shareholder of Gelato!
  • Flexible festive holidays, swap days off according to your values and beliefs.
  • Work at one of our hub city offices or even remotely
  • And much more!

 

MNC
Agency job via Fragma Data Systems by Priyanka U
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
PySpark
Python
Spark
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark 
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement Projects based on functional specifications.

Must-Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL.
• Good experience with SQL databases; able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations.
• Good at ETL architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills.
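As a rough, illustrative sketch of the must-have skills above (DataFrame core functions, Spark SQL, and partition tuning), with the table path and columns invented for the example:

    # Hypothetical example: DataFrame API, Spark SQL, and a repartition step before writing.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_aggregation").getOrCreate()

    orders = spark.read.parquet("s3a://data-lake/orders/")       # placeholder path

    # DataFrame core functions: per-customer totals.
    totals = orders.groupBy("customer_id").agg(F.sum("amount").alias("total_amount"))

    # The same aggregation expressed in Spark SQL.
    orders.createOrReplaceTempView("orders")
    top_customers = spark.sql(
        "SELECT customer_id, SUM(amount) AS total_amount "
        "FROM orders GROUP BY customer_id ORDER BY total_amount DESC LIMIT 10"
    )
    top_customers.show()

    # Partition tuning before writing, so downstream consumers read evenly sized files.
    totals_repartitioned = totals.repartition(64, "customer_id")
    totals_repartitioned.write.mode("overwrite").parquet("s3a://data-lake/customer_totals/")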
Dataweave Pvt Ltd
Posted by Megha M
Bengaluru (Bangalore)
0 - 1 yrs
Best in industry
Data engineering
Internship
Python
Looking for candidates who are good at coding, scraping, and problem-solving.
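For context, a tiny illustrative scraping snippet of the kind this internship involves; the URL, libraries (requests and BeautifulSoup), and CSS selectors are assumptions, not requirements stated in the posting.

    # Hypothetical scraper: fetch a listing page and print product names and prices.
    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/products", timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    for item in soup.select("div.product"):          # placeholder CSS selector
        name = item.select_one("h2")
        price = item.select_one("span.price")
        if name and price:
            print(name.get_text(strip=True), price.get_text(strip=True))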
Streetmark
Agency job
via STREETMARK Info Solutions by Mohan Guttula
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr
SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
+3 more

Hi All,

We are hiring a Data Engineer for one of our clients for the Bangalore and Chennai locations.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

PowerShell/VBScript/Python

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-v

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and defender updates

 

Thanks,
Mohan.G

Artivatic
Posted by Layak Singh
Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹12L / yr
OpenCV
Machine Learning (ML)
Deep Learning
Python
Artificial Intelligence (AI)
+1 more
About Artivatic: Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products and solutions for finance, healthcare, and insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits using alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence improving the way they do business more intelligently and seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance, and more. We have raised US $300K earlier, built products successfully, and done a few PoCs successfully with some top enterprises in the insurance, banking, and health sectors. We are currently four months away from generating continuous revenue.

Skills: We at Artivatic are seeking a passionate, talented, and research-focused computer engineer with a strong machine learning and computer vision background to help build industry-leading technology, with a focus on document text extraction and parsing using OCR across different languages.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Computer Vision, or a related field, with specialization in image processing or machine learning.
- Research experience in deep learning models for image processing or OCR-related fields is preferred.
- A publication record in deep learning models for computer vision conferences/journals is a plus.

Required Skills:
- Excellent skills developing in Python in a Linux environment. Programming skills with multi-threaded GPU CUDA computing and API solutions.
- Experience applying machine learning and computer vision principles to real-world data, and experience working with scanned and documented images.
- Good knowledge of computer science, math, and statistics fundamentals (algorithms and data structures, meshing, sampling theory, linear algebra, etc.).
- Knowledge of data science technologies such as Python, Pandas, SciPy, NumPy, matplotlib, etc.
- Broad computer vision knowledge: construction, feature detection, segmentation, classification; machine/deep learning: algorithm evaluation, preparation, analysis, modeling, and execution.
- Familiarity with OpenCV, Dlib, YOLO, Capsule Networks, or similar, and with open-source AR platforms and products.
- Strong problem-solving and logical skills.
- A go-getter attitude with a willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in document and text extraction.
- Image recognition, object identification, and visual recognition.
- Working closely with R&D and machine learning engineers, implementing algorithms that power user- and developer-facing products.
- Being responsible for measuring and optimizing the quality of your algorithms.

Experience: 3+ years
Location: Sony World Signal, Koramangala 4th Block, Bangalore
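As a hedged illustration of the OCR-oriented image processing work described above (the file names and parameters are placeholders, not Artivatic's actual pipeline):

    # Hypothetical preprocessing step for a scanned document before OCR:
    # greyscale, denoise, and binarize with Otsu thresholding using OpenCV.
    import cv2

    image = cv2.imread("scanned_page.png")                 # placeholder input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 3)                         # reduce scanner noise
    _, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("scanned_page_binarized.png", binarized)

A cleaned, binarized page like this is what an OCR engine or text-extraction model would then consume.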