Data Scientist

at Public Vibe

Posted by Dhaneesha Dominic
Hyderabad
1 - 3 yrs
₹1L - ₹3L / yr
Full time
Skills
Java
Data Science
Python
Natural Language Processing (NLP)
Scala
Hadoop
Spark
Kafka
Hi candidates, greetings from Publicvibe! We are hiring NLP Engineers / Data Scientists with 0.6 to 2.5 years of experience for our Hyderabad location. If you are looking for new opportunities or a job change, please reach out to us. Regards, Dhaneesha Dominic.

About Public Vibe

PublicVibe - Local news, jobs, deals and events from your locality
Founded
2016
Type
Product
Size
20-100 employees
Stage
Profitable

Similar jobs

Senior Machine Learning Engineer

at Docsumo

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
OCR
Remote only
4 - 6 yrs
₹15L - ₹30L / yr

About Us :

Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.

 

As a Senior Machine Learning Engineer, you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.

 

Responsibilities :

  • You will be designing and building systems that help Docsumo process visual data, i.e., PDFs and images of documents.
  • You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
  • You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
  • Should have hands-on experience applying advanced statistical learning techniques to different types of data.
  • Should be able to design, build and work with RESTful web services in JSON and XML formats (Flask preferred); see the sketch after this list.
  • Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
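
For illustration only, here is a minimal sketch of the kind of Flask JSON endpoint this role involves; the route, payload fields, and port are hypothetical placeholders, not Docsumo's actual API.

    # Minimal sketch (assumptions: hypothetical /extract route and payload fields)
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/extract", methods=["POST"])
    def extract():
        payload = request.get_json(force=True)   # e.g. {"document_url": "..."}
        # ...document parsing / model inference would happen here...
        return jsonify({"status": "ok", "fields": {}, "source": payload.get("document_url")})

    if __name__ == "__main__":
        app.run(port=5000)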

 

Skills / Requirements :

  • Minimum 3+ years experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
  • Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
  • Working with OpenCV, TensorFlow and Keras
  • Working with Python: NumPy, Scikit-learn, Matplotlib, Pandas
  • Familiarity with Version Control tools such as Git
  • Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
  • Must be self-motivated, flexible, collaborative, with an eagerness to learn
 
Job posted by
Vaidehi Tipnis

Architect - Analytics / K8s

at Product Development

Agency job
via Purple Hirez
Analytics
Data Analytics
Kubernetes
PySpark
Python
kubeflow
Hyderabad
12 - 20 yrs
₹15L - ₹50L / yr

Job Description

We are looking for an experienced engineer with superb technical skills who will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, and scalable software solutions.

Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.

 

Skills

  • Bachelor's/Master's/PhD in CS or equivalent industry experience
  • Demonstrated expertise in building and shipping cloud-native applications
  • 5+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases such as Druid or equivalents such as Hive
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure or GCP, especially EKS and Managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience highly desirable. Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools, such as Git
  • Familiarity with continuous integration, e.g. Jenkins

Responsibilities

  • Architect, design and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid (see the sketch after this list)
  • Create custom Operators for Kubernetes and Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist in DevOps operations
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
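
For illustration only, a minimal sketch of a PySpark Structured Streaming job that reads from Kafka; the broker address, topic name, and output paths are hypothetical, and the job assumes the spark-sql-kafka connector is available on the cluster.

    # Minimal sketch (assumptions: hypothetical broker, topic and paths;
    # requires the spark-sql-kafka package on the classpath)
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("telemetry-pipeline").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load()
              .select(col("value").cast("string").alias("raw")))

    (events.writeStream
           .format("parquet")
           .option("path", "/data/events")
           .option("checkpointLocation", "/data/checkpoints/events")
           .start()
           .awaitTermination())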
Read more
Job posted by
Aditya K

ML Ops Engineer

at Top Management Consulting Company

DevOps
Microsoft Windows Azure
gitlab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Docker
Kubernetes
Jenkins
GitHub
Git
Python
MySQL
PostgreSQL
SQL server
Oracle
Terraform
argo
airflow
kubeflow
Machine Learning (ML)
Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
Greetings!!

We are looking for a technically driven MLOps Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on Python 3 coding skills (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts); a small pytest-style sketch follows this list
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
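
For illustration only, a small pytest-style sketch of the kind of automated testing expected around Python APIs; the predict() handler below is a hypothetical stand-in, not any client's real code.

    # Minimal sketch (assumption: predict() is a hypothetical handler under test)
    import pytest

    def predict(payload: dict) -> dict:
        # stand-in for a real model-serving endpoint handler
        if "features" not in payload:
            raise ValueError("missing features")
        return {"prediction": sum(payload["features"])}

    def test_predict_returns_prediction():
        assert predict({"features": [1, 2, 3]}) == {"prediction": 6}

    def test_predict_rejects_bad_payload():
        with pytest.raises(ValueError):
            predict({})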
Job posted by
Naveed Mohd
SAP ABAP
SAP HANA
RPAS
Machine Learning (ML)
Python
Javascript
Eclipse (IDE)
GitHub
Jenkins
Tableau
PowerBI
SAP
Remote, Mumbai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Pune
4.5 - 12 yrs
₹4L - ₹18L / yr
JD: SAP ABAP S/4HANA Consultant

  • Design thinking to really understand the business problem
  • Understanding new ways to deliver (agile, DT)
  • Being able to do a functional design across S/4HANA and SCP. An understanding of the possibilities around automation/RPA (which should include UiPath, Blue Prism, Contextor) and how these can be identified and embedded in business processes
  • Following on from this, the same is true for AI and ML: what is available in the SAP standard, how these can be enhanced/developed further, and how these technologies can be embedded in the business process. It is not enough to understand only the standard process, or only the AI and ML components; we will need a new type of hybrid SAP practitioner.
Job posted by
Sanjay Biswakarma
Spark
Apache Kafka
PySpark
Internet of Things (IOT)
Real time media streaming
Remote, Bengaluru (Bangalore)
2 - 6 yrs
₹6L - ₹15L / yr

JD for IoT Data Engineer:

 

The role requires experience in Azure core technologies: IoT Hub / Event Hub, Stream Analytics, IoT Central, Azure Data Lake Storage, Azure Cosmos DB, Azure Data Factory, Azure SQL Database, Azure HDInsight / Databricks, and SQL Data Warehouse.

 

You Have:

  • Minimum 2 years of software development experience
  • Minimum 2 years of experience in IoT/streaming data pipelines solution development
  • Bachelor's and/or Master’s degree in computer science
  • Strong Consulting skills in data management including data governance, data quality, security, data integration, processing, and provisioning
  • Delivered data management projects with real-time/near real-time data insights delivery on Azure Cloud
  • Translated complex analytical requirements into the technical design including data models, ETLs, and Dashboards / Reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
  • Successfully delivered large scale IOT data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
  • Experience in handling telemetry data with Spark Streaming, Kafka, Flink, Scala, PySpark, and Spark SQL
  • Hands-on experience with containers and Docker
  • Exposure to streaming protocols like MQTT and AMQP (a minimal MQTT sketch follows this list)
  • Knowledge of OT network protocols like OPC UA, CAN Bus, and similar protocols
  • Strong knowledge of continuous integration, static code analysis, and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
  • Must have excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platforms adoption across the enterprise
  • Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
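
For illustration only, a minimal MQTT telemetry subscriber sketch assuming the paho-mqtt 1.x callback API; the broker host, port, and topic are hypothetical placeholders.

    # Minimal sketch (assumptions: paho-mqtt 1.x client API, hypothetical broker/topic)
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)   # telemetry payload assumed to be JSON
        print(msg.topic, reading)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("plant/+/telemetry")
    client.loop_forever()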
     

Roles & Responsibilities
 

You Will:

  • Translate functional requirements into technical design
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core Azure services needed to fulfill the technical design
  • Design, Develop and Deliver data integration interfaces in ADF and Azure Databricks
  • Design, Develop and Deliver data provisioning interfaces to fulfill consumption needs
  • Deliver data models on the Azure platform, whether on Azure Cosmos DB, SQL DW / Synapse, or SQL
  • Advise clients on ML Engineering and deploying ML Ops at Scale on AKS
  • Automate core activities to minimize the delivery lead times and improve the overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Deploy Azure DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts

 

Job posted by
Priyanka U
Scala
Spark
Data Warehouse (DWH)
Business Intelligence (BI)
Apache Spark
SQL
Azure
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹17L / yr
Dear All,
We are looking for candidates with 3 - 6 years of BI/DW experience and expertise in Spark, Scala, SQL and Azure.
An Azure background is required.
     * Spark hands-on: Must have
     * Scala hands-on: Must have
     * SQL expertise: Expert
     * Azure background: Must have
     * Python hands-on: Good to have
     * ADF, Databricks: Good to have
     * Should be able to communicate effectively and deliver technology implementations end to end
We are looking for candidates who can join within 15 to 30 days or are available immediately.


Regards
Gayatri P
Fragma Data Systems
Job posted by
Gayatri Parhi

Data Scientist 2

at CommerceIQ

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr

CommerceIQ is Hiring Data Scientist (3-5 yrs)

 

At CommerceIQ, we are building the world’s most sophisticated E-commerce Channel Optimization software to help brands leverage Machine Learning, Analytics and Automation to grow their E-commerce business on all channels, globally.

Using CommerceIQ as a single source of truth, customers have driven 40% increase in incremental sales, 20% improvement in profitability and 32% reduction in out of stock rates on Amazon.

 

What You’ll Be Doing

As a Senior Data Scientist, you will work closely with the Engineering, Product, and Operations teams to build state-of-the-art ML-based solutions for B2B SaaS products. This entails not only leveraging advanced techniques for prediction, time-series forecasting, topic modelling, and optimisation, but also a deep understanding of the business and product (a small illustrative sketch follows the list below).

  • Apply excellent problem solving skills to deconstruct and formulate solutions from first-principles
  • Work on data science roadmap and build the core engine of our flagship CommerceIQ product
  • Collaborate with product and engineering to design product strategy, identify key metrics to drive and support with proof of concept
  • Perform rapid prototyping of experimental solutions and develop robust, sustainable and scalable production systems
  • Work with large-scale e-commerce data from the biggest brands on Amazon
  • Apply out-of-the-box, advanced algorithms to complex problems in real-time systems
  • Drive productization of techniques to be made available to a wide range of customers
  • You would be working with and mentoring fellow team members on the owned charter
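
For illustration only, a toy topic-modelling sketch using scikit-learn's LatentDirichletAllocation; the sample documents and parameters are placeholders, not CommerceIQ's actual pipeline.

    # Minimal sketch (assumption: toy review snippets, not real customer data)
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["great battery life", "battery drains fast", "fast shipping", "late delivery"]

    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    print(lda.transform(counts))   # per-document topic mixtures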

What we are looking for -

  • Bachelor’s or Masters in Computer Science or Maths/Stats from a reputed college with 4+ years of experience in solving data science problems that have driven value to customers
  • Good depth and breadth in machine learning (theory and practice), optimization methods, data mining, statistics and linear algebra. Experience in NLP would be an advantage
  • Hands-on programming skills and ability to write modular and scalable code in Python/R. Knowledge of SQL is required
  • Familiarity with distributed computing architecture like Spark, Map-Reduce paradigm and Hadoop will be an added advantage
  • Strong spoken and written communication skills, able to explain complex ideas in a simple, intuitive manner, write/maintain good technical documentation on projects
  • Experience with building ML data products in an engineering organization interfacing with other teams and departments to deliver impact
  • We are looking for candidates who are curious self-starters and who obsess over customer problems to deliver maximum value to them

Job Type: Full-time

Experience:

  • Data Scientist: 3 years (Required)

Application Question:

  • Looking for product-based industry experience from Tier 1 / Tier 2 colleges (NIT, BIT, IIT, IIIT, BITS; strong profiles)
Job posted by
Abhijit Ravuri

Data Engineer

at Streetmark

SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
Amazon Web Services (AWS)
Python
Microsoft App-V
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr

Hi All,

We are hiring a Data Engineer for one of our clients, for Bangalore & Chennai locations.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

PowerShell / VBScript / Python

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-V

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and defender updates

 

Thanks,
Mohan.G

Job posted by
Mohan Guttula

Analyst - Business Analytics

at LatentView Analytics

Founded 2006  •  Products & Services  •  100-1000 employees  •  Profitable
Business Intelligence (BI)
Analytics
SQL server
Data Visualization
Tableau
Business-IT alignment
Communication Skills
Science
Problem solving
Python
Looker
EDX
Chennai
1 - 4 yrs
₹2L - ₹10L / yr
Title: Analyst - Business Analytics
Experience: 1 - 4 Years
Location: Chennai
Open Positions: 17

Job Description:

Roles & Responsibilities:
  • Designing and implementing analytical projects that drive business goals and decisions, leveraging structured and unstructured data.
  • Generating a compelling story from insights and trends in a complex data environment.
  • Working shoulder-to-shoulder with business partners to come up with creative approaches to solve the business problem.
  • Creating dashboards for business heads by exploring available data assets.

Qualifications:
  • Overall 1+ years of Business Analytics experience with strong communication skills.
  • Bachelor's or Master's degree in computer science is preferred.
  • Excellent problem-solving and client-orientation skills.

Skills Required:
  • Ability to program in advanced SQL is a must.
  • Hands-on experience in modeling tools such as R or Python.
  • Experience in visualization tools such as Power BI, Tableau, Looker, etc., would be a big plus.
  • Analytics certifications from recognized platforms - Udemy, Coursera, edX, etc. - would be a plus.
Job posted by
Kannikanti madhuri

Senior Software Engineer

at LimeTray

Founded 2013  •  Product  •  100-500 employees  •  Profitable
Machine Learning (ML)
Python
Cassandra
MySQL
Apache Kafka
RabbitMQ
Java
NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹18L / yr
Requirements:
  • Minimum 4 years of work experience in building, managing and maintaining analytics applications
  • B.Tech/BE in CS/IT from Tier 1/2 institutes
  • Strong fundamentals of data structures and algorithms
  • Good analytical & problem-solving skills
  • Strong hands-on experience in Python
  • In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
  • Experience in building data pipelines & real-time analytics systems
  • Experience in SQL (MySQL) & NoSQL (Mongo/Cassandra) databases is a plus
  • Understanding of Service-Oriented Architecture
  • Delivered high-quality work with a significant contribution
  • Expert in Git, unit tests, technical documentation and other development best practices
  • Experience in handling small teams
Job posted by
Tanika Monga