Similar jobs
Founded 2018  •  Product  •  100-500 employees  •  Profitable
Natural Language Processing (NLP)
Python
PyTorch
TensorFlow
Bengaluru (Bangalore)
2 - 7 yrs
₹12L - ₹30L / yr
Hypersonix.ai is disrupting the Analytics space with AI, ML and NLP capabilities that drive real-time business insights through a conversational user experience, enabling decisioning at the speed of thought. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers across industry verticals.

We are looking for a passionate NLP engineer to join our core product team. You will work closely with a team of talented engineers, product managers, designers and data scientists to bring innovative ideas to life.

Requirements:

1. Bachelor's/Master's degree in Computer Science
2. Must have published at least one scientific paper on NLP at a recognized conference or in a journal.
- Preference if the sub-domain of publication is in conversational interfaces, such as dialogue systems or natural language generation.
3. Prior experience in research labs (academic or corporate)
4. 1+ years of experience in software development
5. Experience with machine learning tooling such as Python, PyTorch, TensorFlow/Keras, scikit-learn, etc.

Job Description:

1. Must be up-to-date with state-of-the-art research in NLP/conversational interfaces/dialogue management/natural language generation.
2. Must be able to evaluate multiple research approaches to a particular problem, perform extensive literature survey and choose the right approach.
3. Must be able to implement state-of-the-art algorithms from papers published at leading conferences.
4. Must be able to develop PoCs from state-of-the-art research papers, including implementing algorithms, identifying relevant datasets, collating datasets if not readily available, identifying biases in datasets, and handling training, validation and testing (a minimal sketch follows this list).
5. Must follow standard software engineering practices when developing PoCs.
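By way of illustration, here is a minimal sketch of the PoC skeleton described in point 4, assuming scikit-learn and pandas; the dataset file and the "text"/"label" column names are hypothetical placeholders, not part of this posting.

    # Minimal PoC skeleton: stratified train/validation/test split plus a baseline text classifier.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    df = pd.read_csv("dialogue_intents.csv")  # hypothetical dataset
    train, test = train_test_split(df, test_size=0.2, stratify=df["label"], random_state=42)
    train, val = train_test_split(train, test_size=0.1, stratify=train["label"], random_state=42)

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000).fit(vectorizer.fit_transform(train["text"]), train["label"])

    # Tune against the validation split; report on the held-out test split once the approach is fixed.
    print(classification_report(val["label"], clf.predict(vectorizer.transform(val["text"]))))
    print(classification_report(test["label"], clf.predict(vectorizer.transform(test["text"]))))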
Job posted by
Gowshini Maheswaran
Founded 1987  •  Product  •  500-1000 employees  •  Profitable
Data Science
Python
SQL
R Language
Data mining
Remote, Mumbai
3 - 7 yrs
₹5L - ₹15L / yr

Role:

  • Understand and translate statistics and analytics to address business problems
  • Help with data pull and data preparation, the first step in machine learning
  • Slice and dice data to extract interesting insights (a brief sketch follows this list)
  • Model development for better customer engagement and retention
  • Hands-on experience with relevant tools such as SQL (expert level), Excel and R/Python
  • Work on strategy development to increase business revenue
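As a small illustration of the slicing mentioned above, the sketch below uses pandas; the file and column names are hypothetical placeholders.

    # Slice a (hypothetical) transactions extract by month and customer segment.
    import pandas as pd

    df = pd.read_csv("transactions.csv", parse_dates=["order_date"])

    monthly = (
        df.assign(month=df["order_date"].dt.to_period("M"))
          .groupby(["month", "customer_segment"])
          .agg(revenue=("amount", "sum"), orders=("order_id", "nunique"))
    )
    # Surface the cuts most relevant to engagement and retention work.
    print(monthly.sort_values("revenue", ascending=False).head(10))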

 


Requirements:

  • Hands-on experience with relevant tools such as SQL (expert level), Excel and R/Python
  • Strong knowledge of statistics
  • Should be able to do data scraping and data mining
  • Self-driven, with the ability to deliver on ambiguous projects
  • An ability and interest in working in a fast-paced, ambiguous and rapidly changing environment
  • Should have worked on business projects for an organization, e.g. customer acquisition or customer retention
Job posted by
Vivek Manna
Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Artificial Intelligence (AI)
Deep Learning
Artificial Neural Network (ANN)
Python
Machine Learning (ML)
PyTorch
TensorFlow
CUDA
Keras
computer vision
CNN
Transfer Learning
Object Detection
Bengaluru (Bangalore)
3 - 7 yrs
₹7L - ₹20L / yr

The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering, etc.) and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep problem analysis, generating new ideas, and automating and optimizing key processes.
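To make the CNN customization concrete, here is a minimal transfer-learning sketch using PyTorch/torchvision (one of the frameworks listed); the class count and dummy batch are placeholders, not details from this posting.

    # Transfer learning: freeze a pretrained backbone and train a new classification head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False               # freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, 5) # 5 classes is a placeholder

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch.
    images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()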

 

Requirements:

  • 2 - 4 years of relevant experience in solving complex real-world problems at scale via deep learning, computer vision or AI
  • Python, cuDNN, TensorFlow/PyTorch/Keras (or similar deep learning frameworks).
  • CNNs, RNNs, transfer learning (for image classification, segmentation, object detection, etc.).
  • Image Processing techniques using OpenCV or other white-box image feature extraction algorithms.
  • End to end deployment of deep learning models.
Job posted by
Human Resources
Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
Data Warehouse (DWH)
Big Data
Hadoop
Pune
2 - 6 yrs
₹3L - ₹15L / yr

Datametica is looking for talented SQL engineers who will receive training and the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong in SQL development
  • Hands-on experience with at least one scripting language, preferably shell scripting
  • Development experience in Data warehouse projects

Opportunities:

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
Job posted by
Nikita Aher
Founded 1998  •  Services  •  100-1000 employees  •  Profitable
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Natural Language Processing (NLP)
TensorFlow
Remote only
4 - 9 yrs
₹4L - ₹20L / yr
Responsibilities:

- API Integration and Handling Big data

- Data Gathering, Accumulating, Processing, Mining, Analysis, Querying and Visualization

- Pre-Processing Text Data

- Model Error Analysis and Debugging

- Develop relational database

- Hyperparameter optimization, regularization and feature engineering (a brief sketch follows this list)

- Applying AI/ML techniques

- Collaborate

- Documenting Lessons Learnt
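As a brief, generic sketch of the hyperparameter optimization and regularization item above (scikit-learn on synthetic data; nothing here is specific to this role):

    # Grid-search the regularization strength C of a logistic regression pipeline.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # synthetic stand-in

    pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    search = GridSearchCV(
        pipeline,
        {"logisticregression__C": [0.01, 0.1, 1, 10]},  # C controls regularization strength
        cv=5,
        scoring="f1",
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))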

Requirements:

- 4 or more years of hands-on experience in a similar role

- SciKit Learn, NLTK, spaCy, Stanford CoreNLP

- TensorFlow

- Python

- Jupyter Notebook

- Multi-Lingual NLP

- Understanding of tools like Buffer and Hootsuite will be a plus

Details :

- Strong programming skills and data warehousing skills

- In-depth knowledge of computational frameworks

- Strong problem-solving skills with an emphasis on product development

- Experience using statistical computer languages (Python, R, etc.) to manage data and draw insights from large data sets

- Experience working with and creating data architecture diagrams

- Knowledge of a variety of ML techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks

- Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.

- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, and proper usage, etc.) and experience with applications

- Experience in visualizing/presenting data for stakeholders

- Excellent written and verbal communication skills for coordinating across teams

- Ready to learn and master new technologies and techniques

- Time management and multitasking skills
Job posted by
Matellio HR
Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Business Analysis
MS-Excel
SQL
Data Analytics
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹14L / yr

What we're looking for

Wingman is seeking a smart, outgoing and driven individual who has a passion for data analytics and a focus on creating an impact on business strategy with data-driven insights 🤝 📈

Who you will work with

This is a founder's office role where you will work directly with the founders, chief of staff and functional heads on various projects across the organisation.

Responsibilities

As part of a close-knit team, you will collaborate across teams to identify and fulfil the analytics needs of a fast-growing team 😍

This includes the following:

  • Be the owner of all things data and the one source of truth for key business metrics.
  • Analyse trends across user behaviour on the product and define what metrics signify customer success.
  • Own the data funnel from traffic to deal conversion and work closely with marketing & sales to improve conversion rates (a brief sketch follows this list).
  • Take up ambitious and in-depth analytics projects to support strategic initiatives.
  • Collaborate across functions to match business requirements to data insights.
  • Build a data-driven process for planning and implement it across teams.
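For illustration, a tiny pandas sketch of computing funnel conversion rates of the kind described above; the stage names and counts are made up, not Wingman data.

    # Step-to-step and top-of-funnel conversion rates for a traffic -> deal funnel.
    import pandas as pd

    funnel = pd.DataFrame({
        "stage": ["visits", "signups", "demos", "deals"],  # hypothetical stages
        "count": [12000, 1800, 420, 95],                   # hypothetical counts
    })
    funnel["conversion_from_prev"] = (funnel["count"] / funnel["count"].shift(1)).round(3)
    funnel["conversion_from_top"] = (funnel["count"] / funnel["count"].iloc[0]).round(3)
    print(funnel)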

Requirements

  • Minimum 1-3 years of experience in data analytics.
  • Must have worked in-house at any startup(s) or consulting firm for at least 1 year.
  • Must be curious to explore various aspects of a business and understand requirements well.
  • Must be in love with Excel, an expert at SQL and interested in data analysis techniques.
  • Good to have experience in R and Python.
  • Exceptional communication (written + verbal) and interpersonal skills are a must.
  • Has a strong bias for action and is a quick learner.
  • Remote at the moment, based out of Bangalore when office reopens.

What's in it for you

This is your front-row ticket to building and scaling a startup poised for incredible growth. ✌️

  • Opportunity to be involved on the ground floor of a growing startup, experiencing all facets of the business
  • Unique opportunity to work with a pre-Series A startup with significant capital to scale fast
  • Directly work with co-founders
  • Directly impact our company's brand and growth
  • Join a great workplace & culture (also recognized as one of the Top 25 places to work). You can get a behind-the-scenes look into our culture on our website.

About Us

Wingman is on a mission to make sales conversations better for sales and revenue teams everywhere. 🚀

We use the power of AI to analyze sales calls and emails, identify trends and give insights that help companies:

  • Understand the voice of customers at scale
  • Improve their sales processes
  • Provide personalized, data-backed and real-time sales coaching

Our customers are mostly sales teams of tech companies based in the US.

Wingman was founded in 2018 by Shruti, Murali and Srikar who have 10+ years of experience each in the tech and business worlds. We are part of the prestigious YCombinator accelerator in Silicon Valley (S19) and have since raised $2.3M in capital.

Want to know more about what it's like working with us? Get a sneak peek on our website.

Job posted by
Rabya Khan
At a client company that is into Computer Software (EC1)
Agency job
SQL
ETL
Snowflake
DWH
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Author data services using a variety of programming languages
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Snowflake Cloud Data Warehouse as well as SQL and Azure ‘big data’ technologies (a minimal loading sketch follows this list)
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Work in an Agile environment with Scrum teams.
  • Ensure data quality and help in achieving data governance.
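A minimal sketch of one way a load-then-transform (ELT) step into Snowflake could look, using the snowflake-connector-python package; the connection parameters, stage and table names are placeholders, not details of this engagement.

    # Load staged files into a raw table, then transform inside Snowflake (ELT).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",   # placeholder credentials
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    cur = conn.cursor()
    try:
        cur.execute(
            "COPY INTO raw_orders FROM @orders_stage "
            "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
        )
        cur.execute("""
            CREATE OR REPLACE TABLE analytics.curated.daily_orders AS
            SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
            FROM raw_orders
            GROUP BY order_date
        """)
    finally:
        cur.close()
        conn.close()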

Basic Qualifications

  • 3+ years of experience in a Data Engineer or Software Engineer role
  • Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience using the following software/tools:
  • Experience with “Snowflake Cloud Datawarehouse”
  • Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
  • Experience with data pipeline and workflow management tools
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Understanding of Datawarehouse (DWH) systems, and migration from DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • Experience supporting and working with cross-functional teams in a dynamic environment.
Job posted by
Fiona RKS
At a client company that is into Computer Software (EC1)
Agency job
Data Science
Computer Vision
Docker
Kubernetes
NLP Concepts
Machine Learning
Kube flow
Pyspark
Nosql
Bengaluru (Bangalore)
15 - 17 yrs
₹30L - ₹40L / yr
  • The opportunity to take on some of the world’s most meaningful challenges, helping customers achieve clean water, safe food, abundant energy, and healthy environments
  • The ability to make an impact and shape your career with a company that is passionate about growth
  • The support of an organization that believes it is vital to include and engage diverse people, perspectives, and ideas to achieve our best
  • Actively engage with internal business teams to understand their challenges and deliver robust, data-driven solutions.
  • Work alongside global counterparts to solve data-intensive problems using standard analytical frameworks and tools.
  • Be encouraged and expected to innovate and be creative in your data analysis, problem-solving, and presentation of solutions.
  • Network and collaborate with a broad range of internal business units to define and deliver joint solutions.
  • Work alongside customers to leverage cutting-edge technology (machine learning, streaming analytics, and ‘real’ big data) to creatively solve problems and disrupt existing business models.

Responsibilities

  • Work alongside IT & engineering groups to leverage cutting-edge data science and machine learning technology & methodologies to build algorithms, statistical models, and analytical solutions creatively and efficiently
  • Design, architect, and implement interactive dashboards and reports. Build compelling, clear, and powerful visualizations of data model performance and results.
  • Take responsibility for the model development life cycle. Set up the environment, design, code, test, repository management, and deployments as applicable and as required
  • Work alongside global counterparts to solve data-intensive problems using standard analytical frameworks and tools.
  • Work with global business teams to help develop the data strategy and advanced analytics pipeline
  • Actively engage, network, collaborate with internal business teams to understand their challenges and deliver robust, data-driven solutions.
  • Create, maintain and document a robust set of metrics to monitor day-to-day bug detection and long-term performance tracking.

 

Essential Skills

  • Multivariate techniques & predictive modeling – cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis (a brief clustering sketch follows this list)
  • Large scale data extraction/mining, data cleansing, diagnostics, preparation for Modeling
  • Working with large datasets (several Gigs), using cloud-hosted leading industry platforms or proprietary data science modeling/model deployment software
  • Proven ability at designing and building scalable data analytics solutions at an enterprise level and high-performance analytical models using Python, ML libraries, Jupyter/Anaconda
  • Understanding of significance testing, sampling, descriptive statistics & multivariate statistics
  • Hands-on work experience in at least two of these industry-leading data science platforms: Rapidminer, Knowledge Seeker, KNIME, RStudio, SAS Analytics Pro, SPSS Predictive Analytics
  • Experience in exploring data & writing analytics algorithms in Python/R, on data science workbench/platforms on AWS, Google Cloud Platform, Microsoft Azure cloud.
  • MS Azure Suite - Azure ML Studio, Azure Data Factory, Power BI/Power Apps
  • Hands-on experience building out reports on Visualization tools (any of) Tableau, Qlik, Power BI.
  • Hands-on experience working with different (industry-leading) databases: Oracle, SQL Server, DB2, MySQL, Teradata
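As a brief, generic illustration of the cluster-analysis item above (scikit-learn on synthetic data; not specific to this role):

    # Segmentation-style clustering: scale features, fit k-means, compare silhouette scores.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    X, _ = make_blobs(n_samples=1500, centers=4, n_features=6, random_state=0)  # synthetic stand-in
    X_scaled = StandardScaler().fit_transform(X)

    # Pick the cluster count k with the best silhouette score.
    for k in range(2, 7):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
        print(k, round(silhouette_score(X_scaled, labels), 3))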

 

Preferred Skills

  • Strong experience with machine learning algorithms and their deployment
  • Experience with web analytics, social media analytics, analytics application of cloud computing/mobility infra; working with parquet files.
  • Deep Learning, Computer Vision, NLP Concepts and frameworks.
  • Docker/Kubernetes, Kubeflow, PySpark, NoSQL
  • Application of data analytics to manufacturing industries or chemical industries 

 

Essential background of the candidate

  • 15+ years of corporate work experience with a minimum of 10 years of professional experience in data science, statistical modeling, data engineering, and predictive analytics assignments 
  • A Bachelor’s degree in Data Analytics, Math, Statistics, Computer Science, or related fields with an emphasis on data analysis
  • A strong understanding of data analytics best practices, standards, and guidelines; and an experience with applying the same to produce high visibility enterprise solutions/products 
  • A problem-solving mindset with the ability to understand business challenges and how to apply your analytics expertise to solve them.
  • Innovative and creative in data analysis, problem-solving, and presentation of solutions.
  • Strong communication, interpersonal and leadership skills
  • Ability to present complex mathematical solutions in a simple manner that most people will understand
  • Good verbal, writing & presentation skills
  • Ability to structure and deliver presentations clearly, crisply, and compellingly
  • Ability to champion recommendations and influence decisions at the senior management level
  • Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
  • Strong experience in handling multi-national client engagements
Job posted by
Manjunath Multirecruit
Founded 2017  •  Products & Services  •  100-1000 employees  •  Bootstrapped
R Programming
Python
Predictive modelling
Statistical Analysis
Data Analytics
SQL
Remote, Chennai, Bengaluru (Bangalore), Mumbai
3 - 6 yrs
₹12L - ₹20L / yr

Ganit Inc. is the fastest growing Data Science & AI company in Chennai.

The company was founded in 2017 by 3 industry experts, alumni of IITs/SPJIMR, each with 17+ years of experience in the field of analytics.

We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis-based analytics, discovery-based AI and IoT. Our solutions are a combination of customised services and a functional product suite.

We primarily operate as a US-based start-up, have clients across the US, Asia-Pacific and the Middle East, and have offices in the USA (New Jersey) and India (Chennai).

 

Having started with 3 people, the company is growing fast and now has 100+ employees.

 

1. What we expect from you

 

- Should possess a minimum of 2 years of experience in data analytics model development and deployment

- Skills relating to core Statistics & Mathematics

- A strong interest in working with numbers

- Ability to understand business domains across various sectors

- A natural passion for numbers, business, coding and visualisation

 

2. Necessary skill set:

 

- Proficient in R/Python, Advanced Excel, SQL

- Should have worked on Retail/FMCG/CPG projects, solving analytical problems in Sales/Marketing/Supply Chain functions

- Very good understanding of algorithms, mathematical models, statistical techniques and data mining, such as regression models, clustering/segmentation, time-series forecasting, decision trees/random forests, etc. (a brief sketch follows this section)

- Ability to choose the right model for the right data and translate that into code in R, Python or VBA (proven capabilities)

- Should have handled large datasets, with a thorough understanding of SQL

- Ability to lead a team of Data Analysts
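By way of illustration, a minimal random-forest sketch of the modelling item above (scikit-learn on synthetic data; the feature matrix is a stand-in, not a Ganit dataset):

    # Fit a random forest on a synthetic regression problem and report held-out error.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=3000, n_features=15, noise=10.0, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

    model = RandomForestRegressor(n_estimators=300, max_depth=8, random_state=1)
    model.fit(X_train, y_train)
    print("MAE:", round(mean_absolute_error(y_test, model.predict(X_test)), 2))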

 

3. Good to have skill set:

 

- Microsoft PowerBI / Tableau / Qlik View / Spotfire

 

4. Job Responsibilities:

 

- Translate business requirements into technical requirements

- Data extraction, preparation and transformation

- Identify, develop and implement statistical techniques and algorithms that address business challenges and add value to the organisation

- Create and implement data models

- Interact with clients for queries and delivery adoption

 

5. Screening Methodology

 

- Problem Solving round (Telephonic Conversation)

- Technical discussion round (Telephonic Conversation)

- Final fitment discussion (Video Round)

 

 

Job posted by
Kavitha J
Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Python
MySQL
Big Data
Google Cloud Storage
API
SQL Query Analyzer
Relational Database (RDBMS)
Agile/Scrum
Bengaluru (Bangalore)
- yrs
₹6L - ₹18L / yr
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the Retail, Manufacturing, Healthcare and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed and processed, to enable data-driven, insightful decisions.

Must-have skills:

  • Hands-on experience with database systems (structured and unstructured)
  • Programming in Python, R or SAS
  • Knowledge of and exposure to architecting solutions on cloud platforms such as GCP, AWS and Microsoft Azure
  • Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code (a brief sketch follows below)
  • Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures
  • Optimize data structures for efficient querying of those systems
  • Collaborate with internal and external data sources to ensure integrations are accurate, scalable and maintainable
  • Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning and database design
  • Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities
  • At least 2 years of experience in building applications, solutions and products based on analytics
  • Data extraction, data cleansing and transformation
  • Strong knowledge of REST APIs, HTTP servers and MVC architecture
  • Knowledge of continuous integration/continuous deployment

Preferred but not required:

  • Machine learning and deep learning experience
  • Certification on any cloud platform
  • Experience with data migration from on-prem to cloud environments
  • Exceptional analytical, quantitative, problem-solving and critical thinking skills
  • Excellent verbal and written communication skills

Work location: Bangalore
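A minimal sketch of the kind of pipeline step implied by the must-have skills, assuming the google-cloud-bigquery client library; the project, dataset and table names are placeholders, not Pluto7 systems.

    # Materialize a daily revenue summary table from a raw orders table in BigQuery.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-gcp-project")  # placeholder project; credentials from the environment

    query = """
        SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
        FROM `my-gcp-project.raw.orders`
        GROUP BY order_date
    """
    job_config = bigquery.QueryJobConfig(
        destination="my-gcp-project.analytics.daily_revenue",
        write_disposition="WRITE_TRUNCATE",
    )
    client.query(query, job_config=job_config).result()  # blocks until the job completes
    print("daily_revenue refreshed")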
Job posted by
Sindhu Narayan