Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹6L - ₹15L / yr, plus equity

Skills

Apache Kafka
Apache Spark
Python
Hadoop
Elastic Search
Kibana
Cisco Certified Network Associate (CCNA)

Job description

About Aptus Data Labs


Founded: 2014
Type: Products & Services
Size: 51-250 employees

Why apply to jobs via CutShort

Personalized job matches: no long forms. Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams: discover employers in your network. See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI: make your network count. We use AI to get you faster responses, recommendations and an unmatched user experience.
Matches delivered: 2,101,133
Network size: 3,712,187
Companies hiring: 6,212

Similar jobs

Data Engineer_1

Founded 2019
Location: Mumbai, Navi Mumbai
Experience: 3 - 8 years
Salary: ₹7L - ₹13L / yr

Responsibilities:
- Build data systems and pipelines using Apache Flink (or similar frameworks).
- Understand various raw data input formats, build consumers on Kafka/ksqlDB for them, and ingest large amounts of raw data into Flink and Spark.
- Conduct complex data analysis and report on results.
- Build aggregation streams and convert raw data into logical processing streams.
- Build algorithms to integrate multiple data sources and create a unified data model across all sources.
- Build a unified data model on both SQL and NoSQL databases to act as the data sink.
- Communicate designs effectively with the full-stack engineering team for development.
- Explore machine learning models that can be fitted on top of the data pipelines.

Mandatory qualifications and skills:
- Deep knowledge of the Scala and Java programming languages is mandatory.
- Strong background in streaming data frameworks (Apache Flink, Apache Spark) is mandatory.
- Good understanding of, and hands-on skills with, streaming messaging platforms such as Kafka.
- Familiarity with R, C and Python is an asset.
- Analytical mind and business acumen with strong math skills (e.g. statistics, algebra).
- Problem-solving aptitude.
- Excellent communication and presentation skills.
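To make the streaming-ingestion responsibilities above concrete, here is a minimal sketch in Python (the posting itself asks for Scala/Java with Flink or Spark, so treat this only as an illustration of the pattern): consume raw JSON events from a Kafka topic, normalize them, and maintain a simple aggregation stream. The topic name, broker address and field names are assumptions, not part of the job description.

```python
# Minimal sketch: consume raw JSON events from Kafka, normalize them,
# and maintain a per-source event count (a toy "aggregation stream").
# Assumes the kafka-python package and a locally reachable broker.
import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "raw-events",                          # hypothetical topic name
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts_by_source = Counter()

for message in consumer:
    event = message.value                      # already a dict after deserialization
    source = event.get("source", "unknown")    # illustrative field name
    counts_by_source[source] += 1
    if sum(counts_by_source.values()) % 1000 == 0:
        print(dict(counts_by_source))          # periodic report on the aggregation
```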

Job posted by Sonali Kamani

Senior Data Scientist (Health Metrics)

Founded 2016
Location: Remote only
Experience: 5 - 20 years
Salary: ₹10L - ₹30L / yr

Introduction
The Biostrap platform extracts many metrics related to health, sleep, and activity. Many algorithms are designed through research and are often based on scientific literature; in some cases they are augmented with, or entirely designed using, machine learning techniques. Biostrap is seeking a Data Scientist to design, develop, and implement algorithms to improve existing metrics and measure new ones.

Job Description
As a Data Scientist at Biostrap, you will take on projects to improve or develop algorithms to measure health metrics, including:
- Research: search the literature for starting points for the algorithm.
- Design: decide on the general idea of the algorithm, in particular whether to use machine learning, mathematical techniques, or something else.
- Implement: program the algorithm in Python, and help deploy it.
The algorithms and their implementation will have to be accurate, efficient, and well documented.

Requirements
- A Master's degree in a computational field, with a strong mathematical background.
- Strong knowledge of, and experience with, different machine learning techniques, including their theoretical background.
- Strong experience with Python.
- Experience with Keras/TensorFlow, and preferably also with RNNs.
- Experience with AWS or similar services for data pipelining and machine learning.
- Ability and drive to work independently on an open problem.
- Fluency in English.
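As a rough illustration of the Keras/TensorFlow and RNN experience the posting asks for, here is a minimal sketch that trains a small LSTM on synthetic sensor windows. The window size, channel count, layer sizes and labels are assumptions for demonstration; Biostrap's actual metrics and data are not described here.

```python
# Minimal sketch: a small Keras RNN that maps a window of raw sensor samples
# to a health-related label (binary here, e.g. "event" vs "no event").
# The data is synthetic; shapes, layer sizes and labels are illustrative only.
import numpy as np
import tensorflow as tf

WINDOW = 128   # samples per window (assumed)
CHANNELS = 3   # e.g. PPG plus accelerometer axes (assumed)

# Synthetic stand-in for labelled sensor windows.
x_train = np.random.randn(512, WINDOW, CHANNELS).astype("float32")
y_train = np.random.randint(0, 2, size=(512,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(32),                       # recurrent encoder over the window
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")  # probability of the event
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)
print(model.predict(x_train[:1]))
```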

Job posted by Reinier van Mourik

Senior Data Scientist

Founded 1997
Location: Remote, Dubai
Experience: 7 - 12 years
Salary: ₹25L / yr

High-level scope of work:
- Work with the AI/Analytics team to prioritize identified machine learning use cases for development and rollout.
- Meet and understand current retail/marketing requirements and how an AI/ML solution will address and automate the decision process.
- Develop AI/ML programs using the Dataiku solution and Python or open-source technologies, with a focus on delivering high-quality, accurate ML prediction models.
- Gather additional and external data sources to support the AI/ML model as desired.
- Support the ML model and fine-tune it to ensure high accuracy at all times.
- Example use cases: customer segmentation, product recommendation, price optimization, retail customer personalization offers, next best location for business establishment, CCTV computer vision, NLP and voice recognition solutions.

Required technology expertise:
- Deep knowledge and understanding of machine learning algorithms (supervised/unsupervised learning, deep learning models).
- At least 5+ years of hands-on experience with the Python and R statistical programming languages.
- Strong database development knowledge using SQL and PL/SQL.
- Must have experience using a commercial data science solution, particularly Dataiku (Alteryx, SAS, Azure ML, Google ML, Oracle ML is a plus).
- Strong hands-on experience with big data solution architecture and optimization for AI/ML workloads.
- Hands-on experience with data analytics and BI tools, particularly Oracle OBIEE and Power BI.
- Have implemented and developed at least 3 successful AI/ML projects with tangible business outcomes in the retail-focused industry.
- At least 5+ years of experience in the retail industry and customer-focused businesses.
- Ability to communicate with business owners and stakeholders to understand their current issues and propose machine learning solutions accordingly.

Qualifications:
- Bachelor's or Master's degree in Data Science, Artificial Intelligence, or Computer Science.
- Certified as a Data Scientist or Machine Learning expert.
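As a rough illustration of one of the listed use cases, customer segmentation, here is a minimal scikit-learn sketch (the posting centres on Dataiku, so take this only as the underlying idea in plain Python). The RFM-style feature names, the synthetic data and the cluster count are assumptions.

```python
# Minimal sketch of one listed use case, customer segmentation, using scikit-learn.
# Feature names and cluster count are illustrative assumptions, not from the posting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a customer table: [recency_days, frequency, monetary_value]
rng = np.random.default_rng(0)
customers = np.column_stack([
    rng.integers(1, 365, 500),      # recency in days
    rng.integers(1, 50, 500),       # number of purchases
    rng.gamma(2.0, 150.0, 500),     # total spend
]).astype(float)

features = StandardScaler().fit_transform(customers)   # scale before clustering
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for seg in range(4):
    print(f"segment {seg}: {np.sum(segments == seg)} customers, "
          f"mean spend {customers[segments == seg, 2].mean():.0f}")
```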

Job posted by Mahendrand Deepak

Senior ETL Developer

Founded 2018 · via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹8L - ₹13L / yr

Requirements:
- Minimum of 4 years' experience working on DW/ETL projects and expert, hands-on working knowledge of ETL tools.
- Experience with data management and data warehouse development: star schemas, data vaults, RDBMS, and ODS; change data capture; slowly changing dimensions; data governance; data quality; partitioning and tuning; data stewardship; survivorship; fuzzy matching; concurrency; vertical and horizontal scaling; ELT and ETL; Spark, Hadoop, MPP, RDBMS.
- Experience with DevOps architecture, implementation and operation.
- Hands-on working knowledge of Unix/Linux.
- Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues.
- Complex ETL program design and coding.
- Experience in shell scripting and batch scripting.
- Good communication (oral and written) and interpersonal skills.

Responsibilities:
- Work closely with business teams to understand their needs, participate in requirements gathering, create artifacts, and seek business approval.
- Help the business define new requirements; participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets and techniques to support task performance and delivery.
- Propose good designs and solutions and adhere to the best design and standards practices.
- Review and propose industry-best tools and technologies for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.
- Prepare the plan, design and document the architecture, high-level topology design and functional design; review them with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques.
- Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies.
- Work with functional business analysts to ensure that application programs function as defined.
- Capture user feedback/comments on the delivered systems and document them for the client's and project manager's review.
- Review all deliverables before final delivery to the client for quality adherence.

Technologies (select based on requirement):
- Databases: Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
- Tools: Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory
- Utilities for bulk loading and extracting
- Languages: SQL, PL/SQL, T-SQL, Python, Java, or Scala; J/ODBC, JSON
- Data virtualization and data services development; service delivery via REST/web services; data virtualization delivery with Denodo
- ELT, ETL
- Cloud certification (Azure)
- Complex SQL queries
- Data ingestion, data modeling (domain), consumption (RDBMS)
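One of the concepts listed above, slowly changing dimensions, can be illustrated with a minimal Type 2 merge in pandas. The column names, versioning columns and load date are assumptions for demonstration; a production ETL tool would typically implement this with its own merge/upsert machinery.

```python
# Minimal sketch: a Type 2 slowly changing dimension merge in pandas.
# Column names and the versioning scheme are illustrative assumptions.
import pandas as pd

# Current dimension rows (is_current marks the active version of each customer).
dim = pd.DataFrame({
    "customer_id": [1, 2],
    "city": ["Pune", "Chennai"],
    "valid_from": pd.to_datetime(["2023-01-01", "2023-01-01"]),
    "valid_to": pd.NaT,
    "is_current": [True, True],
})

# Incoming batch from the source system.
incoming = pd.DataFrame({"customer_id": [1, 3], "city": ["Mumbai", "Kochi"]})
load_date = pd.Timestamp("2024-06-01")

merged = incoming.merge(dim[dim.is_current], on="customer_id", how="left",
                        suffixes=("", "_old"))
changed = merged[merged.city_old.notna() & (merged.city != merged.city_old)]
new = merged[merged.city_old.isna()]

# Expire the old versions of changed rows...
dim.loc[dim.customer_id.isin(changed.customer_id) & dim.is_current,
        ["valid_to", "is_current"]] = [load_date, False]

# ...and append new versions for changed and brand-new customers.
additions = pd.concat([changed, new])[["customer_id", "city"]].assign(
    valid_from=load_date, valid_to=pd.NaT, is_current=True)
dim = pd.concat([dim, additions], ignore_index=True)
print(dim)
```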

Job posted by Jerrin Thomas

Data Engineer

Founded 2016 · via slice
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹10L - ₹20L / yr

About slice
slice is a fintech startup focused on India's young population. We aim to build a smart, simple, and transparent platform to redesign the financial experience for millennials and bring success and happiness to people's lives. Growing with the new generation is what we dream about and all that we want. We believe that personalization, combined with an extreme focus on superior customer service, is the key to building long-lasting relations with young people.

About the team/role
In this role, you will have the opportunity to create a significant impact on our business and, most importantly, our customers through your technical expertise on data, as we take on challenges that can reshape the financial experience for the next generation. If you are a highly motivated team player with a knack for problem solving through technology, then we have the perfect job for you.

What you'll do
- Work closely with Engineering and Analytics teams to assist in schema design, normalization of databases, query optimization, etc.
- Work with AWS cloud services: S3, EMR, Glue, RDS.
- Create new and improve existing infrastructure for ETL workflows from a wide variety of data sources using SQL, NoSQL and AWS big data technologies.
- Manage and monitor performance, capacity and security of database systems, and regularly perform server tuning and maintenance activities.
- Debug and troubleshoot database errors.
- Identify, design and implement internal process improvements: optimizing data delivery, re-designing infrastructure for greater scalability, data archival.

Qualifications
- 2+ years of experience working as a Data Engineer.
- Experience with a scripting language, preferably Python.
- Experience with Spark and Hadoop technologies.
- Experience with AWS big data tools is a plus.
- Experience with SQL and NoSQL database technologies like Redshift, MongoDB, Postgres/MySQL, BigQuery, Cassandra.
- Experience with graph databases (Neo4j and OrientDB) and search databases (Elasticsearch) is a plus.
- Experience handling ETL jobs.
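As a rough illustration of the ETL work described above, here is a minimal PySpark sketch that reads raw JSON from S3, cleans it, and writes partitioned Parquet. The bucket names, paths and column names are assumptions, not slice's actual infrastructure.

```python
# Minimal sketch of an ETL step in PySpark: read raw JSON from S3,
# clean it, and write partitioned Parquet back out.
# Bucket names, paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/orders/")    # hypothetical path

cleaned = (
    raw.dropDuplicates(["order_id"])                         # assumed key column
       .withColumn("order_date", F.to_date("created_at"))    # assumed timestamp field
       .filter(F.col("amount") > 0)                          # drop obviously bad rows
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-curated-bucket/orders/"))    # hypothetical path
```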

Job posted by Gunjan Sheth

Data Scientist

Founded 2018
Location: Chennai
Experience: 2 - 7 years
Salary: ₹10L - ₹30L / yr

Key responsibilities:
- Partner with clients and internal business owners (product, marketing, edit, etc.) to understand needs and develop models and products for the Kaleidofin business line.
- Build a good understanding of the underlying business and the workings of cross-functional teams for successful execution.
- Design and develop analyses based on business requirements and challenges.
- Leverage statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, hyperparameter tuning, ensembling, etc.
- Provide statistical analysis on custom research projects and consult on A/B testing and other statistical analysis as needed; produce other reports and custom analyses as required.
- Identify and use appropriate investigative and analytical technologies to interpret and verify results.
- Apply and learn a wide variety of tools and languages to achieve results.
- Use best practices to develop statistical and/or machine learning techniques to build models that address business needs.
- Collaborate with the team to improve the effectiveness of business decisions using data and machine learning/predictive modeling.
- Innovate on projects by using new modeling techniques or tools.
- Utilize effective project planning techniques to break down complex projects into tasks and ensure deadlines are kept.
- Communicate findings to the team and leadership to ensure models are well understood and incorporated into business processes.

Skills:
- 2+ years of experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms.
- Machine learning algorithms: crystal-clear understanding, coding, implementation, error analysis and model-tuning knowledge of linear regression, logistic regression, SVM, shallow neural networks, clustering, decision trees, random forests, boosting trees, recommender systems, ARIMA and anomaly detection; feature selection, hyperparameter tuning, model selection, error analysis, and ensemble methods.
- Strong with programming languages like Python and data processing using SQL or equivalent, with the ability to experiment with newer open-source tools.
- Experience normalizing data to ensure it is homogeneous and consistently formatted to enable sorting, querying and analysis.
- Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts.
- Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive).
- Experience in risk and credit scoring domains preferred.
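As a small illustration of the model-building and hyperparameter-tuning skills listed above, here is a minimal scikit-learn sketch that tunes a random forest with cross-validated grid search on synthetic, imbalanced data (loosely in the spirit of credit scoring). The parameter grid and metric are assumptions.

```python
# Minimal sketch: a small credit-risk-flavoured classifier tuned with
# cross-validated grid search. Data is synthetic; the grid is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced, like default prediction
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [4, 8, None]},
    scoring="roc_auc",      # ranking metric, common for credit scoring
    cv=5,
)
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print("test AUC:", grid.score(X_test, y_test))
```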

Job posted by Pragya Gupta

ML & NLP Engineer

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹6L - ₹14L / yr

About Artivatic
Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products and solutions for finance, healthcare and insurance businesses. It is based out of Bangalore with a 25+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access and health benefits using alternative data sources, increasing their productivity, efficiency, automation power and profitability, and hence helping them do business more intelligently and seamlessly.
Artivatic offers lending underwriting, credit/insurance underwriting, fraud detection, prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation and compliance, healthcare, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance, and more.

Job description
We at Artivatic are seeking a passionate, talented and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience in modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Roles and responsibilities
- Develop novel algorithms and modeling techniques to advance the state of the art in natural language processing.
- Develop NLP-based tools and solutions end to end.
- Work closely with R&D and machine learning engineers, implementing algorithms that power user- and developer-facing products.
- Be responsible for measuring and optimizing the quality of your algorithms.

Requirements
- Hands-on experience building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP, etc.
- Good understanding of rule-based, statistical and probabilistic NLP techniques.
- Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition, etc.
- Good understanding of machine learning and deep learning algorithms.
- Good knowledge of data structures and algorithms.
- Strong programming skills in Python/Java/Scala/C/C++.
- Strong problem-solving and logical skills.
- A go-getter attitude with the willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Basic qualifications
- Bachelor's or Master's degree in Computer Science, Mathematics or a related field, with specialization in natural language processing, machine learning or deep learning.
- A publication record in conferences/journals is a plus.
- 2+ years of working/research experience building NLP-based solutions is preferred.

If you feel that you are the ideal candidate and can bring a lot of value to our culture and the company's vision, then please do apply. If your profile matches our requirements, you will hear from one of our team members. We are looking for someone who can be part of our team, not just an employee.

Job perks: insurance, travel compensation and others.
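As a small illustration of one listed NLP concept, topic modeling, here is a minimal scikit-learn LDA sketch. The tiny corpus, vocabulary handling and topic count are assumptions for demonstration and are unrelated to Artivatic's actual pipeline.

```python
# Minimal sketch of topic modeling with scikit-learn's LDA implementation.
# The corpus and number of topics are illustrative only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "claim approved after health insurance policy review",
    "loan application credit score underwriting decision",
    "hospital claim fraud detected in insurance documents",
    "credit risk model for lending and loan approval",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)                 # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {idx}: {', '.join(top_terms)}")
```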

Job posted by Layak Singh

Data Scientist

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 1 - 4 years
Salary: ₹9L - ₹15L / yr

WyngCommerce is building state-of-the-art AI software for global consumer brands and retailers to enable best-in-class customer experiences. Our vision is to democratize machine learning algorithms for our customers and help them realize dramatic improvements in speed, cost and flexibility. Backed by a clutch of prominent angel investors, and with some of the category leaders in the retail industry as clients, we are looking to hire for our data science team. The data science team at WyngCommerce is on a mission to challenge the norms and re-imagine how retail business should be run across the world. As a Junior Data Scientist in the team, you will be driving and owning the thought leadership and impact on one of our core data science problems. You will work collaboratively with the founders, clients and engineering team to formulate complex problems, run exploratory data analysis, test hypotheses, implement ML-based solutions and fine-tune them with more data. This is a high-impact role with goals that directly impact our business.

Your role and responsibilities:
- Implement data-driven solutions based on advanced ML and optimization algorithms to address business problems.
- Research, experiment, and innovate ML/statistical approaches in various application areas of interest and contribute to IP.
- Partner with engineering teams to build scalable, efficient, automated ML-based pipelines (training/evaluation/monitoring).
- Deploy, maintain, and debug ML/decision models in the production environment.
- Analyze and assess data to ensure high data quality and correctness of downstream processes.
- Communicate results to stakeholders and present data/insights to participate in and drive decision making.

Desired skills and experience:
- Bachelor's or Master's in a quantitative field from a top-tier college.
- 1-2 years of experience in a data science/analytics role in a technology/analytics company.
- Solid mathematical background (especially in linear algebra and probability theory).
- Familiarity with theoretical aspects of common ML techniques (generalized linear models, ensembles, SVMs, clustering algorithms, graphical models, etc.), statistical tests/metrics, experiment design, and evaluation methodologies.
- Demonstrable track record of dealing with ambiguity, prioritizing needs, a bias for iterative learning, and delivering results in a dynamic environment with minimal guidance.
- Hands-on experience in at least one of the following: (a) anomaly detection, (b) time series analysis, (c) product clustering, (d) demand forecasting, (e) intertemporal optimization.
- Good programming skills (fluent in Java/Python/SQL) with experience using common ML toolkits (e.g., sklearn, TensorFlow, Keras, NLTK) to build models for real-world problems.
- Computational thinking and familiarity with practical application requirements (e.g., latency, memory, processing time).
- Excellent written and verbal communication skills for both technical and non-technical audiences.
- (Plus point) Experience applying ML and other techniques in the supply chain domain, particularly in retail, for inventory optimization, demand forecasting, assortment planning, and other such problems.
- (Nice to have) Research experience and publications in top ML/data science conferences.
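As a small illustration of one of the listed focus areas, anomaly detection on a retail-style daily sales series, here is a minimal scikit-learn sketch using an isolation forest. The synthetic series, feature choices and contamination level are assumptions.

```python
# Minimal sketch: anomaly detection on a synthetic daily sales series
# with scikit-learn's IsolationForest. All parameters are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
days = 365
sales = 1000 + 200 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(0, 50, days)
sales[[50, 180, 300]] += [1500, -800, 2000]   # injected anomalies (promo spike / outage)

# Simple feature set per day: level and day-over-day change.
features = np.column_stack([sales, np.diff(sales, prepend=sales[0])])

detector = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = detector.predict(features)            # -1 marks an anomaly

print("anomalous days:", np.where(flags == -1)[0])
```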

Job posted by Ankit Jain

Data Scientist

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 1 - 4 years
Salary: ₹8L - ₹16L / yr

About Vedantu
If you have ever dreamed about being in the driver's seat of a revolution, THIS is the place for you. Vedantu is an ed-tech startup in live online tutoring that recently raised Series B funding of $11M.

Job Description
We are looking for a Data Scientist who will support our product, sales, leadership and marketing teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product, sales and process optimization, and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data analysis methods, building and implementing models, and using/creating appropriate algorithms.

Desired skills
1. Experience using statistical computing languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
2. Ability to process, cleanse, and verify the integrity of data used for analysis.
3. Comfortable manipulating and analyzing complex, high-volume, high-dimensionality data from varying, heterogeneous sources.
4. Experience with messy real-world data: handling missing/incomplete/inaccurate data.
5. Understanding of a broad set of algorithms and applied math.
6. Good at problem solving, probability and statistics; knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage) and experience with their applications.
7. Knowledge of data scraping is preferable.
8. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks) and their real-world advantages/drawbacks.
9. Experience with big data tools (Hadoop, Hive, MapReduce) is a plus.
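A minimal pandas sketch of the data cleansing described in points 2 and 4 above: coerce types, handle missing or inaccurate values, and run basic integrity checks. The column names and fill strategies are assumptions for demonstration.

```python
# Minimal sketch of data cleansing with pandas: coerce types, handle
# missing/inaccurate values, and verify basic integrity before analysis.
# Column names and fill strategies are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "student_id": [101, 102, 102, 104],
    "sessions_attended": ["5", "seven", "7", None],   # mixed types / bad entry
    "score": [88.0, None, 92.5, 61.0],
})

clean = (
    raw.drop_duplicates(subset="student_id", keep="last")               # dedupe
       .assign(sessions_attended=lambda d: pd.to_numeric(
           d.sessions_attended, errors="coerce"))                       # "seven" -> NaN
)
clean["sessions_attended"] = clean["sessions_attended"].fillna(0).astype(int)
clean["score"] = clean["score"].fillna(clean["score"].median())         # impute

# Simple integrity checks before analysis.
assert clean["student_id"].is_unique
assert clean["score"].between(0, 100).all()
print(clean)
```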

Job posted by Supreet Singh

Senior Software Engineer

Founded 2014 · via zeotap
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: ₹5L - ₹40L / yr

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on CutShort.
Want to apply for this role at Aptus Data Labs?
Hiring team responds within a day
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.