Head Data Scientist
We’re creating the infrastructure to enable crypto’s safe…
Responsibilities
- Build out and manage a young data science vertical within the organization
- Provide technical leadership in machine learning, analytics, and data science
- Work with the team to create a roadmap for the company’s requirements: identify business problems that can be solved with data science and scope them out end to end across data-mining, analytics, and ML
- Solve business problems by applying advanced machine learning algorithms and complex statistical models to large volumes of data
- Develop heuristics, algorithms, and models to deanonymize entities on public blockchains
- Data mining: extend the organization’s proprietary dataset by introducing new data collection methods and identifying new data sources
- Keep track of the latest trends in cryptocurrency usage on the open web and dark web, and develop counter-measures to defeat concealment techniques used by criminal actors
- Develop in-house algorithms to generate risk scores for blockchain transactions (a minimal sketch follows this list)
- Work with data engineers to implement the results of your work
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Build, scale, and deploy holistic data science products after successful prototyping
- Clearly articulate and present recommendations to business partners, and influence future plans based on insights
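As context for the risk-scoring responsibility above, here is a minimal, hypothetical sketch of how per-transaction risk scores could be produced with scikit-learn. The input file, feature names, and label are illustrative assumptions, not the organization’s actual data or model.

```python
# Hypothetical transaction risk scoring sketch (assumed file, columns, and label).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("transactions_labelled.csv")                   # hypothetical dataset
X = df[["value_btc", "n_prior_txns", "hops_to_known_entity"]]   # assumed features
y = df["is_illicit"]                                            # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Risk score = predicted probability of illicit activity, scaled to 0-100.
proba = model.predict_proba(X_test)[:, 1]
risk_scores = proba * 100
print("AUC:", roc_auc_score(y_test, proba))
```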
Preferred Experience
- 8+ years of relevant experience as a Data Scientist or Analyst; a few years of experience solving NLP or other ML problems is a plus
- Must have previously managed a team of at least 5 data scientists or analysts, or demonstrate prior experience scaling a data science function from the ground up
- Good understanding of Python, bash scripting, and basic cloud platform skills (GCP or AWS)
- Excellent communication and analytical skills
What you’ll get
- Work closely with the Founders to help grow the organization to the next level, alongside some of the best and brightest talent around you
- An excellent culture; we encourage collaboration, growth, and learning within the team
- Competitive salary and equity
- An autonomous and flexible role where you will be trusted with key tasks
- An opportunity to have a real impact and be part of a company with purpose
Similar jobs
Job roles and responsibilities:
- Design, develop, test, deploy, maintain and improve ML models/infrastructure and software that uses these models
- Experience writing software in one or more languages such as Python, Scala, R, or similar with strong competencies in data structures, algorithms, and software design
- Experience working with recommendation engines, data pipelines, or distributed machine learning
- Experience working with deep learning frameworks (such as TensorFlow, Keras, Torch, Caffe, Theano)
- Knowledge of data analytics concepts, including big data, data warehouse technical architectures, ETL, and reporting/analytics tools and environments
- Participate in cutting-edge research in artificial intelligence and machine learning applications
- Contribute to engineering efforts from planning and organization to execution and delivery, solving complex, real-world engineering problems
- Working knowledge of different algorithms and machine learning techniques, such as linear and logistic regression, segmentation, decision trees, cluster analysis and factor analysis, time series analysis, k-nearest neighbours, k-means, random forests, NLP (natural language processing), sentiment analysis, and various artificial neural networks including convolutional neural networks (CNNs) and bidirectional recurrent neural networks (BRNNs); a minimal sketch of two of these techniques follows this list
- Demonstrated excellent communication, presentation, and problem-solving skills
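A minimal sketch of two of the techniques named above (logistic regression and k-means), using scikit-learn on a built-in toy dataset; this is purely illustrative and not tied to any particular product.

```python
# Supervised and unsupervised examples on the Iris toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: logistic regression classifier.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised: k-means clustering into 3 clusters.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
```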
Technical Skills Required:
- GCP-native AI/ML services such as Vision, NLP, Document AI, Dialogflow, CCAI, BigQuery, etc.
- Proficiency with a deep learning framework such as TensorFlow or Keras
- Proficiency with Python, basic machine learning libraries such as scikit-learn and pandas, and Jupyter notebooks
- Expertise in visualizing and manipulating big datasets
- Ability to select hardware to run an ML model with the required latency
- Good to have MLOps and Kubeflow knowledge
- GCP ML Engineer Certification
Data Scientist-Mumbai/Pune
at TSG Global Services Private Limited
We are a 20-year-old IT services company from Kolkata working in India and abroad. We primarily work as an SSP (Software Solutions Partner) and serve some of the leading business houses in the country on various software project implementations, especially on the SAP and Oracle platforms, and we also work as an outsourcing partner on government and semi-government projects across India.
Can be anywhere in India (Mumbai, Pune, and Kolkata are preferable)
JD
- Machine Learning / Deep Learning experience above 3 years
- Clear and structured thinking and communication
Keywords: Machine Learning, Deep Learning, AI, Regression, Classification, Clustering, NLP, CNN, RNN, LSTM, AutoML, k-NN, Naive Bayes, SVM, Decision Forests
- Understand granular requirements and the underlying business problem, and convert them into a low-level design
- Develop the analytic process chain with pre-processing, training, testing, boosting, etc. (a minimal sketch follows this list)
- Develop the technical deliverable in mcube (Python / Spark ML / R, H2O / TensorFlow) as per the design
- Ensure quality of deliverables (coding standards, data quality, data reconciliation)
- Proactively raise risks with the Technical Lead
- Machine Learning, Deep Learning, Regression, Classification, Clustering, NLP, CNN, RNN
- Expertise in data analysis and analytic programming (Python / R / Spark ML / TensorFlow)
- Experience in multiple data processing technologies (preferably Pentaho or Spark)
- Basic knowledge of effort estimation; clear and structured thinking and communication
- Expertise in testing the accuracy of deliverables (models)
- Exposure to data modelling and analysis
- Exposure to information delivery (communicating model outcomes)
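A minimal sketch of the pre-processing, training, testing, and boosting chain referenced above, using scikit-learn; the dataset and parameters are illustrative stand-ins rather than project specifics.

```python
# Pre-processing -> training -> testing chain built around a boosting model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

chain = Pipeline([
    ("preprocess", StandardScaler()),                  # pre-processing step
    ("boosted_model", GradientBoostingClassifier()),   # boosting-based classifier
])
chain.fit(X_train, y_train)                            # training
print("test accuracy:", chain.score(X_test, y_test))  # testing
```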
Qualification:
M.S. / M.Tech / B.Tech / B.E. (in this order of preference)
- Master's course in Data Science after a technical (engineering/science) degree
Senior Data Engineer
Job Description - Sr Azure Data Engineer
Roles & Responsibilities:
- Hands-on programming in C# / .NET.
- Develop serverless applications using Azure Function Apps.
- Writing complex SQL Queries, Stored procedures, and Views.
- Creating Data processing pipeline(s).
- Develop / Manage large-scale Data Warehousing and Data processing solutions.
- Provide clean, usable data and recommend improvements to data efficiency, quality, and integrity.
Skills
- Should have working experience with C# / .NET.
- Proficient with writing SQL queries, Stored Procedures, and Views
- Should have worked on Azure Cloud Stack.
- Should have working experience in developing serverless code.
- Must have worked on Azure Data Factory (mandatory).
Experience
- 4+ years of relevant experience
In 2018-19, the mobile games market in India generated over $600 million in revenues. With close to 450 people in its Mumbai and Bangalore offices, Games24x7 is India’s largest mobile games business today and is very well positioned to become the 800-pound gorilla of what will be a $2 billion market by 2022. While Games24x7 continues to invest aggressively in its India centric mobile games, it is also diversifying its business by investing in international gaming and other tech opportunities.
Summary of Role
Position/Role Description:
The candidate will be part of a team managing databases (MySQL, MongoDB, Cassandra) and will be involved in designing, configuring and maintaining databases.
Job Responsibilities:
• Complete involvement in the database requirement starting from the design phase for every project.
• Deploying required database assets on production (DDL, DML)
• Good understanding of MySQL replication (master-slave, master-master, GTID-based); a minimal replication health-check sketch follows this list.
• Understanding of MySQL partitioning.
• A good understanding of MySQL logs and configuration.
• Knowledge of how to schedule backups and restores.
• Good understanding of MySQL versions and their features.
• Good understanding of the InnoDB engine.
• Exploring ways to optimize the current environment while laying a good platform for new projects.
• Able to understand and resolve any database-related production outages.
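As a minimal sketch of the replication-monitoring side of this role, the snippet below checks replica health with PyMySQL. The host and credentials are hypothetical placeholders.

```python
# Check MySQL replica health: thread status and replication lag.
import pymysql

conn = pymysql.connect(host="replica.example.internal", user="monitor",
                       password="secret", cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone() or {}

io_ok = status.get("Slave_IO_Running") == "Yes"
sql_ok = status.get("Slave_SQL_Running") == "Yes"
lag = status.get("Seconds_Behind_Master")
print(f"IO thread running: {io_ok}, SQL thread running: {sql_ok}, lag: {lag}s")
conn.close()
```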
Job Requirements:
• BE/B.Tech from a reputed institute
• Experience in python scripting.
• Experience in shell scripting.
• General understanding of system hardware.
• Experience in MySQL is a must.
• Experience in MongoDB, Cassandra, or graph databases is preferred.
• Experience with Percona MySQL tools.
• 6 - 8 years of experience.
Job Location: Bengaluru
Preferred Education & Experience:
- Bachelor's or Master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in, and 5+ years of hands-on, demonstrable experience with:
  ▪ Data Analysis & Data Modeling
  ▪ Database Design & Implementation
  ▪ Database Performance Tuning & Optimization
  ▪ PL/pgSQL & SQL
- 5+ years of hands-on development experience with relational databases (PostgreSQL/SQL Server/Oracle).
- 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views.
- Hands-on, demonstrable working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels (a minimal sketch follows this list).
- Hands-on, demonstrable working experience in database read and write performance tuning and optimization.
- Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added value.
- Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
- Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
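A minimal sketch of the query-optimization and index-management work referenced above, using psycopg2 against PostgreSQL. The connection string, the orders table, and the index are hypothetical examples.

```python
# Check whether a lookup query uses an index by inspecting the query plan.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
with conn, conn.cursor() as cur:
    # Hypothetical index to support lookups by customer_id.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
    # EXPLAIN ANALYZE reports the chosen plan (index scan vs. sequential scan) and timings.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```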
Data Scientist
… applied research.
● Understand, apply and extend state-of-the-art NLP research to better serve our customers.
● Work closely with engineering, product, and customers to scientifically frame the business problems and come up with the underlying AI models.
● Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business.
● Think creatively to identify new opportunities and contribute to high-quality publications or patents.
Desired Qualifications and Experience
● At least 1 year of professional experience.
● Bachelor's degree in Computer Science or a related field from a top college.
● Extensive knowledge and practical experience in one or more of the following areas: machine learning, deep learning, NLP, recommendation systems, information retrieval.
● Experience applying ML to solve complex business problems from scratch.
● Experience with Python and a deep learning framework like PyTorch or TensorFlow.
● Awareness of the state of the art research in the NLP community.
● Excellent verbal and written communication and presentation skills.
Essenvia is a cloud-based SaaS platform that helps medical device companies reduce the time and cost of bringing medical devices to market. Its product suite includes a collaborative multi-user platform for preparing regulatory submissions and a document management system, streamlining the medical device regulatory pathway.
We are looking for a savvy Machine Learning Engineer to join our team based out of Bangalore. The hire will be responsible for creating and managing proprietary data sets for machine learning algorithms using various conventional and non-conventional data sources, will support initiatives, and will ensure an optimal data delivery architecture for machine learning models. The right candidate will be excited by the prospect of becoming a key member in designing the data architecture to support our next generation of products, and must be self-driven and able to work on tight timelines in a start-up culture.
Responsibilities
---------------------
Extract key information from various data sources
Process documents using OCR and extract key entities (a minimal sketch follows this list)
Extract blocks of relevant text using pattern recognition
Prepare structured and unstructured data pipelines for machine learning models
Assemble large, complex data sets from various data sources
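A minimal sketch of the OCR-plus-pattern-matching extraction described above, assuming Tesseract is installed locally; the input file path and the entity patterns (dates, emails, a device-ID format) are hypothetical.

```python
# OCR a scanned document, then pull out a few entity types with regexes.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("scanned_submission.png"))  # OCR step

# Pattern-based extraction of illustrative entities.
dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
device_ids = re.findall(r"\bDEV-\d{6}\b", text)  # hypothetical device ID format

print({"dates": dates, "emails": emails, "device_ids": device_ids})
```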
Mandatory Skills
---------------------
Knowledge of algorithms and data structures
Programming knowledge in Python and Java
Knowledge of text mining / text extraction / regex matching
Knowledge of OCR
Experience in data cleaning, ETL, pipeline building, and model maintenance using Airflow (a minimal DAG sketch follows this list)
Knowledge of Elasticsearch, Neo4j, and GraphQL
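A minimal sketch of an Airflow DAG for the kind of clean-extract-load pipeline implied above; the DAG name, schedule, and the placeholder task functions are illustrative assumptions.

```python
# Three-step document pipeline wired as an Airflow DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def clean():
    print("cleaning raw documents")      # placeholder step

def extract():
    print("extracting entities")         # placeholder step

def load():
    print("loading into search index")   # placeholder step

with DAG(
    dag_id="document_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="clean", python_callable=clean)
    t2 = PythonOperator(task_id="extract", python_callable=extract)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```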
Desirable Skills
-----------------
Knowledge of NLP
Knowledge of preparing and using custom corpora
Prior experience with medical science datasets
Exposure to deep learning applications and tools like TensorFlow or Theano is preferred
Senior Data Scientist
at Kaleidofin
• Solid technical / data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
• Be data-driven and outcome-focused
• Must have good business judgment with a demonstrated ability to think creatively and strategically
• Must be an intuitive, organized, analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; able to drive projects with minimal guidance and focus on high-impact work
• Learns continuously; seeks out knowledge, ideas, and feedback
• Looks for opportunities to build own skills, knowledge, and expertise
• Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive)
• Experience in the risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Hi All,
We are hiring a Data Engineer for one of our clients for the Bangalore & Chennai locations.
Strong knowledge of SCCM, App-V, and Intune infrastructure.
Powershell/VBScript/Python,
Windows Installer
Knowledge of Windows 10 registry
Application Repackaging
Application Sequencing with App-v
Deploying and troubleshooting applications, packages, and Task Sequences.
Security patch deployment and remediation
Windows operating system patching and defender updates
Thanks,
Mohan.G
Data Scientist
at Simplifai Cognitive Solutions Pvt Ltd
Responsibilities for Data Scientist/ NLP Engineer
• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Basic data cleaning and annotation for any incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company A/B testing framework and test model quality (a minimal sketch follows this list).
• Deployment of ML models in production.
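A minimal sketch of the kind of significance check an A/B testing framework would automate, using SciPy; the conversion counts below are made-up illustrative numbers, not real results.

```python
# Chi-squared test on a 2x2 conversion table: control (A) vs. variant (B).
from scipy.stats import chi2_contingency

# [converted, not converted] per group; illustrative counts only.
control = [120, 880]
variant = [150, 850]

chi2, p_value, _, _ = chi2_contingency([control, variant])
print(f"p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```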
Qualifications for Junior Data Scientist/ NLP Engineer
• BS, MS in Computer Science, Engineering, or related discipline.
• 3+ Years of experience in Data Science/Machine Learning.
• Experience with programming language Python.
• Familiar with at least one database query language, such as SQL
• Knowledge of Text Classification & Clustering, Question Answering & Query Understanding, and Search Indexing & Fuzzy Matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willing to learn and master new technologies and techniques.
• Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required.