Prismforce (www.prismforce.com) is a US-headquartered vertical SaaS product company with development teams in India. We are a Series-A funded venture, backed by Tier 1 VCs and targeted at the tech/IT services industry and tech talent organizations in enterprises, solving their most critical sector-specific problems in the Talent Supply Chain. The product suite is powered by artificial intelligence designed to accelerate business impact, e.g. improved profitability and agility, by digitizing core vertical workflows underserved by custom applications and typical ERP offerings.
We are looking for Data Scientists to build data products at the core of a SaaS company disrupting the skills market. In this role you should be highly analytical, with a keen understanding of data, machine learning, deep learning, analysis, algorithms, products, maths, and statistics. This hands-on individual will play multiple roles: Data Scientist, Data Engineer, Data Analyst, efficient coder and, above all, problem solver.
Location: Mumbai / Bangalore / Pune / Kolkata
Responsibilities:
- Identify relevant data sources and combine them into useful datasets.
- Automate the data collection processes.
- Pre-process structured and unstructured data.
- Handle large amounts of information to create inputs to analytical models.
- Build predictive models and machine learning algorithms; innovate on machine learning and deep learning algorithms.
- Build network graphs, NLP, and forecasting models; build data pipelines for end-to-end solutions.
- Propose solutions and strategies to business challenges. Collaborate with product development teams and communicate with senior leadership teams.
- Participate in problem-solving sessions.
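The responsibilities above mention forecasting models; as a toy illustration of the simplest baseline forecaster (all data below is hypothetical), a moving-average forecast in Python looks like:

```python
def moving_average_forecast(series, window=3):
    """Naive one-step-ahead forecast: the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series is shorter than the window")
    return sum(series[-window:]) / window

# hypothetical monthly demand figures
demand = [100, 120, 110, 130, 125, 135]
next_month = moving_average_forecast(demand)  # mean of 130, 125, 135 -> 130.0
```

In practice a baseline like this is mainly a sanity check against which richer models (ARIMA, neural forecasters) are compared.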
Requirements:
- Bachelor's degree in a highly quantitative field (e.g. Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience.
- Extensive machine learning and algorithmic background with a deep understanding of at least one of the following areas: supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, network graphs, natural language processing.
- Analytical mind and business acumen.
- Strong math skills (e.g. statistics, algebra).
- Problem-solving aptitude.
- Excellent communication skills with the ability to communicate technical information.
- Fluency with at least one data science/analytics programming language (e.g. Python, R, Julia).
- Start-up experience is a plus.
- Ideally 5-8 years of advanced analytics experience in startups/marquee companies.
o 3+ years of software engineering experience.
o Advanced knowledge of Python, with 2+ years in a production environment.
o Experience with practical applications of deep learning.
o Experience with agile, test-driven development, continuous integration, and automated testing.
o Experience with productionizing machine learning models and integrating them into web services.
o Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
o Excellent verbal and written communication, teamwork, decision making and influencing skills.
o Hustle. Thrives in an evolving, fast-paced, ambiguous work environment.
Requirements
Experience:
- 5-8 years of working experience in ML/data science, preferably with a remote sensing background.
- Experience leading a team of both data scientists and machine learning engineers to solve challenging problems, preferably in the Infrastructure and Utilities domain, using geospatial & remote sensing data.
- Statistical knowledge along with great proficiency in Python.
- Strong understanding and implementation experience of predictive modeling algorithms such as regression, time series, neural networks, clustering, decision trees, and heuristic models, with familiarity dealing with trade-offs between model performance and business needs.
- Experience combining user research and data science methodologies across multiple products within the business unit.
- Must have delivered multiple data science product(s) in production.
Minimum qualification:
- Advanced degree (MSc/PhD) in Computer Science, Economics, Engineering, Operations Research, Physics or Mathematics/Statistics preferred.
Competencies:
- Fantastic communication skills that enable you to work cross-functionally with business folks, product managers, technical experts, building solid relationships with a diverse set of stakeholders.
- The ability to convey complex solutions to a less technical person.
- Vast analytical problem-solving capabilities & experience.
- Bias for action.
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social impact agri-tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly for the majority of farmers, who are digitally naive. We are looking for people who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction, amongst others.
GormalOne is looking for a machine learning engineer to join us. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating MLOps best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of MLOps best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 years of experience as a Data Scientist
- Skilled in data analysis, EDA, model building, and model analysis.
- Basic coding skills in Python
- Decent knowledge of statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to deep learning: ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch, or TensorFlow
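The skills list above mentions building pipelines for ETL and ML models; as a minimal, framework-free sketch of the extract/transform/load steps (the CSV fields and values are hypothetical, not GormalOne's actual schema):

```python
import csv
import io

# hypothetical raw export: one row per farmer, some readings missing
RAW = """farmer_id,milk_litres
F001,12.5
F002,
F003,9.0
"""

def extract(text):
    """Extract: parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing readings and cast values to float."""
    return [
        {"farmer_id": r["farmer_id"], "milk_litres": float(r["milk_litres"])}
        for r in rows
        if r["milk_litres"].strip()
    ]

def load(rows):
    """Load: here just materialize an id -> value mapping; a real
    pipeline would write to a database or feature store."""
    return {r["farmer_id"]: r["milk_litres"] for r in rows}

clean = load(transform(extract(RAW)))
```

In a production setting each of these steps would typically become a task in an orchestrator such as Airflow rather than plain function calls.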
Basic Qualifications
- B.Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master's/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience with tools and packages like TensorFlow, MLflow, and Airflow
- Experience with object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience.
We are looking for a Machine Learning Engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, and the Python scientific stack; MLflow, Grafana, and Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework Kedro, and Dask/RAPIDS; Django, GraphQL, and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes; CircleCI/Jenkins for CI/CD; and cloud solutions such as AWS, GCP, and Azure, as well as Terraform and CloudFormation for deployment.
• Solid technical / data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
• Be data-driven and outcome-focused
• Must have good business judgment with demonstrated ability to think creatively and strategically
• Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; ability to drive projects with minimal guidance and focus on high-impact work
• Learns continuously; seeks out knowledge, ideas and feedback.
• Looks for opportunities to build own skills, knowledge and expertise.
• Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Machine Learning & Deep Learning – Strong
Experienced in TensorFlow, PyTorch, ONNX, object detection, and pretrained models like YOLO, SSD, Faster R-CNN, etc.
Python – Strong
NumPy, Pandas, OpenCV
Problem Solving – Strong
C++ – Average
It would be good if the candidate has working experience in C++ in any domain.
Note: Looking for immediate to 30 days' notice period.
About Us :
Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.
As a Senior Machine Learning Engineer you will be working directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.
Responsibilities :
- You will be designing and building systems that help Docsumo process visual data, i.e. PDFs & images of documents.
- You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
- You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
- Should have hands-on experience applying advanced statistical learning techniques to different types of data.
- Should be able to design, build and work with RESTful Web Services in JSON and XML formats. (Flask preferred)
- Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
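The responsibilities above call for RESTful JSON web services (Flask preferred); as a framework-free sketch of the same JSON-in/JSON-out contract (the endpoint behaviour and field names are hypothetical), a bare WSGI callable can be written as:

```python
import io
import json

def extract_app(environ, start_response):
    """Minimal WSGI endpoint sketch: read a JSON request body, return a JSON result."""
    length = int(environ.get("CONTENT_LENGTH") or 0)
    payload = json.loads(environ["wsgi.input"].read(length)) if length else {}
    # hypothetical stub response: echo the document type; a real service
    # would run the extraction model here and populate `fields`
    body = json.dumps({"document_type": payload.get("type", "unknown"),
                       "fields": {}}).encode()
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The same handler translates almost line for line into a Flask view using `request.get_json()` and `jsonify`.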
Skills / Requirements :
- Minimum 3+ years of experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
- Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
- Working with OpenCV, TensorFlow, and Keras
- Working with Python: NumPy, Scikit-learn, Matplotlib, Pandas
- Familiarity with Version Control tools such as Git
- Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
- Must be self-motivated, flexible, collaborative, with an eagerness to learn
About the Role:
As a Speech Engineer you will be working on the development of on-device multilingual speech recognition systems.
- Apart from ASR, you will be working on solving speech-focused research problems like speech enhancement, voice analysis and synthesis, etc.
- You will be responsible for building the complete pipeline for speech recognition, from data preparation to deployment on edge devices.
- Reading, implementing and improving baselines reported in leading research papers will be another key area of your daily life at Saarthi.
Requirements:
- 2-3 years of hands-on experience in speech recognition-based projects
- Proven experience as a Speech Engineer or similar role
- Should have experience with deployment on edge devices
- Candidates should have hands-on experience with open-source tools such as Kaldi or PyTorch-Kaldi and any of the end-to-end ASR tools such as ESPnet, EESEN, or DeepSpeech PyTorch
- Prior proven experience in training and deploying deep learning models at scale
- Strong programming experience in Python, C/C++, etc.
- Working experience with PyTorch and TensorFlow
- Experience contributing to research communities including publications at conferences and/or journals
- Strong communication skills
- Strong analytical and problem-solving skills
Job Description
We are looking for a highly capable machine learning engineer to optimize our deep learning systems. You will evaluate existing deep learning (DL) processes, perform hyperparameter tuning, perform statistical analysis (logging and evaluating model performance) to resolve dataset problems, and enhance the accuracy of our AI software's predictive automation capabilities.
You will be working with technologies like AWS SageMaker and TensorFlow.js/TensorFlow/Keras/TensorBoard to create deep learning backends that power our application.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a deep learning role. A first-class machine learning engineer is someone whose expertise translates into the enhanced performance of predictive automation software. To do this job successfully, you need exceptional skills in DL and programming.
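The hyperparameter tuning and evaluation loop described above can be sketched in a few lines; everything here (the thresholding "model", the scores, the labels) is a hypothetical toy, not the actual stack:

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def threshold_model(scores, threshold):
    """Toy 'model': predict class 1 whenever the score clears the threshold."""
    return [1 if s >= threshold else 0 for s in scores]

# hypothetical validation scores and ground-truth labels
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0, 0, 0, 1, 1, 0]

# grid search: evaluate each candidate hyperparameter, keep the best
best_threshold, best_score = None, -1.0
for threshold in (0.3, 0.5, 0.7):
    score = accuracy(threshold_model(scores, threshold), labels)
    if score > best_score:
        best_threshold, best_score = threshold, score
```

A real tuning job would swap the toy model for a trained network, the hand-written grid for a SageMaker tuning job or a search library, and the accuracy metric for whatever the business actually optimizes.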
Responsibilities
- Consulting with managers to determine and refine machine learning objectives.
- Designing deep learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Carrying out data engineering subtasks such as defining data requirements and collecting, labeling, inspecting, cleaning, augmenting, and moving data.
- Carrying out modeling subtasks such as training deep learning models, defining evaluation metrics, searching hyperparameters, and reading research papers.
- Carrying out deployment subtasks such as converting prototyped code into production code, working in depth with AWS services to set up the cloud environment for training, improving response times, and saving bandwidth.
- Ensuring that algorithms generate robust and accurate results.
- Running tests, performing analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
Requirements
- Proven experience as a Machine Learning Engineer or similar role.
- In-depth knowledge of AWS SageMaker and related services (like S3).
- Extensive knowledge of ML frameworks, libraries, algorithms, data structures, data modeling, software architecture, and math & statistics.
- Ability to write robust code in Python & JavaScript (TensorFlow.js).
- Experience with Git and GitHub.
- Superb analytical and problem-solving abilities.
- Excellent troubleshooting skills.
- Good project management skills.
- Great communication and collaboration skills.
- Excellent time management and organizational abilities.
- Bachelor's degree in computer science, data science, mathematics, or a related field; a Master's degree is a plus.