Data Scientist
We are looking for an experienced Data Scientist to join our engineering team and help us enhance our mobile application with data. In this role, we're looking for people who are passionate about developing ML/AI solutions across various domains to solve enterprise problems. We are keen on hiring someone who loves working in a fast-paced start-up environment and is looking to solve challenging engineering problems.
As one of the earliest members of the engineering team, you will have the flexibility to design the models and architecture from the ground up. As with any early-stage start-up, we expect you to be comfortable wearing various hats and to be a proactive contributor in building something truly remarkable.
Responsibilities
Research, develop, and maintain machine learning and statistical models for business requirements
Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem
Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models
Implement models to uncover patterns and predictions, creating business value and innovation
Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data
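As a flavour of the text-classification work described above, a multinomial naive Bayes classifier is one of the simplest approaches; a minimal sketch in plain Python (a toy example with made-up support-ticket data, not part of the role description):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a tiny multinomial naive Bayes text classifier.
    docs: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing for unseen words."""
    total_docs = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label, n_docs in label_counts.items():
        n_words = sum(word_counts[label].values())
        score = math.log(n_docs / total_docs)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical labelled tickets
docs = [
    ("refund my payment", "billing"),
    ("card payment failed", "billing"),
    ("app crashes on login", "bug"),
    ("login screen freezes", "bug"),
]
model = train_nb(docs)
print(classify("payment failed again", *model))  # -> billing
```

Production systems would of course use learned embeddings or transformer models such as BERT, but the probabilistic skeleton is the same.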
Qualifications
3+ years of experience solving complex business problems using machine learning
Fluency in Python is a must; hands-on experience with NLP techniques and models such as BERT is expected
Strong analytical and critical thinking skills
Experience in building production-quality models using state-of-the-art technologies
Familiarity with databases such as MySQL, Oracle, SQL Server, and NoSQL stores is desirable
Ability to collaborate on projects and work independently when required
Previous experience in the fintech/payments domain is a bonus
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or another quantitative field from a top-tier institute
Carsome’s Data Department is on the lookout for a Data Scientist/Senior Data Scientist with a strong passion for building data-powered products.
The Data Science function within the Data Department is responsible for standardising methods, including code libraries and documentation, mentoring a team of data scientists and interns, and assuring the quality of outputs, modelling techniques, and statistics, leveraging a variety of technologies, open-source languages, and cloud computing platforms.
You will get to lead and implement projects such as price optimization/prediction, enabling iconic personalization experiences for our customers, inventory optimization, etc.
Job Description
- Identify and integrate datasets that can be leveraged through our product, and work closely with the data engineering team to develop data products.
- Execute analytical experiments methodically to help solve various problems and make a true impact across functions such as operations, finance, logistics, and marketing.
- Identify, prioritize, and design testing opportunities that will inform algorithm enhancements.
- Devise and utilize algorithms and models to mine big data stores; perform data and error analysis to improve models; and clean and validate data for uniformity and accuracy.
- Unlock insights by analyzing large amounts of complex website traffic and transactional data.
- Implement analytical models into production by collaborating with data analytics engineers.
Technical Requirements
- Expertise in model design, training, evaluation, and implementation
- ML algorithm expertise: K-nearest neighbors, Random Forests, Naive Bayes, regression models, and gradient boosting
- Deep learning expertise with PyTorch, TensorFlow, and Keras; experience with t-SNE and regression implementation
- Proficiency in Python, PySpark, SQL, R, AWS SageMaker/Personalize, etc.
- Machine Learning / Data Science Certification
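Of the algorithms listed above, K-nearest neighbors is the simplest to show compactly; a minimal sketch in plain Python (a toy illustration with made-up points, not part of the role description; real work would use scikit-learn or similar):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Toy k-nearest-neighbors classifier: majority vote over the k
    closest training points by Euclidean distance."""
    dists = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train, labels)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Two well-separated clusters
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["a", "a", "b", "b"]
print(knn_predict(train, labels, (0.05, 0.1)))  # -> a
print(knn_predict(train, labels, (0.95, 1.0)))  # -> b
```

The same fit/predict shape carries over to the library implementations named in the requirement.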
Experience & Education
- Bachelor’s in Engineering / Master’s in Data Science / Postgraduate Certificate in Data Science.
Data Scientist – Program Embedded
Job Description:
We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role in supporting multiple projects in this program, covering traditional tasks from revenue management, demand forecasting, and improving customer experience, to testing and using new tools/platforms such as Copilot and Fabric for different purposes. The expected candidate will have deep expertise in machine learning methodology and applications, and should have completed multiple large-scale data science projects (full cycle, from ideation to BAU). Beyond technical expertise, problem solving in complex settings will be key to success in this role. As this is a data science role embedded directly into the program and its projects, stakeholder management and collaboration with partners are crucial to success, on top of the deep expertise.
What we are looking for:
- Highly proficient in Python/PySpark/R.
- Understands MLOps concepts, with working experience in product industrialization (from a data science point of view); experience building products for live deployment, with continuous development and continuous integration.
- Familiar with cloud platforms such as Azure and GCP and the data management systems on those platforms; familiar with Databricks and product deployment on Databricks.
- Experience in ML projects involving techniques such as regression, time series, clustering, classification, dimension reduction, and anomaly detection, using both traditional ML and DL approaches.
- Solid background in statistics: probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis testing for different purposes, data augmentation, etc.
- Familiar with designing testing frameworks for different modelling practices/projects based on business needs.
- Exposure to Gen AI tools, with enthusiasm for experimenting and new ideas about what can be done.
- Experience improving an internal company process using an AI tool is a plus (e.g. process simplification, manual-task automation, automated emails).
- Ideally 10+ years of experience, including independent, business-facing roles.
- Experience in CPG or retail as a data scientist would be nice but is not the top priority, especially for those who have navigated multiple industries.
- Being proactive and collaborative is essential.
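The A/B-testing and hypothesis-testing background mentioned above can be illustrated with a minimal two-proportion z-test in plain Python (a sketch under the usual large-sample normal approximation; the conversion numbers are made up):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-statistic and two-sided p-value for the difference between two
    conversion rates, using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up experiment: 120/2400 vs 165/2400 conversions
z, p = two_proportion_ztest(120, 2400, 165, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice one would use a statistics library (e.g. `statsmodels`) and pre-register the test design, but the arithmetic behind the validation is this simple.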
Some projects examples within the program:
- Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation, and building trust.
- Building algorithms for predicting trends in categories and consumption to support dashboards.
- Revenue Growth Management: create and understand the algorithms behind the tools (which may be built by third parties) we need to maintain or choose to improve. Able to prioritize and build a product roadmap. Able to design new solutions and articulate/quantify their limitations.
- Demand forecasting, create localized forecasts to improve in store availability. Proper model monitoring for early detection of potential issues in the forecast focusing particularly on improving the end user experience.
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics, all of which impact a brand’s bottom line, driving profitable growth.
Roles & Responsibilities:
- Work on implementation of real-time and batch data pipelines for disparate data sources.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
Desired Experience/Skill:
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures, including their pros and cons.
- Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka and Redis is preferred
- Experience in the design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
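The extract-transform-load responsibilities above can be sketched in a toolkit-agnostic way as three composable stages (a toy in-memory example; a real pipeline would read from Kinesis or Kafka and write to Redshift or Snowflake, and the field names are hypothetical):

```python
def extract(raw_rows):
    """Extract: parse raw CSV-like strings into records."""
    for line in raw_rows:
        user_id, amount = line.split(",")
        yield {"user_id": user_id.strip(), "amount": float(amount)}

def transform(records):
    """Transform: drop invalid rows and aggregate spend per user."""
    totals = {}
    for rec in records:
        if rec["amount"] < 0:  # basic data-quality rule
            continue
        totals[rec["user_id"]] = totals.get(rec["user_id"], 0.0) + rec["amount"]
    return totals

def load(totals, sink):
    """Load: write aggregates into a destination (here, a dict standing
    in for a warehouse table)."""
    sink.update(totals)
    return sink

warehouse = {}
raw = ["u1, 10.0", "u2, 5.5", "u1, 4.5", "u2, -1.0"]
load(transform(extract(raw)), warehouse)
print(warehouse)  # {'u1': 14.5, 'u2': 5.5}
```

Orchestrators such as Airflow schedule and retry exactly this kind of staged function graph; the business logic stays testable in isolation.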
A proficient, independent contributor who assists in the technical design, development, implementation, and support of data pipelines, and who is beginning to invest in less-experienced engineers.
Responsibilities:
- Design, create, and maintain on-premise and cloud-based data integration pipelines.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines that enable the BI, Analytics, and Data Science teams to build and optimize their systems
- Assist in the onboarding, training, and development of team members
- Review code changes and pull requests for standardization and best practices
- Evolve existing development into automated, scalable, resilient, self-serve platforms
- Assist the team in design and requirements gathering for technical and non-technical work to drive the direction of projects
Technical & Business Expertise:
- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced experience writing SQL for SQL Server databases
- Proven advanced understanding of Data Lakes
- Proven intermediate proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)
Duration: Full Time
Location: Vishakhapatnam, Bangalore, Chennai
Years of experience: 3+ years
Job Description :
- 3+ years of working as a Data Engineer, with a thorough understanding of data frameworks that collect, manage, transform, and store data from which business insights can be derived.
- Strong communication skills (written and verbal), along with being a good team player.
- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)
- 2+ years of strong experience with SQL and Python (Data Engineering focused).
- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred.
- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
Senior Engineer – Artificial Intelligence / Computer Vision
(Business Unit – Autonomous Vehicles & Automotive - AVA)
We are seeking an exceptional, experienced senior engineer with deep expertise in Computer Vision, Neural Networks, 3D Scene Understanding and Sensor Data Processing. The expectation is to lead a growing team of engineers to help them build and deliver customized solutions for our clients. A solid engineering as well as team management background is a must.
About MulticoreWare Inc
MulticoreWare Inc is a software and solutions development company with top-notch talent and skill in a variety of micro-architectures, including multi-thread, multi-core, and heterogeneous hardware platforms. It works in sectors including High Performance Computing (HPC), Media & AI Analytics, Video Solutions, Autonomous Vehicle and Automotive software, all of which are rapidly expanding. The Autonomous Vehicles & Automotive business unit specializes in delivering optimized solutions for sophisticated sensor fusion intelligence and the design of algorithms & implementation of software to be deployed on a variety of automotive grade hardware platforms.
Role Responsibilities
● Lead a team to solve the problems in a perception / autonomous-systems scope and turn ideas into code & products
● Drive all technical elements of development, such as project requirements definition, design, implementation, unit testing, integration, and software delivery
● Implement cutting-edge AI solutions on embedded platforms and optimize them for performance; design and develop hardware-architecture-aware algorithms
● Contribute to the vision and long-term strategy of the business unit
Required Qualifications (Must Have)
● 3 - 7 years of experience with real world system building, including design, coding (C++/Python) and evaluation/testing (C++/Python)
● Solid experience in 2D / 3D Computer Vision algorithms, Machine Learning and Deep Learning fundamentals – Theory & Practice. Hands-on experience with Deep Learning frameworks like Caffe, TensorFlow or PyTorch
● Expert-level knowledge in any of the areas related to signal/data processing or autonomous/robotics software development (perception, localization, prediction, planning), multi-object tracking, and sensor fusion algorithms, plus familiarity with Kalman filters, particle filters, clustering methods, etc.
● Good project management and execution capabilities, as well as good communication and coordination ability
● Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related fields
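Of the estimation techniques named in the qualifications above, a Kalman filter is the easiest to show compactly; a minimal 1-D, constant-state version in plain Python (a textbook sketch, not production tracking code; the noise parameters and readings are made up):

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for a (nearly) constant state.
    q: process noise, r: measurement noise, x0/p0: initial estimate
    and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by process noise
        p += q
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy sensor readings of a true value around 5.0
readings = [4.8, 5.3, 4.9, 5.1, 5.2, 4.95]
print(kalman_1d(readings)[-1])
```

Real perception stacks extend this to multi-dimensional states with motion models and matrix covariances, but the predict/update cycle is identical.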
Preferred Qualifications (Nice-to-Have)
● GPU architecture and CUDA programming experience, as well as knowledge of AI inference optimization using Quantization, Compression (or) Model Pruning
● Track record of research excellence, with prior publications in top-tier conferences and journals
About Us
upGrad is an online education platform building the careers of tomorrow by offering industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, among others. upGrad is looking for people passionate about management and education to help design learning programs for working professionals, helping them stay sharp and relevant and building the careers of tomorrow.
- Extract and present valuable information from data
- Understand business requirements and generate insights
- Build mathematical models, validate and work with them
- Explain complex topics tailored to the audience
- Validate and follow up on results
- Work with large and complex data sets
- Establish priorities with clear goals and responsibilities to achieve a high level of performance.
- Work in an agile and iterative manner on solving problems
- Proactively evaluate different options and solve problems in innovative ways. Develop new solutions or combine existing methods to create new approaches.
- Good understanding of Digital & analytics
- Strong communication skills, orally and in writing
Job Overview:
As a Data Scientist, you will work in collaboration with our business and engineering teams to create value from data. The work often requires solving complex problems by turning vast amounts of data into business insights through advanced analytics, modeling, and machine learning. You have a strong foundation in analytics, mathematical modeling, and computer science, coupled with a strong business sense. You proactively fetch information from various sources and analyze it to better understand how the business performs. Furthermore, you model and build AI tools that automate certain processes within the company. The solutions produced will be implemented to impact business results.
Primary Responsibilities:
- Develop an understanding of business obstacles, create solutions based on advanced analytics and draw implications for model development
- Combine, explore, and draw insights from data, often large and complex data assets from different parts of the business
- Design and build explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques
- Prototype and pilot new solutions, and be part of the effort to ‘productize’ valuable solutions that can have an impact at a global scale
- Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes
- Identify and interpret trends and patterns in complex data sets to enable the business to make data-driven decisions
Position Name: Software Developer
Required Experience: 3+ Years
Number of positions: 4
Qualifications: Master’s or Bachelor’s degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).
Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting
Compensation - As per industry standards.
Joining - Immediate joining is preferable.
Required Skills:
- Strong Experience in Python and web frameworks like Django, Tornado and/or Flask
- Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
- Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, and dc.js, and data visualization tools like Plotly and ggplot
- Experience handling and using large databases, big data platforms, and data warehouse technologies like MongoDB, MySQL, Snowflake, and Redshift
- Experience in building APIs, Multi-threading for tasks on Linux platform
- Exposure to finance and capital markets will be added advantage.
- Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts.
- Worked on building highly available distributed systems on cloud infrastructure, or had exposure to the architectural patterns of large, high-scale web applications.
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Company Description:
Reval Analytical Services is a wholly owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and Web-enabled information delivery. The Company’s unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.
Website: www.virtuaresearch.com
1) Machine learning development using Python or Scala Spark
2) Knowledge of multiple ML algorithms such as Random Forest, XGBoost, RNNs, CNNs, transfer learning, etc.
3) Awareness of typical challenges in machine learning implementation and the respective applications
Good to have
1) Full-stack development or DevOps team experience
2) Cloud services (AWS, Cloudera), SaaS, PaaS
3) Big data tools and frameworks
4) SQL experience