11+ Mathematical Modeling Jobs in Bangalore (Bengaluru)
Apply to 11+ Mathematical modeling Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Mathematical modeling Job opportunities across top companies like Google, Amazon & Adobe.
Glance – An InMobi Group Company:
Glance is an AI-first, Screen Zero content discovery platform that has scaled massively in the last few months into one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone more than 150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users - one unlock at a time. Glance is already live on more than 80 million mobile phones in India, and we are only getting started on this journey! We are now in phase 2 of the Glance story - we are going global!
Roposo, part of the Glance family, is a short-video entertainment platform. All videos are user generated (via upload or Roposo's in-camera creation tools), and many communities create these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels. Some of the channels are HaHa TV (comedy videos), News, and Beats (singing/dance performances), along with For You (personalized for each user) and Your Feed (videos from people a user follows).
What’s the Glance family like?
Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.
What can we promise?
We offer an opportunity to have an immediate impact on the company and our products. Your work will be mission critical for Glance and for optimizing tech operations, alongside highly capable and ambitious peers. At Glance, you get food for your body, soul, and mind with daily meals, gym and yoga classes, cutting-edge training and tools, cocktails at drink-cart Thursdays, and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work.
What will you be doing?
Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high-volume, diverse "big data" sources using advanced mathematical, statistical, querying, and reporting methods. You will use machine learning techniques and statistical analysis to predict outcomes and behaviors, interact with business partners to identify questions for data analysis and experiments, identify meaningful insights from large data and metadata sources, and interpret and communicate those insights - or prepare output from analyses and experiments - for business partners.
You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.
Basic Qualifications:
- Five+ years of experience working in a Data Science role
- Extensive experience developing and deploying ML models in real world environments
- Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
- Exceptional familiarity with Python, Java, Spark or other open-source software with data science libraries
- Experience in advanced math and statistics
- Excellent familiarity with a command-line Linux environment
- Able to understand various data structures and common methods in data transformation
- Experience deploying machine learning models and measuring their impact
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
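As a rough illustration of one of the techniques named above, here is a from-scratch 1-D k-means clustering sketch. This is purely illustrative and not part of the posting's requirements; the function name and data are made up.

```python
# Hypothetical sketch: 1-D k-means clustering, one of the techniques listed above.
def kmeans_1d(points, k, iters=20):
    # Initialize centroids at evenly spaced sample points.
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = sorted(kmeans_1d(data, k=2))  # two well-separated groups
```

In real work you would reach for a vetted library implementation (e.g. scikit-learn's KMeans) rather than a hand-rolled loop; the point here is only the assign-then-update structure that interviewers tend to probe.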
Preferred Qualifications
- Experience developing recommendation systems
- Experience developing and deploying deep learning models
- Bachelor’s or Master's Degree or PhD that included coursework in statistics, machine learning or data analysis
- Five+ years experience working with Hadoop, a NoSQL Database or other big data infrastructure
- Experience in a data science or other research-oriented position
- You would be comfortable collaborating with cross-functional teams.
- Active personal GitHub account.
Required skills and experience:
- Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
- Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
- Experience building monitoring/alerting frameworks with tools like New Relic, and escalations with Slack/email/dashboard integrations, etc.
- Executive-level communication, prioritization, and team leadership skills
A Customer Data Platform-led personalization and real-time marketing automation solution that delivers superior customer experiences resulting in increased conversions, retention, and growth for enterprises.
• 6+ years of data science experience.
• Demonstrated experience in leading programs.
• Prior experience in customer data platforms/finance domain is a plus.
• Demonstrated ability in developing and deploying data-driven products.
• Experience of working with large datasets and developing scalable algorithms.
• Hands-on experience of working with tech, product, and operation teams.
Technical Skills:
• Deep understanding and hands-on experience of machine learning and deep learning algorithms. Good understanding of NLP and LLM concepts, and solid experience developing NLU and NLG solutions.
• Experience with Keras/TensorFlow/PyTorch deep learning frameworks.
• Proficient in scripting languages (Python/Shell), SQL.
• Good knowledge of Statistics.
• Experience with big data, cloud, and MLOps.
Soft Skills:
• Strong analytical and problem-solving skills.
• Excellent presentation and communication skills.
• Ability to work independently and deal with ambiguity.
Continuous Learning:
• Stay up to date with emerging technologies.
Qualification:
A degree in Computer Science, Statistics, Applied Mathematics, Machine Learning, or any related field / B.Tech.
• Create and maintain data pipelines
• Build and deploy ETL infrastructure for optimal data delivery
• Work with various teams, including product, design, and executive teams, to troubleshoot data-related issues
• Create tools for data analysts and scientists to help them build and optimise the product
• Implement systems and process for data access controls and guarantees
• Distill the knowledge from experts in the field outside the org and optimise internal data systems
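The pipeline responsibilities listed above boil down to classic extract-transform-load work. A minimal, purely illustrative sketch follows; the table name, fields, and data are hypothetical and not from this role.

```python
import csv
import io
import sqlite3

# Hypothetical minimal ETL: extract CSV rows, transform (clean + derive), load to SQLite.
RAW = "name,amount\nalice,10\nbob,-3\ncarol,7\n"

def extract(text):
    # Parse the raw CSV into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop invalid records (non-positive amounts) and normalize the name.
    return [(r["name"].title(), int(r["amount"]))
            for r in rows if int(r["amount"]) > 0]

def load(records, conn):
    # Write the cleaned records and report how many landed.
    conn.execute("CREATE TABLE IF NOT EXISTS payments (name TEXT, amount INT)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)  # 2 valid rows survive cleaning
```

A production pipeline would add incremental loads, schema checks, and orchestration (Airflow and similar tools appear elsewhere in these postings), but the extract/transform/load separation stays the same.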
Preferred qualifications/skills:
• 5+ years experience
• Strong analytical skills
Freight Commerce Solutions Pvt Ltd.
• Degree in Computer Science, Statistics, Informatics, Information Systems
• Strong project management and organisational skills
• Experience supporting and working with cross-functional teams in a dynamic environment
• SQL guru with hands on experience on various databases
• NoSQL databases like Cassandra, MongoDB
• Experience with Snowflake, Redshift
• Experience with tools like Airflow, Hevo
• Experience with Hadoop, Spark, Kafka, Flink
• Programming experience in Python, Java, Scala
A proficient, independent contributor who assists in technical design, development, implementation, and support of data pipelines, and who is beginning to mentor less-experienced engineers.
Responsibilities:
- Design, create, and maintain on-premise and cloud-based data integration pipelines.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines to enable BI, Analytics and Data Science teams that assist them in building and optimizing their systems
- Assists in the onboarding, training and development of team members.
- Reviews code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in the design and requirements gathering for technical and non technical work to drive the direction of projects
Technical & Business Expertise:
- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced experience writing SQL for SQL Server
- Proven advanced understanding of Data Lake concepts
- Proven intermediate proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)
XpressBees, a logistics company started in 2015, is among the fastest-growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business, such as B2C logistics, 3PL, B2B Xpress, Hyperlocal, and Cross-border Logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve into the most trusted logistics partner in India. XB has progressively carved its way toward best-in-class technology platforms, an extensive logistics network reach, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as a service provider of choice and leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In this role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and Big Data technologies. You will need to understand business requirements and convert them into solvable data science problem statements. You will be involved in end-to-end AI/ML projects, from smaller-scale POCs all the way to full-scale ML pipelines in production.
Seasoned AI/ML Engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation, and insights, improving the efficiency of the logistics supply chain and serving the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization, while solving challenging problems in the areas of AI, ML, Data Analytics, and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer will work in areas such as:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc.
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs, without needing to build AI expertise in each team - across Decision Support, NLP, and Computer Vision, for public clouds and the enterprise in NLU, Vision, and Conversational AI.
The candidate is adept at working with large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have experience with a variety of data mining/data analysis methods and data tools, with building and implementing models, with using/creating algorithms, and with creating/running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and Design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing external and internal stakeholders' product challenges.
● Build core of Artificial Intelligence and AI Services such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage Cloud technology –AWS, GCP, Azure
● Experiment with ML models in Python using machine learning libraries (Pytorch, Tensorflow), Big Data,
Hadoop, HBase, Spark, etc
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experiences, supply chain metrics, and other business outcomes.
● Develop company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries, such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques require a strong understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, and related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, DigitalOcean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from third-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially deep learning) models to bring down serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
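Several bullets above (the A/B testing framework, statistical tests and their proper usage) reduce to comparing conversion rates between two groups. A self-contained sketch of a two-proportion z-test, with made-up numbers, illustrates the idea:

```python
import math

# Hypothetical two-proportion z-test, the kind of check an A/B framework runs.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                            # positive => B converts better

# Illustrative counts: variant A converts 120/2400, variant B converts 150/2400.
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
```

As a rule of thumb, |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test; a real framework would also handle multiple comparisons, sequential peeking, and sample-size planning.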
- Cloud: GCP
- Must have: BigQuery, Python, Vertex AI
- Nice-to-have services: Dataplex
- Exp level: 5-10 years.
- Preferred Industry (nice to have): Manufacturing – B2B sales
Our client company is in the telecommunications space. (SY1)
- Participate in full machine learning Lifecycle including data collection, cleaning, preprocessing to training models, and deploying them to Production.
- Discover data sources, get access to them, ingest them, clean them up, and make them “machine learning ready”.
- Work with data scientists to create and refine features from the underlying data and build pipelines to train and deploy models.
- Partner with data scientists to understand and implement machine learning algorithms.
- Support A/B tests, gather data, perform analysis, draw conclusions on the impact of your models.
- Work cross-functionally with product managers, data scientists, and product engineers, and communicate results to peers and leaders.
- Mentor junior team members
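The lifecycle bullets above (collect, clean, preprocess, train, validate, deploy) can be shown in miniature. Everything below is illustrative: a toy dataset, a naive split, and a trivial threshold "model" standing in for a real one.

```python
# Illustrative mini-lifecycle: clean raw records, split train/validation,
# fit a trivial threshold "model", and evaluate before "deploying".
raw = [("3.1", 0), ("bad", 1), ("7.2", 1), ("2.5", 0), ("8.0", 1), ("2.9", 0)]

# Clean: drop records whose feature fails to parse ("machine learning ready").
data = []
for x, label in raw:
    try:
        data.append((float(x), label))
    except ValueError:
        continue

train, valid = data[:3], data[3:]   # naive split, for illustration only

# Train: use the midpoint between the class means as a decision threshold.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

# Validate: measure accuracy on the held-out records before deployment.
accuracy = sum((x > threshold) == bool(y) for x, y in valid) / len(valid)
```

A real pipeline would swap the threshold rule for an actual estimator (scikit-learn, TensorFlow, etc.), use proper cross-validation, and wrap deployment in the kind of serving infrastructure the role describes; the stages, however, are exactly these.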
Who we have in mind:
- Graduate in Computer Science or related field, or equivalent practical experience.
- 4+ years of experience in software engineering with 2+ years of direct experience in the machine learning field.
- Proficiency with SQL, Python, Spark, and basic libraries such as Scikit-learn, NumPy, Pandas.
- Familiarity with deep learning frameworks such as TensorFlow or Keras
- Experience with Computer Vision (OpenCV) and NLP frameworks (NLTK, spaCy, BERT).
- Basic knowledge of machine learning techniques (e.g. classification, regression, and clustering).
- Understand machine learning principles (training, validation, etc.)
- Strong hands-on knowledge of data query and data processing tools (e.g. SQL)
- Software engineering fundamentals: version control systems (e.g. Git, GitHub) and workflows, and the ability to write production-ready code.
- Experience deploying highly scalable software supporting millions or more users
- Experience building applications on cloud (AWS or Azure)
- Experience working in scrum teams with Agile tools like JIRA
- Strong oral and written communication skills. Ability to explain complex concepts and technical material to non-technical users
Commoditize data engineering. (X1)
This is the first senior person we are bringing for this role. This person will start with the training program but will go on to build a team and eventually also be responsible for the entire training program + Bootcamp.
We are looking for someone fairly senior who has experience in data + tech. At some level, we have all the technical expertise to teach you the data stack as needed, so it's not super important that you know all the tools. However, basic knowledge of the stack is a requirement. The training program covers 2 parts - Technology (our stack) and Process (how we work with clients) - both of which are super important.
- Full-time flexible working schedule and own end-to-end training
- Self-starter - who can communicate effectively and proactively
- Function effectively with minimal supervision.
- You can train and mentor potential 5x engineers on Data Engineering skillsets
- You can spend time on self-learning and teaching for new technology when needed
- You are an extremely proactive communicator, who understands the challenges of remote/virtual classroom training and the need to over-communicate to offset those challenges.
Requirements
- Proven experience as a corporate trainer, or a passion for teaching/providing training
- Expertise in the Data Engineering space, with good experience in Data Collection, Data Ingestion, Data Modeling, Data Transformation, and Data Visualization technologies and techniques
- Experience training working professionals on in-demand skills like Snowflake, dbt, Fivetran, Google Data Studio, etc.
- Training/implementation experience using Fivetran, dbt Cloud, Heap, Segment, Airflow, or Snowflake is a big plus
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, databases
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud