Natural Language Processing (NLP) Jobs in Bangalore (Bengaluru)
Experience: 2+ years
Responsibilities (but not limited to):
- Create data staging and transformation layers
- Prepare model-ready data
- Create a consumption layer for data/models by exposing them as services
- Maintain and monitor services and ensure scalability
Preferred Skills (but not limited to):
- Strong background in handling data: writing efficient SQL and Python scripts, optimizing queries and loops, designing dataflow jobs, identifying and removing bottlenecks in code, data structures, and design
- Strong background in deploying ML/data as a service by writing APIs, with monitoring, error handling, load balancing, access control, and authentication (see the sketch after this list)
- Conversant with API development tools (e.g., GCP Apigee, FastAPI, Spring Boot)
- Understanding of Apache Airflow, Spark Streaming, and Spark ML
- Familiarity with JavaScript and jQuery UI development and UX design, keeping load balancing and other front-end aspects in mind
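As a rough illustration of the API-deployment skill above, here is a minimal sketch of exposing a trained model as a service with FastAPI; the model file name, feature schema, and endpoint path are illustrative assumptions, not part of the posting.

```python
# Minimal sketch: serve a pickled scikit-learn model behind a FastAPI endpoint.
# "model.pkl" and the feature layout are hypothetical.
import pickle
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # hypothetical path to a trained model
    model = pickle.load(f)

class Features(BaseModel):
    values: List[float]  # one row of model-ready features

@app.post("/predict")
def predict(features: Features):
    # scikit-learn estimators expect a 2-D array: one inner list per row
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run locally with `uvicorn main:app` and POST a JSON body to /predict; monitoring, auth, and load balancing would sit in front of this in a real deployment.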
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and matplotlib (a minimal sketch follows this list).
- Experience in TensorFlow, PyTorch, and/or R modelling.
- Ability to understand a business problem and translate and structure it into a data science problem.
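A minimal sketch of the SQL-plus-scientific-stack workflow the list above describes, using an in-memory SQLite table as a stand-in for a real warehouse; the table and column names are invented for illustration.

```python
import sqlite3

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tiny in-memory table standing in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ad_events (impressions INT, clicks INT, converted INT);
    INSERT INTO ad_events VALUES (10, 1, 0), (50, 9, 1), (12, 0, 0), (80, 15, 1),
                                 (30, 2, 0), (60, 12, 1), (25, 3, 0), (70, 10, 1);
""")

# Pull the data you need with SQL, then model it with the scientific stack.
df = pd.read_sql("SELECT impressions, clicks, converted FROM ad_events", conn)
X_train, X_test, y_train, y_test = train_test_split(
    df[["impressions", "clicks"]], df["converted"], test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```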
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
• 3+ years of experience
• Strong engineering skills: experience in OOP, design patterns, and time- and space-efficient algorithms
• Prior experience building solutions on a public cloud (e.g., AWS, Azure, GCP)
• Prior experience working on NLP/NLU solutions from model training to deployment
• Excellent working knowledge of Python and deep learning frameworks (PyTorch, TensorFlow), and experience working with data science packages within Python
• Working knowledge of containerization and microservices-based solution architecture
• Solid understanding of machine and deep learning algorithms and techniques (transformer-based architectures, different types of embedding models)
• Great team player, willing to wear many hats
• Flexibility to work across timezones
Responsibilities
- You will work across engineering, operations, and AI teams to build and scale practical machine learning solutions that will improve the intelligence of Kelsey AI
- Train and maintain our deep learning models and define conversational flows for our chatbot. This will involve all steps of the data science lifecycle from data collection to model tuning
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.
- Work with the team to define business requirements, design the analytical solution, and deliver it with a focus on the big picture to drive the robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as an expert in data science and build frameworks to develop production-level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relation, at the same time adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in statistical modelling & algorithms, e.g., hypothesis testing, sample size estimation, A/B testing
- Knowledge of mathematical programming (Linear Programming, Mixed Integer Programming, etc.) and stochastic modelling (Markov chains, Monte Carlo, stochastic simulation, queuing models).
- Experience with optimization solvers (Gurobi, CPLEX) and algebraic modelling languages (PuLP)
- Knowledge of GPU code optimization and Spark MLlib optimization.
- Familiarity with deploying and monitoring ML models in production and delivering data products to end-users.
- Experience with ML CI/CD pipelines.
THE IDEAL CANDIDATE WILL
- Engage with executive level stakeholders from client's team to translate business problems to high level solution approach
- Partner closely with practice, and technical teams to craft well-structured comprehensive proposals/ RFP responses clearly highlighting Tredence’s competitive strengths relevant to Client's selection criteria
- Actively explore the client’s business and formulate solution ideas that can improve process efficiency and cut cost, or achieve growth/revenue/profitability targets faster
- Work hands-on across various MLOps problems and provide thought leadership
- Grow and manage large teams with diverse skillsets
- Collaborate, coach, and learn with a growing team of experienced Machine Learning Engineers and Data Scientists
ELIGIBILITY CRITERIA
- BE/BTech/MTech (Specialization/courses in ML/DS)
- At least 7+ years of consulting services delivery experience
- Very strong problem-solving skills & work ethic
- Possesses strong analytical/logical thinking, storyboarding and executive communication skills
- 5+ years of experience in Python/R, SQL
- 5+ years of experience in NLP algorithms, Regression & Classification Modelling, Time Series Forecasting
- Hands on work experience in DevOps
- Should have good knowledge of different deployment types such as PaaS, SaaS, and IaaS
- Exposure to cloud technologies like Azure, AWS or GCP
- Knowledge of Python and packages for data analysis (scikit-learn, scipy, numpy, pandas, matplotlib).
- Knowledge of deep learning frameworks: Keras, TensorFlow, PyTorch, etc.
- Experience with one or more container ecosystems (Docker, Kubernetes)
- Experience in building orchestration pipelines to convert plain Python models into deployable API/RESTful endpoints.
- Good understanding of OOP & Data Structures concepts
Nice to Have:
- Exposure to deployment strategies like Blue/Green, Canary, A/B testing, and Multi-armed Bandit rollouts (see the sketch after this list)
- Experience in Helm is a plus
- Strong understanding of data infrastructure, data warehouse, or data engineering
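For the multi-armed-bandit rollout strategy mentioned above, here is a toy epsilon-greedy sketch that routes traffic between two model versions and shifts it toward the better performer; the version names and reward numbers are made up for illustration.

```python
# Toy epsilon-greedy bandit over two model versions.
import random

versions = ["model_v1", "model_v2"]
counts = {v: 0 for v in versions}
rewards = {v: 0.0 for v in versions}
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_version() -> str:
    if random.random() < EPSILON or any(counts[v] == 0 for v in versions):
        return random.choice(versions)                                # explore
    return max(versions, key=lambda v: rewards[v] / counts[v])        # exploit

def record_outcome(version: str, reward: float) -> None:
    counts[version] += 1
    rewards[version] += reward

# Simulated traffic: pretend v2 converts slightly better than v1.
true_rates = {"model_v1": 0.10, "model_v2": 0.12}
for _ in range(10_000):
    v = choose_version()
    record_outcome(v, 1.0 if random.random() < true_rates[v] else 0.0)

print(counts)  # most traffic should end up on model_v2
```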
You can expect to –
- Work with the world's biggest retailers and help them solve some of their most critical problems. Tredence is a preferred analytics vendor for some of the largest retailers across the globe
- Create multi-million-dollar business opportunities by leveraging an impact mindset, cutting-edge solutions and industry best practices.
- Work in a diverse environment that keeps evolving
- Hone your entrepreneurial skills as you contribute to growth of the organization
Responsibilities:
- Scope and define global strategy with agreed business plan to engage with named GSI partners for both sell to and sell with motion.
- Drive revenue based plan from both Sell to & Sell with motion with an identified set of GSI on global scale.
- Create execution frameworks in terms of joint value proposition, Go To Market and people engagement for ‘Sell With’ opportunities with GSI in identified verticals.
- Create, manage & mature ‘Sell to’ opportunities with GSIs into their existing customer base, for whom they may or may not be extending any kind of process services in the identified set of verticals.
- Create & Manage executive level relationships to align teams, resources and market opportunities from joint partnership standpoint.
- Conduct quarterly reviews on the status of joint partnership as against the business plan with all key stakeholders from both the organisations. Identify variance to business plan and create a follow through plan to bridge the gap.
- Closely monitor all agreements with partners and negotiate agreements related to NDAs, Teaming, Reseller, Joint Development, Integration, etc. between the two organisations
- Foster account-level relationships with Contiinex Sales and Partner Delivery teams, and discover new opportunities for Contiinex market expansion.
- Create a Centre of Excellence within the identified set of GSIs as part of the joint value proposition to showcase the joint strength of Contiinex and the partner's service capabilities.
Focus Area:
- Thoroughly understand Contiinex platform capabilities and positioning in the market and have a good understanding of related and/or competitive products and solutions
- Understand the platform, its products and solutions sold by assigned partners and are able to articulate how Contiinex’s solutions can be integrated with those products and/or solutions.
- Focus on the top partners assigned to them in the market, acquiring significant new sales opportunities and expanding lines of business
- Have a proven network of CxO-level contacts or decision makers/influencers within target partners.
- Actively support Contiinex's strategic indirect sales model by acting as the "master orchestrator" of the sale in the ecosystem.
- Maintain and expand a strong internal network and have a proven ability to creatively pull teams together to meet and address assigned partners’ needs.
Qualifications
- 12+ years of enterprise partner management, expertise and experience in managing GSI.
- Deep experience in the enterprise software business preferably in the insurance, retail & banking industries.
- An expert in account management who ensures that partners achieve their strategic goals in a win-win relationship.
- Proven record in leading and driving global teams to achieve and exceed established goals and objectives.
- Demonstrated track record of career progression preferably at Insurance/Insurtech company/industry in a global capacity
- Solid expertise in mastering a complex technical sales and services product offering.
- Strong analytical skills with the ability to scrutinize and communicate metrics, key performance indicators to internal stakeholders and partners.
- Exceptional customer centric approach with internal and external customers.
CTC - 40% hike on your current salary
As an Associate Manager - Senior Data Scientist you will solve some of the most impactful business problems for our clients using a variety of AI and ML technologies. You will collaborate with business partners and domain experts to design and develop innovative solutions on the data to achieve predefined outcomes.
• Engage with clients to understand current and future business goals and translate business problems into analytical frameworks
• Develop custom models based on an in-depth understanding of underlying data, data structures, and business problems to ensure deliverables meet client needs
• Create repeatable, interpretable and scalable models
• Effectively communicate the analytics approach and insights to a larger business audience
• Collaborate with team members, peers and leadership at Tredence and client companies
Qualification:
1. Bachelor's or Master's degree in a quantitative field (CS, machine learning, mathematics, statistics) or equivalent experience.
2. 5+ years of experience in data science, building hands-on ML models.
3. Experience leading the end-to-end design, development, and deployment of predictive modeling solutions.
4. Excellent programming skills in Python. Strong working knowledge of Python's numerical, data analysis, or AI frameworks such as NumPy, Pandas, Scikit-learn, Jupyter, etc.
5. Advanced SQL skills, with SQL Server and Spark experience.
6. Knowledge of predictive/prescriptive analytics, including machine learning algorithms (supervised and unsupervised), deep learning algorithms and artificial neural networks.
7. Experience with natural language processing (e.g., NLTK) and text analytics for information extraction, parsing and topic modeling (a rough sketch follows this list).
8. Excellent verbal and written communication. Strong troubleshooting and problem-solving skills. Thrives in a fast-paced, innovative environment.
9. Experience with data visualization tools (Power BI, Tableau, R Shiny, etc.) preferred.
10. Experience with cloud platforms such as Azure and AWS is preferred but not required.
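As a rough sketch of the topic-modeling skill in item 7, the snippet below uses scikit-learn's LDA (rather than NLTK) on a few placeholder documents; the corpus and topic count are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "claims data shows rising pharmacy costs",
    "the retailer improved supply chain forecasting",
    "patients filled prescriptions at the pharmacy",
    "forecasting demand helps the supply chain",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)                      # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]  # top 4 words per topic
    print(f"topic {i}:", ", ".join(top))
```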
Job Category: Software Development
Job Type: Full Time
Job Location: Bangalore
Gnani.ai aims to empower enterprises with AI-based speech technology.
Gnani.ai is an AI-based speech recognition and NLP startup working on voice-based solutions for large businesses. AI is the biggest innovation disrupting the market, and we are at the heart of this disruption. Funded by one of the largest global conglomerates in the world and backed by a number of market leaders in the tech industry, we are working with some of the largest companies in the banking, insurance, e-commerce and financial services sectors, and we are not slowing down. With aggressive expansion plans, Gnani.ai aims to be the leader in the global market for voice-based solutions.
Gnani.ai is building the future of voice-based business solutions. If you are fascinated by AI and would like to work on the latest AI technologies in a high-intensity, fast-growing and flexible work environment with immense growth opportunities, come and join us. We are looking for hard workers who are ready to take on big challenges.
NLP Software Developer
Gnani.ai is looking to hire software developers with 0 to 2+ years of experience and a keen interest in designing and developing chat and voice bots. We are looking for an engineer who can work with us on developing an NLP framework, if you have the skill set below.
Requirements :
- Proficient knowledge of Python
- Proficient understanding of code versioning tools, such as Git / SVN.
- Good knowledge of algorithms to find and implement tools for NLP tasks
- Knowledge of NLP libraries and frameworks
- Understanding of text representation techniques, algorithms, statistics
- Syntactic & Semantic Parsing
- Knowledge of / work experience with NoSQL databases such as MongoDB.
- Good knowledge of Docker container technologies.
- Strong communication skills
Responsibilities :
- Develop NLP systems according to requirements
- Maintain NLP libraries and frameworks
- Design and develop natural language processing systems
- Define appropriate datasets for language learning
- Use effective text representations to transform natural language into useful features (see the sketch after this list)
- Train the developed model and run evaluation experiments
- Find and implement the right algorithms and tools for NLP tasks
- Perform statistical analysis of results and refine models
- Constantly keep up to date with the field of machine learning
- Implement changes as needed and analyze bugs
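To illustrate the text-representation and model-evaluation responsibilities above, here is a hedged sketch of a tiny TF-IDF intent classifier; the utterances and intent labels are invented and not from any Gnani.ai system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

utterances = [
    "what is my account balance",
    "show me my balance please",
    "I want to block my card",
    "my card is lost, block it",
]
intents = ["balance", "balance", "block_card", "block_card"]

# TF-IDF turns raw text into numeric features; logistic regression classifies intents.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
print("accuracy per fold:", cross_val_score(clf, utterances, intents, cv=2))

clf.fit(utterances, intents)
print(clf.predict(["please block my card"]))  # expected: ['block_card']
```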
Good To Have :
Startup experience is a plus
JOB DESCRIPTION
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning, NLP and Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders within the company, such as the Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
QUALIFICATIONS
- Masters in Statistics/Mathematics/Economics/Econometrics, or BE/B-Tech, MCA or MBA, with 15+ years of full-time education
ADDITIONAL INFORMATION
- Previous experience working in an Agile & hybrid delivery environment
- Knowledge of VB.Net, C# (C-Sharp), SQL Server, and web services
Position: Senior Speech Recognition Engineer
- Experience : 6+ Years
- Salary : As per market standards
- Location: Bangalore
- Work Mode: Only WFO
- Notice Period: immediate to 30 Days of notice
2-5 yrs of proven experience in ML, DL, and preferably NLP.
Preferred Educational Background - B.E/B.Tech, M.S./M.Tech, Ph.D.
What will you work on?
1) Problem formulation and solution design of ML/NLP applications across complex, well-defined as well as open-ended healthcare problems.
2) Cutting-edge machine learning, data mining, and statistical techniques to analyse and utilise large-scale structured and unstructured clinical data.
3) End-to-end development of company proprietary AI engines - data collection, cleaning, data modelling, model training/testing, monitoring, and deployment.
4) Research and innovate novel ML algorithms and their applications suited to the problem at hand.
What are we looking for?
1) Deep understanding of business objectives and the ability to formulate the problem as a data science problem.
2) Solid expertise in knowledge graphs, graph neural nets, clustering, classification.
3) Strong understanding of data normalization techniques, SVM, Random Forest, and data visualization techniques.
4) Expertise in RNN, LSTM, and other neural network architectures.
5) DL frameworks: TensorFlow, PyTorch, Keras
6) High proficiency with standard database skills (e.g., SQL, MongoDB, graph DBs), data preparation, cleaning, and wrangling/munging.
7) Comfortable with web scraping, extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.
8) Experience with deploying ML models on cloud platforms like AWS or Azure.
9) Familiarity with version control using Git, Bitbucket, SVN, or similar.
Why choose us?
1) We offer competitive remuneration.
2) We give opportunities to work on exciting and cutting-edge machine learning problems so you contribute towards transforming the healthcare industry.
3) We offer the flexibility to choose your tools, methods, and ways to collaborate.
4) We always value and believe in new ideas and encourage creative thinking.
5) We offer an open culture where you will work closely with the founding team and have the chance to influence product design and execution.
6) And, of course, the thrill of being part of an early-stage startup, launching a product, and seeing it in the hands of the users.
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, the Python Scientific Stack; MLFlow, Grafana, Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS; Django, GraphQL and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes, CircleCI/Jenkins for CI/CD, cloud solutions such as AWS, GCP, and Azure as well as Terraform and Cloudformation for deployment
Expert in Machine Learning (ML) & Natural Language Processing (NLP).
Expert in Python, PyTorch and data structures.
Experience in the ML model life cycle (data preparation, model training and testing, and MLOps).
Strong experience in NLP and NLU using transformers & deep learning.
Experience in federated learning is a plus
Experience with knowledge graphs and ontology.
Responsible for developing, enhancing, modifying, optimizing and/or maintaining applications, pipelines and codebase in order to enhance the overall solution.
Experience working with scalable, highly-interactive, high-performance systems/projects (ML).
Design, code, test, debug and document programs as well as support activities for the corporate systems architecture.
Working closely with business partners in defining requirements for ML applications and advancements of solution.
Engage in specifications in creating comprehensive technical documents.
Experience / Knowledge in designing enterprise grade system architecture for solving complex problems with a sound understanding of object-oriented programming and Design Patterns.
Experience in Test Driven Development & Agile methodologies.
Good communication skills - client facing environment.
Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers.
Good to have: working experience in Knowledge Graph-based ML product development, and AWS/GCP-based ML services.
Duties and Responsibilities:
Research and Develop Innovative Use Cases, Solutions and Quantitative Models
Quantitative Models in Video and Image Recognition and Signal Processing for cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment).
Design, Implement and Demonstrate Proof-of-Concept and Working Prototypes
Provide R&D support to productize research prototypes.
Explore emerging tools, techniques, and technologies, and work with academia for cutting-edge solutions.
Collaborate with cross-functional teams and eco-system partners for mutual business benefit.
Team Management Skills
Academic Qualification
7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments
Mandatory Requirements: Bachelor's degree with a STEM background (Science, Technology, Engineering and Mathematics) with a strong quantitative flavour
Innovative and creative in data analysis, problem solving and presentation of solutions.
Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
Strong experience in handling multi-national client engagements
Good verbal, writing & presentation skills
Core Expertise
Excellent understanding of basics in mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis).
Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL
Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, Decision Forests.
Strong math skills (multivariable calculus and linear algebra): understanding the fundamentals of multivariable calculus and linear algebra is important as they form the basis of many predictive performance and algorithm optimization techniques.
Deep learning: CNN, RNN, neural networks, TensorFlow, PyTorch, computer vision
Large-scale data extraction/mining, data cleansing, diagnostics, preparation for modeling
Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques & predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis
Experience with data visualization tools like Tableau, Power BI, Qlik Sense that help to visually encode data
Excellent communication skills: it is incredibly important to describe findings to both technical and non-technical audiences
Capability for continuous learning and knowledge acquisition.
Mentor colleagues for growth and success
Strong software engineering background
Hands-on experience with data science tools
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business, like B2C logistics 3PL, B2B Xpress, Hyperlocal and Cross-border Logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve as the most trusted logistics partner of India. XB has progressively carved its way towards best-in-class technology platforms, an extensive logistics network reach, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees would enrich and scale its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In your role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and BigData technologies. You will need to understand business requirements and convert them into a solvable data science problem statement. You will be involved in end-to-end AI/ML projects, starting from smaller-scale POCs all the way to full-scale ML pipelines in production.
Seasoned AI/ML Engineers would own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation and insights to improve the efficiencies of the logistics supply chain and serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization while solving challenging problems in the areas of AI, ML, Data Analytics and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer would apply himself/herself in the areas of -
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc..
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs, without needing to build AI expertise in each team – Decision Support, NLP and Computer Vision for public clouds and the enterprise in NLU, Vision and Conversational AI.
The candidate is adept at working with large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have knowledge of a variety of data mining/data analysis methods, a variety of data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and deployment of ML models.
● Build cloud services in Decision Support (anomaly detection, time series forecasting, fraud detection, risk prevention, predictive analytics), computer vision, natural language processing (NLP) and speech that work out of the box.
● Brainstorm and design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs and understanding theirs, and address external and internal stakeholders' product challenges.
● Build the core of Artificial Intelligence and AI services such as Decision Support, Vision, Speech, Text, NLP, NLU, and others.
● Leverage cloud technology: AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow), Big Data, Hadoop, HBase, Spark, etc.
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics and other business outcomes.
● Develop the company A/B testing framework and test model quality (a sketch of the underlying statistics follows this list).
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques include a good to phenomenal understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning or related approaches as it applies to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for successful delivery.
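One of the responsibilities above mentions building an A/B testing framework; the sketch below shows the kind of significance check such a framework might wrap, using a chi-squared test from SciPy on fabricated conversion counts.

```python
from scipy.stats import chi2_contingency

# [converted, not converted] for each arm (made-up counts).
control = [420, 9580]
variant = [470, 9530]

chi2, p_value, dof, expected = chi2_contingency([control, variant])
print(f"p-value = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data.")
```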
What is required of you?
You will get an opportunity to build and operate a suite of massive-scale, integrated data/ML platforms in a broadly distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as classification, information retrieval, clustering, knowledge graphs, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, Digital Ocean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from third-party providers: Google Analytics, SiteCatalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for machine learning and complete feedback loops.
● Knowledge of the machine learning lifecycle and experience working with data scientists
● Experience with relational databases and NoSQL databases
● Experience with workflow scheduling/orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g. Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially deep learning) models to bring down serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
Hi,
We are hiring a Data Scientist for Bangalore.
Req Skills:
- NLP
- ML programming
- Spark
- Model Deployment
- Experience processing unstructured data and building NLP models
- Experience with big data tools such as PySpark
- Pipeline orchestration using Airflow and model deployment experience is preferred (a minimal DAG sketch follows)
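A minimal sketch of the Airflow orchestration mentioned above, assuming Airflow 2.x, a daily schedule, and placeholder task bodies; the DAG and task names are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw documents from the source system")

def train():
    print("fit / refresh the NLP model")

def deploy():
    print("push the model artifact to the serving layer")

with DAG(
    dag_id="nlp_model_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_deploy = PythonOperator(task_id="deploy", python_callable=deploy)

    t_extract >> t_train >> t_deploy  # run the steps in order
```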
With 30B+ medical and pharmacy claims covering 300M+ US patients, Compile Data helps life science companies generate actionable insights across different stages of a drug's lifecycle. Through context driven record-linking and machine-learning algorithms, Compile's platform transforms messy and disparate datasets into an intuitive graph of healthcare providers and all their activities.
Responsibilities:
- Help build intelligent systems to cleanse and record-link healthcare data from over 200 sources
- Build tools and ML modules to generate insights from hard to analyse healthcare data, and help solve various business needs of large pharma companies
- Mentoring and growing a data science team
Requirements:
- 4-8 years of experience in building ML models, preferably in healthcare
- Worked with NN and ML algorithms, solved problems using panel and transactional data
- Experience working on record-linking problems and NLP approaches towards text normalization and standardization is a huge plus
- Proven experience as an ML Lead, worked in Python or R; with experience in developing big-data ML solutions at scale and integration with production software systems
- Ability to craft context around key business requirements and present ideas in business- and user-friendly language
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions to machine learning-based security/privacy control
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead OR leading an engineering team
• 4+ years of professional software development experience in the Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow and PyTorch
• Experience with cloud-provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join us. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch or TensorFlow (a minimal training-loop sketch follows this list)
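A bare-bones PyTorch training loop on synthetic data, only to illustrate the kind of hands-on model building the list above asks for; the layer sizes, data, and hyperparameters are arbitrary.

```python
import torch
from torch import nn

X = torch.randn(256, 10)                     # 256 samples, 10 features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # synthetic binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2f}")
```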
Basic Qualifications
- B.Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience in tools and packages like TensorFlow, MLflow, and Airflow (see the tracking sketch after this list)
- Experience in object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
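A small sketch of MLflow experiment tracking, one of the preferred tools listed above; the experiment name, parameters, and dataset are placeholders rather than anything GormalOne-specific.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

mlflow.set_experiment("disease-prediction-poc")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X, y)

    # Log the run so training settings, metrics and the artifact are reproducible.
    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```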
Kindly note: Salary shall be commensurate with qualifications and experience
About the Role:
As a Speech Engineer you will be working on development of on-device multilingual speech recognition systems.
- Apart from ASR you will be working on solving speech focused research problems like speech enhancement, voice analysis and synthesis etc.
- You will be responsible for building complete pipeline for speech recognition from data preparation to deployment on edge devices.
- Reading, implementing and improving baselines reported in leading research papers will be another key area of your daily life at Saarthi.
Requirements:
- 2-3 years of hands-on experience in speech recognition-based projects
- Proven experience as a Speech engineer or similar role
- Should have experience of deployment on edge devices
- Candidate should have hands-on experience with open-source tools such as Kaldi, Pytorch-Kaldi and any of the end-to-end ASR tools such as ESPNET or EESEN or DeepSpeech Pytorch
- Prior proven experience in training and deploying deep learning models at scale
- Strong programming experience in Python, C/C++, etc.
- Working experience with Pytorch and Tensorflow
- Experience contributing to research communities including publications at conferences and/or journals
- Strong communication skills
- Strong analytical and problem-solving skills
Key deliverables for the Data Science Engineer would be to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
What will you do?
- You will be building and deploying ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
- You will be constantly assessing and improving the model using techniques like Transfer learning
- You will identify valuable data sources and automate collection processes along with undertaking pre-processing of structured and unstructured data
- You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
- Assessing the effectiveness and accuracy of new data sources and data gathering techniques.
- Building predictive models and machine-learning algorithms to apply to data sets.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Presenting information using data visualization techniques and proposing solutions and strategies to business challenges
We would love to hear from you if :
- You have 2+ years of experience as a software engineer at a SaaS or technology company
- Demonstrable hands-on programming experience with Python/R Data Science Stack
- Ability to design and implement workflows of Linear and Logistic Regression, Ensemble Models (Random Forest, Boosting) using R/Python
- Familiarity with Big Data Platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda functions, Redshift, Elasticsearch)
- Experience in Probability and Statistics, ability to use ideas of Data Distributions, Hypothesis Testing and other Statistical Tests.
- Demonstrable competency in Data Visualisation using the Python/R Data Science Stack.
- Preferable: experience in web crawling and data scraping
- Strong experience in NLP. Worked on libraries such as NLTK, spaCy, Pattern, Gensim, etc.
- Experience with text mining, pattern matching and fuzzy matching (see the sketch after this list)
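A tiny illustration of the fuzzy-matching skill listed above using only the Python standard library; the merchant names are made up.

```python
from difflib import SequenceMatcher

known_merchants = ["Amazon Pay India", "Flipkart Internet", "Swiggy Limited"]

def best_match(raw: str, candidates: list) -> tuple:
    # Score each candidate by string similarity and return the highest-scoring one.
    scored = [(c, SequenceMatcher(None, raw.lower(), c.lower()).ratio()) for c in candidates]
    return max(scored, key=lambda pair: pair[1])

name, score = best_match("AMZN PAY INDIA PVT", known_merchants)
print(name, round(score, 2))  # the closest canonical merchant name and its similarity
```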
Why Tartan?
- Brand new Macbook
- Stock Options
- Health Insurance
- Unlimited Sick Leaves
- Passion Fund (Invest in yourself or your passion project)
- Wind Down
Job Description:
Role Summary:
The Robotics Process Automation Business Analyst helps define the business case for the proposed automation of business processes by reviewing the current process and identifying the automation potential of the process and the potential FTE takeout. The process architect, working with the customer subject matter experts and the technical architect, designs the steps in the process that can be automated (with or without reengineering), which serves as a basis for the development team to implement the robotics.
The business analyst will also review the design at the design stage and validate the developed automation to ensure it meets the intended design and delivers the business benefits.
B1 – Data Scientist - Kofax Accredited Developers
Total Experience – 7-10 Years
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning, NLP and Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders, such as the Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Good to have
- Previous experience working in an Agile & hybrid delivery environment
- Knowledge of VB.Net, C# (C-Sharp), SQL Server, and web services
Qualification -
- Masters in Statistics/Mathematics/Economics/Econometrics, or BE/B-Tech, MCA or MBA, with 15+ years of full-time education
Job Description – Sr. Python Developer
Job Brief
The job requires Python experience as well as expertise with AI/ML. This developer is expected to have strong technical skills and to work closely with the other team members on developing and managing key projects; to work on a small team with minimal supervision; and to troubleshoot, test and maintain the core product software and databases to ensure strong optimization and functionality.
Job Requirement
- 4+ years of relevant Python experience
- Good communication skills and email etiquette
- Quick learner and a team player
- Experience working with a Python framework
- Experience developing with Python & MySQL on a LAMP/LEMP stack
- Experience developing an MVC application with Python
- Experience with threading, multithreading and pipelines (a short thread-pool sketch follows this list)
- Experience creating RESTful APIs with Python, returning JSON and XML
- Experience designing relational databases using MySQL and writing raw SQL queries
- Experience with GitHub version control
- Ability to write custom Python code
- Excellent working knowledge of AI/ML-based applications
- Experience in OpenCV, TensorFlow, SimpleCV, or PyTorch
- Experience working in an agile software development methodology
- Understanding of the end-to-end ML project lifecycle
- Understanding of cross-platform OS systems like Windows, Linux or UNIX, with hands-on working experience
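A short sketch of the threading requirement above using concurrent.futures; the fetch function is a stand-in for a slow API or database call.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(record_id: int) -> str:
    time.sleep(0.5)            # stand-in for a slow, I/O-bound call
    return f"record-{record_id} processed"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Four workers process the I/O-bound tasks concurrently instead of serially.
    for result in pool.map(fetch, range(8)):
        print(result)
```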
Responsibilities
- Participate in the entire development lifecycle, from planning through implementation, documentation, testing, and deployment, all the way to monitoring.
- Produce high quality, maintainable code with great test coverage
- Integration of user-facing elements developed by front-end developers
- Build efficient, testable, and reusable Python/AI/ML modules
- Solve complex performance problems and architectural challenges
- Help with designing and architecting the product
- Design and develop the web application modules or APIs
- Troubleshoot and debug applications.
Job Description
- Build state-of-the-art language models to understand vernacular languages.
- Build and push machine learning models to optimise the results.
- Consume real-time data and build layers around it to leverage customer understanding.
- Our state-of-the-art models are ingesting and generating relevant search results for our customers weekly. Work directly with our current models to tune them to the abundant inflow of data, as well as architect new ones to further infuse AI into search workflows.
- Be an integral part of Zevi, working on the core tech that makes our product what it is today. It doesn’t stop there though: as we collect more and more insights, get ready to shape the future of search.
Skills and Experience expected:
- Have at least 2 years of experience working with language models: building, fine-tuning, and training them (see the sketch after this list).
- Have closely read NLP publications and implemented some of them.
- Have designed and implemented a scalable ML infrastructure that is both secure and modular.
- Have pushed deep learning models in production.
- Have been responsible for breaking down and solving complex problems.
- Have developed engineering principles and designed processes/workflows.
- Have experience working in Python, sklearn, PyTorch, and TensorFlow, and are an expert in at least one of those technologies.
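A hedged sketch of the kind of building block a vernacular language model starts from: loading a pretrained multilingual transformer with the Hugging Face transformers library and embedding a query. The model name, query, and mean-pooling choice are assumptions for illustration, not Zevi's actual stack.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

query = "sasta mobile phone"  # a vernacular-style search query
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one query vector for downstream ranking.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```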
What can you expect from Zevi?
- Closely work with leading enterprise engineering teams.
- Be a part of a highly motivated core team.
- Get access and contribute to all strategies being built by Zevi.
- Full ownership of your product line.
About Quizizz
Quizizz is one of the fastest-growing EdTech platforms in the world. Our team is on a mission to motivate every student and our learning platform is used by more than 75 million people per month in over 125 countries, including 80% of U.S. schools.
We have phenomenal investors, we’re profitable, and we’re committed to growing and improving every day. If you’re excited about international SaaS and want to build towards a mission that you can be proud of then Quizizz might be a good fit for you.
We currently have offices in India and the U.S. with incredible team members around the world and we hope you’ll join us.
Role
We are looking for an experienced Product Analyst. The role offers an exciting opportunity to shape the future of the product, significantly. The team is responsible for supporting all decisions being taken by other teams, to improve growth, engagement and revenue of the platform. Furthermore, the team sets up and maintains internal tools, apps, dashboards, processes and functions as arbiters of information within the organization.
The variety of tasks is immense and will give you the chance to play to your strengths. Tasks could include; improving search and recommendations, data mining, identifying potential customers, ad-hoc analyses, creating APIs for internal consumption et cetera.
Some of the challenges you will face include:
- Working cross-functionally with design, engineering, sales and marketing teams to aid in decision making.
- Analyzing and concluding experiments on new product features.
- Creating, maintaining, and modifying internal dashboards, apps and reports being used, as part of the larger analytics function at Quizizz.
- Deep diving into data to extract insights that could help explain a certain phenomenon.
- Organizing the analytics warehouse, as and when new data is added.
Requirements:
- At least 2 years of industry experience, providing solutions to business problems in a cross-functional team.
- Versatility to communicate clearly with both technical and non-technical audiences.
- An SQL expert, and strong programming skills. (Python preferred)
- Mathematical thinking.
- Attention to Detail.
Good to have:
- Experience with Jupyter (/iPython) notebooks.
- Experience using a data visualization tool such as Tableau, Google Data Studio, Qlikview, Power BI, RShiny.
- Ability to create simple data apps/APIs. (We use Flask or Node.js.)
- Knowledge of Natural Language Processing techniques.
- Data analytical and data engineering experience.
Benefits:
At Quizizz, we have built a world-class team of talented individuals. While we all care deeply about our work, we also ensure that we maintain a healthy work-life balance. Our policies are designed to ensure the well-being and comfort of our employees. Some of the benefits we offer include:
- Healthy work-life balance. Put in your 8 hours, and enjoy the rest of your day.
- Flexible leave policy. Take time off when you need it.
- Comprehensive health coverage of Rs. 6 lakhs, covering the employee and their parents, spouse and children. Pre-existing conditions are covered from day 1, and also benefits like free doctor consultations and more.
- Relocation support including travel and accommodation, and we'll also pay for a broker to find your home in Bangalore!
- Rs. 20,000 annual health and wellness allowance.
- Professional development support. We will reimburse you for relevant courses and books that you need to become a better professional.
- Delicious Meals including breakfast and lunch served at office, and a fully-stocked pantry for all your snacking needs.
1+ years of proven experience in ML/AI with Python
Work with the manager through the entire analytical and machine learning model life cycle:
⮚ Define the problem statement
⮚ Build and clean datasets
⮚ Exploratory data analysis
⮚ Feature engineering
⮚ Apply ML algorithms and assess the performance
⮚ Code for deployment
⮚ Test and troubleshoot the code
⮚ Communicate analysis to stakeholders
Technical Skills
⮚ Proven experience in usage of Python and SQL
⮚ Excellent programming and statistics skills
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Work Location - Bangalore
The Data Analytics Senior Analyst is a seasoned professional role. Applies in-depth disciplinary knowledge, contributing to the development of new techniques and the improvement of processes and work-flow for the area or function. Integrates subject matter and industry expertise within a defined area. Requires in-depth understanding of how areas collectively integrate within the sub-function as well as coordinate and contribute to the objectives of the function and overall business. Evaluates moderately complex and variable issues with substantial potential impact, where development of an approach/taking of an action involves weighing various alternatives and balancing potentially conflicting situations using multiple sources of information. Requires good analytical skills in order to filter, prioritize and validate potentially complex and dynamic material from multiple sources. Strong communication and diplomacy skills are required. Regularly assumes informal/formal leadership role within teams. Involved in coaching and training of new recruits. Significant impact in terms of project size, geography, etc. by influencing decisions through advice, counsel and/or facilitating services to others in area of specialization. Work and performance of all teams in the area are directly affected by the performance of the individual.
Responsibilities:
- Build and enhance the software stack for modelling and data analytics
- Incorporate relevant data related algorithms in the products to solve business problems and improve them over time
- Automate repetitive data modelling and analytics tasks
- Keep up to date with available relevant technologies, to solve business problems
- Become a subject matter expert and work closely with analytics users to understand their needs and provide recommendations/solutions
- Help define/share best practices for the business users and enforce/monitor that best practices are being incorporated for better efficiency (speed to market & system performance)
- Share daily/weekly progress made by the team
- Work with senior stakeholders & drive the discussions independently
- Mentor and lead a team of software developers on analytics related product development practices
Qualifications:
- 10-13 years of data engineering experience.
- Experience in working on machine-learning model deployment/scoring, model lifecycle management and model performance measurement.
- In-depth understanding of statistics and probability distributions, with experience applying them in big-data software products to solve business problems
- Hands-on programming experience with big-data and analytics related product development using Python, Spark and Kafka to provide solutions for business needs.
- Intuitive, with good interpersonal skills, time management, and task prioritization
- Ability to lead a technical team of software developers and mentor them on good software development practices.
- Ability to quickly grasp the business problem and nuances when put forth.
- Ability to quickly put together an execution plan and see it through till closure.
- Strong communication, presentation and influencing skills.
Education:
- Bachelor’s/University degree or equivalent experience
- Data Science or Analytics specialization preferred
Purpose of Job:
Responsible for leading a team of analysts to build and deploy predictive models that infuse core business functions with deep analytical insights. The Senior Data Scientist will also work closely with the Kinara management team to investigate strategically important business questions.
Job Responsibilities:
Lead a team through the entire analytical and machine learning model life cycle:
Define the problem statement
Build and clean datasets
Exploratory data analysis
Feature engineering
Apply ML algorithms and assess the performance
Code for deployment
Code testing and troubleshooting
Communicate Analysis to Stakeholders
Manage Data Analysts and Data Scientists
Qualifications:
Education: MS/MTech/BTech graduates or equivalent, with a focus on data science and quantitative fields (CS, Engineering, Mathematics, Economics)
Work Experience: 5+ years in a professional role with 3+ years in ML/AI
Other Requirements: ⮚ Domain knowledge in Financial Services is a big plus
Skills & Competencies
Technical Skills
⮚ Aptitude in Math and Stats
⮚ Proven experience in the use of Python, SQL, DevOps
⮚ Excellent in programming (Python), stats tools, and SQL
⮚ Working knowledge of tools and utilities - AWS, Git, Selenium, Postman, Prefect, Airflow, PySpark
Soft Skills
⮚ Deep Curiosity and Humility
⮚ Strong communication skills, verbal and written
Company Name: Curl Tech
Location: Bangalore
Website: www.curl.tech
Company Profile: Curl Tech is a deep-tech firm, based out of Bengaluru, India. Curl works on developing Products & Solutions leveraging emerging technologies such as Machine Learning, Blockchain (DLT) & IoT. We work on domains such as Commodity Trading, Banking & Financial Services, Healthcare, Logistics & Retail.
Curl has been founded by technology enthusiasts with rich industry experience. Products and solutions developed at Curl have gone on to achieve considerable success and have in turn become separate companies (focused on that product/solution).
If you are looking for a job that will challenge you, and you want to work with an organization that disrupts the entire value chain, Curl is the right place for you!
Designation: Data Scientist or Junior Data Scientist (according to experience)
Job Description:
Strong in machine learning and deep learning, with good programming and mathematics skills.
Details: The candidate will be working on many image analytics/numerical data analytics projects. The work involves data collection, building machine learning models, deployment, client interaction, and publishing academic papers.
Responsibilities:
- The candidate will be working on many image analytics/numerical data projects.
- The candidate will build various machine learning models depending upon the requirements.
- The candidate will be responsible for deployment of the machine learning models.
- The candidate will be the face of the company in front of clients and will have regular client interactions to understand the client requirements.
What we are looking for in candidates:
- Basic understanding of statistics, time series, machine learning, deep learning, and their fundamentals and mathematical underpinnings.
- Proven code proficiency in Python, C/C++, or any other AI language of choice.
- Strong algorithmic thinking, creative problem solving, and the ability to take ownership and do independent research.
- Understanding how things work internally in ML and DL models is a must.
- Understanding of the fundamentals of computer vision and image processing techniques would be a plus.
- Expertise in OpenCV and in ML/neural network technologies and frameworks such as PyTorch and TensorFlow would be a plus.
- Educational background in any quantitative field (Computer Science / Mathematics / Computational Sciences and related disciplines) will be given preference.
Education: BE/ BTech/ B.Sc.(Physics or Mathematics)/Masters in Mathematics, Physics or related branches.
Chatbot Developer
We are a conversational AI product development company located in the USA and Bangalore.
We are looking for a Senior Chatbot/JavaScript Developer to join the Avaamo PSG (delivery) team.
Responsibilities:
- Independent team member for analyzing requirements, designing, coding, and implementing Conversation AI products.
- As a product expert, work closely with IT Managers and Business Groups to gather requirements and translate them into the required technical solution.
- Drive solution implementation using the Conversational design approach.
- Develop, deploy and maintain customized extensions to the Avaamo platform-specific to customer requirements.
- Conduct training and technical guidance sessions for partner and customer development teams.
- Evaluate reported defects and correct prioritized defects.
- Travel onsite to customer locations for close support.
- Document how to's and implement best practices for Avaamo product solutions.
Requirements:
- Strong programming experience in javascript, HTML/CSS.
- Experience of creating and consuming REST APIs and SOAP services.
- Strong knowledge and awareness of Web Technologies and current web trends.
- Working knowledge of Security in Web applications and services.
- Experience in using the NodeJS framework, with a good understanding of the underlying architecture.
- Experience deploying web applications on Linux servers in a production environment.
- Excellent communication skills.
Good to haves:
- Full-stack experience; UI/UX design experience or insights
- Working knowledge of AI, ML and NLP.
- Experience of enterprise systems integration like MS Dynamics CRM, Salesforce, ServiceNow, MS Active Directory etc.
- Experience of building Single Sign On in web/mobile applications.
- Ability to learn latest technologies and handle small engineering teams.
About us: Nexopay helps transform digital payments and enables instant financing for parents across schools and colleges worldwide.
Responsibilities:
- Work with stakeholders throughout the organisation and across entities to identify opportunities for leveraging internal and external data to drive business impact
- Mine and analyze data to improve and optimise performance, capture meaningful insights and turn them into business advantages
- Assess the effectiveness and accuracy of new data sources and data gathering techniques
- Develop custom data models and algorithms to apply to data sets
- Use predictive modeling to predict outcomes and identify key drivers
- Coordinate with different functional teams to implement models and monitor outcomes
- Develop processes and tools to monitor and analyze model performance and data accuracy
Requirements:
- Experience in solving business problems using descriptive analytics and statistical modelling / machine learning
- 2+ years of strong working knowledge of SQL
- Experience with visualization tools, e.g., Tableau, Power BI
- Working knowledge of handling analytical projects end to end using industry-standard tools (e.g., R, Python)
- Strong presentation and communication skills
- Experience in education sector is a plus
- Fluency in English
- Machine Learning & Deep Learning – Strong: experienced in TensorFlow, PyTorch, ONNX, object detection, and pretrained models like YOLO, SSD, Faster R-CNN, etc.
- Python – Strong: NumPy, Pandas, OpenCV
- Problem Solving – Strong
- C++ – Average: working experience with C++ in any domain is a plus
Note: Looking for candidates with an immediate to 30-day notice period.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
Developing audio algorithms to detect key moments within popular online games, such as:
Streamer speaking, shouting, etc.
Gunfire, explosions, and other in-game audio events
Speech-to-text and sentiment analysis of the streamer’s narration
Leveraging baseline technologies such as TensorFlow and others -- and building models on top of them
Building neural network architectures for audio analysis as it pertains to popular games (a minimal sketch follows this list)
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
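As a rough illustration of this kind of work, the sketch below converts a short audio clip into a log-mel spectrogram with librosa and runs it through a small TensorFlow CNN; the clip path, window length, number of event classes, and architecture are assumptions, not a description of Sizzle's actual models.

    # Illustrative sketch: log-mel spectrogram features + a small CNN classifier.
    # The clip path, window length and class count are hypothetical.
    import librosa
    import numpy as np
    import tensorflow as tf

    y, sr = librosa.load("clip.wav", sr=16000, duration=2.0)   # one 2-second window
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)                          # shape: (64, time_frames)
    x = log_mel[np.newaxis, ..., np.newaxis]                    # add batch and channel dims

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=log_mel.shape + (1,)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),         # e.g. gunfire / explosion / speech / other
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    print(model.predict(x).shape)   # (1, 4) class probabilities for the window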
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTM for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams.
For this role, we're looking for someone that ideally loves to watch video gaming content on Twitch and YouTube. Specifically, you will help generate training data for all the AI we are building. This will include gathering screenshots, clips and other data from gaming videos on Twitch and YouTube. You will then be responsible for labeling and annotating them. You will work very closely with our AI engineers.
You will:
- Gather training data as specified by the management and engineering team
- Label and annotate all the training data
- Ensure all data is prepped and ready to feed into the AI models
- Revise the training data as specified by the engineering team
- Test the output of the AI models and update training data needs
You should have the following qualities:
- Willingness to work hard and hit deadlines
- Work well with people
- Be able to work remotely (if not in Bangalore)
- Interested in learning about AI and computer vision
- Willingness to learn rapidly on the job
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Data labeling, annotation, AI, computer vision, gaming
Work Experience: 0 years to 3 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Should be highly technical, with hands-on experience in Artificial Intelligence, Machine Learning, and Python. Responsible for managing the successful delivery of projects through efficient planning and coordination.
KEY RESPONSIBILITIES OF THE POSITION :
- Create Technical Design for AI, Machine Learning, Deep Learning, NLP, NLU, NLG projects and implement the same in production.
- Solid understanding and experience of deep learning architectures and algorithms
- Working experience with AWS, most importantly AWS SageMaker, Aurora or MongoDB, Analytics and reporting.
- Experience solving problems in the industry using deep learning methods such as recurrent neural networks (RNN, LSTM), convolutional neural nets, auto-encoders, etc. (a minimal sketch follows this list)
- Should have experience of 2-3 production implementations of machine learning projects.
- Knowledge of open-source libraries such as Keras, TensorFlow, PyTorch
- Work with business analysts/consultants and other necessary teams to create a strong solution
- Should have in-depth understanding and experience of Data Science and Machine Learning projects using Python, R, etc. Skills in Java/C are a plus
- Should have experience developing solutions using Python in AI/ML projects
- Should be able to train and build a team of technical developers
- Desired to have experience as leads in designing and developing applications/tools using Microsoft technologies - ASP.Net, C#, HTML5, MVC
- Desired to have knowledge on any of the cloud solutions such as Azure or AWS
- Desired to have knowledge on any of container technology such as Docker
- Should be able to build strong relationships with project stakeholders
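By way of illustration only, a minimal recurrent-network (LSTM) text classifier in Keras, of the kind referenced above, could be sketched as follows; the vocabulary size, sequence length, and binary task are assumptions.

    # Illustrative sketch of an LSTM text classifier in Keras; the vocabulary size,
    # sequence length and binary label are assumptions for illustration.
    import tensorflow as tf

    vocab_size, max_len = 20000, 200
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(max_len,)),
        tf.keras.layers.Embedding(vocab_size, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()
    # model.fit(padded_token_ids, labels, epochs=3, validation_split=0.1)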
Keywords:
- Python
- Artificial Intelligence
- Machine Learning
- AWS
- Django
- NLP
Essenvia is a cloud-based SaaS platform that helps medical device companies reduce the time and cost of bringing medical devices to market. Its product suite includes a collaborative multi-user platform for preparing regulatory submissions and a document management system, streamlining the medical device regulatory pathway.
We are looking for a savvy Machine Learning Engineer to join our team based out of Bangalore. The hire will be responsible for creating and managing proprietary data sets for machine learning algorithms using various conventional and non-conventional data sources. The Engineer will support initiatives and will ensure an optimal data delivery architecture for machine learning models. The right candidate will be excited by the prospect of becoming a key member in designing the data architecture to support our next generation of products, and must be self-driven and able to work to tight timelines in a start-up culture.
Responsibilities
---------------------
Extract key information from various data sources
Process documents using OCR and extract key entities (see the sketch after this list)
Extract blocks of relevant texts using pattern recognition
Prepare structured and unstructured data pipeline for machine learning models
Assemble large, complex datasets from multiple data sources.
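A minimal sketch of the OCR-plus-extraction step might look like the following; the page image and the regex patterns are hypothetical examples, and pytesseract assumes the Tesseract binary is installed.

    # Illustrative sketch: OCR a scanned page and pull key entities with regex.
    # The file name and patterns are hypothetical; requires the Tesseract OCR binary.
    import re
    import pytesseract
    from PIL import Image

    text = pytesseract.image_to_string(Image.open("submission_page.png"))

    patterns = {
        "device_id": r"Device\s*ID[:\s]+([A-Z0-9-]+)",
        "date": r"\b(\d{2}/\d{2}/\d{4})\b",
        "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    }
    entities = {name: re.findall(p, text) for name, p in patterns.items()}
    print(entities)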
Mandatory Skills
---------------------
Knowledge of algorithms and data structures
Programming Knowledge in Python and Java
Knowledge of Text mining/ Text extraction/ Regex matching
Knowledge of OCR
Experience in data cleaning, ETL, pipeline building and model-maintenance using Airflow
Knowledge of Elasticsearch, Neo4j, and GraphQL
Desirable Skills
-----------------
Knowledge of NLP
Knowledge of preparing and using custom corpora
Prior experience in medical science datasets
Exposure to Deep Learning applications and tools like TensorFlow, Theano is preferred
Job Description
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/Advance understanding of Machine Learning /NLP/ Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian/UI Path/Automation Anywhere etc
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders, like Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Qualifications
- Masters in Statistics/Mathematics/Economics/Econometrics Or BE/B-Tech, MCA or MBA with 15+ years of full time education
Additional information
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.Net, C# (C-Sharp), SQL Server, Web services
- 4+ years of experience. Solid understanding of Python, Java and general software development skills (source code management, debugging, testing, deployment etc.).
- Experience in working with Solr and ElasticSearch.
- Experience with NLP technologies and the handling of unstructured text.
- Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, POS tagging etc. (a minimal sketch follows this list).
- Prior experience in implementation of traditional ML solutions for classification, regression or clustering problems.
- Expertise in text analytics - Sentiment Analysis, Entity Extraction, Language Modelling - and associated sequence learning models (RNN, LSTM, GRU).
- Comfortable working with deep-learning libraries (eg. PyTorch)
- Candidate can even be a fresher with 1 or 2 years of experience; IIIT, BITS Pilani, and top 5 local colleges and universities are preferred.
- A Masters candidate in machine learning.
- Can source candidates from Mu Sigma and Manthan.
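For reference, a minimal sketch of the pre-processing steps named above (tokenisation, lemmatisation, POS tagging, stemming) with spaCy and NLTK; the example sentence is arbitrary and the small English model is assumed to be installed.

    # Illustrative sketch of tokenisation, lemmatisation and POS tagging with spaCy,
    # plus stemming with NLTK. Assumes: python -m spacy download en_core_web_sm
    import spacy
    from nltk.stem import PorterStemmer

    nlp = spacy.load("en_core_web_sm")
    stemmer = PorterStemmer()

    doc = nlp("The analysts were normalising thousands of unstructured documents.")
    for token in doc:
        if not token.is_stop and not token.is_punct:
            print(token.text, token.lemma_, token.pos_, stemmer.stem(token.text))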
Responsibilities Description:
Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to: Design machine learning projects to address specific business problems determined by consultation with business partners. Work with data sets of varying degrees of size and complexity, including both structured and unstructured data. Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis. Implement batch and real-time model scoring to drive actions. Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions. Develop sophisticated visualization of analysis output for business users.
Experience Requirements:
BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related Quantitative disciplines. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.
Required Technical Skill Set:
- Full cycle of building machine learning solutions:
o Understanding of wide range of algorithms and their corresponding problems to solve
o Data preparation and analysis
o Model training and validation
o Model application to the problem
- Experience using the full range of open-source programming tools and utilities
- Experience in working in end-to-end data science project implementation.
- 2+ years of experience with development and deployment of Machine Learning applications
- 2+ years of experience with NLP approaches in a production setting
- Experience in building models using bagging and boosting algorithms (see the sketch after this list)
- Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
- Ability to write efficient code with good understanding of core Data Structures/algorithms is critical
- Strong Python skills following software engineering best practices
- Experience in using code versioning tools like Git, Bitbucket
- Experience in working in Agile projects
- Comfort & familiarity with SQL and the Hadoop ecosystem of tools, including Spark
- Experience managing big data with efficient query programs is good to have
- Good to have experience in training ML models in tools like SageMaker, Kubeflow, etc.
- Good to have experience in frameworks to depict interpretability of models using libraries like LIME, SHAP, etc.
- Experience with Health care sector is preferred
- MS/M.Tech or PhD is a plus
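As a quick illustration of bagging versus boosting, the sketch below compares a random forest (bagging) with gradient boosting by cross-validation; the synthetic dataset and metric are placeholders.

    # Illustrative sketch: a bagging ensemble (random forest) vs. a boosting ensemble
    # (gradient boosting), compared by cross-validated AUC on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    for name, model in [
        ("bagging (random forest)", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("boosting (gradient boosting)", GradientBoostingClassifier(random_state=0)),
    ]:
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: mean AUC = {scores.mean():.3f}")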
engineering
2. Preferably should have done some project or internship related to the field
3. Knowledge of SQL is a plus
4. A deep desire to learn new things and be a part of a vibrant start-up.
5. You will have a lot of freedom, and this comes with immense responsibility - so it is expected that you will be willing to master new things that come along!
Job Description:
1. Design and build a pipeline to train models for NLP problems like classification and NER
2. Develop APIs that showcase our models' capabilities and enable third-party integrations (a minimal sketch follows this list)
3. Work across a microservices architecture that processes thousands of documents per day.
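A minimal sketch of exposing such a model behind an HTTP API with FastAPI is shown below; the endpoint name and the toy rule standing in for a real model are placeholders.

    # Illustrative sketch of serving an NLP model over an API with FastAPI;
    # the endpoint and the toy rule standing in for a trained model are placeholders.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Document(BaseModel):
        text: str

    @app.post("/classify")
    def classify(doc: Document):
        # In practice this would call the trained classification/NER model.
        label = "invoice" if "amount due" in doc.text.lower() else "other"
        return {"label": label}

    # Run with: uvicorn main:app --reload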
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/Advance understanding of Machine Learning /NLP/ Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian/UI Path/Automation Anywhere etc
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders within the organization, like Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 Years in BPO/KPO/ ITES/BFSI/Retail/Travel/Utilities/Service Industry
Good to have
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.Net, C# (C-Sharp), SQL Server, Web services
Qualification -
- Masters in Statistics/Mathematics/Economics/Econometrics Or BE/B-Tech, MCA or MBA
One large human need is that of sharing thoughts and connecting with like-minded people.
Koo was founded in March 2020 as a micro-blogging platform in Indian languages and English.
Technology Team & Culture
Job Description
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc. (a minimal sketch follows this list)
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
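For illustration, a minimal daily pipeline expressed as an Airflow 2.x DAG might look like the sketch below; the DAG id, schedule, and task bodies are assumptions.

    # Illustrative sketch of a daily extract-transform-load pipeline as an Airflow DAG;
    # the DAG id, schedule and task bodies are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw events from MySQL / MongoDB")

    def transform():
        print("clean and aggregate the extracted data")

    def load():
        print("write model-ready tables to the warehouse")

    with DAG(
        dag_id="daily_feed_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        (PythonOperator(task_id="extract", python_callable=extract)
         >> PythonOperator(task_id="transform", python_callable=transform)
         >> PythonOperator(task_id="load", python_callable=load))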
Preference for candidates working in tech product companies
Glance – An InMobi Group Company:
Glance is an AI-first Screen Zero content discovery platform, and it has scaled massively in the last few months into one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone >150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users - one unlock at a time. Glance is live on more than 80 million mobile phones in India already, and we are only getting started on this journey! We are now into phase 2 of the Glance story - we are going global!
Roposo is part of the Glance family. It is a short video entertainment platform. All the videos created here are user generated (via upload or Roposo creation tools in camera) and there are many communities creating these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels, some of the channels are - HaHa TV (for comedy videos), News, Beats (for singing/ dance performances) along with a For You (personalized for a user) and Your Feed (for videos of people a user follows).
What’s the Glance family like?
Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.
What can we promise?
We offer an opportunity to have an immediate impact on the company and our products. The work that you shall do will be mission critical for Glance and will be critical for optimizing tech operations, working with highly capable and ambitious peer groups. At Glance, you get food for your body, soul, and mind with daily meals, gym, and yoga classes, cutting-edge training and tools, cocktails at drink cart Thursdays and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work.
What you will be doing?
Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high volume, diverse "big data" sources using advanced mathematical, statistical, querying, and reporting methods. Will use machine learning techniques and statistical analysis to predict outcomes and behaviors. Interacts with business partners to identify questions for data analysis and experiments. Identifies meaningful insights from large data and metadata sources; interprets and communicates insights and or prepares output from analysis and experiments to business partners.
You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.
Basic Qualifications:
- Five+ years experience working in a Data Science role
- Extensive experience developing and deploying ML models in real world environments
- Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
- Exceptional familiarity with Python, Java, Spark or other open-source software with data science libraries
- Experience in advanced math and statistics
- Excellent familiarity with command line linux environment
- Able to understand various data structures and common methods in data transformation
- Experience deploying machine learning models and measuring their impact
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
Preferred Qualifications
- Experience developing recommendation systems
- Experience developing and deploying deep learning models
- Bachelor’s or Master's Degree or PhD that included coursework in statistics, machine learning or data analysis
- Five+ years experience working with Hadoop, a NoSQL Database or other big data infrastructure
- Experience with being actively engaged in data science or other research-oriented position
- You would be comfortable collaborating with cross-functional teams.
- Active personal GitHub account.
Someone who has strong industrial experience in NLP for a period of 2+ years. Experienced in applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and Deep Learning techniques. In-depth exposure to some of the tools/techniques: SpaCy, NLTK, Gensim, CoreNLP, NLU, NLG tools, etc. Ability to design and develop a practical analytical approach keeping in mind data quality and availability, feasibility, scalability, and turnaround time. Desirable to have demonstrated capability to integrate NLP technologies to improve the chatbot experience. Exposure to frameworks like DialogFlow, RASA NLU, LUIS is preferred. Contributions to open-source software projects are an added advantage.
Experience in analyzing large amounts of user-generated content and process data in large-scale environments using cloud infrastructure is desirable