About Artivatic.ai
Artivatic (https://www.artivatic.ai/) is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million plus people to get insurance, financial access, and health benefits using alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence helping them do business more intelligently & seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour analysis, auto insurance claims, travel insurance, disease prediction for insurance and more.
Your Day-to-Day
- Assist our Growth Strategists in analyzing the results of A/B experiments.
- Analyze customer engagement rates and customer acquisition projects (e.g. churn-rate prediction, attribution)
- Analyze marketing channel performance and deliver deep-dive reports to stakeholders and management
- Build statistical experimentation templates for faster A/B test outputs (a minimal sketch follows this list)
- Work on forecasting models and assist senior management in creating frameworks for growth models
- Lead local implementation of marketing analytics projects related to improving marketing channel effectiveness, customer segmentation, campaign optimization, etc.
- Monitor campaigns against key performance indicators (KPIs), staying fully aware of trends, analytics, successes and risks in order to achieve business objectives
- Translate complex ideas into understandable reports/documentation, leveraging leading software tools such as Tableau
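A minimal sketch of how such an A/B readout might look in Python, assuming a simple conversion-rate comparison with a two-proportion z-test (the counts and the statsmodels choice are illustrative, not part of the role description):

```python
# Hypothetical A/B experiment readout: two-proportion z-test on
# illustrative conversion counts (not real campaign data).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([420, 470])    # control, variant
visitors = np.array([10000, 10000])   # users exposed to each arm

z_stat, p_value = proportions_ztest(conversions, visitors)
rates = conversions / visitors
print(f"control {rates[0]:.2%} vs variant {rates[1]:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```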
Your Know-How
- 3-5 years of experience in strategy/consulting/analytical/project management roles; experience in e-commerce, start-ups or unicorns (CARS24, OLA, SWIGGY, FLIPKART, OYO) or entrepreneurial experience preferred, plus at least 2 years of experience leading a team
- Top-notch academics from a Tier 1 college (IIM / IIT/ NIT)
- Must have SQL/PostgreSQL/Tableau Experience.
- Added advantage: Experience with Google Analytics, CRM (MoEngage, Braze, Leanplum).
- Preferred knowledge of statistical computer languages (Python / R, etc).
- Analytical mindset with ability to present data in a structured and informative way
- Enjoy a fast-paced environment and can align business objectives with product priorities
Location: Chennai
Education: BE/BTech
Experience: Minimum 3+ years of experience as a Data Scientist/Data Engineer
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
- To be part of Digital Manufacturing and Industrie 4.0 projects across client group of companies
- Design and develop AI/ML models to be deployed across factories
- Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
- Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
- Prior experience in developing AI and ML models is required
- Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
- Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics (a minimal sketch follows this list)
- Multitasking and good communication skills are necessary
- Entrepreneurial attitude
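A minimal sketch of the kind of asset-performance model the role describes, assuming an unsupervised anomaly-detection baseline on manufacturing sensor readings (the column names, thresholds and synthetic data are assumptions for illustration, not from the posting):

```python
# Hypothetical anomaly-detection baseline for manufacturing sensor data.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "power_kw": rng.normal(50, 5, 500),        # assumed energy reading
    "vibration_mm_s": rng.normal(2.0, 0.3, 500),
    "temperature_c": rng.normal(70, 4, 500),
})

model = IsolationForest(contamination=0.02, random_state=0)
df["anomaly"] = model.fit_predict(df)  # -1 flags a suspected anomaly

print(df[df["anomaly"] == -1].head())
```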
Additional Information:
- Travel: Must be willing to travel for short durations within India and abroad
- Job Location: Chennai
- Reporting to: Team Leader, Energy Management System
Experience 3 to 8 Years
Skill Set
- Experience in algorithm development with a focus on signal processing, pattern recognition, machine learning, classification, data mining, and other areas of machine intelligence.
- Ability to analyse data streams from multiple sensors and develop algorithms to extract accurate and meaningful sport metrics.
- Should have a deep understanding of IMU sensors and biosensors like HRM and ECG
- A good understanding of power and memory management on embedded platforms
- Expertise in the design of multitasking, event-driven, real-time firmware using C and understanding of RTOS concepts
- Knowledge of Machine learning, Analytical and methodical approaches to data analysis and verification and Python
- Prior experience in fitness algorithm development using IMU sensors (a minimal sketch follows this list)
- Interest in fitness activities and knowledge of human body anatomy
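A minimal sketch of one such fitness metric, assuming step counting from the accelerometer magnitude of an IMU via low-pass filtering and peak detection (the sampling rate, thresholds and synthetic signal are assumptions for illustration):

```python
# Hypothetical step counting from IMU accelerometer magnitude.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 50  # Hz, assumed IMU sampling rate
t = np.arange(0, 10, 1 / fs)
# Synthetic walking signal: ~2 steps per second plus noise, in g.
accel_mag = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(t.size)

# Low-pass filter to suppress high-frequency noise before peak picking.
b, a = butter(4, 5 / (fs / 2), btype="low")
filtered = filtfilt(b, a, accel_mag)

# Peaks above a magnitude threshold, at least 0.3 s apart, count as steps.
peaks, _ = find_peaks(filtered, height=1.1, distance=int(0.3 * fs))
print("estimated steps:", len(peaks))
```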
Job Description
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Skills
- Bachelor's/Master's/PhD in CS or equivalent industry experience
- Demonstrated expertise of building and shipping cloud native applications
- 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases such as Druid or equivalents like Hive
- Strong industry expertise with containerization technologies including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure or GCP especially with EKS, Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages. Python experience highly desirable. Experience in API development using Swagger
- Implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration tools such as Jenkins
Responsibilities
- Architect, design and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid (a minimal PySpark sketch follows this list)
- Create custom Operators for Kubernetes, Kubeflow
- Develop data ingestion processes and ETLs
- Assist in dev ops operations
- Design and Implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
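A minimal sketch of such a pipeline, assuming a Kafka topic read with PySpark Structured Streaming and a windowed aggregation (the broker address, topic name, schema and console sink are illustrative assumptions; a real deployment would also need the spark-sql-kafka package and a proper sink such as Druid):

```python
# Hypothetical streaming pipeline: Kafka -> PySpark -> windowed counts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("event-pipeline-sketch").getOrCreate()

# Assumed JSON payload schema for the "events" topic.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

# Read the Kafka topic as a streaming DataFrame and parse the JSON payload.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "events")
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Windowed aggregation: events per type per minute, with a watermark.
counts = (events
          .withWatermark("ts", "5 minutes")
          .groupBy(F.window("ts", "1 minute"), "event_type")
          .count())

# Console sink for the sketch; a real pipeline would write to a durable store.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```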
NLP + Knowledge Graph Engineer
at award-winning California-headquartered analytics product co
- Strong industrial experience in NLP for 2+ years.
- Experienced in applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional & deep learning techniques (a minimal spaCy sketch follows below).
- In-depth exposure to some of the tools/techniques: SpaCy, NLTK, Gensim, CoreNLP, NLU, NLG tools etc.
- Ability to design & develop a practical analytical approach keeping in mind data quality & availability, feasibility, scalability, and turnaround time.
- Desirable to have demonstrated capability to integrate NLP technologies to improve the chatbot experience. Exposure to frameworks like DialogFlow, RASA NLU, LUIS is preferred.
- Contributions to open-source software projects are an added advantage.
Experience in analyzing large amounts of user-generated content and process data in large-scale environments using cloud infrastructure is desirable
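A minimal sketch of one of the techniques named above, assuming spaCy-based knowledge extraction (named entities plus a crude subject-verb-object pull from the dependency parse; the model name and sample sentence are assumptions for illustration):

```python
# Hypothetical knowledge-extraction sketch with spaCy.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = "Artivatic, based in Bangalore, builds AI products for insurance and healthcare."
doc = nlp(text)

# Named entities are one building block for knowledge-graph nodes.
for ent in doc.ents:
    print(ent.text, ent.label_)

# A crude (subject, verb, object) triple from the dependency parse.
for token in doc:
    if token.dep_ == "ROOT":
        subj = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        obj = [w for w in token.rights if w.dep_ in ("dobj", "attr", "pobj")]
        if subj and obj:
            print((subj[0].text, token.lemma_, obj[0].text))
```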
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams.
For this role, we're looking for someone that ideally loves to watch video gaming content on Twitch and YouTube. Specifically, you will help generate training data for all the AI we are building. This will include gathering screenshots, clips and other data from gaming videos on Twitch and YouTube. You will then be responsible for labeling and annotating them. You will work very closely with our AI engineers.
You will:
- Gather training data as specified by the management and engineering team
- Label and annotate all the training data
- Ensure all data is prepped and ready to feed into the AI models
- Revise the training data as specified by the engineering team
- Test the output of the AI models and update training data needs
You should have the following qualities:
- Willingness to work hard and hit deadlines
- Work well with people
- Be able to work remotely (if not in Bangalore)
- Interested in learning about AI and computer vision
- Willingness to learn rapidly on the job
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Data labeling, annotation, AI, computer vision, gaming
Work Experience: 0 years to 3 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Job Summary:
Independently handle the delivery of analytics assignments by mentoring a team of 3-10 people and delivering work that exceeds client expectations
Responsibilities :
- Co-ordinate with onsite company consultants to ensure high quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques etc.)
- Take part in organizational development activities to take the company to the next level
Qualification, Skills & Prior Work Experience :
- Great analytical skills, detail-oriented approach
- Sound knowledge of MS Office tools like Excel and PowerPoint, and data visualization tools like Tableau, Power BI or similar
- Strong experience in SQL, Python, SAS, SPSS, Statistica, R, MATLAB or similar tools is preferable
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- Preference given to candidates with analytics work experience
- Programming skills: Java/Python/SQL and OOP-based programming knowledge
Job Location : Chennai, Work from Home will be provided until COVID situation improves
Note :
- Minimum one year of experience needed
- Only 2019 and 2020 pass-outs are eligible
- An aggregate above 70% throughout studies is required
- Post-graduation is a must
Role : Sr Data Scientist / Tech Lead – Data Science
Number of positions : 8
Responsibilities
- Lead a team of data scientists, machine learning engineers and big data specialists
- Be the main point of contact for the customers
- Lead data mining and collection procedures
- Ensure data quality and integrity
- Interpret and analyze data problems
- Conceive, plan and prioritize data projects
- Build analytic systems and predictive models
- Test performance of data-driven products
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
Requirements (please read carefully)
- Very strong in statistics fundamentals. Not all data is Big Data; the candidate should be able to derive statistical insights from very few data points if required, using traditional statistical methods (a small worked example follows this list).
- MSc Statistics / PhD Statistics
- Education – no bar, but preferably from a Statistics academic background (e.g. MSc Stats, MSc Econometrics), given the first point
- Strong expertise in Python (any other statistical languages/tools like R, SAS, SPSS etc are just optional, but Python is absolutely essential). If the person is very strong in Python, but has almost nil knowledge in the other statistical tools, he/she will still be considered a good candidate for this role.
- Proven experience as a Data Scientist or similar role, for about 7-8 years
- Solid understanding of machine learning and AI concepts, especially with respect to the choice of apt candidate algorithms for a use case, and model evaluation.
- Good expertise in writing SQL queries (should not be dependent upon anyone else for pulling in data, joining them, data wrangling etc)
- Knowledge of data management and visualization techniques --- more from a Data Science perspective.
- Should be able to grasp business problems, ask the right questions to better understand the problem breadthwise /depthwise, design apt solutions, and explain that to the business stakeholders.
- Again, the last point above is extremely important: the candidate should be able to identify solutions that can be explained to stakeholders, and present them in simple, direct language.
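A small worked example of the "few data points" analysis mentioned above, assuming a traditional small-sample comparison with Welch's t-test and a t-based confidence interval (the numbers are made up for illustration):

```python
# Hypothetical small-sample comparison using traditional statistics.
import numpy as np
from scipy import stats

control = [12.1, 11.8, 12.6, 12.0, 11.5]
variant = [13.0, 12.7, 13.4, 12.9, 13.1]

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence interval for the variant mean using the t distribution.
mean = np.mean(variant)
sem = stats.sem(variant)
ci = stats.t.interval(0.95, df=len(variant) - 1, loc=mean, scale=sem)
print(f"variant mean = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```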
https://www.youtube.com/watch?v=3nUs4YxppNE&feature=emb_rel_end
AI/ML, NLP, Chatbot Developer
Responsibilities:
- Develop REST/JSON APIs; design code for high scale/availability/resiliency.
- Develop responsive web apps and integrate APIs using NodeJS.
- Present chat efficiency reports to senior management
- Develop system flow diagrams to automate a business function and identify impacted systems; metrics to depict the cost benefit analysis of the solutions developed.
- Work closely with business operations to convert requirements into system solutions and collaborate with development teams to ensure delivery of highly scalable and available systems.
- Use tools to classify/categorize chats based on intents and compute an F1 score for chat analysis (a minimal sketch follows this list)
- Analyze real agent chat conversations to train the chatbot
- Develop conversational flows in the chatbot
- Calculate chat efficiency reports.
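A minimal sketch of the F1-score computation mentioned above, assuming scikit-learn and illustrative intent labels (not real chat data):

```python
# Hypothetical F1 score for chat intent classification.
from sklearn.metrics import classification_report, f1_score

# Ground-truth intents vs. intents predicted by the chatbot/NLU model.
y_true = ["refund", "refund", "order_status", "greeting", "order_status", "refund"]
y_pred = ["refund", "order_status", "order_status", "greeting", "order_status", "refund"]

print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print(classification_report(y_true, y_pred))
```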
Good to Have:
- Monitors performance and quality control plans to identify performance gaps.
- Works on problems of moderate and varied complexity where analysis of data may require adaptation of standardized practices.
- Works with management to prioritize business and information needs.
- Identifies, analyzes, and interprets trends or patterns in complex data sets.
- Ability to manage multiple assignments.
- Understanding of ChatBot Architecture.
- Experience in chatbot training
Senior Systems Engineer – Big Data
at Couture.ai
- Knowledge of Hadoop ecosystem installation, initial configuration and performance tuning
- Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes and Docker
- Knowledge of Python would be desirable
- Experience with HDP Manager/clients and various dashboards
- Understanding of Hadoop security (Kerberos, Ranger and Knox), encryption and data masking
- Experience with automation/configuration management using Chef, Ansible or an equivalent
- Strong experience with any Linux distribution
- Basic understanding of network technologies, CPU, memory and storage
- Database administration is a plus
Qualifications and Education Requirements
2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions and dashboards running on Big Data technologies such as Hadoop/Spark.
Bachelor's degree or equivalent in Computer Science, Information Technology or related fields.