- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
- Verifying data quality, and/or ensuring it via data cleaning.
- Able to adapt and work quickly to produce output that improves stakeholders' decision-making using ML.
- To design and develop Machine Learning systems and schemes.
- To perform statistical analysis and fine-tune models using test results.
- To train and retrain ML systems and models as and when necessary.
- To deploy ML models in production and manage the cost of cloud infrastructure.
- To develop Machine Learning apps according to client and data scientist requirements.
- To analyze the problem-solving capabilities and use-cases of ML algorithms and rank them by how successful they are in meeting the objective.
- Experience solving real-world problems using ML and deep learning models deployed in production, with strong projects to showcase.
- Proficiency in Python and experience working with Jupyter notebooks and cloud-hosted notebooks such as Google Colab, AWS SageMaker, Databricks, etc.
- Proficiency with libraries such as scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
- Expert in visualising and manipulating complex datasets.
- Proficiency in working with visualisation libraries such as seaborn, plotly, matplotlib etc.
- Proficiency in Linear Algebra, statistics and probability required for Machine Learning.
- Proficiency in ML algorithms, e.g. gradient boosting, stacked/ensemble models, classification algorithms, and deep learning algorithms. Experience in hyperparameter tuning across models and comparing algorithm performance is required.
- Big data technologies such as the Hadoop stack and Spark.
- Basic experience with cloud platforms (e.g. VMs such as EC2).
- Brownie points for Kubernetes and Task Queues.
- Strong written and verbal communications.
- Experience working in an Agile environment.
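The hyperparameter-tuning requirement above could look like the following in practice. This is a minimal, illustrative scikit-learn sketch, not the team's actual workflow; the dataset and parameter grid are placeholders:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative dataset and parameter grid; a real project would use
# its own data and a broader search space.
X, y = load_iris(return_X_y=True)

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    cv=3,
)
grid.fit(X, y)

best_params = grid.best_params_      # the winning combination
best_score = grid.best_score_        # mean cross-validated accuracy
```

The same loop generalizes to comparing different algorithm families: fit one search per model class and compare the cross-validated scores.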
Energy efficiency is the cleanest, quickest and cheapest way to bring more than 300 million Indians out of energy poverty.
By eliminating waste in their own operations, building owners can save money, simplify operations, improve comfort and free up resources for the less fortunate. Smart Joules makes this process seamless and profitable from day one.
Designation – Deputy Manager - TS
- Total of 8-9 years of development experience in Data Engineering. B1/BII role.
- Minimum of 4-5 years in AWS data integration, with strong data modelling skills.
- Very proficient in end-to-end AWS data solution design, including strong data ingestion and integration skills (both data at rest and data in motion) as well as complete DevOps knowledge.
- Should have experience in delivering at least 4 Data Warehouse or Data Lake Solutions on AWS.
- Strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
- Strong Python skills.
- Expert in cloud design principles, performance tuning, and cost modelling; AWS certifications are an added advantage.
- A team player with excellent communication, able to manage their work independently with minimal or no supervision.
- A Life Science & Healthcare domain background is a plus.
The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.
Our ideal candidate
The role would be a client facing one, hence good communication skills are a must.
The candidate should have the ability to communicate complex models and analysis in a clear and precise manner.
The candidate would be responsible for:
- Comprehending business problems properly - what to predict, how to build the DV, what value addition they are bringing to the client, etc.
- Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
- Understanding the math behind algorithms and choosing one over another
- Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy
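Stacking, as mentioned above, can be sketched with scikit-learn's `StackingClassifier`. The base estimators and synthetic data below are illustrative choices, not prescriptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners produce out-of-fold predictions that a final
# (meta) estimator learns to combine.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
accuracy = stack.score(X_te, y_te)
```

Applying it "correctly" mostly means diverse base learners and out-of-fold meta-features, which `StackingClassifier` handles via internal cross-validation.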
Desired technical requirements
- Proficiency with Python and the ability to write production-ready code.
- Experience in PySpark, machine learning, and deep learning.
- Big data experience (e.g. familiarity with Spark and Hadoop) is highly preferred.
- Familiarity with SQL or other databases.
Kapiva is a modern ayurvedic nutrition brand focused on bringing selectively sourced, natural foods to Indian consumers. Drawing on the wisdom of India's ancient food traditions, Kapiva's high-quality product range includes herbal juices, nutrition powders, ayurvedic gummies, healthy staples, and much more. Our products are top performers on online marketplaces such as Amazon, Flipkart, and Big Basket, and we're growing our presence offline in a big way (Nature’s Basket, Reliance Retail, Noble Plus, etc.). We’re also funded by India’s best consumer VC fund, Fireside Ventures.
About the role:
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join us as part of the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website (www.kapiva.in). This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six months. It represents a high-growth, important part of Kapiva’s overall e-commerce growth strategy.
The mandate here is to run an end-to-end sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting edge product (website) that optimizes the customer’s journey as well as increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights in order to drive customer growth, retention, and engagement. They will be required to set up and manage reports, test various hypotheses, and coordinate with various stakeholders on a day-to-day basis.
Strategy and planning:
- Work with the D2C functional leads and support analytics planning on a quarterly/ annual basis
- Identify reports and analytics needed to be conducted on a daily/ weekly/ monthly frequency
- Drive planning for hypothesis-led testing of key metrics across the customer funnel
- Interpret data, analyze results using statistical techniques and provide ongoing reports
- Analyze large amounts of information to discover trends and patterns
- Work with business teams to prioritize business and information needs
- Collaborate with engineering and product development teams to set up data infrastructure as needed
Reporting and communication:
- Prepare reports / presentations to present actionable insights that can drive business objectives
- Set up live dashboards reporting key cross-functional metrics
- Coordinate with various stakeholders to collect useful and required data
- Present findings to business stakeholders to drive action across the organization
- Propose solutions and strategies to business challenges
- Bachelor’s/ Masters in Mathematics, Economics, Computer Science, Information Management, Statistics or related field
- 0-2 years’ experience in an analytics role, preferably in a consumer business. Proven experience as a Data Analyst/Data Scientist.
- High proficiency in MS Excel and SQL
- Knowledge of one or more programming languages like Python/ R. Adept at queries, report writing and presenting findings
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy - working knowledge of statistics and statistical methods
- Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
- Exceptional English written/verbal communication
- A penchant for understanding consumer traits and behavior and a keen eye to detail
Good to have:
- Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
- Experience in using business intelligence tools like Metabase, Tableau, Power BI is a plus
- Experience in developing predictive models and machine learning algorithms
Who Are We
Orbo is a research-oriented company with core expertise in computer vision and artificial intelligence, offering a comprehensive platform of AI-based visual enhancement tools. Companies can find a suitable product for their needs, with deep-learning-powered technology that automatically improves their imagery.
ORBO's solutions help the BFSI, beauty and personal care, and e-commerce industries with digital transformation and image retouching in multiple ways.
- Join a top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation, and deployment
- Tune models to achieve high accuracy and minimal latency
- Deploy computer vision models on edge devices, optimized to meet customer requirements
- Bachelor’s degree
- Deep and broad understanding of computer vision and deep learning algorithms.
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNXruntime and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks such as TensorFlow and PyTorch.
- Proficient understanding of code versioning tools, such as Git
Our perfect candidate is someone who:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
Skit (previously known as Vernacular.ai, http://vernacular.ai/) is an AI-first SaaS voice automation company. Its suite of speech and language solutions enables enterprises to automate their contact centre operations. With over 10 million hours of training data, its product, Vernacular Intelligent Voice Assistant (VIVA), can currently respond in 16+ languages, covering 160+ dialects and replicating human-like conversations.
Skit currently serves a variety of enterprise clients across diverse sectors such as BFSI, F&B, Hospitality, Consumer Electronics, and Travel & Tourism, including prominent clients like Axis Bank, Hathway, Porter, and Barbeque Nation. It has been featured as one of the top-notch start-ups in the Cisco Launchpad’s Cohort 6 and is a part of the World Economic Forum’s Global Innovators Community. It has also been listed in Forbes 30 Under 30 Asia start-ups 2021 for its remarkable industry innovation.
We are looking for ML Research Engineers to work on the following problems:
- Spoken Language Understanding and Dialog Management.
- Language semantics, parsing, and modeling across multiple languages.
- Speech Recognition, Speech Analytics and Voice Processing across multiple languages.
- Response Generation and Speech Synthesis.
- Active Learning, Monitoring and Observability mechanisms for deployments.
- Design, build and evaluate Machine Learning solutions.
- Perform experiments and statistical analyses to draw conclusions and take modeling decisions.
- Study, implement and extend state of the art systems.
- Take part in regular research reviews and discussions.
- Build, maintain and extend our open source solutions in the domain.
- Write well-crafted programs at all levels of the system. This includes the data pipelines, experiment prototypes, fast and scalable deployment models, and evaluation, visualization and monitoring systems.
- Practical Machine Learning experience as demonstrated by earlier works.
- Knowledge of and ability to use tools from theoretical and practical aspects of computer science. This includes, but is not limited to, probability, statistics, learning theory, algorithms, software architecture, programming languages, etc.
- Good programming skills and ability to work with programs at all levels of a finished Machine Learning product. We prefer language agnosticism since that exemplifies this point.
- Git portfolios and blogs are helpful as they let us better evaluate your work.
This policy explains:
- What information we collect during our application and recruitment process and why we collect it;
- How we use that information; and
- How to access and update that information.
This policy covers the information you share with Skit (Cyllid Technologies Pvt. Ltd.) during the application or recruitment process including:
- Your name, address, email address, telephone number and other contact information;
- Your resume or CV, cover letter, previous and/or relevant work experience or other experience, education, transcripts, or other information you provide to us in support of an application and/or the application and recruitment process;
- Information from interviews and phone-screenings you may have, if any;
- Details of the type of employment you are or may be looking for, current and/or desired salary and other terms relating to compensation and benefits packages, willingness to relocate, or other job preferences;
- Details of how you heard about the position you are applying for;
- Reference information and/or information received from background checks (where applicable), including information provided by third parties;
- Information about your educational and professional background from publicly available sources, including online, that we believe is relevant to your application or a potential future application (e.g. your LinkedIn profile); and/or
- Information related to any assessment you may take as part of the interview screening process.
Your information will be used by Skit for the purposes of carrying out its application and recruitment process which includes:
- Assessing your skills, qualifications and interests against our career opportunities;
- Verifying your information and carrying out reference checks and/or conducting background checks (where applicable) if you are offered a job;
- Communications with you about the recruitment process and/or your application(s), including, in appropriate cases, informing you of other potential career opportunities at Skit;
- Creating and/or submitting reports as required under any local laws and/or regulations, where applicable;
- Making improvements to Skit's application and/or recruitment process including improving diversity in recruitment practices;
- Proactively conducting research about your educational and professional background and skills and contacting you if we think you would be suitable for a role with us.
- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Great understanding of designing for performance, scalability, and reliability of data-intensive applications
- Experience with Hadoop MapReduce, Spark, and Pig; understanding of database fundamentals and advanced SQL
- In-depth understanding of object-oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
- Develop REST/JSON APIs; design code for high scale, availability, and resiliency.
- Develop responsive web apps and integrate APIs using NodeJS.
- Presenting chat efficiency reports to senior management
- Develop system flow diagrams to automate a business function and identify impacted systems; metrics to depict the cost benefit analysis of the solutions developed.
- Work closely with business operations to convert requirements into system solutions and collaborate with development teams to ensure delivery of highly scalable and available systems.
- Using tools to classify/categorize chats by intent and computing F1 scores for chat analysis
- Experience analyzing real agent chat conversations to train the chatbot
- Developing Conversational Flows in the chatbot
- Compiling chat efficiency reports.
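Computing an F1 score over chat intents, as described above, is straightforward with scikit-learn. The intent labels below are hypothetical examples:

```python
from sklearn.metrics import f1_score

# Hypothetical intent labels for a batch of analyzed chats.
y_true = ["refund", "refund", "order_status", "greeting", "order_status"]
y_pred = ["refund", "order_status", "order_status", "greeting", "order_status"]

# Macro-averaging gives each intent equal weight regardless of frequency,
# which is useful when some intents are rare.
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

Per-class F1 (`average=None`) is often more actionable for chatbot training, since it shows exactly which intents the model confuses.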
Good to Have:
- Monitors performance and quality control plans to identify opportunities for improvement.
- Works on problems of moderate and varied complexity where analysis of data may require adaptation of standardized practices.
- Works with management to prioritize business and information needs.
- Identifies, analyzes, and interprets trends or patterns in complex data sets.
- Ability to manage multiple assignments.
- Understanding of ChatBot Architecture.
- Experience with chatbot training
- Advanced Spark programming skills
- Advanced Python skills
- Data engineering ETL and ELT skills
- Expertise in streaming data
- Experience in the Hadoop ecosystem
- Basic understanding of cloud platforms
- Technical design skills, including alternative approaches
- Hands-on expertise in writing UDFs
- Hands-on expertise in streaming data ingestion
- Able to independently tune Spark scripts
- Advanced debugging skills and large-volume data handling
- Independently break down and plan technical tasks
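The UDF requirement above boils down to writing a plain Python function that a Spark job registers as a user-defined function. The helper name and column below are hypothetical, and the registration (shown as comments) assumes an active SparkSession:

```python
# Hypothetical helper that a PySpark job would wrap as a UDF.
def normalize_phone(raw):
    """Keep digits only and return the last 10 (Indian mobile format)."""
    digits = "".join(ch for ch in str(raw) if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else digits

# Registration inside a real Spark job (requires pyspark and a
# running SparkSession), shown here as comments:
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import StringType
#   normalize_phone_udf = udf(normalize_phone, StringType())
#   df = df.withColumn("phone", normalize_phone_udf(df["phone_raw"]))
```

Keeping the logic in a plain function makes it unit-testable without Spark; note that built-in Spark SQL functions are usually faster than Python UDFs and should be preferred when they suffice.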