Data Analyst - Biglittle.ai

at Codejudge

Posted by Vaishnavi M
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
Full time
Skills
SQL
Python
Data architecture
Data mining
Data Analytics
Job description
  • The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
  • Develop the company's A/B testing framework and test model quality (a minimal example follows this list).
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
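
A minimal sketch of the kind of A/B test evaluation referred to above, assuming a simple two-variant experiment measured on conversion rate; the counts and the helper name two_proportion_z_test are illustrative, not part of the role description:

    # Illustrative only: evaluating a two-variant A/B test with a two-proportion z-test.
    # The conversion counts below are made-up numbers, not company data.
    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (z, two-sided p-value) for the difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return z, 2 * norm.sf(abs(z))

    z, p = two_proportion_z_test(conv_a=310, n_a=5000, conv_b=262, n_b=5000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # significant at alpha = 0.05 if p < 0.05; sign of z gives direction

A full framework would add sample-size planning, guardrail metrics, and sequential or multiple-testing corrections on top of a primitive like this.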

Roles & Responsibilities

  • Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience working with and creating data architectures.
  • 3-7 years of experience manipulating data sets and building statistical models.
  • Bachelor's or Master's degree in Computer Science or another quantitative field.
  • Knowledge of and experience with statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.
  • Experience querying databases and using statistical computing languages: R, Python, SQL, etc. (see the sketch after this list).
  • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
  • Experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
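
As a hedged illustration of the querying-plus-modeling skills listed above, the sketch below pulls a feature table over SQL and fits a GLM and a random forest baseline; the database file, table, and column names are hypothetical:

    # Illustrative sketch: query a feature table with SQL, then fit baseline models.
    import sqlite3
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    conn = sqlite3.connect("analytics.db")  # hypothetical database
    df = pd.read_sql("SELECT sessions, spend, tenure_days, churned FROM customer_features", conn)

    X, y = df[["sessions", "spend", "tenure_days"]], df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

    for name, model in [("logistic regression (GLM)", LogisticRegression(max_iter=1000)),
                        ("random forest", RandomForestClassifier(n_estimators=200, random_state=42))]:
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: test AUC = {auc:.3f}")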

About Codejudge

We help developers find great companies to work at. Candidates pre-screened by us are fast-tracked for onsite interviews at top tech companies.
Founded: 2019
Type: Product
Size: 20-100 employees
Stage: Raised funding

Similar jobs

Data Scientist

at Top startup of India - News App

Agency job
via Jobdost
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
Deep Learning
Python
PySpark
MongoDB
Hadoop
Spark
Noida
6 - 10 yrs
₹35L - ₹65L / yr
This is an individual contributor role; only candidates from Tier 1/2 and product-based companies may apply.

Requirements-

● B.Tech/Master's in Mathematics, Statistics, Computer Science, or another quantitative field
● 2-3+ years of work experience in the ML domain (2-5 years of overall experience)
● Hands-on coding experience in Python
● Experience in machine learning techniques such as Regression, Classification, Predictive Modeling, Clustering, the Deep Learning stack, and NLP
● Working knowledge of TensorFlow/PyTorch (a minimal sketch follows this list)
Optional Add-ons-
● Experience with distributed computing frameworks: Map/Reduce, Hadoop, Spark etc.
● Experience with databases: MongoDB
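
A minimal PyTorch sketch of the classification work implied by the requirements above; the data here is synthetic and the architecture is arbitrary, chosen only to keep the example self-contained:

    # Toy binary classifier in PyTorch on synthetic data.
    import torch
    import torch.nn as nn

    X = torch.randn(512, 20)                           # 512 samples, 20 features (synthetic)
    y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)   # toy target derived from two features

    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        accuracy = ((model(X) > 0).float() == y).float().mean().item()
    print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2%}")
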
Job posted by
Shalaka ZawarRathi

Senior Executive / Assistant Manager - Web Analytics

at MyGlamm

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Adobe
Google Analytics
Tableau
MS-Excel
Data mining
Web Analytics
Metrics
Delhi
2 - 4 yrs
₹9L - ₹11L / yr
Roles and Responsibilities
  • Own all performance and funnel metrics with respect to merchandising; ensure these metrics are institutionalized across teams and run performance cadences with the respective stakeholders.
  • Use a variety of tools to extract and analyze data generated by online user activity.
  • Report your findings with data visualizations that are easy to understand.
  • Determine key performance indicators for campaign tracking and analysis.
  • Generate and communicate insights and provide solutions that have demonstrable results.
Qualifications & Experience
  • 2-4 years of relevant experience.
  • Ability to work well under pressure and successfully meet deadlines.
  • Previous data mining and analysis experience.
  • Strong stakeholder management skills, sharp business and analytical acumen, and an analytical understanding and know-how of e-commerce metrics.
  • Familiarity with major web analytics tools such as Adobe Analytics, Google Analytics, and Tableau.
  • Strong verbal and visual communication skills to present and explain insights.
  • Excellent attention to detail and accuracy.
  • An intermediate understanding of MS Excel is a must.
Job posted by
megha jadhav

Senior Software Engineer/Technical Lead - Data Fabric

at IDfy

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
Data Warehouse (DWH)
Informatica
ETL
ETL architecture
Responsive Design
Apache Beam
InfluxDB
SQL
OLAP
Mumbai
3 - 10 yrs
₹15L - ₹45L / yr

Who is IDfy?

 

IDfy is the Fintech ScaleUp of the Year 2021. We build technology products that identify people accurately. This helps businesses prevent fraud and engage with the genuine with the least amount of friction. If you have opened an account with HDFC Bank or ordered from Amazon and Zomato or transacted through Paytm and BharatPe or played on Dream11 and MPL, you might have already experienced IDfy. Without even knowing it. Well…that’s just how we roll. Global credit rating giant TransUnion is an investor in IDfy. So are international venture capitalists like MegaDelta Capital, BEENEXT, and Dream Incubator. Blume Ventures is an early investor and continues to place its faith in us. We have kept our 500 clients safe from fraud while helping the honest get the opportunities they deserve. Our 350-people strong family works and plays out of our offices in suburban Mumbai. IDfy has run verifications on 100 million people. In the next 2 years, we want to touch a billion users. If you wish to be part of this journey filled with lots of action and learning, we welcome you to be part of the team!

 

What are we looking for?

 

As a senior software engineer in the Data Fabric POD, you will be responsible for producing and implementing functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects. You will be working with a data management and science platform that provides Data as a Service (DaaS) and Insight as a Service (IaaS) to internal employees and external stakeholders.

 

You are an eager-to-learn, technology-agnostic engineer who loves working with data and drawing insights from it. You have excellent organizational and problem-solving skills and are looking to build the tools of the future. You have exceptional communication and leadership skills and the ability to make quick decisions.

 

YOE: 3 - 10 yrs

Position: Sr. Software Engineer/Module Lead/Technical Lead

 

Responsibilities:

  • Breaking down work and orchestrating the development of components for each sprint.
  • Identifying risks and forming contingency plans to mitigate them.
  • Liaising with team members, management, and clients to ensure projects are completed to standard.
  • Inventing new approaches to detecting existing fraud. You will also stay ahead of the game by predicting future fraud techniques and building solutions to prevent them.
  • Developing Zero Defect Software that is secured, instrumented, and resilient.
  • Creating design artifacts before implementation.
  • Developing Test Cases before or in parallel with implementation.
  • Ensuring software developed passes static code analysis, performance, and load test.
  • Developing various kinds of components (such as UI Components, APIs, Business Components, image Processing, etc. ) that define the IDfy Platforms which drive cutting-edge Fraud Detection and Analytics.
  • Developing software using Agile Methodology and tools that support the same.

 

Requirements:

  • Hands-on experience with Apache Beam, ClickHouse, Grafana, InfluxDB, Elixir, BigQuery, and Logstash.
  • An understanding of Product Development Methodologies.
  • Strong understanding of relational databases, especially SQL, and hands-on experience with OLAP.
  • Experience creating data ingestion and ETL pipelines (Apache Beam or Apache Airflow experience is good to have); a minimal Beam sketch follows this list.
  • Strong design skills in defining API Data Contracts / OOAD / Microservices / Data Models.
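
A hedged sketch of the batch ingestion work mentioned in the list above, using the Apache Beam Python SDK; the file paths, field names, and parsing logic are placeholders rather than IDfy's actual pipeline:

    # Minimal Beam batch pipeline: read JSON lines, parse, filter, write out.
    import json
    import apache_beam as beam

    def parse_event(line: str) -> dict:
        event = json.loads(line)
        return {"id": event["id"], "type": event.get("type", "unknown")}

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "ReadRaw" >> beam.io.ReadFromText("raw_events.jsonl")
            | "Parse" >> beam.Map(parse_event)
            | "DropUnknown" >> beam.Filter(lambda e: e["type"] != "unknown")
            | "Serialize" >> beam.Map(json.dumps)
            | "Write" >> beam.io.WriteToText("clean_events")
        )

The same pipeline shape runs on a local runner or a distributed runner, which is one reason Beam is listed alongside Airflow for ingestion work.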

 

Good to have:

  • Experience with TimeSeries DBs (we use InfluxDB) and Alerting / Anomaly Detection Frameworks.
  • Visualization Layers: Metabase, PowerBI, Tableau.
  • Experience in developing software in the Cloud such as GCP / AWS.
  • A passion to explore new technologies and express yourself through technical blogs.
Job posted by
Stuti Srivastava
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
Hadoop
SQL server
SSIS
ADF
Python
Java
talend
Azure Data Factory
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹13L / yr

Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with data management and data warehouse development, including:

  • Star schemas, Data Vaults, RDBMS, and ODS
  • Change data capture
  • Slowly changing dimensions (a pandas sketch follows this list)
  • Data governance
  • Data quality
  • Partitioning and tuning
  • Data stewardship
  • Survivorship
  • Fuzzy matching
  • Concurrency
  • Vertical and horizontal scaling
  • ELT, ETL
  • Spark, Hadoop, MPP, RDBMS
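
A toy pandas illustration of the Type 2 slowly-changing-dimension pattern named in the list above; the table and column names are invented for the example:

    # Type 2 SCD: close out changed rows and append a new current version.
    import pandas as pd

    dim = pd.DataFrame({
        "customer_id": [1, 2],
        "city": ["Pune", "Mumbai"],
        "valid_from": pd.to_datetime(["2023-01-01", "2023-01-01"]),
        "valid_to": [pd.NaT, pd.NaT],          # NaT marks the current version
    })
    incoming = pd.DataFrame({"customer_id": [1, 2], "city": ["Pune", "Delhi"]})
    load_date = pd.Timestamp("2024-06-01")

    current = dim[dim["valid_to"].isna()]
    merged = incoming.merge(current, on="customer_id", suffixes=("_new", ""))
    changed = merged[merged["city_new"] != merged["city"]]

    # Expire the old versions, then append the new ones.
    expire_mask = dim["customer_id"].isin(changed["customer_id"]) & dim["valid_to"].isna()
    dim.loc[expire_mask, "valid_to"] = load_date
    new_rows = changed[["customer_id", "city_new"]].rename(columns={"city_new": "city"}).copy()
    new_rows["valid_from"], new_rows["valid_to"] = load_date, pd.NaT
    dim = pd.concat([dim, new_rows], ignore_index=True)
    print(dim)

In a warehouse this is typically done with a MERGE statement or the ETL tool's built-in SCD component; the pandas version only shows the bookkeeping.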

Experience with DevOps architecture, implementation, and operation

Hands-on working knowledge of Unix/Linux

Ability to build complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues

Complex ETL program design and coding

Experience in shell scripting and batch scripting

Good communication (oral and written) and interpersonal skills

Work closely with business teams to understand their needs, participate in requirements gathering, create artifacts, and seek business approval

Help the business define new requirements; participate in end-user meetings to derive and define business requirements; propose cost-effective solutions for data analytics; and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery

Propose sound designs and solutions and ensure adherence to best design and standards practices

Review and propose industry-best tools and technologies for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks

Prepare the plan, design and document the architecture, high-level topology design, and functional design; review them with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards, and techniques

Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies

Work with functional business analysts to ensure that application programs are functioning as defined

Capture user feedback/comments on the delivered systems and document them for the client's and project manager's review; review all deliverables before final delivery to the client for quality adherence

Technologies (Select based on requirement)

Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift

Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory

Utilities for bulk loading and extracting

Languages – SQL, PL-SQL, T-SQL, Python, Java, or Scala

JDBC/ODBC, JSON

Data virtualization and data services development

Service Delivery - REST, Web Services

Data Virtualization Delivery – Denodo

 

ELT, ETL

Cloud certification (Azure)

Complex SQL queries

Data ingestion, data modeling (domain), consumption (RDBMS)
Job posted by
Jerrin Thomas

Data Scientist

at Symansys Technologies India Pvt Ltd

Founded 2014  •  Products & Services  •  employees  •  Profitable
Data Science
Machine Learning (ML)
Python
Tableau
R Programming
SQL server
Pune, Mumbai
2 - 8 yrs
₹5L - ₹15L / yr

Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forest, SAS, clustering, classification

Senior Analytics Consultant- Responsibilities

  • Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
  • Conceptualize and design a cutting-edge data science solution to solve the problem, applying design-thinking concepts
  • Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
  • Prototype and experiment with the solution to successfully demonstrate its value
  • Independently, or with support from the team, execute the conceptualized solution as planned, following project management guidelines
  • Present the results to internal and client stakeholders in an easy-to-understand manner with strong storytelling, storyboarding, insights, and visualization
  • Help build overall data science capability for eClerx by supporting pilots, pre-sales pitches, product development, and practice development initiatives
Job posted by
Tanu Chauhan

Technical Architect

at CarDekho

Founded 2008  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Hive
Python
Amazon Web Services (AWS)
Go Programming (Golang)
NodeJS (Node.js)
Elastic Search
Cassandra
Microservices
Amazon Redshift
Data Warehouse (DWH)
NCR (Delhi | Gurgaon | Noida)
6 - 10 yrs
₹50L - ₹70L / yr
We at CarDekho are looking for a self-motivated technology leader in big data, data warehousing, search, and analytics technologies who has hands-on development experience architecting scalable backend systems with high volumes of data.

Responsibilities: 
  • Architect and design our customers' data-driven applications and solutions, and own the back-end technology stack
  • Develop architectures that are inherently secure, robust, scalable, modular, and API-centric
  • Build distributed backend systems serving real-time analytics and machine learning features at scale
  • Own the scalability, performance, and performance metrics of complex distributed systems.
  • Apply architecture best practices that help increase execution velocity
  • Collaborate with the key stakeholders, like business, product, and other technology teams
  • Mentor junior members in the team
Requirement: 
  • Excellent Academic Background (MS/B.Tech from a top tier university)
  • 6-10 years of experience in backend architecture and development with large data volumes
  • Extensive hands-on experience with the Big Data ecosystem (e.g., Hadoop, Spark, Presto, Hive), databases (e.g., MySQL, PostgreSQL), NoSQL stores (e.g., MongoDB, Cassandra), and data warehousing (e.g., Redshift)
  • Deep expertise with search engines like Elasticsearch and JavaScript environments like Node.js
  • Experience in cloud-based technology solutions with scale and robustness
  • Strong data management and migration experience including proficiency in data warehousing, data quality, and analysis.
  • Experience in the development of microservices/REST APIs
  • Experience with Agile and DevOps development methodology and tools like Jira, Confluence
  • Understanding/exposure to complete product development cycle
Job posted by
Shilpa Jangid

Data Scientist

at CarDekho

Founded 2008  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
TensorFlow
Deep Learning
Python
Keras
Decision Science
Neural networks
NodeJS (Node.js)
Logistic regression
Random forest
Naive Bayes
Bayesian Network
convex optimization
NCR (Delhi | Gurgaon | Noida)
3 - 6 yrs
₹20L - ₹50L / yr

We at CarDekho create technology products used daily by millions of people. We're seeking an experienced data scientist to deliver that insight to us on a daily basis. Our ideal team member will have strong mathematical and statistical expertise. As you mine, interpret, and clean our data, we will rely on you to ask questions, connect the dots, and uncover the opportunities that lie hidden within, all with the ultimate goal of realizing the data's full potential.

Responsibilities:-

  • 3+ years of industrial experience in predictive modeling and analysis, predictive software development.
  • Experience in mentoring junior team members, and guiding them on machine learning and data modeling applications.
  • Strong communication and data presentation skills. Experience implementing ML algorithms such as Logistic Regression, Naive Bayes, Bayesian Network, Decision Tree, Neural Network, SVM, Random Forest, convex optimization, transfer learning.
  • Hands-on experience on a minimum of 2 projects involving either ML or deep learning (Theano, Keras, or TensorFlow).
  • Excellent organizational and analytical skills. Exposure to REST concepts.
  • Expert knowledge developing and debugging in Node.js or Python.
  • Contribute to the production solutions development, testing, and deployment

Requirement:-

  • B.Tech/M.Tech/Ph.D. in Computer Science, or a degree in Statistics or Applied Mathematics, from Tier 1 institutes
  • 4+ years of experience in data science
  • Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc.
  • Proficiency with data mining, mathematics, and statistical analysis
  • Advanced pattern recognition and predictive modeling experience
Job posted by
Shilpa Jangid

Backend Engineer

at Venture Highway

Founded 2015  •  Products & Services  •  0-20 employees  •  Raised funding
Python
Data engineering
Data Engineer
MySQL
MongoDB
Celery
Apache
Data modeling
RESTful APIs
Natural Language Processing (NLP)
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
- Experience with Python and data scraping.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc. (a Celery sketch follows this list).
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
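
A hedged sketch of the kind of distributed pipeline task mentioned above, using Celery; the broker URL, module layout, and fetch logic are placeholders, not Venture Highway's actual stack:

    # tasks.py — a retried scraping task distributed over Celery workers.
    import requests
    from celery import Celery

    app = Celery("scraper", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

    @app.task(bind=True, max_retries=3)
    def fetch_page(self, url: str) -> dict:
        """Fetch a page and return a small summary; retried on transient failures."""
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return {"url": url, "status": response.status_code, "bytes": len(response.content)}
        except requests.RequestException as exc:
            raise self.retry(exc=exc, countdown=30)

    # Producer side: fetch_page.delay("https://example.com")
    # Worker side (assuming this file is tasks.py): celery -A tasks worker --loglevel=info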

Preference for candidates working in tech product companies
Job posted by
Nipun Gupta

Product Analyst

at App-based lending platform. ( AF1)

Agency job
via Multi Recruit
Product Analyst
Data Analytics
SQL
Business Design
Business strategy
Python Scripting
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
  • Product Analytics: This is the first and most obvious role of the Product Analyst. In this capacity, the Product Analyst is responsible for the development and delivery of tangible consumer benefits through the product or service of the business.
  • In addition, in this capacity, the Product Analyst is also responsible for measuring and monitoring the product or service’s performance as well as presenting product-related consumer, market, and competitive intelligence.
  • Product Strategy: As a member of the Product team, the Product Analyst is responsible for the development and proposal of product strategies.
  • Product Management Operations: The Product Analyst also has the obligation to respond in a timely manner to all requests and inquiries for product information or changes. They also perform the initial product analysis in order to assess the need for any requested changes as well as their potential impact.
  • In this capacity, the Product Analyst also undertakes financial modeling on the products or services of the business, as well as of the target markets, in order to build an understanding of the relationship between the product and the target market. This information is presented to the Marketing Manager and other stakeholders when necessary.
  • Additionally, the Product Analyst produces reports and makes recommendations to the Product Manager and Product Marketing Manager to be used as guidance in decision-making pertaining to the business’s new as well as existent products.
  • Initiative: In this capacity, the Product Analyst ensures that there is a good flow of communication between the Product team and other teams. The Product Analyst ensures this by actively participating in team meetings and keeping everyone up to date.
  • Pricing and Development: The Product Analyst has the responsibility to monitor the market, competitor activities, as well as any price movements and make recommendations that will be used in key decision making. In this function, the Product Analyst will normally liaise with other departments such as the credit/risk in the business in order to enhance and increase the efficiency of effecting price changes in accordance with market shifts.
  • Customer/Market Intelligence: The Product Analyst has the obligation to drive consumer intelligence through the development of external and internal data sources that improve the business’s understanding of the product’s market, competitor activities, and consumer activities.
  • In the performance of this role, the Product Analyst develops or adopts research tools, sources, and methods that further support and contribute to the business’s product.
Job posted by
Ayub Pasha

Data Scientist

at Episource LLC

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Python
Machine Learning (ML)
Data Science
Amazon Web Services (AWS)
Apache Spark
Natural Language Processing (NLP)
Mumbai
4 - 8 yrs
₹12L - ₹20L / yr

We're looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You'll work with the team that develops the models powering Episource's product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes.
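
As an illustration of the problem space described above (not Episource's actual models), a tiny TF-IDF plus linear baseline for suggesting ICD codes from note text; the notes and labels are invented:

    # Toy ICD-code recommender: TF-IDF features + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "patient reports chest pain and shortness of breath",
        "type 2 diabetes mellitus, well controlled on metformin",
        "acute upper respiratory infection with cough and fever",
    ]
    codes = ["R07.9", "E11.9", "J06.9"]  # toy labels, one per note

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    model.fit(notes, codes)
    print(model.predict(["persistent cough and low-grade fever"]))

Production systems for this problem typically add entity recognition, entity linking, and ranking over the full ICD hierarchy; the sketch only shows the basic text-classification framing.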


This is a role for highly technical machine learning and data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a wide range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship machine learning models that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company, and help build a foundation of tools and practices used by quantitative staff across the company.



What you will achieve:

  • Define the research vision for data science, and oversee planning, staffing, and prioritization to make sure the team is advancing that roadmap

  • Invest in your team’s skills, tools, and processes to improve their velocity, including working with engineering counterparts to shape the roadmap for machine learning needs

  • Hire, retain, and develop talented and diverse staff through ownership of our data science hiring processes, brand, and functional leadership of data scientists

  • Evangelise machine learning and AI internally and externally, including attending conferences and being a thought leader in the space

  • Partner with the executive team and other business leaders to deliver cross-functional research work and models






Required Skills:


  • A strong background in classical machine learning and machine learning deployments is a must, preferably with 4-8 years of experience

  • Knowledge of deep learning & NLP

  • Hands-on experience in TensorFlow/PyTorch, Scikit-Learn, Python, Apache Spark & Big Data platforms to manipulate large-scale structured and unstructured datasets.

  • Experience with GPU computing is a plus.

  • Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization. This could be through technical leadership with ownership over a research agenda, or developing a team as a personnel manager in a new area at a larger company.

  • Expert-level experience with a wide range of quantitative methods that can be applied to business problems.

  • Evidence you’ve successfully been able to scope, deliver and sell your own research in a way that shifts the agenda of a large organization.

  • Excellent written and verbal communication skills on quantitative topics for a variety of audiences: product managers, designers, engineers, and business leaders.

  • Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling


Qualifications

  • Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization

  • Expert-level experience with machine learning that can be applied to business problems

  • Evidence you’ve successfully been able to scope, deliver and sell your own work in a way that shifts the agenda of a large organization

  • Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling

  • Degree in a field that has very applicable use of data science / statistics techniques (e.g. statistics, applied math, computer science, OR a science field with direct statistics application)

  • 5+ years of industry experience in data science and machine learning, preferably at a software product company

  • 3+ years of experience managing data science teams, incl. managing/grooming managers beneath you

  • 3+ years of experience partnering with executive staff on data topics

Job posted by
Manas Ranjan Kar