Data Scientist (Forecasting)

at Anicaa Data

Agency job
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
Full time
Skills
TensorFlow
PyTorch
Machine Learning (ML)
Data Science
data scientist
Forecasting
C++
Python
Artificial Neural Network (ANN)
moving average
ARIMA
Big Data
Data Analytics
Amazon Web Services (AWS)
Azure
Google Cloud Platform (GCP)

Job Title – Data Scientist (Forecasting)

Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to solve complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.

 

Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)

 

Required Skills:

  • 3+ years of experience in a Data Scientist role
  • Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
  • Experience with large data sets, big data, and analytics
  • Exposure to statistical modeling, forecasting, and machine learning, with deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
  • Experience training Machine Learning (ML) algorithms for forecasting and prediction
  • Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
  • Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
  • Experience in translating business needs into problem statements, prototypes, and minimum viable products
  • Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
  • Ability to write C++ and Python code, using TensorFlow and PyTorch, to build and enhance the platform used for training ML models
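As a rough sketch of the kind of model-training code the last bullet describes (a from-scratch stand-in, not Anicca's actual TensorFlow/PyTorch codebase; all names are illustrative), a minimal gradient-descent training loop for 1-D linear regression:

```python
# Minimal gradient-descent training loop for y = w*x + b.
# Pure-Python stand-in for a framework training loop; illustrative only.

def train_linear(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear([0, 1, 2, 3], [1, 3, 5, 7])  # true line: y = 2x + 1
```

Frameworks like TensorFlow and PyTorch automate exactly this loop (autograd plus an optimizer step) at scale.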

Preferred Experience

  • Worked on forecasting projects – both classical and ML models
  • Experience training time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), alongside neural network (NN) models such as feed-forward and nonlinear autoregressive networks
  • Strong background in forecasting accuracy drivers
  • Experience in Advanced Analytics techniques such as regression, classification, and clustering
  • Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
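As a hedged illustration of the classical forecasting methods listed above, a toy rolling moving-average forecast (a production system would use a library implementation such as statsmodels' ARIMA rather than this sketch):

```python
# Rolling moving-average (MA) forecast: each future step is predicted
# as the mean of the last `window` observed (or previously forecast) values.

def moving_average_forecast(series, window=3, steps=2):
    history = list(series)
    out = []
    for _ in range(steps):
        pred = sum(history[-window:]) / window
        out.append(pred)
        history.append(pred)  # roll the forecast forward
    return out

preds = moving_average_forecast([10, 12, 14, 13, 15], window=3, steps=2)
print(preds)  # [14.0, 14.0]
```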

Similar jobs

Analytics Head

at Brand Manufacturer for Bearded Men

Agency job
via Qrata
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
Relational Database (RDBMS)
Data architecture
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics Head

Technical must haves:

● Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
● At least one data query language – SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
● A technical degree like BE/B.Tech is a must
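The "data query language" bullet can be illustrated with a self-contained example using Python's built-in sqlite3 module (the table and column names are invented for the sketch, not taken from the posting):

```python
import sqlite3

# Toy sales table; all names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 100.0), ("South", 250.0), ("North", 50.0)])

# Aggregate revenue per region: the bread-and-butter BI query shape.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 150.0), ('South', 250.0)]
```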

Technical Ideal to have:

● Exposure to our tech stack – PHP
● Microsoft workflows knowledge

Behavioural Pen Portrait:

● Must Have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command over spoken and written English
● Ideal: Ability to Collaborate

Preferred location is Ahmedabad; however, if we find exemplary talent, we are open to a remote working model (can be discussed).
Job posted by
Prajakta Kulkarni

Data Scientist

at Top startup of India - News App

Agency job
via Jobdost
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
Deep Learning
Python
PySpark
MongoDB
Hadoop
Spark
Noida
6 - 10 yrs
₹35L - ₹65L / yr
This will be an individual contributor role; only candidates from Tier 1/2 colleges and product-based companies can apply.

Requirements-

● B.Tech/Master's in Mathematics, Statistics, Computer Science, or another quantitative field
● 2–5 years of work experience in the ML domain
● Hands-on coding experience in Python
● Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, deep learning stacks, and NLP
● Working knowledge of TensorFlow/PyTorch
Optional Add-ons-
● Experience with distributed computing frameworks: Map/Reduce, Hadoop, Spark etc.
● Experience with databases: MongoDB
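The "Map/Reduce" add-on above can be sketched in a single process to show the paradigm (real deployments would run this on Hadoop or Spark; the documents are invented):

```python
from collections import Counter
from functools import reduce

docs = ["spark spark hadoop", "hadoop mapreduce"]

# Map phase: each document becomes a local word count.
mapped = [Counter(doc.split()) for doc in docs]

# Reduce phase: merge the partial counts into a global count.
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals["spark"], totals["hadoop"])  # 2 2
```

Distributed frameworks parallelize the map phase across machines and shuffle keys before the reduce phase, but the shape of the computation is the same.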
Job posted by
Sathish Kumar

Analyst (Research)

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Market Research
Big Data
Bengaluru (Bangalore)
2 - 10 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As an Analyst (Research) in the Mobile Publishing division you’ll be using your previous experience in analysing market trends to pull usable insights from numerous sources, and find trends others might miss.

What you tell your friends you do 

“I provide insights that help guide the direction of Kwalee’s mobile publishing team as they expand their operation”

What you will really be doing 

  • Use our internal and external data sources to generate insights
  • Assess market trends and make recommendations to our publishing team on which opportunities to pursue and which to decline
  • Evaluate market movements and use data to assess new opportunities
  • Create frameworks to predict how successful new content can be and the metrics games are likely to achieve
  • Evaluate business opportunities and conduct due diligence on potential business partners
  • Be an expert on industry data sets and how we can best use them

How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team, working closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges, and constantly strive for the best.
  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!

Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.

Skills and Requirements

  • Previous experience of working with big data sets, preferably in a gaming or tech environment
  • An advanced degree in a related field
  • A keen interest in video games and the market, particularly in the mobile space
  • Familiarity with industry tools and data providers
  • A can-do attitude and ability to move projects forward even when outcomes may not be clear 

We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks

Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Senior Data Engineer

at Waterdip Labs

Founded 2021  •  Products & Services  •  0-20 employees  •  Profitable
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
Apache Spark
SQL
Amazon Redshift
Apache Kafka
Amazon Web Services (AWS)
Git
CI/CD
Apache airflow
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹30L / yr
About The Company
 
Waterdip Labs is a deep-tech company founded in 2021. We are building an open-source observability platform for AI. Our platform will help data engineers and data scientists observe data and ML model performance in production.
Apart from the product, we are helping a few of our clients build data and ML products.
Join us to help build India's 1st open-source MLOps product.

About The Founders
Both founders are second-time founders. Their 1st venture, Inviz AI Solutions (https://www.inviz.ai), is a bootstrapped venture that became a prominent software service provider with several Fortune 500 clients and a team of 100+ engineers.
Subhankar is an IIT Kharagpur alum with 10+ years of experience in the software industry. He has built world-class Data and ML systems for companies like Target, Tesco, and Falabella, and SaaS products for multiple India- and USA-based start-ups. https://www.linkedin.com/in/wsubhankarb/
Gaurav is an IIT Dhanbad alum. He started his career at Tesco Labs as a data scientist for retail applications and gradually moved to a more techno-functional role at major retail companies like Tesco, Falabella, Fazil, Lowes, and the Aditya Birla group. https://www.linkedin.com/in/kumargaurav2596/
They started Waterdip with a vision to build world-class open-source software out of India.

About the Job
The client is a publicly owned, global technology company with 48 offices in 17 countries. It provides software design and delivery, tools, and consulting services. The client is closely associated with the movement for agile software development and has contributed to the content of open-source products.

Job responsibilities
• Analyze and organize raw data; build data pipelines and test scripts
• Understand the business process and job orchestration from the SSIS packages
• Explore ways to enhance data quality and to optimize and improve reliability
• Develop data pipelines to process event-streaming data
• Implement data standards, policies, and data security
• Develop CI/CD pipelines to deploy and maintain data pipelines
 
Job qualifications
5-7 years of experience
 
Technical skills
• Experience with Python and other development languages
• Working experience in SQL
• Exposure to data processing tooling and data visualization
• Proficiency with Azure, CI/CD, and Git
• An understanding of data quality, data pipelines, data storage, distributed systems architecture, data security, data privacy & ethics, data modeling, data infrastructure & operations, and Business Intelligence
• Bonus points for prior working knowledge of creating data products and/or prior experience with Azure Data Catalog or Azure Event Hub
• A workflow management platform like Airflow
• A large-scale data processing tool like Apache Spark
• A distributed messaging platform like Apache Kafka
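The data-quality responsibility mentioned above can be sketched as a minimal row-level validator (field names are invented for the example; in practice checks like these would be wired into Airflow tasks or Spark jobs):

```python
# Minimal row-level data-quality check: flag rows with missing or
# out-of-range fields. Field names are illustrative only.

def validate(rows, required=("id", "amount")):
    issues = []
    for i, row in enumerate(rows):
        if any(row.get(k) is None for k in required):
            issues.append((i, "missing field"))
        elif row["amount"] < 0:
            issues.append((i, "negative amount"))
    return issues

rows = [{"id": 1, "amount": 10.0},
        {"id": 2, "amount": None},
        {"id": 3, "amount": -5.0}]
issues = validate(rows)
print(issues)  # [(1, 'missing field'), (2, 'negative amount')]
```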

Professional skills
• You enjoy influencing others and always advocate for technical excellence while being open to change when needed
• Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
• You're resilient in ambiguous situations and can approach challenges from multiple perspectives
Job posted by
Subhankar Biswas

Data Analyst

at Srijan Technologies

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
Data Analytics
Data modeling
Python
PySpark
ETL
SQL
Axure
Amazon Web Services (AWS)
Remote only
3 - 8 yrs
₹5L - ₹12L / yr

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with data model, conduct data assessment, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and / or PySpark.
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience of deploying ETL / data pipelines and workflows in cloud technologies and architecture such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UID etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirement-gathering, and leadership skills
  • Track record of completing projects successfully on time, within budget and as per scope
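A toy end-to-end version of the extract-transform-load flow the role describes, using only the standard library (the CSV contents and table names are invented; real pipelines would use PySpark or a cloud ETL service):

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV (an in-memory string stands in for a client file).
raw = "name,revenue\nacme,100\nglobex,250\n"
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and normalise names.
transformed = [(r["name"].upper(), float(r["revenue"])) for r in records]

# Load: write into a target table, then run a simple quality check.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revenue (name TEXT, amount REAL)")
db.executemany("INSERT INTO revenue VALUES (?, ?)", transformed)
total = db.execute("SELECT SUM(amount) FROM revenue").fetchone()[0]
print(total)  # 350.0
```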

Job posted by
Priya Saini

Data Scientist

at leading pharmacy provider

Agency job
via Econolytics
Data Science
R Programming
Python
Algorithms
Predictive modelling
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 10 yrs
₹18L - ₹24L / yr
Job Description:

• Help build a Data Science team which will be engaged in researching, designing, implementing, and deploying full-stack scalable data analytics and machine learning solutions to challenge various business issues.
• Model complex algorithms, discovering insights and identifying business opportunities through the use of algorithmic, statistical, visualization, and mining techniques
• Translate business requirements into quick prototypes and enable the development of big data capabilities driving business outcomes
• Responsible for data governance and defining data collection and collation guidelines
• Must be able to advise, guide, and train other junior data engineers in their job

Must Have:

• 4+ years of experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to work from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills, and a good decision-maker
Job posted by
Jyotsna Econolytics

AI Engineer

at Oriserve

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Artificial Intelligence (AI)
Machine Learning (ML)
Deep Learning
Natural Language Processing (NLP)
Python
Big Data
NOSQL Databases
SQL
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 8 yrs
₹15L - ₹20L / yr
About ORI:-
- ORI is an end-to-end provider of AI-powered conversational tools that help enterprises simplify their customer experience, improve conversions, and get better ROI on their marketing spend. Ori is focused on automating the customer journey through its AI-powered self-service SaaS platform, built by applying design-thinking principles and Machine Learning.
- ORI's cognitive solutions provide a non-intrusive customer experience for Sales, Marketing, Support & Engagement across IoT devices, sensors, web, app, social media & messaging platforms, as well as AR and VR platforms.
- Founded in 2017, we've changed the way AI conversational tools are built and trained, providing a revolutionary experience. Clients who have bet on us include Tata Motors, Dishtv, Vodafone, Idea, Lenskart.com, Royal Enfield, IKEA and many more.
- At ORI, you’ll be a part of an environment that’s fast-paced, nurturing, collaborative, and challenging. We believe in 100% ownership & flexibility of how & where you work. You’ll be given complete freedom to get your creative juices flowing and implement your ideas to deliver solutions that bring about revolutionary change. We are a team that believes in working smarter and partying hard and are looking for A-players to hop on-board a rocket-ship that’s locked, loaded & ready to blast off!

Job Profile:-
We are looking for applicants who have a demonstrated research background in AI, Deep Learning, and NLP, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ori, in particular with researchers in data science and development teams, to develop advanced NLP and AI solutions, and will work with massive amounts of data collected from various sources.

Key Attributes you need to possess:-
- Communication Skills - Written and verbal communication is a must-have. You will be required to explain advanced statistical content to clients and relevant stakeholders. Therefore, you must have the ability to translate and tailor this technical content into business-applicable material with clear recommendations and insights relevant to the audience at hand.
- Technological Savvy/Analytical Skills - Must be technologically adept, demonstrate exceptionally good computer skills, and demonstrate a passion for research, statistics, and data analysis, as well as a demonstrated ability and passion for designing and implementing successful data analysis solutions within a business.
- Business Understanding- Someone who can understand the business's needs and develop analytics that meet those objectives through enhanced customer engagement, automation resulting in cost optimization, or business process optimization saving time and labor. However, real value comes from delivering the results that match the actual business need.
- Innovation- Someone who is always looking for the next big thing that will distinguish their offering from others already in the market and must be able to differentiate great from not-so-great analytics.

A typical work week looks like:-
1. Work with product/business owners to map business requirements into products / productized solutions and/or working prototypes of NLP & ML algorithms.
2. Evaluate and compare algorithm performance based on large, real-world data sets.
3. Mine massive amounts of data from various sources to gain insights and identify patterns using machine learning techniques and complex network analysis methods.
4. Design and implement ML algorithms and models through in-depth research and experiment with neural network models, parameter optimization, and optimization algorithms.
5. Work to accelerate the distributed implementation of existing algorithms and models.
6. Conduct research to advance the state of the art in deep learning and provide technical solutions at scale for real world challenges in various scenarios.
7. Establish scalable, efficient, automated processes for model development, model validation, model implementation and large scale data analysis.
8. Optimizing pre-existing algorithms for accuracy and speed.

Our ideal candidate should have:-
- Ph.D. / Master's degree / B.Tech / B.E. from an accredited college/university in Computer Science, Statistics, Mathematics, Engineering, or related fields (strong mathematical/statistics background with the ability to understand algorithms and methods from a mathematical and intuitive viewpoint)
- 4+ years of professional experience in Artificial Intelligence, Machine Learning, Deep Learning, Natural Language Processing/Text mining or related fields.
- Technical ability and hands on expertise in Python, R, XML parsing, Big Data, NoSQL and SQL
- Preference for candidates with prior experience in deep learning tools such as Keras, TensorFlow, BERT, Transformers, LSTM, Python, topic modeling, text classification, NER, SVM, KNN, Reinforcement Learning, summarisation, etc.
- Self-starter and able to manage multiple research projects with a flexible approach and ability to develop new skills.
- Strong knowledge/experience of data extraction and data processing in a distributed cloud environment.
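As a rough, from-scratch illustration of the text-mining space this role covers (toy documents; real work at this level would use the tools listed above, such as BERT or topic models), a bag-of-words cosine similarity:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

doc1 = Counter("the chatbot answers customer questions".split())
doc2 = Counter("the chatbot resolves customer tickets".split())
doc3 = Counter("quarterly financial report".split())

# The two chatbot documents are closer to each other than to the report.
print(cosine(doc1, doc2) > cosine(doc1, doc3))  # True
```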

What you can expect from ORI:-
- Passion & happiness in the workplace with great people & open culture with amazing growth opportunities.
- An ecosystem where leadership is fostered which builds an environment where everyone is free to take necessary actions to learn from real experiences.
- Chance to work on the cutting edge of technology.
- Freedom to pursue your ideas and tinker with multiple technologies - which a techie would definitely enjoy!!

If you have outstanding programming skills and a great passion for developing beautiful, innovative applications, then you will love this job!!
Job posted by
Vaishali Vishwakarma

Senior Software Engineer

at Episource LLC

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Big Data
Python
Amazon Web Services (AWS)
Serverless
DevOps
Cloud Computing
Infrastructure
Solution architecture
CI/CD
Mumbai
5 - 12 yrs
₹18L - ₹30L / yr

ABOUT EPISOURCE:


Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.


The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question - how can data be “deployed”? Our analytics platforms and datalakes ingest huge quantities of data daily, to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.


What’s our poison you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, ELK. For machine learning and NLP, we are big fans of keras, spacy, scikit-learn, pandas and numpy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.


ABOUT THE ROLE:


We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.


This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a large range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.


During the course of a typical day with our team, expect to work on one or more projects around the following:


1. Create and maintain optimal data pipeline architectures for ML


2. Develop a strong API ecosystem for ML pipelines


3. Building CI/CD pipelines for ML deployments using Github Actions, Travis, Terraform and Ansible


4. Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems


5. Knowledge of software engineering best practices across the development lifecycle, coding standards, code reviews, source management, build processes, testing, and operations  


6. Deploying data pipelines in production using Infrastructure-as-a-Code platforms

 

7. Designing scalable implementations of the models developed by our Data Science teams  


8. Big data and distributed ML with PySpark on AWS EMR, and more!
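Point 4 above (multi-threaded event processing) can be sketched in miniature with the standard library's queue and threading modules; this is a toy pipeline under invented assumptions, not Episource's actual architecture:

```python
import queue
import threading

events = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Pull events until the sentinel None arrives.
    while True:
        item = events.get()
        if item is None:
            break
        with lock:
            results.append(item * 2)  # stand-in for real processing

# A small worker pool draining a shared event queue.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    events.put(i)
for _ in threads:          # one sentinel per worker to shut down cleanly
    events.put(None)
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```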



BASIC REQUIREMENTS 


  1.  Bachelor’s degree or greater in Computer Science, IT or related fields

  2.  Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects

  3. Strong experience with bash scripting, unix environments and building scalable/distributed systems

  4. Experience with automation/configuration management using Ansible, Terraform, or equivalent

  5. Very strong experience with AWS and Python

  6. Experience building CI/CD systems

  7. Experience with containerization technologies like Docker, Kubernetes, ECS, EKS or equivalent

  8. Ability to build and manage application and performance monitoring processes

Job posted by
Ahamed Riaz

Data Engineer

at Paysense

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
Python
Data Analytics
Hadoop
Data Warehouse (DWH)
Machine Learning (ML)
NCR (Delhi | Gurgaon | Noida), Mumbai
2 - 7 yrs
₹10L - ₹30L / yr
About the job:
  • You will work with data scientists to architect, code and deploy ML models
  • You will solve problems of storing and analyzing large scale data in milliseconds, and architect and develop data processing and warehouse systems
  • You will code, drink, breathe and live python, sklearn and pandas. It's good to have experience in these but not a necessity, as long as you're super comfortable in a language of your choice
  • You will develop tools and products that provide analysts ready access to the data

About you:
  • Strong CS fundamentals
  • You have strong experience in working with production environments
  • You write code that is clean, readable and tested
  • Instead of doing it a second time, you automate it
  • You have worked with some of the commonly used databases and computing frameworks (Psql, S3, Hadoop, Hive, Presto, Spark, etc.)
  • It will be great if you have one of the following to share: a Kaggle or a GitHub profile
  • You are an expert in one or more programming languages (Python preferred). Also good to have experience with Python-based application development and data science libraries
  • Ideally, you have 2+ years of experience in tech and/or data
  • Degree in CS/Maths from Tier-1 institutes
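The "analyzing large scale data in milliseconds" requirement mostly comes down to building the right index ahead of time; a minimal sketch with invented data:

```python
# Build a hash index once so that per-user lookups are O(1)
# instead of a full scan over every record. Data is invented.

records = [{"user_id": i, "amount": i * 10} for i in range(100_000)]

index = {}
for r in records:
    index.setdefault(r["user_id"], []).append(r)

# Constant-time lookup instead of scanning 100k rows:
print(index[42_000][0]["amount"])  # 420000
```

Production systems reach the same effect with database indexes, columnar storage, or pre-aggregated warehouse tables rather than an in-process dict.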
Job posted by
Pragya Singh

Data Support/Analyst

at Pepr

Founded 2016  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Analytics
Statistical Analysis
MS-Excel
Excel VBA
Anywhere
0 - 2 yrs
₹1L - ₹2L / yr
Preparing and analyzing data along with a team of data analysts and business specialists, initially in Excel (Google Sheets) and, as your skills improve, on more advanced tools as required. You will work with experts to develop best-in-class solutions and will be responsible for delivering information to support strategic decisions with data-driven insights. You will be expected to understand basic Excel formulas such as VLOOKUP, IF statements, and SUM and COUNT formulae, and will be required to upskill on advanced Excel as well as multiple tools such as Google Script custom tools, amongst others, on your own time to progress to the required standards.

Skills Required:
  • Experience of data audits and defect tracking tools
  • Good knowledge of Microsoft Office/Google Suite
  • Hands-on experience with leading statistical and analytical tools and an affinity for numbers is a must
  • Excellent verbal and written communication skills
  • Google Script is an added advantage
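The VLOOKUP-style matching mentioned above has a direct Python analogue via a dict; a toy example with invented sheet contents:

```python
# Python equivalent of Excel's VLOOKUP: build a lookup table from the
# reference sheet, then map each row of the main sheet through it.

price_sheet = [("apple", 30), ("banana", 10), ("cherry", 80)]  # the "table_array"
orders = ["banana", "cherry", "banana"]

prices = dict(price_sheet)
total = sum(prices[item] for item in orders)
print(total)  # 100
```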
Job posted by
Naveen Annapureddy