Anicaa Data
Data Scientist (Forecasting)
Agency job
3 - 6 yrs
₹10L - ₹25L / yr
Bengaluru (Bangalore)
Skills
TensorFlow
PyTorch
Machine Learning (ML)
Data Science
data scientist
Forecasting
C++
Python
Artificial Neural Network (ANN)
moving average
ARIMA
Big Data
Data Analytics
Amazon Web Services (AWS)
azure
Google Cloud Platform (GCP)

Job Title – Data Scientist (Forecasting)

Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to solve complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, learns quickly, and works independently.

 

Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)

 

Required Skills:

  • 3+ years of experience in a Data Scientist role
  • Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application
  • Experience with large data sets, big data, and analytics
  • Exposure to statistical modeling, forecasting, and machine learning, with deep theoretical and practical knowledge of deep learning, statistics, probability, and time series forecasting
  • Experience training Machine Learning (ML) algorithms for forecasting and prediction
  • Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
  • Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
  • Experience in translating business needs into problem statements, prototypes, and minimum viable products
  • Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
  • Ability to write C++ and Python code, using TensorFlow and PyTorch, to build and enhance the platform used for training ML models
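As a minimal illustration of the training loop the last point refers to (shown here with a one-parameter linear model and plain gradient descent in pure Python rather than the TensorFlow/PyTorch stack the role actually uses; the data is invented):

```python
# Illustrative sketch only: fit y = w * x by minimizing mean squared error
# with plain gradient descent. In practice this loop would be a TensorFlow
# or PyTorch model; the noise-free data (y = 2x) is invented.

def train(xs, ys, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # gradient of MSE with respect to w: mean of 2 * (w*x - y) * x
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = train(xs, ys)
print(round(w, 3))  # → 2.0
```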

Preferred Experience

  • Worked on forecasting projects – both classical and ML models
  • Experience training classical time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward NNs and nonlinear autoregressive networks
  • Strong understanding of the drivers of forecasting accuracy
  • Experience in Advanced Analytics techniques such as regression, classification, and clustering
  • Ability to explain complex topics in simple terms, explain use cases, and tell stories
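As an illustrative sketch of the classical methods named above (a simple moving-average forecast and an AR(1) fit by least squares, in pure Python; a real project would use a library such as statsmodels, and the series here is invented):

```python
# Illustrative sketch only: a moving-average forecast and an AR(1) fit via
# ordinary least squares, in pure Python. The series data is made up.

def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

def fit_ar1(series):
    """Fit y[t] = a + b * y[t-1] by ordinary least squares."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    a = my - b * mx
    return a, b

series = [10.0, 10.2, 10.1, 10.4, 10.3, 10.6, 10.5, 10.8]
print(moving_average_forecast(series, 3))
a, b = fit_ar1(series)
print(a + b * series[-1])  # one-step-ahead AR(1) forecast
```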
About Anicaa Data

Founded / Type / Size / Stage / About: N/A
Company social profiles: N/A

Similar jobs

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in Data Ingestion, Integration, and Data Wrangling, Computation, and Analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
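As a minimal sketch of the ingestion idea in the last bullets (normalizing records from heterogeneous sources into one schema before loading; in the stack this role describes, the same pattern would run on Spark/Kafka, and the source names and payloads below are invented):

```python
# Illustrative sketch only: normalize records from heterogeneous sources
# into one common schema -- the core of any ingestion layer. The "crm" and
# "billing" source formats are invented for demonstration.

def normalize(source, record):
    """Map a source-specific record to the common (id, amount) schema."""
    if source == "crm":        # hypothetical CRM export: customerId / total
        return {"id": record["customerId"], "amount": record["total"]}
    if source == "billing":    # hypothetical billing event: uid / amt in cents
        return {"id": record["uid"], "amount": record["amt"] / 100}
    raise ValueError(f"unknown source: {source}")

def ingest(batch):
    """Batch mode: drain everything at once. The same normalize() could
    wrap a streaming consumer by being called once per arriving record."""
    return [normalize(src, rec) for src, rec in batch]

rows = ingest([
    ("crm", {"customerId": 1, "total": 9.99}),
    ("billing", {"uid": 2, "amt": 2500}),
])
print(rows)  # → [{'id': 1, 'amount': 9.99}, {'id': 2, 'amount': 25.0}]
```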

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed, working knowledge of data platform related services on at least one cloud platform, IAM, and data security


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD: infrastructure provisioning on cloud, automated build & deployment pipelines, code quality

6. Cloud data specialty and other related Big Data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹25L / yr
Relational Database (RDBMS)
PostgreSQL
MySQL
Python
Spark
+6 more

What is the role?

You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will be responsible for the functional/technical track of the project.

Key Responsibilities

  • Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
  • Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
  • Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
  • Optimize performance to meet high throughput and scale

What are we looking for?

  • 4+ years of relevant industry experience.
  • Working with data at the terabyte scale.
  • Experience designing, building and operating robust distributed systems.
  • Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
  • Building and leading teams.
  • Working knowledge of relational databases like PostgreSQL/MySQL.
  • Experience with Python / Spark / Kafka / Celery
  • Experience working with OLTP and OLAP systems
  • Excellent communication skills, both written and verbal.
  • Experience working in cloud e.g., AWS, Azure or GCP

Whom will you work with?

You will work with a top-notch tech team, working closely with the architect and engineering head.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of your work, interact and share your ideas, and learn a great deal while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.

We are

We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.

We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.

Way forward

If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.

 
Propellor.ai
Posted by Anila Nair
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
SQL
API
Python
Spark

Job Description - Data Engineer

About us
Propellor is aimed at bringing Marketing Analytics and other Business Workflows to the Cloud ecosystem. We work with International Clients to make their Analytics ambitions come true, by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable. 

 

What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the Cloud, so in this role the individual will organize and orchestrate different data sources and give recommendations on the services that fulfil goals based on the type of data.

Qualifications:

• Experience with Python, SQL, Spark
• Familiarity with JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• API building and maintaining
• Strong soft skills, communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.

Key Responsibilities
• Design and develop platform based on microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Designing and developing APIs for the front end to consume.
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meeting both technical and consumer needs.
• Staying abreast of developments in web applications and programming languages.

What are we looking for?
An enthusiastic individual with the following skills. Do not hesitate to apply even if you do not match all of them; we are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient in building microservices that combine server and front-end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar DB.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.
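As a minimal, hypothetical sketch of the "building API" skill named above, using only the Python standard library (a production service would use a proper framework such as the NodeJS/serverless stack the listing mentions; the route and payload here are invented):

```python
# Illustrative sketch only: a tiny JSON API endpoint with the standard
# library, to show the request/response shape. The /health route and its
# payload are invented for demonstration.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/health"
reply = json.loads(urllib.request.urlopen(url).read())
print(reply)  # → {'status': 'ok'}
server.shutdown()
```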

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

Hiring process includes:

a. Written test on Python and SQL

b. 2-3 rounds of interviews

Immediate joiners will be preferred

Talent folks
Agency job via Talent folks, posted by Rijooshri Saikia
Remote only
3 - 6 yrs
₹8L - ₹10L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more
Data Platform Operations
Remote Work, US shift

General Scope and Summary

The Data and Analytics Team sits in the Digital and Enterprise Capabilities Group and is responsible for driving the strategy, implementation, and delivery of Data, Analytics, and Automation capabilities across the Enterprise. This global team will deliver "Next-Gen Value" by establishing the core Data and Analytics capabilities needed to effectively manage and exploit Data as an Enterprise Asset. Data Platform Operations will be responsible for implementing and supporting Enterprise Data Operations tools and capabilities, which will enable teams to answer strategic and business questions through data.

Roles and Responsibilities

● Manage overall data operations, ensuring adherence to data quality metrics by establishing standard operating procedures and best practices/playbooks.
● Champion the advocacy and adoption of enterprise data assets for analytics through optimal operating models.
● Provide day-to-day ownership and project management of data operations activities, including data quality/data management support cases and other ad-hoc requests.
● Create standards and frameworks for CI/CD pipelines and DevOps.
● Collaborate cross-functionally to develop and implement data operations policies, balancing centralized control and standardization with decentralized speed and flexibility.
● Identify areas for improvement. Create procedures, teams, and policies to support near real-time clean data, where applicable, or a batch-and-close process, where applicable.
● Improve processes by tactically focusing on business outcomes. Drive prioritization based on business needs and strategy.
● Lead and control workflow operations by driving critical issues and discussions with partners to identify and implement improvements.
● Define, measure, monitor, and report key SLA metrics to support the team's vision.

Experience, Education and Specialized Knowledge and Skills

Must thrive working in a fast-paced, innovative environment while remaining flexible, proactive, resourceful, and efficient. Requires strong interpersonal skills, the ability to understand stakeholder pain points, and the ability to analyze complex issues to develop relevant and realistic solutions and recommendations. Also requires a demonstrated ability to translate strategy into action, excellent technical skills, and an ability to communicate complex issues in a simple way and to orchestrate solutions to resolve issues and mitigate risks.
Bengaluru (Bangalore)
2 - 4 yrs
₹12L - ₹16L / yr
Python
Bash
MySQL
Elastic Search
Amazon Web Services (AWS)

What are we looking for:

 

  1. Strong experience in MySQL and writing advanced queries
  2. Strong experience in Bash and Python
  3. Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
  4. Exposure to cloud services such as AWS, Azure, or GCP
  5. 2+ years of experience in production support
  6. Strong experience in log management and performance monitoring tools such as ELK, Prometheus + Grafana, and the logging services on various cloud platforms
  7. Strong understanding of Linux OSes such as Ubuntu and CentOS / Red Hat Linux
  8. Interest in learning new languages / frameworks as needed
  9. Good written and oral communication skills
  10. A growth mindset and a passion for building things from the ground up, and, most importantly, you should be fun to work with

 

As a product solutions engineer, you will:

 

  1. Analyze recorded runtime issues, diagnose them, and make occasional code fixes of low to medium complexity
  2. Work with developers to find and correct more complex issues
  3. Address urgent issues quickly; work within and measure against customer SLAs
  4. Use shell and Python scripts to actively automate manual / repetitive activities
  5. Build anomaly detectors wherever applicable
  6. Pass articulated feedback from customers to the development and product teams
  7. Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
  8. Offer technical support needed in development
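As a small, hypothetical example of the scripting and anomaly-detection work described above (the log format and threshold are invented for demonstration):

```python
# Illustrative sketch only: scan log lines for error spikes per service --
# the kind of small automation this role calls for. The "[service] LEVEL"
# log format and the threshold are invented.
import re
from collections import Counter

LINE = re.compile(r"\[(?P<service>\w+)\] (?P<level>ERROR|WARN|INFO)")

def error_counts(lines):
    """Count ERROR lines per service."""
    counts = Counter()
    for line in lines:
        m = LINE.search(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

def anomalies(counts, threshold=2):
    """Flag services whose error count exceeds the threshold."""
    return sorted(s for s, c in counts.items() if c > threshold)

logs = [
    "[api] ERROR timeout", "[api] ERROR timeout", "[api] ERROR 500",
    "[db] WARN slow query", "[api] INFO ok",
]
print(anomalies(error_counts(logs)))  # → ['api']
```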

 

Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Big Data
Hadoop
Apache Spark
Spark
Apache Kafka
+11 more

We are looking for a savvy Data Engineer to join our growing team of analytics experts. 

 

The hire will be responsible for:

- Expanding and optimizing our data and data pipeline architecture

- Optimizing data flow and collection for cross functional teams.

- Support our software developers, database architects, data analysts, and data scientists on data initiatives, and ensure optimal data delivery architecture is consistent throughout ongoing projects.

- Be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

- Experience with Azure : ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates

- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL etc.

Nice to have experience with :

- Big data tools: Hadoop, Spark and Kafka

- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow

- Stream-processing systems: Storm

- Database: SQL DB

- Programming languages: PL/SQL, Spark SQL

Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.

The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

GitHub
Posted by Nataliia Mediana
Remote only
3 - 15 yrs
$50K - $80K / yr
Data Science
Data Scientist
Data engineering
Financial analysis
Finance
+8 more

We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world-class investors and big data experts as part of the GP, top-notch ML experts as advisers to the fund, plus equity funding to grow the team, license data, and scale the data processing.

We are interested in researching and taking live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video), and stock fundamental data.

You would join, and, if qualified, lead a growing team of data scientists and researchers, and be responsible for a complete lifecycle of quantitative strategy implementation and trading.

Requirements:

  • At least 3 years of relevant ML experience
  • Graduation date: 2018 or earlier
  • 3-5 years of experience in high-level Python programming
  • Master's degree (or Ph.D.) in a quantitative discipline such as Statistics, Mathematics, Physics, or Computer Science from a top university
  • Good knowledge of applied and theoretical statistics, linear algebra, and machine learning techniques
  • Ability to leverage financial and statistical insights to research, explore, and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models
  • Ownership of the research, design, development, and implementation of strategy development, with effective communication with teammates
  • Prior experience with, and good knowledge of, the lifecycle and pitfalls of algorithmic strategy development and modelling
  • Good practical knowledge of financial statements, value investing, and portfolio and risk management techniques
  • A proven ability to lead and drive innovation to solve challenges and roadblocks in project completion
  • A valid GitHub profile with some activity in it

Bonus to have:

  • Experience in storing and retrieving data from large and complex time series databases
  • Very good practical knowledge of time-series modelling and forecasting (ARIMA, ARCH, and stochastic modelling)
  • Prior experience in optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation
  • Knowledge of the AWS/Cloud ecosystem is an added plus (EC2, Lambda, EKS, SageMaker, etc.)
  • Knowledge of REST APIs and data extraction and cleaning techniques
  • Experience in PySpark or other big data programming / parallel computing is good to have
  • Familiarity with derivatives; knowledge of multiple asset classes along with equities
  • Any progress towards CFA or FRM is a bonus
  • Average tenure of at least 1.5 years per company
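As an illustrative sketch of the backtesting mentioned above (a long/flat price-above-moving-average strategy in pure Python; the prices are invented, and a real backtest would also handle costs, slippage, and look-ahead bias):

```python
# Illustrative sketch only: the skeleton of a strategy backtest -- hold the
# asset when price is above its moving average, stay flat otherwise. Prices
# are invented for demonstration.

def backtest(prices, window):
    """Return cumulative return of a price-above-MA long/flat strategy."""
    equity = 1.0
    for t in range(window, len(prices) - 1):
        ma = sum(prices[t - window:t]) / window
        if prices[t] > ma:                    # long for the next period
            equity *= prices[t + 1] / prices[t]
    return equity - 1.0

prices = [100, 101, 103, 102, 105, 107, 106, 109]
print(round(backtest(prices, 3), 4))  # → 0.0686
```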
Bewakoof Brands Pvt Ltd
Posted by Sahil Khan
Mumbai
1 - 2 yrs
₹8L - ₹12L / yr
Data Science
R Programming
Tableau
- 1 to 3 years of experience in product analytics
- Highly conversant with Google Analytics and other similar tools
- Basic programming ability (preferably R or Python)
LatentView Analytics
Posted by Kannikanti madhuri
Chennai
3 - 5 yrs
₹0L / yr
SAS
SQL server
Python
SOFA Statistics
Analytics
+11 more
Looking for immediate joiners.

At LatentView, we would expect you to:
- Independently handle delivery of analytics assignments
- Mentor a team of 3-10 people and deliver to exceed client expectations
- Co-ordinate with onsite LatentView consultants to ensure high-quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques, etc.)

You'll be a valuable addition to our team if you have:
- 3-5 years of hands-on experience in delivering analytics solutions
- Great analytical skills and a detail-oriented approach
- Strong experience in R, SAS, Python, SQL, SPSS, Statistica, MATLAB, or similar analytic tools
- Working knowledge of MS Excel, PowerPoint, and data visualization tools like Tableau
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- A background in Statistics / Econometrics / Applied Math / Operations Research / MBA, or alternatively an engineering degree from a premier institution
zeotap India Pvt Ltd
Posted by Projjol Banerjea
Bengaluru (Bangalore)
6 - 10 yrs
₹5L - ₹40L / yr
Python
Big Data
Hadoop
Scala
Spark