Data Engineer

at numadic

Posted by Enrich Braz
Remote only
2 - 4 yrs
₹7L - ₹14L / yr
Full time
Skills
Data Warehouse (DWH)
Informatica
ETL
Python
MS-Excel

Numadic is hiring a Data Engineer

 

We are Numads

Drawn to the unknown, we are new age nomads who seek to bring near what is far. We work as full stack humans, able to operate independently while enjoying the journey together. We see past the sandlines of clan and craft and value the unique and special talents of each. We think, we design, we code, we write, we share, we care and we ride together. We aim to live by our values of Humility, Collaboration and Transformation.

 

We undisrupt vehicle payments

To impact a highly fragmented v-commerce space, we aim to bring order by simplifying and aggregating. We are a full-stack v-commerce platform. We build the Network side of our products to achieve dense on-ground digital coverage by working with and aggregating different types of partners, and to help set the standards for scaling sustainably into the future. We also build the User side of our products to make the road travel experience for our vehicle owners and drivers contactless and fully autonomous.

 

 

About the role:

  1. Apply advanced predictive modeling and statistical techniques to design, build, maintain, and improve multiple real-time decision systems.
  2. Visualize and present complex data sets using multidimensional visualization tools.
  3. Perform data cleansing, transformation and feature engineering (a small pandas sketch follows this list).
  4. Design scalable, automated data mining, modelling and validation processes.
  5. Produce scalable, reusable, efficient feature code to be implemented on clusters and standalone data servers.
  6. Contribute to the development and deployment of machine learning algorithms, operational research, semantic analysis, and statistical methods for finding structure in large data sets.
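
For illustration only, a minimal pandas sketch of the cleansing, transformation and feature-engineering step in point 3; the dataset, column names and values here are hypothetical, not Numadic's actual schema:

```python
import pandas as pd

# Hypothetical trip records; column names are illustrative only.
trips = pd.DataFrame({
    "vehicle_id": ["V1", "V1", "V2", "V2", None],
    "toll_amount": [120.0, None, 85.0, 85.0, 60.0],
    "entry_time": ["2023-01-01 08:00", "2023-01-01 09:30",
                   "2023-01-01 10:00", "2023-01-01 10:00", "2023-01-01 11:00"],
})

# Cleansing: drop rows missing an identifier, de-duplicate, fill missing amounts.
trips = trips.dropna(subset=["vehicle_id"]).drop_duplicates()
trips["toll_amount"] = trips["toll_amount"].fillna(trips["toll_amount"].median())

# Transformation / feature engineering: parse timestamps and derive simple features.
trips["entry_time"] = pd.to_datetime(trips["entry_time"])
trips["hour_of_day"] = trips["entry_time"].dt.hour
features = trips.groupby("vehicle_id").agg(
    trip_count=("toll_amount", "size"),
    avg_toll=("toll_amount", "mean"),
)
print(features)
```

In practice, features of this kind would feed the real-time decision systems and models described above.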

 

Why is the opportunity exciting?

We are a startup, which gives you the opportunity to be part of a fast-growing company. With full ownership, you will have the direct ability to make a difference and lead teams. You will work with and learn from a diverse group of Numads. You will solve first-to-market problems that will be taken globally. We are based out of Goa, and we offer a great opportunity to work from one of the most beautiful parts of India.

 

Role requirements:

  1. An absolute minimum of 1-3 years of relevant data science experience.
  2. Have an Engineering or comparable math / physics degree.
  3. Knowledge & working proficiency in Excel, Python, R
  4. Expert proficiency in at least 2 structured programming languages.
  5. Deep understanding of statistical and analytical models.
  6. Bias for action - Ability to move quickly while taking time out to review the details.
  7. Clear communicator - Ability to synthesise and clearly articulate complex information, highlighting key takeaways and actionable insights.
  8. Team player - Working mostly autonomously, yet being a team player who keeps the crew looped in.
  9. Mindset - Ability to take responsibility for your life and that of your people and projects.
  10. Mindfulness - Ability to maintain practices that keep you grounded.

 

Join Numadic

From the founders to our investors and advisors, what we share is a common respect for the value of human life and of meaningful relationships. We are full-stack humans, who work with full-stack humans and seek to do business with full-stack humans. We have turned down projects when we found a misalignment of values at the other end of the table. We do not believe that the customer is always right. We believe that all humans are equal and that the direction of the flow of money should not define the way people are treated. This is life at Numadic.


About numadic

Founded: 2016
Size: 20-100
Stage: Raised funding
About

‘Undisrupting movement’ by delivering products that redefine fintech for mobility.


Drawn to the unknown, we are new age nomads who seek to bring near what is far. We work as full stack humans, able to operate independently while enjoying the journey together. We see past the sandlines of clan and craft and value the unique and special talents of each.


We think, we design, we code, we write, we share, we care and we ride together. We love movement. We love exploring the unknown and charting new paths and maps.


We simplify movement. We build technology products that simplify the flow of people and goods. We move together. We traverse together, working as equals among our team and customers.

Connect with the team
Enrich Braz

Similar jobs

Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity could be characterized in terms of volume, dimensionality and multiple touchpoints/sources. We understand the data, ask fundamental, first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role would be a client-facing one, hence good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build the DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant to the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy (a small scikit-learn sketch follows this list)
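
As a loose illustration of the stacking/ensembling idea above, a minimal scikit-learn sketch on synthetic data, not tied to any particular client problem:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data standing in for a real client dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Stacking: base learners feed their predictions into a final meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=42)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```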

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning and deep learning
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred
  • Familiarity with SQL or other databases.
at Amagi Media Labs
Posted by Rajesh C
Chennai
10 - 12 yrs
Best in industry
Data Science
Machine Learning (ML)
Python
SQL
Artificial Intelligence (AI)
Job Title: Data Science Manager
Job Location: India
Job Summary
We at Condé Nast are looking for a data science manager primarily for the content intelligence workstream, although there might be some overlap with other workstreams. The position is based out of Chennai and shall report to the head of the data science team, Chennai.
Responsibilities:
1. Ideate new opportunities within the content intelligence workstream where data Science can
be applied to increase user engagement
2. Partner with business and translate business and analytics strategies into multiple short-term
and long-term projects
3. Lead data science teams to build quick prototypes to check feasibility and value to business
and present to business
4. Formulate the business problem as a machine learning/AI problem
5. Review & validate models & help improve model accuracy
6. Socialize & present the model insights in a manner that business can understand
7. Lead & own the entire value chain of a project/initiative life cycle - Interface with business,
understand the requirements/specifications, gather data, prepare it, train, validate and test the
model, create business presentations to communicate insights, monitor/track the performance
of the solution and suggest improvements
8. Work closely with ML engineering teams to deploy models to production
9. Work closely with data engineering/services/BI teams to help develop data stores, intuitive
visualizations for the products
10. Setup career paths & learning goals for reportees & mentor them
Required Skills:
1. 5+ years of experience in leading Data Science & Advanced analytics projects with a focus on
building recommender systems and 10-12 years of overall experience
2. Experience in leading data science teams to implement recommender systems using content-based and collaborative filtering and embedding techniques (an illustrative sketch follows this list)
3. Experience in building propensity models, churn prediction, NLP - language models,
embeddings, recommendation engine etc
4. Master’s degree with an emphasis in a quantitative discipline such as statistics, engineering,
economics or mathematics/ Degree programs in data science/ machine learning/ artificial
intelligence
5. Exceptional Communication Skills - verbal and written
6. Moderate level proficiency in SQL, Python
7. Needs to have demonstrated continuous learning through external certifications, degree
programs in machine learning & artificial intelligence
8. Knowledge of Machine learning algorithms & understanding of how they work
9. Knowledge of Reinforcement Learning
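
Purely as an illustration of the collaborative-filtering approach named in point 2 (not Condé Nast's implementation), a toy item-based recommender using cosine similarity:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Toy user-item interaction matrix (rows = users, columns = articles);
# the values are illustrative engagement counts, not real data.
interactions = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 4, 4],
])

# Item-item collaborative filtering: similarity between columns (items).
item_similarity = cosine_similarity(interactions.T)

# Score unseen items for user 0 as a similarity-weighted sum of their interactions.
user = interactions[0]
scores = item_similarity @ user
scores[user > 0] = -np.inf          # mask items already consumed
print("recommended item index:", int(np.argmax(scores)))
```

A production recommender would add embeddings, implicit-feedback weighting and offline/online evaluation, but the core idea is the same.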
Preferred Qualifications
1. Expertise in libraries for data science - PySpark (Databricks), scikit-learn, pandas, NumPy,
matplotlib, PyTorch/TensorFlow/Keras, etc.
2. Working Knowledge of deep learning models
3. Experience in ETL/ data engineering
4. Prior experience in e-commerce, media & publishing domain is a plus
5. Experience in digital advertising is a plus
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - and, in the process, amassed a staggering amount of user data. Condé Nast made the right move
to invest heavily in understanding this data and formed a whole new Data team entirely
dedicated to data processing, engineering, analytics, and visualization. This team helps drive
engagement, fuel process innovation, further content enrichment, and increase market
revenue. The Data team aimed to create a company culture where data was the common
language and facilitate an environment where insights shared in real-time could improve
performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team
at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and
boost online revenue. We are broadly divided into four groups, Data Intelligence, Data
Engineering, Data Science, and Operations (including Product and Marketing Ops, Client
Services) along with Data Strategy and monetization. The teams built capabilities and products
to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are Condé
Nast, and It Starts Here.
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
Teradata
Vertica
Python
DBA
Redshift
+8 more
  • Key responsibility is to design, develop & maintain efficient data models for the organization, maintained to ensure optimal query performance by the consumption layer.
  • Developing, deploying & maintaining a repository of UDXs written in Java / Python.
  • Develop optimal data model designs, analyzing complex distributed data deployments and making recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
  • Periodic Database health check and maintenance
  • Designing collections in a no-SQL Database for efficient performance
  • Document & maintain data dictionary from various sources to enable data governance
  • Coordination with Business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading in down streams, No-SQL Databases, etc
  • Data Governance Process Implementation and ensuring data security

Requirements

  • Extensive working experience in Designing & Implementing Data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc).
  • Programming experience using Python / Java.
  • Working knowledge in developing & deploying User-defined Functions (UDXs) using Java / Python.
  • Strong understanding & extensive working experience in OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc) Architecture.
  • Strong knowledge in Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model.
  • Extensive technical experience in SQL including code optimization techniques.
  • Strong knowledge of database performance, troubleshooting, and tuning.
  • Knowledge of collection design in any No-SQL DB (DynamoDB, MongoDB, CosmosDB, etc), along with implementation of best practices.
  • Ability to understand business functionality, processes, and flows.
  • Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
  • Any OLAP DWH DBA experience and user management will be an added advantage.
  • Knowledge of financial industry-specific data models such as FSLDM, IBM Financial Data Model, etc., will be an added advantage.
  • Experience in Snowflake will be an added advantage.
  • Working experience in BFSI/NBFC and data understanding of loan/mortgage data will be an added advantage.

Functional knowledge

  • Data Governance & Quality Assurance
  • Modern OLAP Database Architecture & Design
  • Linux
  • Data structures, algorithm & data modeling techniques
  • No-SQL database architecture
  • Data Security

 

at Futurense Technologies
Posted by Rajendra Dasigari
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
+5 more
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies (a minimal sketch follows this list)
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
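
As a hedged sketch of point 5, a minimal extract-transform-load step using boto3, pandas and SQLAlchemy; the bucket, key, table name and connection string are placeholders, not real project resources:

```python
import io

import boto3
import pandas as pd
from sqlalchemy import create_engine

# Extract: pull a raw CSV from S3 (bucket and key are placeholders).
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-raw-bucket", Key="orders/2023-01-01.csv")
raw = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: basic cleansing and typing.
raw = raw.dropna(subset=["order_id"]).drop_duplicates(subset=["order_id"])
raw["order_date"] = pd.to_datetime(raw["order_date"])

# Load: append into a warehouse table (the connection string is a placeholder).
engine = create_engine("postgresql://user:password@warehouse-host:5439/analytics")
raw.to_sql("orders_clean", engine, if_exists="append", index=False)
```

Real pipelines would typically wrap steps like these in an orchestrator and an ETL framework such as the ones listed under the skills below.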
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google Big Query / Hive)
4. In-depth understanding of principles of database structure
5.  Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control – (VSS / DevOps / TFS / GitHub, Bitbucket, CI/CD with Jenkins)
at Aptus Data LAbs
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience with deploying and tuning open source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required).
  2. Deep understanding of Linux, from kernel mechanisms through user space management.
  3. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
  4. Using monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
  5. Wide understanding of IP networking as well as data centre infrastructure.

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat and JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing (a hedged consumer sketch follows this list).
  4. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming;
     • orchestration platforms: Mesos and Kubernetes;
     • data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS;
     • core presentation technologies: Kibana and Grafana.
  5. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
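
Purely illustrative of the Kafka work in point 3, a minimal consumer sketch using the kafka-python client; the topic, brokers and group id are placeholders:

```python
import json

from kafka import KafkaConsumer  # kafka-python client

# Topic, brokers and group id below are placeholders, not a real deployment.
consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers=["broker-1:9092", "broker-2:9092"],
    group_id="bigdata-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value  # already deserialized into a dict
    # Hand-off to the downstream processor (e.g. Spark Streaming) would happen here.
    print(message.topic, message.partition, message.offset, event)
```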

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open source big data platforms.

at Ganit Business Solutions
Posted by Kavitha J
Remote, Chennai, Bengaluru (Bangalore), Mumbai
3 - 6 yrs
₹12L - ₹20L / yr
Data Science
Data Scientist
R Programming
Python
Predictive modelling
+3 more

Ganit Inc. is the fastest growing Data Science & AI company in Chennai.

Founded in 2017 by 3 industry experts, alumni of IITs/SPJIMR, each with 17+ years of experience in the field of analytics.

We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis-based analytics, discovery-based AI and IoT. Our solutions are a combination of customised services and a functional product suite.

We primarily operate as a US-based start-up and have clients across the US, Asia-Pacific and the Middle East, with offices in the USA (New Jersey) and India (Chennai).

 

Having started with 3 people, the company is growing fast and now has 100+ employees.

 

1. What do we expect from you?

 

- Should possess a minimum of 2 years of experience in data analytics model development and deployment

- Skills relating to core Statistics & Mathematics.

- Huge interest in handling numbers

- Ability to understand all domains in businesses across various sectors

- Natural passion towards numbers, business, coding, visualisation

 

2. Necessary skill set:

 

- Proficient in R/Python, Advanced Excel, SQL

- Should have worked with Retail/FMCG/CPG projects solving analytical problems in Sales/Marketing/Supply Chain functions

- Very good understanding of algorithms, mathematical models, statistical techniques and data mining, such as regression models, clustering/segmentation, time series forecasting, decision trees/random forests, etc.

- Ability to choose the right model for the right data and translate that into code in R, Python, VBA (Proven capabilities)

- Should have handled large datasets, with a thorough understanding of SQL

- Ability to handle a team of Data Analysts

 

3. Good to have skill set:

 

- Microsoft PowerBI / Tableau / Qlik View / Spotfire

 

4. Job Responsibilities:

 

- Translate business requirements into technical requirements

- Data extraction, preparation and transformation

- Identify, develop and implement statistical techniques and algorithms that address business challenges and add value to the organisation

- Create and implement data models

- Interact with clients for queries and delivery adoption

 

5. Screening Methodology

 

- Problem Solving round (Telephonic Conversation)

- Technical discussion round (Telephonic Conversation)

- Final fitment discussion (Video Round)

 

 

Agency job
via FlexAbility by Srikanth Voona
Hyderabad
8 - 14 yrs
₹15L - ₹35L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Java
Python

Required skill

  • Around 6-8.5 years of experience overall, with around 4+ years in the AI / machine learning space
  • Extensive experience in designing large-scale machine learning solutions for ML use cases and large-scale deployments, and in establishing continuous automated improvement / retraining frameworks.
  • Strong experience in Python and Java is required.
  • Hands-on experience with scikit-learn, Pandas, NLTK
  • Experience in handling time-series data and associated techniques like Prophet and LSTM (an illustrative LSTM sketch follows this list)
  • Experience with regression, clustering and classification algorithms
  • Extensive experience building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feedforward) is required.
  • Experience with AutoML tools like TPOT or others
  • Must have strong hands-on experience with deep learning frameworks like Keras, TensorFlow or PyTorch
  • Knowledge of capsule networks, reinforcement learning or SageMaker is a desirable skill
  • Understanding of the financial domain is a desirable skill
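
As an illustrative sketch of the LSTM-based time-series work mentioned above (synthetic data, Keras via TensorFlow; not any specific production model):

```python
import numpy as np
from tensorflow import keras

# Synthetic univariate series; a sliding window turns it into supervised samples.
series = np.sin(np.linspace(0, 20 * np.pi, 2000)).astype("float32")
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

# A small LSTM regressor for one-step-ahead forecasting.
model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

print("one-step-ahead forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```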

 Responsibilities 

  • Design and implement solutions for ML use cases
  • Productionize systems and maintain them
  • Lead and implement the data acquisition process for ML work
  • Learn new methods and models quickly and utilize them in solving use cases
at Artivatic
Posted by Layak Singh
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹14L / yr
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Natural Language Processing (NLP)
+3 more
About Artivatic: Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 25+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits, using alternative data sources to increase their productivity, efficiency, automation power, and profitability, hence improving their way of doing business more intelligently & seamlessly.

Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, healthcare, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance and more.

Job description

We at Artivatic are seeking a passionate, talented and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience in modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Roles and responsibilities

  • Developing novel algorithms and modeling techniques to advance the state of the art in natural language processing.
  • Developing NLP-based tools and solutions end to end.
  • Working closely with R&D and machine learning engineers implementing algorithms that power user- and developer-facing products.
  • Being responsible for measuring and optimizing the quality of your algorithms.

Requirements

  • Hands-on experience building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP, etc. (an illustrative NLTK sketch follows this description).
  • Good understanding of rule-based, statistical and probabilistic NLP techniques.
  • Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition, etc.
  • Good understanding of machine learning and deep learning algorithms.
  • Good knowledge of data structures and algorithms.
  • Strong programming skills in Python/Java/Scala/C/C++.
  • Strong problem solving and logical skills.
  • A go-getter kind of attitude with the willingness to learn new technologies.
  • Well versed in software design paradigms and good development practices.

Basic Qualifications

  • Bachelor's or Master's degree in Computer Science, Mathematics or a related field, with specialization in natural language processing, machine learning or deep learning.
  • Publication record in conferences/journals is a plus.
  • 2+ years of working/research experience building NLP-based solutions is preferred.

If you feel that you are the ideal candidate and can bring a lot of value to our culture and company's vision, then please do apply. If your profile matches our requirements, you will hear from one of our team members. We are looking for someone who can be part of our team, not just an employee.

Job Perks

Insurance, travel compensation & others.
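
For illustration only (not Artivatic's code), a minimal NLTK sketch of the named entity recognition mentioned in the requirements; note that NLTK resource names can vary slightly between library versions:

```python
import nltk

# One-time model downloads; resource names can differ across NLTK versions
# (e.g. newer releases use "punkt_tab" instead of "punkt").
for resource in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(resource, quiet=True)

text = "Artivatic builds insurance underwriting products in Bangalore."

# Classic NLTK pipeline: tokenize -> POS-tag -> chunk named entities.
tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

# Keep (entity text, entity label) pairs; chunk subtrees have a .label(), plain tokens do not.
entities = [
    (" ".join(word for word, _ in subtree.leaves()), subtree.label())
    for subtree in tree
    if hasattr(subtree, "label")
]
print(entities)
```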
at Indium Software
Posted by Mohamed Aslam
Hyderabad
3 - 7 yrs
₹7L - ₹13L / yr
Python
Spark
SQL
PySpark
HiveQL
+2 more

Indium Software is a niche technology solutions company with deep expertise in Digital, QA and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.

With 1000+ associates globally, Indium operates through offices in the US, UK and India.

Visit www.indiumsoftware.com to know more.

Job Title: Analytics Data Engineer

What will you do:
The Data Engineer must be an expert in SQL development, further providing support to the Data and Analytics team in database design, data flow and analysis activities. The position of the Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing. The Data Engineer defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the business.

We ask:

Extensive Experience with SQL and strong ability to process and analyse complex data

The candidate should also have the ability to design, build, and maintain the business’s ETL pipeline and data warehouse. The candidate will also demonstrate expertise in data modelling and query performance tuning on SQL Server.
Analytics experience, especially funnel analysis, having worked on analytical tools like Mixpanel, Amplitude, Thoughtspot, Google Analytics, and similar tools.

Should work on tools and frameworks required for building efficient and scalable data pipelines
Excellent at communicating and articulating ideas and an ability to influence others as well as drive towards a better solution continuously.
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL and Presto (a brief PySpark sketch follows)
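
A brief, hedged PySpark sketch of the kind of pipeline and funnel-style analysis described above; the paths, column names and view name are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("analytics-pipeline").getOrCreate()

# Read raw event data (the path is a placeholder).
events = spark.read.parquet("s3a://example-bucket/raw/events/")

# Funnel-style aggregation: distinct users reaching each step, per day.
funnel = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "funnel_step")
    .agg(F.countDistinct("user_id").alias("users"))
    .orderBy("event_date", "funnel_step")
)

# Register as a view so analysts can query it with Spark SQL / Hive syntax.
funnel.createOrReplaceTempView("daily_funnel")
spark.sql("SELECT * FROM daily_funnel LIMIT 10").show()
```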

  • Relate Metrics to product
  • Programmatic Thinking
  • Edge cases
  • Good Communication
  • Product functionality understanding

Perks & Benefits:
A dynamic, creative & intelligent team - they will make you love being at work.
An autonomous and hands-on role to make an impact - you will be joining at an exciting time of growth!

Flexible work hours and Attractive pay package and perks
An inclusive work environment that lets you work in the way that works best for you!

at The Smart Cube
Posted by Jasmine Batra
Remote, Noida, NCR (Delhi | Gurgaon | Noida)
2 - 5 yrs
₹2L - ₹5L / yr
R Programming
Advanced analytics
Python
Marketing analytics
  • Act as a lead analyst on various data analytics projects aiding strategic decision-making for Fortune 500 / FTSE 100 companies, blue-chip consulting firms and global financial services companies
  • Understand the client objectives, and work with the PL to design the analytical solution/framework. Be able to translate the client objectives / analytical plan into clear deliverables with associated priorities and constraints
  • Collect, organize, prepare and manage data for the analysis, and conduct quality checks
  • Use and implement basic and advanced statistical techniques like frequencies, cross-tabs, correlation, regression, decision trees, cluster analysis, etc. to identify key actionable insights from the data (a small sketch follows this description)
  • Develop complete sections of the final client report in PowerPoint. Identify trends and evaluate insights in terms of logic and reasoning, and be able to succinctly present them as an executive summary/taglines
  • Conduct sanity checks of the analysis output based on reasoning and common sense, and be able to do a rigorous self-QC, as well as QC of the work assigned to analysts, to ensure an error-free output
  • Aid in decision making related to client management, and also be able to take client calls relatively independently
  • Support the project leads in managing small teams of 2-3 analysts, and independently set targets and communicate them to team members
  • Discuss queries/certain sections of the deliverable report over client calls or video conferences

Technical Skills:
  • Hands-on experience with one or more statistical tools such as SAS, R and Python
  • Working knowledge or experience in using SQL Server (or other RDBMS tools) would be an advantage

Work Experience:
  • 2-4 years of relevant experience in Marketing Analytics / MR
  • Experience in managing, cleaning and analysing large datasets using statistical packages like SAS, R, Python, etc.
  • Experience in data management using SQL queries on tools like Access / SQL Server
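
For illustration only, a small pandas/scikit-learn sketch of the cross-tab, correlation and regression techniques listed above, on synthetic survey-style data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic respondent-level data standing in for a real client dataset.
df = pd.DataFrame({
    "segment": rng.choice(["A", "B", "C"], size=300),
    "purchased": rng.choice([0, 1], size=300),
    "ad_spend": rng.normal(100, 20, size=300),
})
df["sales"] = 3.0 * df["ad_spend"] + rng.normal(0, 10, size=300)

# Frequencies and cross-tabs.
print(pd.crosstab(df["segment"], df["purchased"], normalize="index"))

# Correlation between numeric variables.
print(df[["ad_spend", "sales"]].corr())

# Simple regression of sales on ad spend.
reg = LinearRegression().fit(df[["ad_spend"]], df["sales"])
print("estimated coefficient:", round(float(reg.coef_[0]), 2))
```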