
Senior Business Analytics Manager

at Smartavya

Agency job
12 - 16 yrs
₹32L - ₹35L / yr
Mumbai
Skills
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
Spotfire
ETL
SQL
Python
R Language
Data Science
Dashboard
SAS
Amazon Web Services (AWS)
Windows Azure
Hadoop
Spark
  • Engage independently with client business team managers and leaders to understand their requirements, help them structure those needs into data requirements, prepare functional and technical specifications for execution, and ensure delivery from the data team. This can be a combination of ETL processes, reporting tools, and analytics tools such as SAS and R.
  • Lead and manage the Business Analytics team, ensuring effective execution of projects and initiatives.
  • Develop and implement analytics strategies to support business objectives and drive data-driven decision-making.
  • Analyze complex data sets to provide actionable insights that improve business performance.
  • Collaborate with other departments to identify opportunities for process improvements and implement data-driven solutions.
  • Oversee the development, maintenance, and enhancement of dashboards, reports, and analytical tools.
  • Stay updated with the latest industry trends and technologies in analytics and data science.
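The reporting side of this role can be illustrated with a minimal extract-transform-load sketch. All names and figures below are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw export from a client system (regions and figures invented).
RAW = """region,month,revenue
North,2024-01,1200
North,2024-02,1350
South,2024-01,800
South,2024-02,950
"""

def extract(text):
    """Extract: parse the CSV export into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: aggregate revenue per region."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += float(row["revenue"])
    return dict(totals)

def load(totals):
    """Load: here, just render a tiny report table."""
    return "\n".join(f"{region}: {amount:.0f}" for region, amount in sorted(totals.items()))

report = load(transform(extract(RAW)))
print(report)
```

In practice the "load" step would write to a warehouse or dashboard rather than print, but the staged shape is the same.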
Similar jobs

Arahas Technologies
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role as a GCP Data Engineer. You will be responsible for managing large sets of structured and unstructured data and for developing processes that convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location: Pune

If you have a passion for data engineering and possess the following, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4 years of experience with Google Cloud Platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.
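One common BigQuery pattern behind roles like this is the date-partitioned table. As a hedged, self-contained illustration (the records and field names are invented, and a real loader would use the google-cloud-bigquery client), here is a pure-Python sketch of grouping records into daily partitions before a load step:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event records; in a real pipeline these would stream from a source system.
events = [
    {"id": 1, "ts": date(2024, 3, 1)},
    {"id": 2, "ts": date(2024, 3, 1)},
    {"id": 3, "ts": date(2024, 3, 2)},
]

def partition_by_day(rows):
    """Group rows by event date, mirroring a date-partitioned warehouse table."""
    parts = defaultdict(list)
    for row in rows:
        parts[row["ts"].isoformat()].append(row)
    return dict(parts)

partitions = partition_by_day(events)
for day, rows in sorted(partitions.items()):
    # A real loader would write each group to its dated partition of the target table.
    print(day, len(rows))
```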

Marktine
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
+5 more

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of designing and implementing a fully operational production-grade large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience with building productized data ingestion and processing pipelines using Spark, Scala, Python

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise and excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
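The ingestion-and-processing pipelines described above can be sketched in miniature. A production system would use Spark/Scala as the post notes; this pure-Python version with invented records only shows the staged shape (ingest, parse, aggregate):

```python
def ingest(lines):
    """Ingest raw lines, skipping blanks (stand-in for reading from a source)."""
    return (ln.strip() for ln in lines if ln.strip())

def parse(records):
    """Parse 'key:value' records into tuples, dropping malformed ones."""
    for rec in records:
        if ":" in rec:
            key, _, value = rec.partition(":")
            yield key, int(value)

def aggregate(pairs):
    """Reduce parsed pairs into per-key totals."""
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return totals

raw = ["a:1", "", "b:2", "bad-record", "a:3"]
result = aggregate(parse(ingest(raw)))
print(result)  # {'a': 4, 'b': 2}
```

Because each stage is a generator, records stream through without materializing intermediate lists, which is the same shape a distributed engine scales out.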

LogiNext
Posted by Rakhi Daga
Mumbai
4 - 7 yrs
₹12L - ₹19L / yr
Machine Learning (ML)
Data Science
PHP
Java
Spark
+1 more

LogiNext is looking for a technically savvy and passionate Senior Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.

In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.

Your goal will be to help our company analyze trends to make better decisions. Without an understanding of how the software works, a data scientist would struggle in this role. Apart from development experience in R and Python, you must know modern approaches to software development and their impact. DevOps, continuous integration and deployment, and experience with cloud computing are everyday skills for managing and processing data.

Responsibilities :

  • Adapt and enhance machine learning techniques based on physical intuition about the domain
  • Design sampling methodology and prepare data, including data cleaning, univariate analysis, and missing-value imputation; identify appropriate analytic and statistical methodology; develop predictive models; and document process and results
  • Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule and on budget
  • Coordinate and lead efforts to innovate by deriving insights from heterogeneous sets of data generated by our suite of Aerospace products
  • Support and mentor data scientists
  • Maintain and work with our data pipeline that transfers and processes several terabytes of data using Spark, Scala, Python, Apache Kafka, Pig/Hive, and Impala
  • Work directly with application teams/partners (internal clients such as Xbox, Skype, Office) to understand their offerings/domain and help them become successful with data so they can run controlled experiments (A/B testing)
  • Understand the data generated by experiments and produce actionable, trustworthy conclusions from them
  • Apply data analysis, data mining, and data processing to present data clearly and develop experiments (A/B testing)
  • Work with the development team to build tools for data logging and repeatable data tasks to accelerate and automate data-scientist duties


Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Math, Physics, Engineering, Statistics, or another technical field; PhD preferred
  • 4 to 7 years of experience in data mining, data modeling, and reporting
  • 3+ years of experience working with large data sets or doing large-scale quantitative analysis
  • Expert SQL scripting required
  • Development experience in one of the following: Scala, Java, Python, Perl, PHP, C++, or C#
  • Experience working with Hadoop, Pig/Hive, Spark, MapReduce
  • Ability to drive projects
  • Basic understanding of statistics: hypothesis testing, p-values, confidence intervals, regression, classification, and optimization are core lingo
  • Analysis: able to perform exploratory data analysis and extract actionable insights from the data, with impressive visualization
  • Modeling: familiar with ML concepts and algorithms; understanding of the internals and pros/cons of models is required
  • Strong algorithmic problem-solving skills
  • Experience manipulating large data sets through statistical software (e.g. R, SAS) or other methods
  • Superior verbal, visual, and written communication skills to educate and work with cross-functional teams on controlled experiments
  • Experimentation design or A/B testing experience is preferred
  • Experience in team management
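The statistics requirements above mention hypothesis testing and p-values in the context of controlled experiments. As a standard-library sketch (with invented conversion counts), a two-proportion z-test for an A/B experiment looks like:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Invented experiment: 120/1000 conversions in control vs 150/1000 in the variant.
z = two_proportion_z(120, 1000, 150, 1000)
p = two_sided_p(z)
print(round(z, 3), round(p, 4))
```

In practice one would also check sample-size assumptions and pre-register the metric, but the core arithmetic is no more than this.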

Ascendeum
Posted by Swezelle Esteves
Remote only
1 - 5 yrs
₹8L - ₹10L / yr
Python
Data Analytics
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
+4 more

Job Responsibilities: 

 

  • Identify valuable data sources and automate collection processes 
  • Undertake preprocessing of structured and unstructured data. 
  • Analyze large amounts of information to discover trends and patterns 
  • Help develop reports and analysis.
  • Present information using data visualization techniques. 
  • Assess tests, implement new or upgraded software, and assist with strategic decisions on new systems.
  • Evaluate changes and updates to source production systems.
  • Develop, implement, and maintain leading-edge analytic systems, taking complicated problems and building simple frameworks 
  • Providing technical expertise in data storage structures, data mining, and data cleansing. 
  • Propose solutions and strategies to business challenges 

 

Desired Skills and Experience: 

 

  • At least 1 year of experience in Data Analysis 
  • Complete understanding of Operations Research, Data Modelling, ML, and AI concepts. 
  • Knowledge of Python is mandatory, familiarity with MySQL, SQL, Scala, Java or C++ is an asset 
  • Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop) 
  • Analytical mind and business acumen 
  • Strong math skills (e.g. statistics, algebra) 
  • Problem-solving aptitude 
  • Excellent communication and presentation skills. 
  • Bachelor’s / Master's Degree in Computer Science, Engineering, Data Science or other quantitative or relevant field is preferred  
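Discovering trends in a series, as mentioned in the responsibilities above, often starts with a least-squares slope against time. A minimal sketch with invented weekly metric values:

```python
def trend_slope(ys):
    """Least-squares slope of a series against its index (positive => upward trend)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Invented weekly metric values.
weekly = [10.0, 12.0, 11.0, 14.0, 15.0]
slope = trend_slope(weekly)
print(f"slope per week: {slope:.2f}")
```

A real analysis would also look at significance and seasonality, but the slope is the usual first summary of "is this metric trending up or down".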
Ganit Business Solutions
Posted by Vijitha VS
Remote only
4 - 7 yrs
₹10L - ₹30L / yr
Scala
ETL
Informatica
Data Warehouse (DWH)
Big Data
+4 more

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily, on cloud as well as on-premises infrastructure
  • Build data models to store the data in the most optimized manner
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implementing the ETL process and optimal data pipeline architecture
  • Monitoring performance and advising any necessary infrastructure changes.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peer code and pipelines before deploying to production, checking for optimization issues and code standards

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
  • Proficient understanding of distributed computing principles
  • Experience in working with batch processing/ real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (petabyte scale).
  • Optimization techniques (performance, scalability, monitoring, etc.)
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.,
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Creation of DAGs for data engineering
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes
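The "creation of DAGs for data engineering" bullet above can be illustrated with Python's standard-library topological sorter; the task names are hypothetical stand-ins for real pipeline steps:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies.
dag = {
    "load_dim": {"extract"},
    "load_fact": {"extract", "load_dim"},
    "report": {"load_fact"},
    "extract": set(),
}

# static_order() yields tasks so every dependency runs before its dependents,
# which is exactly what an orchestrator like Airflow enforces at scale.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow DAGs add scheduling, retries, and operators on top, but the dependency-ordering core is this topological sort.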

 

 

 

Vedantu
Posted by Gaurav Singhal
Bengaluru (Bangalore)
1 - 4 yrs
₹8L - ₹16L / yr
Data Science
Machine Learning (ML)
R Programming
Python
Decision Science
+1 more
About Vedantu

If you have ever dreamed about being in the driver’s seat of a revolution, THIS is the place for you. Vedantu is an ed-tech startup focused on live online tutoring, and it recently raised Series B funding of $11M.

Job Description

We are looking for a Data Scientist who will support our product, sales, leadership, and marketing teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product, sales, and process optimization, and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data analysis methods, building and implementing models, and using/creating appropriate algorithms.

Desired Skills

1. Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
2. Ability to process, cleanse, and verify the integrity of data used for analysis.
3. Comfortable manipulating and analyzing complex, high-volume, high-dimensionality data from varying, heterogeneous sources.
4. Experience with messy real-world data: handling missing, incomplete, or inaccurate data.
5. Understanding of a broad set of algorithms and applied math.
6. Good at problem solving, probability, and statistics; knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage) and experience with applications.
7. Knowledge of data scraping is preferable.
8. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks) and their real-world advantages/drawbacks.
9. Experience with big data tools (Hadoop, Hive, MapReduce) is a plus.
Hex Business Innovations
Posted by Dhruv Dua
Faridabad
0 - 4 yrs
₹1L - ₹3L / yr
SQL
SQL server
MySQL
MS SQLServer
C#
+1 more

Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs. Regular modification, creation, and validation (with testing) of stored procedures, views, and functions on MS SQL Server.
Responsibilities and Duties
Understanding the ERP Software and use cases.
Regular creation, modification, and testing of:

  • Stored Procedures
  • Views
  • Functions
  • Nested Queries
  • Table and Schema Designs

Qualifications and Skills
MS SQL

  • Procedural Language
  • Datatypes
  • Objects
  • Databases
  • Schema
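A minimal runnable illustration of the view-and-query work described above. The role targets MS SQL Server; sqlite3 is used here only so the sketch is self-contained (T-SQL syntax for procedures and functions differs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
INSERT INTO orders (customer, amount) VALUES ('acme', 100.0), ('acme', 50.0), ('zen', 75.0);
-- A reporting view like the ones maintained for the ERP product.
CREATE VIEW customer_totals AS
  SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer;
""")
rows = con.execute("SELECT customer, total FROM customer_totals ORDER BY customer").fetchall()
print(rows)  # [('acme', 150.0), ('zen', 75.0)]
```

On SQL Server the same logic would typically live in a view plus stored procedures validated against test data before release.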
Velocity Services
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies, such as NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records for the data warehouse.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
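One common reading of the "data compaction pipelines" bullet above is log-compaction-style deduplication: keep only the newest version of each record. A minimal sketch with invented change-log rows:

```python
def compact(records):
    """Keep only the newest version of each key (log-compaction style)."""
    latest = {}
    for rec in records:
        key = rec["id"]
        if key not in latest or rec["updated_at"] > latest[key]["updated_at"]:
            latest[key] = rec
    return list(latest.values())

# Invented change-log rows; id 1 appears twice with increasing timestamps.
log = [
    {"id": 1, "status": "draft", "updated_at": 1},
    {"id": 2, "status": "paid", "updated_at": 2},
    {"id": 1, "status": "sent", "updated_at": 3},
]
compacted = compact(log)
print(compacted)
```

In a warehouse this is usually expressed as a dedup-on-merge (e.g. a DBT incremental model with a unique key), but the keep-the-latest logic is the same.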

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premises infrastructure and on AWS or Google Cloud

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Product based Company
Agency job
via Crewmates by Gowtham V
Coimbatore
4 - 15 yrs
₹5L - ₹25L / yr
ETL
Big Data
Hi Professionals,
We are looking for an ETL Developer for a reputed client in Coimbatore (permanent role).
Work Location: Coimbatore
Experience: 4+ years
Skills:
  • Talend, or strong experience in any of the ETL tools (Informatica/DataStage/Talend)
  • DB preference: Teradata / Oracle / SQL Server
  • Supporting tools: JIRA/SVN
Notice Period: Immediate to 30 days
MNC
Agency job
via I Squaresoft by Khadri SH
Remote only
5 - 8 yrs
₹10L - ₹20L / yr
ETL
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
SSIS
Cloud Datawarehouse
Hi,

Job Description

Problem Formulation: Identifies possible options to address the business problems; must possess a good understanding of dimensional modelling.

Must have worked on at least one end-to-end project using any cloud data warehouse (Azure Synapse, AWS Redshift, Google BigQuery)

Good to have an understanding of Power BI and its integration with cloud services like Azure or GCP

Experience working with SQL Server and SSIS (preferred)

Applied Business Acumen: Supports the development of business cases and recommendations. Owns delivery of project activity and tasks assigned by others. Supports process updates and changes. Solves business issues.

Data Transformation/Integration/Optimization:

The ETL developer is responsible for designing and creating the data warehouse and all related data extraction, transformation, and load functions in the company.

The developer should provide oversight and planning of data models, database structural design, and deployment, and work closely with the data architect and business analyst.

Duties include working in cross-functional software development teams (business analysts, testers, developers) following agile ceremonies and development practices.

The developer plays a key role in contributing to the design, evaluation, selection, implementation, and support of database solutions.

Development and Testing: Develops codes for the required solution by determining the appropriate approach and leveraging business, technical, and data requirements.

Creates test cases to review and validate the proposed solution design. Work on POCs and deploy the software to production servers.

Good to Have (Preferred Skills):

  • Minimum 4-8 Years of experience in Data warehouse design and development for large scale application
  • Minimum 3 years of experience with star schema, dimensional modelling and extract transform load (ETL) Design and development
  • Expertise working with various databases (SQL Server, Oracle)
  • Experience developing Packages, Procedures, Views and triggers
  • Nice to have: big data technologies
  • The individual must have good written and oral communication skills.
  • Nice to have: SSIS
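Star-schema loading, as described in the dimensional-modelling requirements above, assigns surrogate keys in dimension tables and references them from fact rows. A minimal in-memory sketch with invented sales data:

```python
class Dimension:
    """Tiny in-memory dimension table that assigns surrogate keys."""
    def __init__(self):
        self.keys = {}

    def surrogate(self, natural_key):
        # Create-on-first-sight lookup, as an ETL dimension-load step would do.
        if natural_key not in self.keys:
            self.keys[natural_key] = len(self.keys) + 1
        return self.keys[natural_key]

dim_customer, dim_product = Dimension(), Dimension()
sales = [("acme", "widget", 3), ("zen", "widget", 1), ("acme", "gizmo", 2)]

# Fact rows reference dimensions only through surrogate keys.
fact_sales = [
    {"customer_sk": dim_customer.surrogate(c),
     "product_sk": dim_product.surrogate(p),
     "qty": q}
    for c, p, q in sales
]
print(fact_sales)
```

In SSIS the same lookup-or-insert step is typically a Lookup transformation feeding the fact load; here it is shown as plain code to make the key assignment explicit.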

 

Education and Experience

  • Minimum 4-8 years of software development experience
  • Bachelor's and/or Master’s degree in computer science

Please reply with the below details.

Total Experience:
Relevant Experience:

Current CTC:
Expected CTC:

Any offers: Y/N

Notice Period:

Qualification:

DOB:
Present Company Name:

Designation:

Domain:

Reason for job change:

Current Location:
