Analytics Data Engineer

Posted by Mohamed Aslam
3 - 7 yrs
₹7L - ₹13L / yr
Hyderabad
Skills
Python
Spark
SQL
PySpark
HiveQL
Mixpanel
Apache Hive

Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.

With over 1,000 associates globally, Indium operates through offices in the US, UK, and India.

Visit www.indiumsoftware.com to know more.

Job Title: Analytics Data Engineer

What will you do:
The Data Engineer must be an expert in SQL development, providing support to the Data and Analytics team in database design, data flow, and analysis activities. The position also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing. The Data Engineer defines and builds the data pipelines that enable faster, better, data-informed decision-making within the business.

We ask:

Extensive experience with SQL and a strong ability to process and analyse complex data.

Ability to design, build, and maintain the business's ETL pipeline and data warehouse. The candidate will also demonstrate expertise in data modelling and query performance tuning on SQL Server.
Proficiency in analytics, especially funnel analysis, with hands-on experience in analytical tools such as Mixpanel, Amplitude, ThoughtSpot, Google Analytics, and similar tools.

Experience with the tools and frameworks required for building efficient and scalable data pipelines.
Excellent communication skills, with the ability to articulate ideas, influence others, and continuously drive towards better solutions.
Experience working with Python, Hive queries, Spark, PySpark, Spark SQL, and Presto.

  • Relate metrics to the product
  • Programmatic thinking
  • Attention to edge cases
  • Good communication
  • Understanding of product functionality
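The funnel-analysis proficiency this role asks for can be sketched in plain Python. This is a minimal, hypothetical illustration (the event names and log data are invented, not from the posting) of the step-by-step conversion counting that tools like Mixpanel automate:

```python
from collections import defaultdict

def funnel_conversion(events, steps):
    """Count how many distinct users completed each funnel step in order.

    events: iterable of (user_id, event_name) tuples, in time order.
    steps:  ordered list of event names defining the funnel.
    """
    # For each user, track the index of the next step they still need.
    progress = defaultdict(int)
    for user, event in events:
        i = progress[user]
        if i < len(steps) and event == steps[i]:
            progress[user] = i + 1
    # counts[k] = number of users who reached at least step k.
    counts = [0] * len(steps)
    for reached in progress.values():
        for k in range(reached):
            counts[k] += 1
    return counts

# Hypothetical event log: signup -> activate -> purchase
log = [
    ("u1", "signup"), ("u2", "signup"), ("u3", "signup"),
    ("u1", "activate"), ("u2", "activate"),
    ("u1", "purchase"),
]
print(funnel_conversion(log, ["signup", "activate", "purchase"]))  # [3, 2, 1]
```

From the returned counts, per-step conversion rates follow directly (here 2/3 of signups activate, 1/2 of activations purchase).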

Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous, hands-on role where you can make an impact; you will be joining at an exciting time of growth!

Flexible work hours and an attractive pay package and perks.
An inclusive work environment that lets you work in the way that works best for you!


About Indium Software

Founded: 1999
Type:
Size: 100-1000
Stage: Profitable
About
We are an independent software testing services company established in 1999, offering QA and software testing services across the globe. We specialize in test automation of web, mobile, and desktop applications.
Connect with the team
Thushara Sasidharan, Jerome Christofher, Karunya P, Swaathipriya P, Vaishnavi Sundaram, Nandha Chandrashekar, Sandy S, Rekha N, Mohammed Shabeer, Preetha Saravanan, Emma Wilson, Ivarajneasan S K, Mohamed Aslam, Dinesh Kumar, Shalini S, Guna Sundari

Similar jobs

Antuit
Posted by Purnendu Shakunt
Bengaluru (Bangalore)
8 - 12 yrs
₹25L - ₹30L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Data Scientist
Python
+9 more

About antuit.ai

 

Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.

 

Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

 

The Role:

 

Antuit.ai is hiring a Principal Data Scientist. This person will facilitate standing up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.

 

Responsibilities:

 

Responsibilities include, but are not limited to, the following:

 

  • Manage and provide technical expertise to the delivery team, including recommending solution alternatives, identifying risks, and managing business expectations.
  • Design and build reliable, scalable automated processes for large-scale machine learning.
  • Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
  • Collaborate with Business, Technology, and Product teams to stand up the MLOps process.
  • Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture design, and workflow tools.
  • Deep dive into complex algorithmic and product issues in production.
  • Own metrics and reporting for the delivery team.
  • Set a clear vision for the team members and work cohesively to attain it.
  • Mentor and coach team members.


Qualifications and Skills:

 

Requirements

  • Engineering degree in any stream.
  • At least 7 years of prior experience in building ML-driven products/solutions.
  • Excellent programming skills in at least one of C++, Python, or Java.
  • Hands-on experience with open-source libraries and frameworks: TensorFlow, PyTorch, MLflow, Kubeflow, etc.
  • Experience developing and productizing large-scale models/algorithms.
  • Ability to drive fast prototypes/proofs of concept when evaluating technologies, frameworks, and performance benchmarks.
  • Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
  • Good verbal, written, and presentation skills.
  • Ability to learn new skills and technologies.
  • 3+ years working with retail or CPG preferred.
  • Experience in forecasting and optimization problems, particularly in the CPG/Retail industry, preferred.

 

Information Security Responsibilities

 

  • Understand and adhere to Information Security policies, guidelines, and procedures, and practice them for the protection of organizational data and information systems.
  • Take part in Information Security training and act accordingly while handling information.
  • Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).

EEOC

 

Antuit.ai is an at-will, equal opportunity employer.  We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Quinnox
Posted by MidhunKumar T
Bengaluru (Bangalore), Mumbai
10 - 15 yrs
₹30L - ₹35L / yr
ADF
azure data lake services
SQL Azure
azure synapse
Spark
+4 more

Mandatory Skills: Azure Data Lake Storage, Azure SQL databases, Azure Synapse, Data Bricks (Pyspark/Spark), Python, SQL, Azure Data Factory.


Good to have: Power BI, Azure IAAS services, Azure Devops, Microsoft Fabric


  • Very strong understanding of ETL and ELT.
  • Very strong understanding of Lakehouse architecture.
  • Very strong knowledge of PySpark and Spark architecture.
  • Good knowledge of Azure Data Lake architecture and access controls.
  • Good knowledge of Microsoft Fabric architecture.
  • Good knowledge of Azure SQL databases.
  • Good knowledge of T-SQL.
  • Good knowledge of the CI/CD process using Azure DevOps.
  • Power BI.

Bengaluru (Bangalore)
1 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+9 more

ROLE AND RESPONSIBILITIES

Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data, then cleansing and transforming that data into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.


QUALIFICATIONS AND EDUCATION REQUIREMENTS

Technical Bachelor’s Degree.

Non-Technical Degree holders should have 1+ years of relevant experience.

Compile
Posted by Sarumathi NH
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
Spark

You will be responsible for designing, building, and maintaining data pipelines that handle real-world data (RWD) at Compile. You will be handling both inbound and outbound data deliveries for datasets including Claims, Remittances, EHR, SDOH, etc.

You will

  • Work on building and maintaining data pipelines (specifically RWD).
  • Build, enhance, and maintain existing pipelines in PySpark and Python, and help build analytical insights and datasets.
  • Schedule and maintain pipeline jobs for RWD.
  • Develop, test, and implement data solutions based on the design.
  • Design and implement quality checks on existing and new data pipelines.
  • Ensure adherence to the security and compliance required for the products.
  • Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.

You have

  • Hands-on experience with ETL processes (minimum of 5 years).
  • Excellent communication skills and the ability to work with multiple vendors.
  • High proficiency with Spark and SQL.
  • Proficiency in data modeling, validation, quality checks, and data engineering concepts.
  • Experience working with big-data processing technologies such as Databricks, dbt, S3, Delta Lake, Deequ, Griffin, Snowflake, and BigQuery.
  • Familiarity with version control technologies and CI/CD systems.
  • Understanding of scheduling tools like Airflow/Prefect.
  • Minimum of 3 years of experience managing data warehouses.
  • Familiarity with healthcare datasets is a plus.
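The "quality checks on data pipelines" responsibility above can be sketched framework-free in plain Python. This is a minimal, hypothetical example; the column names, bounds, and records are invented for illustration and are not from the posting:

```python
def run_quality_checks(rows, required, ranges):
    """Return a list of (row_index, message) for records failing the checks.

    rows:     list of dicts, one per record.
    required: column names that must be present and non-null.
    ranges:   {column: (lo, hi)} inclusive bounds for numeric columns.
    """
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                failures.append((i, f"missing required column '{col}'"))
        for col, (lo, hi) in ranges.items():
            value = row.get(col)
            if value is not None and not (lo <= value <= hi):
                failures.append((i, f"'{col}'={value} outside [{lo}, {hi}]"))
    return failures

# Hypothetical claims records: one null ID, one negative amount.
claims = [
    {"claim_id": "c1", "amount": 120.0},
    {"claim_id": None, "amount": 50.0},
    {"claim_id": "c3", "amount": -10.0},
]
bad = run_quality_checks(claims, required=["claim_id"],
                         ranges={"amount": (0, 1_000_000)})
print(bad)  # rows 1 and 2 fail
```

In a real pipeline the same rule shapes would typically run inside Spark or a tool like Deequ, but the structure of the checks is the same.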

Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.         

Impact Guru
Posted by shubham samel
Mumbai
3 - 7 yrs
₹5L - ₹9L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

Job Responsibilities:

  • Excellent problem-solving and analytical skills: the ability to develop hypotheses, understand and interpret data within the context of the product/business, solve problems, and distill data into actionable recommendations.
  • Strong communication skills, with the ability to confidently work with cross-functional teams across the globe and to present information to all levels of the organization.
  • Intellectual and analytical curiosity: the initiative to dig into the why, what & how.
  • Strong number-crunching and quantitative skills.
  • Advanced knowledge of MS Excel and PowerPoint.
  • Good hands-on SQL skills.
  • Experience with Google Analytics, Optimize, Tag Manager, and other Google Suite tools.
  • Understanding of business analytics tools & statistical programming languages (R, SAS, SPSS, Tableau) is a plus.
  • Inherent interest in e-commerce & marketplace technology platforms and, broadly, in the consumer Internet & mobile space.
  • Previous experience of 1+ years working in a product company in a product analytics role.
  • Strong understanding of building and interpreting product funnels.

Perquisites & Benefits:

  • Opportunity to work with India's no. 1 crowdfunding platform.
  • Be a part of a young, smart, and rapidly growing team with management from Ivy League and premier colleges.
  • Competitive compensation and incentives.
  • Fun, casual, relaxed, and flexible work environment.


A Product-Based Client, Chennai
Chennai
4 - 8 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
PySpark
+2 more

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team, and company challenges, workflows, and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You'll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.

Role: Data Engineer

Please find below the JD for the Data Engineer role.

Location: Guindy, Chennai

How you’ll contribute:

  • Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals.
  • Develop deep analytical insights to inform and influence product roadmaps and business decisions, and help improve the consumer experience.
  • Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses.
  • Deeply understand the business and proactively spot risks and opportunities.
  • Develop dashboards and define metrics that drive key business decisions.
  • Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato.
  • Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit.
  • Work with our engineering teams to continually make our data pipelines and tooling more resilient.

Who you are:

  • Bachelor's degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist.
  • Strong SQL and data modeling skills, with experience applying those skills to thoughtfully create data models in a warehouse environment.
  • A proven track record of using analysis to drive key decisions and influence change.
  • Strong storyteller with the ability to communicate effectively with managers and executives.
  • Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals.
  • A passion for documentation.
  • A solution-oriented growth mindset. You'll need to be a self-starter and thrive in a dynamic environment.
  • A bias towards communication and collaboration with business and technical stakeholders.
  • Quantitative rigor and systems thinking.
  • Prior startup experience is preferred, but not required.
  • Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation).
  • Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis.
  • Experience with a SQL-focused data transformation framework such as dbt.
  • Experience with a Business Intelligence tool such as Mode or Tableau.


Mandatory Skillset:

  • Very strong in SQL
  • Spark OR PySpark OR Python
  • Shell scripting


Gurugram
1 - 7 yrs
₹7L - ₹35L / yr
SQL
Python
PySpark
ETL
Informatica
+11 more

About Us:

Cognitio Analytics is an award-winning, niche service provider that offers digital transformation solutions powered by AI and machine learning. We help clients realize the full potential of their data assets and the investments made in related technologies, be it analytics and big data platforms, digital technologies or your people. Our solutions include Health Analytics powered by Cognitio’s Health Data Factory that drives better care outcomes and higher ROI on care spending. We specialize in providing Data Governance solutions that help effectively use data in a consistent and compliant manner. Additionally, our smart intelligence solutions enable a deeper understanding of operations through the use of data science and advanced solutions like process mining technologies. We have offices in New Jersey and Connecticut in the USA and in Gurgaon in India.

 

What we're looking for:

  • Ability in data modelling and in designing, building, and deploying DW/BI systems for Insurance, Health Care, Banking, etc.
  • Performance tuning of ETL processes and SQL queries; recommend and implement ETL and query-tuning techniques.
  • Develop and create transformation queries, views, and stored procedures for ETL processes and process automations.
  • Translate business needs into technical specifications; evaluate and improve existing BI systems.
  • Use business intelligence and visualization software (e.g., Tableau, Qlik Sense, Power BI) to empower customers to drive their analytics and reporting.
  • Develop and update technical documentation for BI systems.

 

 

Key Technical Skills:

  • Hands-on experience in MS SQL Server & MSBI (SSIS/SSRS/SSAS), with an understanding of database concepts, star schema, SQL tuning, OLAP, Databricks, Hadoop, Spark, and cloud technologies.
  • Experience in designing and building complete ETL/SSIS processes and transforming data for ODS, Staging, and Data Warehouse layers.
  • Experience building self-service reporting solutions using business intelligence software (e.g., Tableau, Qlik Sense, Power BI).
Number Theory
Posted by Nidhi Mishra
Gurugram
5 - 12 yrs
₹10L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Job Description – Big Data Architect

Number Theory is looking for an experienced software/data engineer who will be focused on owning and rearchitecting dynamic pricing engineering systems.

Job Responsibilities:
  • Evaluate and recommend the Big Data technology stack best suited for the NT AI at scale Platform and other products.
  • Lead the team in defining a proper Big Data architecture design.
  • Design and implement features on the NT AI at scale platform using Spark and other Hadoop stack components.
  • Drive significant technology initiatives end to end and across multiple layers of architecture.
  • Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements.
  • Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable, scalable amount of data.
  • Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring, etc.).

Requirements:
  • 8+ years of experience in the implementation of a high-end software product.
  • Provides technical leadership in the Big Data space (Spark and the Hadoop stack: Map/Reduce, HDFS, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores like Cassandra, HBase, etc.) across engagements, and contributes to open-source Big Data technologies.
  • Rich hands-on experience in Spark, having worked with Spark at a larger scale.
  • Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies).
  • Passionate about continuous learning, experimenting, applying, and contributing towards cutting-edge open-source technologies and software paradigms.
  • Expert-level proficiency in Java and Scala.
  • Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies, one or more of Hive, Sqoop, Avro, Flume, Oozie, Zookeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib).
  • Operating knowledge of cloud computing platforms (AWS, Azure).

Good to have:

  • Operating knowledge of different enterprise Hadoop distributions (C).
  • Good knowledge of design patterns.
  • Experience working within a Linux computing environment and use of command-line tools, including knowledge of shell/Python scripting for automating common tasks.
Gauge Data Solutions Pvt Ltd
Posted by Deeksha Dewal
Noida
0 - 4 yrs
₹3L - ₹8L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Artificial Intelligence (AI)
+4 more

Essential Skills :

- Develop, enhance and maintain Python related projects, data services, platforms and processes.

- Apply and maintain data quality checks to ensure data integrity and completeness.

- Able to integrate multiple data sources and databases.

- Collaborate with cross-functional teams across Decision Sciences, Search, and Database Management to design innovative solutions, capture requirements, and drive a common future vision.

Technical Skills/Capabilities :

- Hands on experience in Python programming language.

- Understanding and proven application of Computer Science fundamentals in object oriented design, data structures, algorithm design, Regular expressions, data storage procedures, problem solving, and complexity analysis.

- Understanding of natural language processing and basic ML algorithms will be a plus.

- Good troubleshooting and debugging skills.

- Strong individual contributor, self-motivated, and a proven team player.

- Eager to learn and develop new experience and skills.

- Good communication and interpersonal skills.

About Company Profile :

Gauge Data Solutions Pvt Ltd :

- We are a leading company in Data Science, Machine Learning, and Artificial Intelligence.

- Within Gauge data we have a competitive environment for the Developers and Engineers.

- We at Gauge create potential solutions for real-world problems. One such example of our engineering is Casemine.

- Casemine is a legal research platform powered by Artificial Intelligence. It helps lawyers, judges and law researchers in their day to day life.

- Casemine provides exhaustive case results to its users with the use of cutting edge technologies.

- It is developed with the efforts of great engineers at Gauge Data.

- One such opportunity is now open for you. We at Gauge Data invite applications from competitive, self-motivated Python developers.

Purpose of the Role :

- This position will play a central role in developing new features and enhancements for the products and services at Gauge Data.

- To know more about what we do and how we do it, feel free to read these articles:

- https://bit.ly/2YfVAsv

- https://bit.ly/2rQArJc

- You can also visit us at https://www.casemine.com/.

- For more information visit us at: - www.gaugeanalytics.com

- Join us on LinkedIn, Twitter & Facebook
MNC
Agency job via Fragma Data Systems by Priyanka U
Chennai
1 - 5 yrs
₹6L - ₹12L / yr
Data Science
Natural Language Processing (NLP)
Data Scientist
R Programming
Python
Skills
  • Python coding skills
  • Scikit-learn, pandas, and TensorFlow/Keras experience
  • Machine learning: designing ML models and explaining them, for regression, classification, dimensionality reduction, anomaly detection, etc.
  • Implementing machine learning models and pushing them to production
  • Creating Docker images for ML models; REST API creation in Python
1) Data scientist with NLP experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of text and NLP-related projects, such as text classification, text summarization, topic modeling, etc.
2) Data scientist with computer vision for documents experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of vision and deep learning for documents: CNNs and deep neural networks using TensorFlow with Keras for object detection, OCR implementation, document extraction, etc.
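The "Docker images for ML models" requirement above can be sketched as a minimal Dockerfile. Every name here (the base image tag, `serve.py`, `model.pkl`, and the port) is a hypothetical placeholder chosen for illustration, not something specified by the posting:

```dockerfile
# Minimal image for serving a pickled model behind a Python web app (sketch).
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialized model and the serving code.
COPY model.pkl serve.py ./

# serve.py is assumed to expose a REST endpoint on port 8000.
EXPOSE 8000
CMD ["python", "serve.py"]
```

Copying the dependency list before the application code is a common layer-caching pattern: rebuilding after a code-only change skips the slow `pip install` step.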