Analytics Jobs in Hyderabad


Apply to 6+ Analytics Jobs in Hyderabad on CutShort.io. Explore the latest Analytics Job opportunities across top companies like Google, Amazon & Adobe.

AI as a Service

Agency job
via Purple Hirez by Aditya K
Hyderabad
7 - 15 yrs
₹7L - ₹45L / yr
Analytics
Kubernetes
Python
  • 5+ years of industry experience administering (setting up, managing, and monitoring) streaming and batch data processing pipelines using frameworks such as Kafka Streams and PySpark, and streaming databases such as Druid or equivalents such as Hive
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience highly desirable
  • Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration tools such as Jenkins

Responsibilities

  • Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
  • Create custom operators for Kubernetes and Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist with DevOps operations
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
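Roles like this center on windowed streaming aggregation. As a rough, framework-free illustration of the kind of logic a Kafka Streams or PySpark pipeline implements, here is a minimal tumbling-window count in plain Python (the event data and window size are hypothetical):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed windows and count per key.

    A toy stand-in for the windowed aggregations that Kafka Streams or
    PySpark Structured Streaming perform over real event streams.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to the window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(3, "click"), (42, "click"), (61, "view"), (75, "click")]
print(tumbling_window_counts(events))
# window [0, 60) has two clicks; window [60, 120) has one view and one click
```

Real engines add watermarking, late-data handling, and fault tolerance on top of this core idea.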
Enterprise Artificial Intelligence

Agency job
via Purple Hirez by Aditya K
Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
+3 more
  • 3+ years of industry experience administering (setting up, managing, and monitoring) streaming and batch data processing pipelines using frameworks such as Kafka, the ELK Stack, and Fluentd, and streaming databases such as Druid
  • Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
  • 2+ years of industry experience developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
  • 2+ years of industry experience in Python; experience with other scripting languages highly desirable
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration tools such as Jenkins
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd, and Druid
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Assist with DevOps operations
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
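Much of the Fluentd-style ingestion work mentioned above is record normalization: renaming fields, coercing types, and dropping empties before records reach a store like Druid or Elasticsearch. A minimal sketch in plain Python (field names are hypothetical):

```python
def normalize_record(raw):
    """Normalize a raw log record the way a Fluentd filter plugin might:
    rename fields, coerce types, and drop empty values.
    Input/output field names here are invented for illustration.
    """
    out = {
        "timestamp": int(raw.get("ts", 0)),
        "level": raw.get("lvl", "INFO").upper(),
        "message": raw.get("msg", "").strip(),
    }
    # Drop fields that normalized to an empty string
    return {k: v for k, v in out.items() if v != ""}

rec = normalize_record({"ts": "1700000000", "lvl": "warn", "msg": " disk low "})
print(rec)  # {'timestamp': 1700000000, 'level': 'WARN', 'message': 'disk low'}
```

In production this transformation would be declared in Fluentd filter configuration rather than hand-written, but the shape of the work is the same.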
Product Development

Agency job
via Purple Hirez by Aditya K
Hyderabad
12 - 20 yrs
₹15L - ₹50L / yr
Analytics
Data Analytics
Kubernetes
PySpark
Python
+1 more

Job Description

We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.

Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.

 

Skills

  • Bachelor's/Master's/PhD in CS or equivalent industry experience
  • Demonstrated expertise in building and shipping cloud-native applications
  • 5+ years of industry experience administering (setting up, managing, and monitoring) streaming and batch data processing pipelines using frameworks such as Kafka Streams and PySpark, and streaming databases such as Druid or equivalents such as Hive
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience highly desirable
  • Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration tools such as Jenkins

Responsibilities

  • Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
  • Create custom operators for Kubernetes and Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist with DevOps operations
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
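Writing custom Kubernetes or Kubeflow operators, as the responsibilities above mention, comes down to a reconcile loop: compare desired state with observed state and act on the difference. A toy, framework-free sketch of that loop (resource names and fields are hypothetical; a real operator would use a framework such as kopf or controller-runtime):

```python
def reconcile(desired, observed):
    """Return the actions an operator would take to move observed state
    toward desired state: a toy model of the reconcile loop at the heart
    of Kubernetes operators. Both arguments map resource name -> spec dict.
    """
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"druid-broker": {"replicas": 3}}
observed = {"druid-broker": {"replicas": 2}, "stale-pod": {"replicas": 1}}
print(reconcile(desired, observed))
# [('update', 'druid-broker', {'replicas': 3}), ('delete', 'stale-pod', None)]
```

Real controllers run this loop continuously against the API server, which is what makes operators self-healing.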
DataMetica

Posted by Sayali Kachi
Pune, Hyderabad
4 - 10 yrs
₹5L - ₹20L / yr
ETL
SQL
Data engineering
Analytics
PL/SQL
+3 more

We at Datametica Solutions Private Limited are looking for SQL engineers who have a passion for the cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, and Hortonworks.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description

Experience: 4–10 years

Location: Pune

 


Mandatory Skills

  • Strong in ETL/SQL development
  • Strong Data Warehousing skills
  • Hands-on experience working with Unix/Linux
  • Development experience in Enterprise Data warehouse projects
  • Good to have: experience working with Python and shell scripting
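As a small flavor of the ETL/SQL work the skills above describe, here is a minimal extract-transform-load round trip using Python's built-in sqlite3 module (the table, columns, and order data are invented for illustration; real sources would be Teradata, Oracle, flat files, and the like):

```python
import sqlite3

# Extract: raw source rows, amounts arriving as strings
raw_rows = [("ORD-1", "199.99"), ("ORD-2", "45.50"), ("ORD-3", "12.00")]

# Transform: coerce amounts to floats and filter out small orders
clean_rows = [(oid, float(amt)) for oid, amt in raw_rows if float(amt) >= 20]

# Load: insert into a warehouse-style table, then aggregate with SQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean_rows)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # sum of the two orders that passed the filter
```

Production ETL adds incremental loads, error handling, and scheduling, but the extract/transform/load shape is the same.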

Opportunities

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies such as Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Opportunity to be part of enterprise-grade implementations of cloud and Big Data systems
  • Play an active role in setting up a modern data platform based on cloud and Big Data
  • Be part of teams with rich experience in various aspects of distributed systems and computing


 

About Us!

A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud, leveraging automation.

 

We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools such as Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

 

Datametica is a key partner of the major cloud service providers: Google, Microsoft, Amazon, and Snowflake.

 

We have our own products!

Eagle – Data Warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican – Automated Data Validation Product, which helps automate and accelerate data migration to the cloud

 

Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in our success.

 

 

Benefits we Provide!

Work with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com

Thinkdeeply

Posted by Aditya Kanchiraju
Hyderabad
6 - 16 yrs
₹7L - ₹26L / yr
Java
Technical Architecture
Analytics
Spring Boot
Apache Kafka
+4 more

We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions. The successful candidate will be curious, creative, ambitious, self-motivated, flexible, and have a bias toward action. As part of the early engineering team, you will have the chance to make a measurable impact on the future of Thinkdeeply, along with a significant amount of responsibility.

 

Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.

 

Experience

12+ Years

 

Location

Hyderabad

 

Skills

Bachelor's/Master's/PhD in CS or equivalent industry experience

10+ years of industry experience with Java-related frameworks such as Spring and/or Typesafe

5+ years of industry experience in Python; experience with other scripting languages highly desirable

Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django

Demonstrated expertise in building and shipping cloud-native applications

Experience administering (setting up, managing, and monitoring) streaming and batch data processing pipelines using frameworks such as Kafka, the ELK Stack, and Fluentd

Experience in API development using Swagger

Strong expertise with containerization technologies, including Kubernetes and Docker Compose

Experience with cloud platform services such as AWS, Azure, or GCP

Experience implementing automated testing platforms and unit tests

Proficient understanding of code versioning tools such as Git

Familiarity with continuous integration tools such as Jenkins
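The Swagger/OpenAPI skill listed above amounts to describing endpoints declaratively. Here is a minimal OpenAPI 3.0 document expressed as a Python dict, with a small helper that flattens it into operations (the /health endpoint is a made-up example; Spring Boot projects typically generate such specs from annotations rather than writing them by hand):

```python
# A minimal OpenAPI 3.0 spec as a Python dict; real projects usually
# write this in YAML or generate it from code annotations.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Pipeline Admin API", "version": "1.0.0"},
    "paths": {
        "/health": {
            "get": {
                "summary": "Liveness probe",
                "responses": {"200": {"description": "Service is up"}},
            }
        }
    },
}

def list_operations(spec):
    """Flatten an OpenAPI spec into (HTTP method, path) pairs."""
    return [
        (method.upper(), path)
        for path, ops in spec["paths"].items()
        for method in ops
    ]

print(list_operations(spec))  # [('GET', '/health')]
```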

 

Responsibilities

 

Architect, design, and implement large-scale data processing pipelines

Design and implement APIs

Assist with DevOps operations

Identify performance bottlenecks and bugs, and devise solutions to these problems

Help maintain code quality, organization, and documentation

Communicate with stakeholders regarding various aspects of the solution

Mentor team members on best practices

 

Bigdatamatica Solutions Pvt Ltd

Posted by Sriram Bhattaram
Hyderabad
4 - 8 yrs
₹45000 - ₹60000 / mo
Analytics
Python
R Programming
SQL Server

A top MNC is looking for Business Analytics candidates (4–8 years of experience).

 

Requirement :

- Experience in metric development and business analytics

- High proficiency in data and statistical skills

- Tools: R, SQL, Python, Advanced Excel

- Good verbal and communication skills

- Supply chain domain knowledge
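Metric development of the kind listed above usually means turning raw transactional data into a tracked number. As a hypothetical example, a supply-chain fill rate (units shipped over units ordered) in plain Python, with invented order data:

```python
def fill_rate(orders):
    """Fill rate = total units shipped / total units ordered, a common
    supply-chain service metric. `orders` is a list of
    (units_ordered, units_shipped) tuples.
    """
    ordered = sum(o for o, _ in orders)
    shipped = sum(s for _, s in orders)
    return shipped / ordered if ordered else 0.0

orders = [(100, 95), (50, 50), (80, 60)]
print(round(fill_rate(orders), 3))  # 0.891
```

In practice the same calculation would typically be expressed as a SQL aggregate or an R/pandas pipeline over a live orders table.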

 

*Job Summary*

Duration: 6-month contract, based in Hyderabad

Availability: 1 week/Immediate

Qualification: Graduate/PG from a reputed university

 

 

*Key Skills*

R, SQL, Advanced Excel, Python

 

*Required Experience and Qualifications*

5 to 8 years of Business Analytics experience.

 
