9+ Analytics Jobs in Hyderabad | Analytics Job openings in Hyderabad
Apply to 9+ Analytics Jobs in Hyderabad on CutShort.io. Explore the latest Analytics Job opportunities across top companies like Google, Amazon & Adobe.

Location: Hyderabad (In Office)
Function: Product Management
Type: Full-time
About CraftMyPlate
CraftMyPlate is building India’s most trusted catering-tech platform, streamlining bulk food ordering for events, families, and corporate gatherings. From curated platters to fully customizable menus, we make large-scale ordering simple, transparent, and delightful—for both consumers and kitchen partners. We're a fast-growing startup solving real-world operational chaos in food ordering through tech, and are now looking for a sharp, execution-focused Product Manager to join our core team.
Role Overview
As a Product Manager at CraftMyPlate, you’ll own key products and features across our marketplace stack—customer ordering experience, vendor tools, and internal ops automation. You’ll drive end-to-end product thinking: from discovery and definition to delivery and optimization. You’ll be a key bridge between business, design, engineering, and operations, ensuring the right problems are solved in the right way.
Key Responsibilities
- Own the full product lifecycle: define strategy, prioritize roadmaps, write specs, ship features, and measure success.
- Understand business deeply: collaborate with sales, ops, and marketing to identify pain points, process gaps, and monetization opportunities.
- Translate complexity into clarity: write clear product requirements, create wireframes, and drive user stories that engineering can execute on.
- Work closely with design and engineering to build intuitive, scalable user experiences for both customers and vendor partners.
- Build for efficiency: automate internal workflows that reduce human dependency and improve operational throughput.
- Analyze product performance: define metrics, instrument dashboards, and make data-driven decisions to refine product iterations.
- Drive cross-functional alignment: ensure key stakeholders across supply, demand, support, and tech are aligned before launches.
- Manage trade-offs across timelines, tech constraints, and user experience to ship MVPs that move the business forward.
Minimum Requirements
- 2–4 years of experience in product management or adjacent roles (tech, consulting, operations, growth).
- Experience building products in marketplaces, consumer apps, or operationally heavy domains.
- Hands-on with tools like Figma, Notion, Jira, and Mixpanel (or equivalent).
- Strong written communication and structured thinking—can write a clear PRD, an intuitive flow, and a logical product spec.
- Comfort working in high-velocity, ambiguous startup environments.
- Analytical mindset; ability to break down metrics and drive insights.
- Passion for solving real-world problems with technology and simplicity.
Bonus (Preferred Qualifications)
- Prior experience in foodtech, e-commerce, logistics, or vendor marketplaces.
- Exposure to no-code tools, internal dashboards, or process automation.
- Worked closely with engineering teams and understand dev life cycles.
- Experience with ClickUp, PostHog, Zapier, or other internal stack tools we use.
Why Join Us?
- Be part of a high-ownership, mission-driven team shaping the future of catering-tech.
- Opportunity to lead products with 0-to-1 and 1-to-10 scale scope.
- Deep exposure to full-stack business thinking—tech, ops, sales, and support.
- Work directly with the founder and leadership on strategic priorities.
- Build for real India: solve hard problems that touch thousands of meals every week.
We are looking for technically skilled candidates with excellent interpersonal skills for the technical support position. Technical support officers troubleshoot technical issues, provide timely customer feedback, and support the roll-out of new applications, among other duties.
Because technical support officers talk to customers directly and also create written documentation, excellent written and verbal communication skills are essential.
Responsibilities:
- Identifying hardware and software solutions.
- Troubleshooting technical issues.
- Diagnosing and repairing faults.
- Resolving network issues.
- Installing and configuring hardware and software.
- Speaking to customers to quickly get to the root of their problem.
- Providing timely and accurate customer feedback.
- Talking customers through a series of actions to resolve a problem.
- Following up with clients to ensure the problem is resolved.
- Replacing or repairing the necessary parts.
- Supporting the roll-out of new applications.
- Providing support in the form of procedural documentation.
- Managing multiple cases at one time.
- Testing and evaluating new technologies.
- Conducting electrical safety checks on equipment.
Requirements:
- Degree in computer science or information technology.
- Certification in Microsoft, Linux, or Cisco is advantageous.
- Prior experience in tech support, desktop support, or a similar role.
- Proficiency in Windows/Linux/Mac OS.
- Experience with remote desktop applications and help desk software.
- Attention to detail and good problem-solving skills.
- Excellent interpersonal skills.
- Good written and verbal communication.
Seeking a detail-oriented Market Research Analyst to conduct surveys and analyze customer preferences and statistical data. Your role will involve providing valuable insights to support customers in their decision-making processes related to product designs, pricing, and promotions.
As a successful Market Research Analyst, you will have the ability to independently analyze qualitative data, identify trends, assess strategies, and evaluate competition, with the ultimate goal of enhancing competitiveness. Your responsibilities will include gathering and interpreting market research data, generating reports, and presenting findings to stakeholders.
We are looking for a self-motivated professional with strong analytical skills and a deep understanding of market dynamics. Join our team and contribute to our company's success by helping us make data-driven decisions to optimize our products and strategies.

- 5+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages; Python experience highly desirable. Experience in API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid (see the minimal sketch after this list)
- Create custom operators for Kubernetes and Kubeflow
- Develop data ingestion processes and ETLs
- Assist with DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
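A minimal, illustrative sketch of the kind of pipeline work named in the list above, assuming a PySpark Structured Streaming job that reads events from a Kafka topic and keeps a running count per event status. The broker address, topic name, and event schema are hypothetical placeholders, and the spark-sql-kafka connector is assumed to be on the Spark classpath; this is a sketch of the general pattern, not this employer's actual pipeline.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

# Hypothetical app name; requires the spark-sql-kafka connector on the classpath.
spark = SparkSession.builder.appName("order-events-pipeline").getOrCreate()

# Assumed event schema, for illustration only.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
])

# Read a stream of JSON events from a placeholder Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "order-events")                   # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Maintain a running count of events per status and print it to the console.
counts = events.groupBy("status").count()
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()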


- 3+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, ELK Stack, and Fluentd, and streaming databases like Druid
- Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
- 2+ years of industry experience in developing scalable data ingestion processes and ETLs
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- Experience with scripting languages; Python experience highly desirable
- 2+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Demonstrated expertise in building cloud-native applications
- Experience in API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
- Design and implement large-scale data processing pipelines using Kafka, Fluentd, and Druid
- Assist with DevOps operations
- Develop data ingestion processes and ETLs
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices


Job Description
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytics solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Skills
- Bachelor's/Master's/PhD in CS or equivalent industry experience
- Demonstrated expertise in building and shipping cloud-native applications
- 5+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages; Python experience highly desirable. Experience in API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
- Create custom operators for Kubernetes and Kubeflow
- Develop data ingestion processes and ETLs
- Assist with DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud and knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, and Hortonworks.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience: 4-10 years
Location: Pune
Mandatory Skills -
- Strong in ETL/SQL development
- Strong Data Warehousing skills
- Hands-on experience working with Unix/Linux
- Development experience in Enterprise Data warehouse projects
- Good to have: experience working with Python and shell scripting
Opportunities -
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
- Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up a modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud by leveraging automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with additional capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle - Data Warehouse Assessment & Migration Planning Product
Raven - Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytics solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions. The successful candidate will be curious, creative, ambitious, self-motivated, flexible, and have a bias towards taking action. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply while taking on a significant amount of responsibility.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Experience
12+ Years
Location
Hyderabad
Skills
Bachelor's/Master's/PhD in CS or equivalent industry experience
10+ years of industry experience in Java-related frameworks such as Spring and/or Typesafe
Experience with scripting languages; Python experience highly desirable (5+ years of industry experience in Python)
Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
Demonstrated expertise in building and shipping cloud-native applications
Experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, ELK Stack, and Fluentd
Experience in API development using Swagger
Strong expertise with containerization technologies, including Kubernetes and Docker Compose
Experience with cloud platform services such as AWS, Azure, or GCP
Experience implementing automated testing platforms and unit tests
Proficient understanding of code versioning tools, such as Git
Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
Architect, design, and implement large-scale data processing pipelines
Design and implement APIs
Assist with DevOps operations
Identify performance bottlenecks and bugs, and devise solutions to these problems
Help maintain code quality, organization, and documentation
Communicate with stakeholders regarding various aspects of the solution
Mentor team members on best practices


A top MNC is looking for candidates in Business Analytics (4-8 years of experience).
Requirements:
- Experience in metric development and business analytics
- High proficiency in data and statistical skills
- Tools: R, SQL, Python, Advanced Excel
- Good verbal communication skills
- Supply chain domain knowledge
*Job Summary*
Duration: 6-month contract based at Hyderabad
Availability: Immediate or within 1 week
Qualification: Graduate/PG from Reputed University
*Key Skills*
R, SQL, Advanced Excel, Python
*Required Experience and Qualifications*
5 to 8 years of Business Analytics experience.