Microsoft Business Intelligence (MSBI) Jobs in Hyderabad


Apply to 11+ Microsoft Business Intelligence (MSBI) Jobs in Hyderabad on CutShort.io. Explore the latest Microsoft Business Intelligence (MSBI) Job opportunities across top companies like Google, Amazon & Adobe.

Data Semantics
Deepu Vijayan
Posted by Deepu Vijayan
Remote, Hyderabad, Bengaluru (Bangalore)
4 - 15 yrs
₹3L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Analysis Services (SSAS)
SQL Server Reporting Services (SSRS)
+4 more

This is a permanent opening with Data Semantics.

Data Semantics 


We are a product-based company and a Microsoft Gold Partner.

Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. To achieve this, we provide Artificial Intelligence, Big Data, and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed among the Top 20 Analytics companies by Silicon India (2018) and among the Top 20 BI companies by CIO Review India (2014). We are headquartered in Bangalore, India, with offices in six global locations, including the USA, the United Kingdom, Canada, the United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions.

 

Our Services include: 

Business Intelligence & Visualization

App and Data Modernization

Low Code Application Development

Artificial Intelligence

Internet of Things

Data Warehouse Modernization

Robotic Process Automation

Advanced Analytics

 

Our Products:

Sirius – World’s most agile conversational AI platform

Serina

Conversational Analytics

Contactless Attendance Management System

 

 

Company URL:   https://datasemantics.co 


JD:

MSBI

SSAS

SSRS

SSIS

Data Warehousing

SQL
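
As a flavor of the SSIS/data-warehousing skills listed above, here is a minimal Python extract-transform-load sketch into a SQL Server staging table. The connection strings, tables, and columns are hypothetical placeholders, not details from this posting, and it assumes the pandas and pyodbc packages.

```python
# Minimal ETL sketch: extract from a source SQL Server database,
# apply a small transformation, and load into a warehouse staging table.
# All server/database/table names below are hypothetical placeholders.
import pandas as pd
import pyodbc

SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src-host;DATABASE=SalesOLTP;Trusted_Connection=yes;"
DWH = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwh-host;DATABASE=SalesDWH;Trusted_Connection=yes;"

def run_etl() -> None:
    # Extract: pull the last day's orders from the operational source.
    with pyodbc.connect(SRC) as src:
        orders = pd.read_sql(
            "SELECT OrderID, CustomerID, OrderDate, Amount "
            "FROM dbo.Orders WHERE OrderDate >= DATEADD(day, -1, GETDATE())",
            src,
        )

    # Transform: derive the surrogate date key the warehouse expects.
    orders["DateKey"] = pd.to_datetime(orders["OrderDate"]).dt.strftime("%Y%m%d")

    # Load: row-by-row insert into staging (an SSIS/bulk load would replace this at volume).
    rows = [
        (int(r.OrderID), str(r.CustomerID), int(r.DateKey), float(r.Amount))
        for r in orders.itertuples(index=False)
    ]
    with pyodbc.connect(DWH) as dwh:
        cur = dwh.cursor()
        cur.executemany(
            "INSERT INTO stg.FactOrders (OrderID, CustomerID, DateKey, Amount) VALUES (?, ?, ?, ?)",
            rows,
        )
        dwh.commit()

if __name__ == "__main__":
    run_etl()
```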

master works
Spandana Bomma
Posted by Spandana Bomma
Hyderabad
3 - 7 yrs
₹6L - ₹15L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+8 more

Job Description:

Responsibilities:

* Work on real-world computer vision problems

* Write robust industry-grade algorithms

* Leverage OpenCV, Python and deep learning frameworks to train models.

* Use deep learning frameworks such as Keras, TensorFlow, and PyTorch.

* Develop integrations with various in-house or external microservices.

* Must have experience with deployment practices (Kubernetes, Docker, containerization, etc.) and model compression techniques

* Research latest technologies and develop proof of concepts (POCs).

* Build and train state-of-the-art deep learning models to solve Computer Vision problems (a minimal training sketch follows this list), including, but not limited to:

* Segmentation

* Object Detection

* Classification

* Object Tracking

* Visual Style Transfer

* Generative Adversarial Networks

* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision

* Plan and develop Computer Vision research projects, defining the scope of work, including formal research objectives and outcomes

* Provide specialized technical and scientific research to support the organization across projects involving existing and new technologies
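
The minimal training sketch referenced above: a toy image-classification fine-tuning loop in PyTorch (one of the frameworks listed). The dataset path, class count, and hyperparameters are hypothetical placeholders rather than anything specified in this posting; it assumes the torch and torchvision packages are installed.

```python
# Minimal PyTorch sketch: fine-tune a small CNN classifier.
# Dataset location, class count, and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def train(data_dir: str = "data/train", num_classes: int = 4, epochs: int = 3) -> nn.Module:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Standard ImageNet-style preprocessing for a folder-per-class dataset.
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    loader = DataLoader(datasets.ImageFolder(data_dir, tfm), batch_size=32, shuffle=True)

    # Start from a pretrained backbone and swap the classification head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: loss={running_loss / len(loader):.4f}")
    return model
```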

Skills:

* Object Detection

* Computer Science

* Image Processing

* Computer Vision

* Deep Learning

* Artificial Intelligence (AI)

* Pattern Recognition

* Machine Learning

* Data Science

* Generative Adversarial Networks (GANs)

* Flask

* SQL

Wallero technologies
Hyderabad
7 - 15 yrs
₹20L - ₹28L / yr
SQL
Data modeling
ADF
PowerBI
  1. Strong communication skills are essential, as the selected candidate will lead a team of two in the future.
  2. SQL: strong proficiency in data modeling, table design, and query writing (a minimal sketch follows this list).
  3. Azure Data Factory (ADF): hands-on experience building ADF pipelines and setting them up end-to-end in Azure, including subscriptions, integration runtime (IR), and resource group creation.
  4. Power BI: hands-on experience building reports, including documentation and adherence to existing standards.
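
The sketch referenced in the SQL item above: a toy star-schema design (one dimension, one fact table) and a reporting query a Power BI report might sit on, executed against Azure SQL via pyodbc. All table and column names, and the connection string, are hypothetical placeholders.

```python
# Minimal data-modeling sketch: a one-dimension star schema and a reporting query.
# Names and connection string are hypothetical placeholders.
import pyodbc

CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver.database.windows.net;DATABASE=SalesDW;UID=etl_user;PWD=***"

DDL = """
CREATE TABLE dbo.DimCustomer (
    CustomerKey  INT IDENTITY(1,1) PRIMARY KEY,
    CustomerCode NVARCHAR(20) NOT NULL,
    Region       NVARCHAR(50) NOT NULL
);
CREATE TABLE dbo.FactSales (
    SalesKey     BIGINT IDENTITY(1,1) PRIMARY KEY,
    CustomerKey  INT NOT NULL REFERENCES dbo.DimCustomer(CustomerKey),
    DateKey      INT NOT NULL,          -- YYYYMMDD surrogate for a date dimension
    Amount       DECIMAL(18, 2) NOT NULL
);
"""

REPORT_QUERY = """
SELECT c.Region, SUM(f.Amount) AS TotalSales
FROM dbo.FactSales f
JOIN dbo.DimCustomer c ON c.CustomerKey = f.CustomerKey
GROUP BY c.Region
ORDER BY TotalSales DESC;
"""

with pyodbc.connect(CONN) as conn:
    cur = conn.cursor()
    cur.execute(DDL)          # SQL Server accepts the two CREATE TABLE statements as one batch
    conn.commit()
    for region, total in cur.execute(REPORT_QUERY):
        print(region, total)
```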


IT MNC
Agency job
via Apical Mind by Madhusudan Patade
Bengaluru (Bangalore), Hyderabad, Noida, Chennai, NCR (Delhi | Gurgaon | Noida)
3 - 12 yrs
₹15L - ₹40L / yr
Presto
Hadoop
SQL

Experience – 3 – 12 yrs

Budget - Open

Location - PAN India (Noida/Bengaluru/Hyderabad/Chennai)


Presto Developer (4)

 

Understanding of distributed SQL query engines running on Hadoop

Design and develop core components for Presto 

Contribute to the ongoing Presto development by implementing new features, bug fixes, and other improvements 

Develop new and extend existing Presto connectors to various data sources 

Lead complex and technically challenging projects from concept to completion 

Write tests and contribute to ongoing automation infrastructure development 

Run and analyze software performance metrics 

Collaborate with teams globally across multiple time zones and operate in an Agile development environment 

Hands-on experience with, and interest in, Hadoop
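
As a small illustration of querying a distributed SQL engine such as Presto over data in Hadoop/Hive, a minimal sketch assuming the presto-python-client package; the coordinator host, catalog, schema, and table are hypothetical placeholders, not details from this posting.

```python
# Minimal Presto client sketch: run a SQL query against a Hive catalog.
# Coordinator host, catalog, schema, and table are hypothetical placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute(
    """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    WHERE order_date >= DATE '2021-01-01'
    GROUP BY region
    ORDER BY revenue DESC
    """
)
for row in cur.fetchall():   # rows come back as plain Python lists
    print(row)
```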

InnovAccer

Jyoti Kaushik
Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr
ETL
SQL
Data Warehouse (DWH)
Informatica
Data Warehousing
+2 more

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs, so they can answer their questions at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data analytics and visualization skills (using tools like Power BI; see the sketch after this list).
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
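
As a small illustration of the SQL-based data activation referenced in the list above, a minimal sketch that queries Postgres (one of the databases listed) with psycopg2; the DSN, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: answer a customer question directly with SQL against Postgres.
# DSN, table, and column names are hypothetical placeholders.
import psycopg2

DSN = "host=analytics-db dbname=claims user=analyst password=***"

QUERY = """
SELECT provider_id,
       COUNT(*)         AS claim_count,
       AVG(paid_amount) AS avg_paid
FROM claims
WHERE service_date >= %s
GROUP BY provider_id
ORDER BY claim_count DESC
LIMIT 10;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, ("2021-01-01",))
        for provider_id, claim_count, avg_paid in cur.fetchall():
            print(provider_id, claim_count, avg_paid)
```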

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Fragma Data Systems

Evelyn Charles
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
SQL
Data engineering
Data Engineer
Python
Big Data
+1 more
Strong programmer with expertise in Python and SQL

● Hands-on work experience in SQL and PL/SQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid); a minimal sketch follows this list
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to big data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
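
The minimal sketch referenced in the list above, using Flask with Flask-SQLAlchemy as the ORM (one reasonable combination of the options listed); the model, route, and database URI are hypothetical examples, not part of this posting.

```python
# Minimal Flask + ORM sketch: one model, one JSON endpoint.
# Database URI, model, and route are hypothetical examples.
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"
db = SQLAlchemy(app)

class Employee(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    team = db.Column(db.String(80), nullable=False)

@app.route("/employees/<team>")
def employees_by_team(team: str):
    # The ORM translates this into a parameterized SQL query.
    rows = Employee.query.filter_by(team=team).all()
    return jsonify([{"id": e.id, "name": e.name} for e in rows])

if __name__ == "__main__":
    with app.app_context():
        db.create_all()          # create tables for the sketch
    app.run(debug=True)
```
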
Mobile Programming LLC

Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, databases
5+ years of software industry experience, with 2+ years in analytics, data mining, and/or data warehousing
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
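
As a small illustration of the Spark item under Preferred above, a minimal PySpark batch-aggregation sketch; the input path and column names are hypothetical placeholders, and it assumes the pyspark package.

```python
# Minimal PySpark sketch: batch-aggregate an event log.
# Input path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

events = (
    spark.read
    .option("header", "true")
    .csv("hdfs:///data/events/*.csv")           # hypothetical input location
)

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("event_date", "event_type")
    .count()
    .orderBy("event_date")
)

daily_counts.show(20, truncate=False)
spark.stop()
```
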
Mobile Programming LLC

Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Pune, Bengaluru (Bangalore), Hyderabad, Chennai
3 - 8 yrs
₹2L - ₹9L / yr
Data engineering
Data engineer
Spark
Apache Spark
Apache Kafka
+13 more

Responsibilities for Data Engineer

  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer

  • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
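
As a small illustration of the workflow-management tools listed above, a minimal Apache Airflow DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow sketch: a two-step extract -> load pipeline run daily.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull data from a source system.
    print("extracting source data")

def load(**context):
    # Placeholder: write transformed data to the warehouse.
    print("loading into warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```
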
Syrencloud

Samarth Patel
Posted by Samarth Patel
Hyderabad
3 - 7 yrs
₹5L - ₹8L / yr
Data Analytics
Data analyst
SQL
SAP
Our growing technology firm is looking for an experienced Data Analyst who can turn project requirements into custom-formatted data reports. The ideal candidate can handle complete life-cycle data generation and outline critical information for each Project Manager. We also need someone who can analyze business procedures and recommend specific types of data that can be used to improve them.
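
As a small illustration of turning raw records into a custom-formatted report, a minimal pandas sketch; the input file, columns, and output format are hypothetical placeholders, not requirements from this posting.

```python
# Minimal reporting sketch: summarize raw project data into a per-manager report.
# Input file and column names are hypothetical placeholders.
import pandas as pd

raw = pd.read_csv("project_tasks.csv")          # e.g. one row per task

report = (
    raw.groupby(["project_manager", "status"])
       .agg(tasks=("task_id", "count"), hours=("hours_logged", "sum"))
       .reset_index()
       .pivot(index="project_manager", columns="status", values="tasks")
       .fillna(0)
)

report.to_csv("pm_status_report.csv")
print(report)
```
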
Chennai, Bengaluru (Bangalore), Hyderabad
4 - 10 yrs
₹9L - ₹20L / yr
Informatica
informatica developer
Informatica MDM
Data integration
Informatica Data Quality
+7 more
  • Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
  • Must have strong skills in data analysis, data mapping for ETL processes, and data modeling.
  • Experience with the SIF framework, including real-time integration.
  • Should have experience in building C360 Insights using Informatica.
  • Should have good experience in creating performant designs using Mapplets, Mappings, and Workflows for Data Quality (cleansing) and ETL.
  • Should have experience in building different data warehouse architectures, such as Enterprise, Federated, and Multi-Tier architectures.
  • Should have experience in configuring Informatica Data Director for the data governance of users, IT Managers, and Data Stewards.
  • Should have good knowledge of developing complex PL/SQL queries.
  • Should have working experience with UNIX and shell scripting to run Informatica workflows and control the ETL flow (a hedged sketch follows this list).
  • Should know about Informatica server installation and have knowledge of the Administration Console.
  • Working experience with Developer as well as Administration is added knowledge.
  • Working experience with Amazon Web Services (AWS) is an added advantage, particularly AWS S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR.
  • Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment.
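
The hedged sketch referenced in the UNIX/shell item above: launching a PowerCenter workflow from Python via Informatica's pmcmd command-line tool. The exact pmcmd flags vary by installation and version, and the service, domain, folder, and workflow names here are hypothetical placeholders; treat the command as an assumption to verify against your environment's pmcmd documentation.

```python
# Hedged sketch: kick off an Informatica PowerCenter workflow via pmcmd.
# Flags and names below are assumptions/placeholders; confirm against your
# installation's pmcmd reference before use.
import subprocess

cmd = [
    "pmcmd", "startworkflow",
    "-sv", "IntegrationService_Dev",   # integration service (placeholder)
    "-d", "Domain_Dev",                # domain name (placeholder)
    "-u", "etl_user",                  # credentials would normally come from a vault
    "-p", "***",
    "-f", "CUSTOMER_360",              # repository folder (placeholder)
    "-wait",                           # block until the workflow finishes
    "wf_load_customer_dim",            # workflow name (placeholder)
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"workflow failed: {result.stderr}")
```
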
Carbynetech

Sahithi Kandlakunta
Posted by Sahithi Kandlakunta
Hyderabad
3 - 9 yrs
₹6L - ₹8L / yr
PowerBI
DAX
MDX
Windows Azure
Databricks
+1 more
Should have experience in building and delivering dashboard and analytics solutions using Microsoft Power BI and related Azure data services.
Should have business intelligence experience in a data warehouse environment.
Should have good experience writing Power Query, DAX, and MDX for complex data projects.
Good knowledge of REST services, including API documentation (a hedged sketch of a Power BI REST call follows).
Should have experience authoring, diagnosing, and altering SQL Server objects and T-SQL queries.
Should have worked on tabular models in Azure Analysis Services or SSAS.
Should have experience with the Microsoft Azure platform.
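
A hedged sketch of the REST-services item above, using the Power BI REST API to trigger a dataset refresh with the requests package; the workspace and dataset ids are hypothetical placeholders, and acquiring the Azure AD access token (for example via MSAL or a service principal) is assumed to have happened already.

```python
# Hedged sketch: trigger a Power BI dataset refresh through the REST API.
# Workspace/dataset ids are placeholders; ACCESS_TOKEN is assumed to be an
# already-acquired Azure AD token with the required Power BI scopes.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"   # placeholder group id
DATASET_ID = "11111111-1111-1111-1111-111111111111"     # placeholder dataset id

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes"
)

resp = requests.post(url, headers=headers, json={"notifyOption": "NoNotification"})
resp.raise_for_status()        # 202 Accepted means the refresh was queued
print("refresh queued:", resp.status_code)
```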