Azure Developer (Power BI Developer)
Agency job
3 - 8 yrs
₹14L - ₹20L / yr
Bengaluru (Bangalore)
Skills
Business Intelligence (BI)
PowerBI
Windows Azure
Git
SVN
Hadoop
Amazon Web Services (AWS)
Salesforce
SAP
HANA
SQL server
Azure Synapse
Flat file
Data Visualization

Power BI Developer (Azure Developer)

Job Description:

We are looking for a senior visualization engineer with an understanding of Azure Data Factory and Databricks to develop and deliver solutions that enable the delivery of information to audiences in support of key business processes.

You will ensure code and design quality through the execution of test plans, and assist in developing standards and guidelines, working closely with internal and external design, business, and technical counterparts.

 

Desired Competencies:

  • Strong grasp of data visualization design concepts centered on the business user, and a knack for communicating insights visually.
  • Ability to produce any of the available charting methods, with drill-down options and action-based reporting; this includes choosing the right chart type for the underlying data and applying company themes and objects.
  • Experience publishing reports and dashboards to a reporting server and providing role-based access to users.
  • Ability to create wireframes in any tool to communicate the reporting design.
  • Creation of ad-hoc reports and dashboards that visually communicate data hub metrics (metadata) to top management.
  • Able to handle huge volumes of data from databases such as SQL Server, Synapse, and Delta Lake, or from flat files, and to create high-performance dashboards (see the sketch after this list).
  • Strong Power BI development skills.
  • Expertise in two or more BI (visualization) tools for building reports and dashboards.
  • Understanding of Azure components such as Azure Data Factory, Data Lake Store, SQL Database, and Azure Databricks.
  • Strong knowledge of SQL queries.
  • Must have worked in full life-cycle development, from functional design to deployment.
  • Intermediate understanding of how to format, process, and transform data.
  • Working knowledge of Git and SVN.
  • Good experience establishing connections to heterogeneous sources such as Hadoop, Hive, Amazon (AWS), Azure, Salesforce, SAP, HANA, APIs, and various databases.
  • Basic understanding of data modelling and the ability to combine data from multiple sources to create integrated reports.
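
A minimal PySpark sketch of the pre-aggregation pattern behind the "high-performance dashboards" item above, as it might run in a Databricks notebook (where `spark` is already defined). The table and column names (sales.transactions, order_timestamp, region, amount, customer_id) are hypothetical, not part of this role's actual environment.

```python
# Minimal sketch, assuming a Databricks notebook where `spark` exists:
# pre-aggregate a large Delta Lake table into a compact summary table
# that a Power BI dashboard can query quickly. All table and column
# names are hypothetical.
from pyspark.sql import functions as F

transactions = spark.read.table("sales.transactions")  # large Delta table

daily_summary = (
    transactions
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Power BI connects to this small table instead of scanning the raw
# transactions, which keeps the visuals responsive.
daily_summary.write.format("delta").mode("overwrite").saveAsTable(
    "sales.daily_summary"
)
```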

 

Preferred Qualifications:

  • Bachelor's degree in Computer Science or Technology
  • Proven success in contributing to a team-oriented environment
Similar jobs

Startup Focused on simplifying Buying Intent
Bengaluru (Bangalore)
4 - 9 yrs
₹28L - ₹56L / yr
Big Data
Apache Spark
Spark
Hadoop
ETL
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • SQL knowledge and experience working with relational databases, including query authoring (SQL) and familiarity with databases such as MySQL, Mongo, Cassandra, and Athena.
  • Experience with Python/Scala.
  • Experience with Big Data technologies such as Apache Spark.
  • Experience with Apache Airflow.
  • Experience with data pipeline and ETL tools such as AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, and Redshift.
Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
skill iconPython
PySpark
Data engineering
Big Data
Hadoop

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role involves managing, processing, and analyzing large amounts of raw information in scalable databases, as well as developing unique data structures and writing algorithms for an entirely new set of products. The candidate will need critical thinking and problem-solving skills, must be experienced in software development with advanced algorithms, and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should also have some exposure to cloud environments, continuous integration, and agile Scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and as a project manager, responsible for meeting project requirements on schedule
• Software development that creates data-driven intelligence in products backed by Big Data
• Exploratory analysis of the data in order to arrive at efficient data structures and algorithms for the given requirements
• The system may or may not involve machine learning models and pipelines, but will require advanced algorithm development
• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)
• Creating metrics and evaluating algorithms for better accuracy and recall
• Ensuring efficient access and usage of data by means of indexing, clustering, etc.
• Collaborating with engineering and product development teams


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school,
• OR a Master’s degree or higher in Statistics or Mathematics with a hands-on background in software development
• 8 to 10 years of experience in product development, including algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS-based products and services
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines


Skill set required:


• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Experience with Agile project management practices
• Experience with data processing, analytics, and visualization tools in Python, such as pandas, Matplotlib, and SciPy (see the sketch after this list)
• Strong understanding of SQL and of querying NoSQL databases (e.g., Mongo, Cassandra, Redis)
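
A minimal sketch of the pandas/Matplotlib workflow named above; the sales records are invented and generated inline so the example stays self-contained.

```python
# Minimal sketch: data processing and visualization with pandas/Matplotlib.
# The records below are invented sample data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar"],
    "region": ["North", "South", "North", "South", "North", "South"],
    "revenue": [120, 95, 140, 110, 160, 125],
})

# Aggregate revenue per month across regions, preserving month order.
monthly = df.groupby("month", sort=False)["revenue"].sum()

monthly.plot(kind="bar", title="Revenue by month")
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```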

Tier 1 MNC
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
Greetings,
We are hiring software developers with good knowledge of Spark, Hadoop, and Scala for a Tier 1 MNC.
Read more
Shiprocket
Posted by Kailuni Lanah
Gurugram
4 - 10 yrs
₹25L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate should have extensive experience with PySpark, Airflow, Presto, Hive, Kafka, and Debezium, and should be passionate about developing scalable and reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using PySpark, Airflow, Presto, Hive, Kafka, and Debezium.
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform (see the sketch after this list).
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up to date with the latest trends and technologies in the field of data engineering, and share knowledge and best practices with the team.
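
A minimal sketch of the kind of ETL orchestration described above, written as an Airflow 2.x DAG with placeholder task bodies. The DAG id, schedule, and task contents are assumptions for illustration, not this team's actual pipeline.

```python
# Minimal sketch: a daily extract -> transform -> load DAG in Airflow 2.x.
# Task bodies are placeholders; source and target systems are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull changed rows from the source (e.g., a Debezium change feed)")


def transform():
    print("clean and reshape the extracted batch")


def load():
    print("write the batch into the warehouse (e.g., Hive/Presto tables)")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run strictly in sequence: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```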

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in PySpark, Airflow, Presto, Hive, data lakes, and Debezium.
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in PySpark, Airflow, Presto, Hive, data lakes, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.

India Bison
Posted by Vanita Deokar
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization

Responsibilities

  • Selecting features, and building and optimizing classifiers using machine learning techniques (see the sketch after the Key Skills list)
  • Data mining using state-of-the-art methods
  • Extending the company’s data with third-party sources of information when needed
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance


Key Skills

  • Hands-on experience with analysis tools like R and advanced Python
  • Knowledge of statistical techniques and machine learning algorithms (must have)
  • Artificial Intelligence
  • Understanding of text analysis - Natural Language Processing (NLP)
  • Knowledge of Google Cloud Platform
  • Advanced Excel and PowerPoint skills
  • Advanced communication (written and oral) and strong interpersonal skills
  • Ability to work cross-culturally
  • Deep Learning (good to have)
  • VBA and visualization tools like Tableau, Power BI, Qlik Sense, and QlikView will be an added advantage
TartanHQ Solutions Private Limited
Posted by Prabhat Shobha
Bengaluru (Bangalore)
2 - 4 yrs
₹9L - ₹15L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Python

Key deliverables for the Data Science Engineer would be to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

What will you do?

  • You will be building and deploying ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
  • You will be constantly assessing and improving the models using techniques like transfer learning.
  • You will identify valuable data sources and automate collection processes, along with undertaking pre-processing of structured and unstructured data.
  • You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
  • Assessing the effectiveness and accuracy of new data sources and data-gathering techniques.
  • Building predictive models and machine-learning algorithms to apply to data sets.
  • Coordinating with different functional teams to implement models and monitor outcomes.
  • Presenting information using data visualization techniques and proposing solutions and strategies to business challenges.


We would love to hear from you if:

  • You have 2+ years of experience as a software engineer at a SaaS or technology company
  • Demonstrable hands-on programming experience with the Python/R data science stack
  • Ability to design and implement workflows with linear and logistic regression and ensemble models (random forest, boosting) using R/Python
  • Familiarity with Big Data platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda functions, Redshift, Elasticsearch)
  • Experience in probability and statistics, with the ability to apply ideas of data distributions, hypothesis testing, and other statistical tests
  • Demonstrable competency in data visualisation using the Python/R data science stack
  • Experience in web crawling and data scraping (preferable)
  • Strong experience in NLP, having worked with libraries such as NLTK, spaCy, Pattern, and Gensim
  • Experience with text mining, pattern matching, and fuzzy matching (see the sketch below)
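
For the fuzzy-matching item above, here is a minimal scikit-learn sketch that matches noisy strings against a reference list using TF-IDF over character n-grams; the merchant names are invented sample data.

```python
# Minimal sketch: fuzzy string matching with TF-IDF over character n-grams.
# The merchant names below are invented sample data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ["Amazon Pay", "Flipkart", "Swiggy", "Zomato"]
queries = ["AMZN PAY*123", "flipkart pvt ltd", "ZOMATO ONLINE ORDER"]

# Character n-grams tolerate typos and extra tokens better than word grams.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
ref_matrix = vectorizer.fit_transform(reference)
query_matrix = vectorizer.transform(queries)

scores = cosine_similarity(query_matrix, ref_matrix)
for query, row in zip(queries, scores):
    best = row.argmax()
    print(f"{query!r} -> {reference[best]!r} (score={row[best]:.2f})")
```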

Why Tartan?
  • Brand new Macbook
  • Stock Options
  • Health Insurance
  • Unlimited Sick Leaves
  • Passion Fund (Invest in yourself or your passion project)
  • Wind Down
DataMetica
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java

Summary
Our Kafka developer combines technical skills, communication skills, and business knowledge, and should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and in an enterprise data warehouse (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement, and maintenance of data applications, both as an individual contributor and as a lead.
  • Lead the identification, isolation, resolution, and communication of problems within the production environment.
  • Act as lead developer, applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and in a cloud enterprise data warehouse: Google BigQuery (preferred), AWS Redshift, or Snowflake (optional).
  • Design and recommend the approach best suited for data movement from different sources to the cloud EDW using Apache/Confluent Kafka.
  • Perform independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at all levels, from senior management to individual developers to business SMEs, for project definition.
  • Work on multiple platforms and multiple projects concurrently.
  • Perform code and unit testing for complex-scope modules and projects.
  • Provide expertise and hands-on experience with Kafka Connect using a schema registry in a very high-volume environment (~900 million messages).
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Kafka Control Center.
  • Provide expertise and hands-on experience with AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands-on experience with Kafka connectors such as MQ, Elasticsearch, JDBC, file-stream, and JMS source connectors, as well as tasks, workers, converters, and transforms.
  • Provide expertise and hands-on experience with custom connectors built on the Kafka core concepts and API.
  • Working knowledge of the Kafka REST proxy.
  • Ensure optimum performance, high availability, and stability of solutions.
  • Create topics, set up redundancy clusters, deploy monitoring tools and alerts, and apply knowledge of best practices.
  • Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms (a minimal sketch follows this list). Leverage Hadoop ecosystem knowledge to design and develop capabilities that deliver solutions using Spark, Scala, Python, Hive, Kafka, and other parts of the Hadoop ecosystem.
  • Use automation tools for provisioning, such as Jenkins and uDeploy, or relevant technologies.
  • Ability to perform data-related benchmarking, performance analysis, and tuning.
  • Strong skills in in-memory applications, database design, and data integration.
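
Since the list asks for producer/consumer stubs to help onboard applications, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, and group id are placeholders.

```python
# Minimal sketch: producer and consumer stubs with the confluent-kafka
# Python client. Broker address, topic, and group id are hypothetical.
from confluent_kafka import Consumer, Producer

TOPIC = "orders"  # hypothetical topic

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(TOPIC, key="order-1", value='{"amount": 420}')
producer.flush()  # block until delivery is confirmed

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "onboarding-demo",     # consumer-group membership
    "auto.offset.reset": "earliest",   # read from the start on first run
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```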
Remote, Baroda
2 - 10 yrs
₹5L - ₹7L / yr
Business Intelligence (BI)
Data Analytics
Customer Retention
Customer Success
Data conversion
This job is for www.spicevillage.eu.

Job Responsibilities
  • Drive growth strategies for our online e-commerce shop across all channels (own store, Amazon, eBay, etc.).
  • Optimize the customer journey with the help of the product owner and project manager.
  • Optimize the conversion funnel across all channels and sources; create detailed maps of the customer journey and analyze the funnels to increase conversions.
  • Be responsible for overall online marketing reporting and channel attribution, analyzing marketing effectiveness (CPL, CAC, CLV/CAC, ROI, CLTV), and assist in creating presentations for key stakeholders, leadership, and investors.
  • Develop and implement acquisition and customer retention strategies (think customer first) in collaboration with Creative, CRM, Social Media, SEO, etc.
  • Partner closely with all teams to create a customer-first approach while communicating the brand message clearly.

Requirements
  • Minimum 3 years of experience in a high-growth e-commerce company or agency
  • A proven marketer who understands, and has worked in, a high-functioning, high-revenue e-commerce brand
  • Deep understanding of A/B testing, experimentation, and e-commerce analysis
  • An expert in analyzing data and creating reports
Maveric Systems
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming the legacy systems. This is an exciting, fast-paced, constantly changing, and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience, with 2 years of Data Warehousing domain knowledge.
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (see the streaming sketch after this list).
  • Excellent knowledge of SQL and Linux shell scripting.
  • Bachelor's/Master’s/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively.
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
  • Ability to manage a diverse and challenging stakeholder community.
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
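
To make the Spark Streaming/Kafka item concrete, here is a minimal PySpark Structured Streaming sketch that counts messages per key from a Kafka topic. The broker address, topic name, and checkpoint path are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal sketch: Spark Structured Streaming from Kafka.
# Assumes the spark-sql-kafka connector is available; broker, topic,
# and checkpoint location below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the key to a string and
# maintain a running count per key as a simple aggregate.
counts = (
    events.select(F.col("key").cast("string"))
    .groupBy("key")
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
query.awaitTermination()
```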

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation.
  • Should be part of Scrum discussions and take requirements.
  • Adhere to the Scrum timeline and deliver accordingly.
  • Participate in a team environment for design, development, and implementation.
  • Should take on L3 activities on a need basis.
  • Prepare Unit/SIT/UAT test cases and log the results.
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time.
  • Quality delivery and automation should be top priorities.
  • Coordinate change and deployment in time.
  • Should create a healthy harmony within the team.
  • Own interaction points with members of the core team (e.g., BA team, testing and business teams) and any other relevant stakeholders.
Alien Brains
Posted by Praveen Baheti
Kolkata
0 - 15 yrs
₹4L - ₹8L / yr
Python
Deep Learning
Machine Learning (ML)
Data Analytics
Data Science
You'll be giving industry-standard training to engineering students and mentoring them as they develop their custom mini projects.