Scry AI
Data Engineer (Azure)
Posted by Siddarth Thakur
3 - 8 yrs
₹15L - ₹20L / yr
Remote only
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
NOSQL Databases
Apache Kafka

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing scalable smart data lakes, data ingestion platforms, machine learning and NLP-based analytics platforms, hyper-scale processing clusters, and data mining and search engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end data solutions, optimal data processing pipelines, and architecture dealing with large-volume, big data sets of varied data types.
  • Proficiency in Python, Linux, and shell scripting.
  • Strong knowledge of working with PySpark and Pandas dataframes for writing efficient pre-processing and other data manipulation tasks.
  • Strong experience in developing the infrastructure required for data ingestion, and for optimal extraction, transformation, and loading of data from a wide variety of data sources using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).
  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Ability to work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
  • Strong analytic skills related to working with unstructured datasets.
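As a rough illustration of the dataframe pre-processing work described above, here is a minimal Pandas sketch; the column names and cleaning rules are invented for the example, and the same pattern carries over to PySpark dataframes:

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Typical pre-processing: dedupe, normalise types, fill gaps, drop bad rows."""
    out = df.drop_duplicates(subset=["user_id"])            # remove duplicate records
    out = out.assign(
        signup_date=pd.to_datetime(out["signup_date"]),     # normalise date strings
        revenue=out["revenue"].fillna(0.0),                 # fill missing numerics
    )
    return out[out["revenue"] >= 0]                         # drop invalid rows

raw = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "signup_date": ["2023-01-01", "2023-01-01", "2023-02-15", "2023-03-10"],
    "revenue": [10.0, 10.0, None, -5.0],
})
clean = preprocess(raw)
```

The duplicate record and the negative-revenue row are dropped, and the missing revenue is filled, leaving two clean rows.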

 

Good to have (to filter or prioritize candidates):

  • Experience with testing libraries such as pytest for writing unit tests for the developed code.
  • Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
  • Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.
  • Experience with model versioning (MLflow) and data versioning will be beneficial.
  • Experience with microservices libraries, or with Python libraries such as Flask for hosting ML services and models, would be great.
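To make the pytest bullet concrete, unit-testing pipeline code usually means factoring logic into small pure functions and asserting on them; the function and thresholds below are hypothetical examples, not part of the actual codebase:

```python
# transform.py — a small, pure transformation worth unit-testing
def bucket_age(age: int) -> str:
    """Map an age to a coarse bucket used by downstream features."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# test_transform.py — pytest auto-discovers test_* functions and plain asserts
def test_buckets():
    assert bucket_age(10) == "minor"
    assert bucket_age(30) == "adult"
    assert bucket_age(70) == "senior"
```

Running `pytest` in the project directory would collect and run `test_buckets` automatically.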

About Scry AI

Founded: 2014
Stage: Profitable
About

Scry AI invents, designs, and develops cutting-edge technology-based Enterprise solutions powered by Machine Learning, Natural Language Processing, Big Data, and Computer Vision.


Scry AI is an R&D organization leading innovation in business automation technology and has been helping companies and businesses transform how they work.


Catering to core industries like Fintech, Healthcare, Communication, Mobility, and Smart Cities, Scry has invested heavily in R&D to build cutting-edge product suites that address challenges and roadblocks that plague traditional business environments.


Similar jobs

Digi Upaay Solutions Pvt Ltd
Posted by Sridhar Chakkravarthy
Remote only
8 - 11 yrs
₹11L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
PL/SQL
+4 more

Required Skill Set

  • Project experience in any of the following: Data Management, Database Development, Data Migration or Data Warehousing.
  • Expertise in SQL, PL/SQL.

Role and Responsibilities

  • Work on a complex data management program for a multi-billion-dollar customer.
  • Work on customer projects related to data migration and data integration.
  • No troubleshooting.
  • Execute data pipelines, perform QA, and produce project documentation for project deliverables.
  • Perform data profiling, data cleansing, and data analysis for migration data.
  • Participate and contribute in project meetings.
  • Experience in data manipulation using Python preferred.
  • Proficient in using Excel and PowerPoint.
  • Perform other tasks as per project requirements.

EnterpriseMinds
Posted by Komal S
Remote only
4 - 10 yrs
₹10L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+2 more

Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on the trajectory towards increasing outcomes, relevance, and growth.

Harnessing the power of Data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalizing go-to-market models, not just exploring possibilities.

We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation, and co-development to engineer next-generation software products with the help of accelerators.

Through Communities we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios we deliver creative solutions. 
We create vertical isolated pods which have a narrow but deep focus. We also create horizontal pods to collaborate and deliver sustainable outcomes.

We follow Agile methodologies to fail fast and deliver scalable and modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.

Pre-requisites for the Role 

 

  1. Job ID: EMBD0120PS
  2. Primary skill: GCP Data Engineer, BigQuery, ETL
  3. Secondary skill: Hadoop, Python, Spark
  4. Years of Experience: 5-8 years
  5. Location: Remote

 

Budget: Open

NP: Immediate

 

 

GCP DATA ENGINEER 

Position description 

  • Designing and implementing software systems 
  • Creating systems for collecting data and for processing that data 
  • Using Extract Transform Load operations (the ETL process) 
  • Creating data architectures that meet the requirements of the business 
  • Researching new methods of obtaining valuable data and improving its quality 
  • Creating structured data solutions using various programming languages and tools 
  • Mining data from multiple areas to construct efficient business models 
  • Collaborating with data analysts, data scientists, and other teams. 
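The extract-transform-load responsibilities above can be sketched in miniature with plain Python; the CSV schema and the aggregation step stand in for real sources and a warehouse write (e.g., BigQuery) and are assumptions for illustration:

```python
import csv
import io

def extract(csv_text: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types, normalise values, and drop malformed records."""
    out = []
    for r in rows:
        try:
            out.append({"city": r["city"].strip().title(),
                        "sales": float(r["sales"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return out

def load(rows: list[dict]) -> dict:
    """Load: aggregate into per-city totals (a stand-in for a warehouse write)."""
    totals: dict = {}
    for r in rows:
        totals[r["city"]] = totals.get(r["city"], 0.0) + r["sales"]
    return totals

raw = "city,sales\npune, 10\npune,5\nmumbai,oops\nmumbai,7\n"
totals = load(transform(extract(raw)))
```

The malformed "oops" row is rejected during transform, and the remaining rows aggregate cleanly.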

Candidate profile 

  • Bachelor’s or master’s degree in information systems/engineering, computer science and management, or a related field.
  • 5-8 years of professional experience as a Big Data Engineer.
  • Proficiency in modelling and maintaining data lakes with PySpark (preferred).
  • Experience with Big Data technologies (e.g., Databricks).
  • Ability to model and optimize workflows on GCP.
  • Experience with streaming analytics services (e.g., Kafka, Grafana).
  • Analytical, innovative and solution-oriented mindset.
  • Teamwork, strong communication and interpersonal skills.
  • Rigor and organizational skills.
  • Fluency in English (spoken and written).
Phenom People
Posted by Srivatsav Chilukoori
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Deep Learning
+4 more

JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
 

Company Description

Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
 

Job Responsibilities:

  • Design and implement machine learning, information extraction, probabilistic matching algorithms and models
  • Research and develop innovative, scalable and dynamic solutions to hard problems
  • Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
  • Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
  • Scale machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Be a valued contributor in shaping the future of our products and services
  • You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
  • Be part of a fast-paced, fun-focused, agile team

Job Requirement:

  • 4+ years of industry experience
  • Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
  • Strong mathematics, statistics, and data analytics skills
  • Solid coding and engineering skills preferably in Machine Learning (not mandatory)
  • Proficient in Java, Python, and Scala
  • Industry experience building and productionizing end-to-end systems
  • Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
  • Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.


Position Summary

We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect candidates to fulfill the points below.

  • Building accurate machine learning models is the main goal of a machine learning engineer
  • Linear Algebra, Applied Statistics and Probability
  • Building Data Models
  • Strong knowledge of NLP
  • Good understanding of multithreaded and object-oriented software development
  • Mathematics, Mathematics and Mathematics
  • Collaborate with Data Engineers to prepare data models required for machine learning models
  • Collaborate with other product team members to apply state-of-the-art AI methods that include dialogue systems, natural language processing, information retrieval and recommendation systems
  • Experience building large-scale software systems and working on numerical computation topics
  • Use predictive analytics and data mining to solve complex problems and drive business decisions
  • Should be able to design accurate end-to-end ML architectures, including data flows, algorithm scalability, and applicability
  • Tackle situations where both the problem and the solution are unknown
  • Solve analytical problems, and effectively communicate methodologies and results to the customers
  • Adept at translating business needs into technical requirements and translating data into actionable insights
  • Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.

Benefits

  • Competitive salary for a startup
  • Gain experience rapidly
  • Work directly with the executive team
  • Fast-paced work environment

 

About Phenom People

At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.

 

 Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.

 We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.

 Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.

 We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.

 Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.

 

Kindly explore the company Phenom (https://www.phenom.com/)
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/

Avhan Technologies Pvt Ltd
Posted by Aarti Vohra
Kolkata
7 - 10 yrs
₹8L - ₹20L / yr
MDX
DAX
SQL
SQL server
Microsoft Analysis Services
+3 more
Exp : 7 to 8 years
Notice Period: Immediate to 15 days
Job Location : Kolkata
 
Responsibilities:
• Develop and improve solutions spanning data processing activities from the data lake (stage) to star schemas and reporting views/tables, and finally into SSAS.
• Develop and improve Microsoft Analysis Services cubes (tabular and dimensional)
• Collaborate with other teams within the organization and be able to devise the technical solution as it relates to the business & technical requirements
• Mentor team members and be proactive in training and coaching team members to develop their proficiency in Analysis Services
• Maintain documentation for all processes implemented
• Adhere to and suggest improvements to coding standards, applying best practices
 
Skillsets:
• Proficient in MDX and DAX for query in SSAS
Felicitycare
Posted by Shikha Patni
Remote, Jajpur
1 - 3 yrs
₹3L - ₹12L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Responsibilities 

 

  • Identify, analyze, and interpret trends or patterns in complex data sets to develop a thorough understanding of users, and acquisition channels.
  • Run exploratory analyses to uncover new areas of opportunity, generate hypotheses, and quickly assess the potential upside of a given opportunity.
  • Help execute projects to drive insights that lead to growth.
  • Work closely with marketing, design, product, support, and engineering to anticipate analytics needs and to quantify the impact of existing features, future product changes, and marketing campaigns.
  • Work with data engineering to develop and implement new analytical tools and improve our underlying data infrastructure. Build tracking plans for new and existing products and work with engineering to ensure proper
  • Analyze, forecast, and build custom reports to make key performance indicators and insights available to the entire company.
  • Monitor, optimize, and report on marketing and growth metrics and split-test results. Make recommendations based on analytics and test findings.
  • Drive optimization and data minded culture inside the company.
  • Develop frameworks, models, tools, and processes to ensure that analytical insights can be incorporated into all key decision making.
  • Effectively present and communicate analysis to the company to drive business decisions.
  • Create a management dashboard including all important KPIs to be tracked on a company and department level
  • Establish end to end campaign ROI tracking mechanism to attribute sales to specific google and Facebook campaigns



Skills and Experience


  • Minimum 1-2 years of proven work experience in a data analyst role.
  • Excellent analytical skills and problem-solving ability; the ability to answer unstructured business questions and work independently to drive projects to conclusion.
  • Strong analytical skills with the capacity to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Experience extracting insights using advanced SQL or a similar tool to work efficiently at scale. Advanced expertise with commonly used analytics tools, including Google Analytics, studio, and Excel.
  • Strong knowledge of statistics, including experimental design for optimization, statistical significance, confidence intervals, and predictive analytics techniques.
  • Must be self-directed, organized and detail oriented as well as have the ability to multitask and work effectively in a fast-paced environment.
  • Active team player, excellent communication skills, positive attitude and good work ethic.
ERUDITUS Executive Education
Posted by Payal Thakker
Remote only
3 - 10 yrs
₹15L - ₹45L / yr
Docker
Kubernetes
Apache Kafka
Apache Beam
Python
+2 more
Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China. Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries. Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.
 
 
As a data engineer at Emeritus, you'll be working on a wide variety of data problems. At this fast-paced company, you will frequently have to balance achieving an immediate goal with building sustainable and scalable architecture. The ideal candidate gets excited about streaming data, protocol buffers and microservices. They want to develop and maintain a centralized data platform that provides accurate, comprehensive, and timely data to a growing organization.

Role & responsibilities:

    • Developing ETL pipelines for data replication
    • Analyze, query and manipulate data according to defined business rules and procedures
    • Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
    • Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
    • Resolve internal and external data exceptions in a timely and accurate manner
    • Improve multi-environment data flow quality, security, and performance 

Skills & qualifications:

    • Must have experience with:
    • virtualization, containers, and orchestration (Docker, Kubernetes)
    • creating log ingestion pipelines (Apache Beam) both batch and streaming processing (Pub/Sub, Kafka)
    • workflow orchestration tools (Argo, Airflow)
    • supporting machine learning models in production
    • Have a desire to continually keep up with advancements in data engineering practices
    • Strong Python programming and exploratory data analysis skills
    • Ability to work independently and with team members from different backgrounds
    • At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
    • 3+ years of work in software/data engineering.
    • Superior interpersonal, independent judgment, complex problem-solving skills
    • Global orientation, experience working across countries, regions and time zones
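The batch-versus-streaming processing named in the requirements above ultimately rests on windowed aggregation. A toy tumbling-window counter in plain Python illustrates the core idea behind what Beam or Kafka-based pipelines do at scale; the event timestamps and window size are invented for the sketch:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Assign each (timestamp, key) event to a fixed, non-overlapping window
    and count events per (window_start, key) — the core idea behind windowed
    aggregation in streaming frameworks like Apache Beam."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "login"), (30, "login"), (61, "login"), (75, "click")]
counts = tumbling_window_counts(events)
```

Events at t=0 and t=30 fall into the [0, 60) window, while t=61 and t=75 fall into [60, 120); a batch job computes all windows at once, whereas a streaming job emits each window as it closes.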
Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Rakuten
Agency job via zyoin by Rakesh Ranjan
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
+4 more

Company Overview:

Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan, and the third largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer and business-focused services including e-commerce, e-reading, travel, banking, securities, credit card, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for ‘optimism.’ It means we believe in the future. It’s an understanding that, with the right mind-set, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website: https://www.rakuten.com/

Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds

Company size: 10,001+ employees

Founded: 1997

Headquarters: Tokyo, Japan

Work location: Bangalore (M.G. Road)


Please find below Job Description.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking Team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, SocketIO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, Golang.
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross-functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current-state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team on the same
  • Experience with event-driven architecture and complex event processing
  • Extensive experience building and owning large-scale distributed backend systems.
Fast-paced Startup
Agency job via Kavayah People Consulting by Kavita Singh
Pune
3 - 6 yrs
₹15L - ₹22L / yr
Big Data
Data engineering
Hadoop
Spark
Apache Hive
+6 more

Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, Shell Scripting

GCP: BigTable, DataProc,  BigQuery, GCS, Pubsub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time 

Freelancer
Posted by Nirmala Hk
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹35L / yr
Python
Shell Scripting
MySQL
SQL
Amazon Web Services (AWS)
+3 more

  • 3+ years of experience in deployment, monitoring, tuning, and administration of high-concurrency MySQL production databases.
  • Solid understanding of writing optimized SQL queries on MySQL databases.
  • Understanding of AWS, VPC, networking, security groups, IAM, and roles.
  • Expertise in scripting in Python or Shell/PowerShell.
  • Must have experience in large-scale data migrations.
  • Excellent communication skills.
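As a small sketch of the "optimized SQL queries" bullet above: two habits that matter in practice are indexing the columns you filter on and using parameterized queries. The example uses Python's stdlib sqlite3 purely as a stand-in for MySQL (the table and query are invented); the same principles carry over to a MySQL client library directly:

```python
import sqlite3

# In-memory SQLite stands in for MySQL here; the principles — indexing the
# filter column and parameterizing queries — carry over directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (user_id, total) VALUES (?, ?)",
                 [(1, 9.5), (1, 20.0), (2, 5.0)])
# An index on the filter column lets the engine avoid a full table scan.
conn.execute("CREATE INDEX idx_orders_user ON orders(user_id)")

def user_total(conn, user_id):
    """Parameterized query: safe from SQL injection and plan-cache friendly."""
    row = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
        (user_id,),
    ).fetchone()
    return row[0]

total = user_total(conn, 1)
```

On MySQL you would verify the index is actually used with `EXPLAIN` on the query.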
Saviance Technologies
Posted by Shipra Agrawal
NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹7L - ₹9L / yr
PowerBI
Business Intelligence (BI)
DAX
Data modeling
+3 more

 

Job Title: Power BI Developer (Onsite)

Location: Park Centra, Sec 30, Gurgaon

CTC: 8 LPA

Time: 1:00 PM - 10:00 PM

  

Must Have Skills: 

  • Power BI Desktop Software
  • Dax Queries
  • Data modeling
  • Row-level security
  • Visualizations
  • Data Transformations and filtering
  • SSAS and SQL

 

Job description:

 

We are looking for a PBI Analytics Lead responsible for efficient data visualization, DAX queries, and data modeling. The candidate will work on creating complex Power BI reports, and will be involved in creating complex M and DAX queries and working on data modeling, row-level security, visualizations, data transformations, and filtering. The candidate will work closely with the client team to provide solutions and suggestions on Power BI.

 

Roles and Responsibilities:

 

  • Accurate, intuitive, and aesthetic Visual Display of Quantitative Information: We generate data, information, and insights through our business, product, brand, research, and talent teams. You would assist in transforming this data into visualizations that represent easy-to-consume visual summaries, dashboards and storyboards. Every graph tells a story.
  • Understanding Data: You would be performing and documenting data analysis, data validation, and data mapping/design. You would be mining large datasets to determine its characteristics and select appropriate visualizations.
  • Project Owner: You would develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions, and would be continuously reviewing and improving existing systems and collaborating with teams to integrate new systems. You would also contribute to the overall data analytics strategy by knowledge sharing and mentoring end users.
  • Perform ongoing maintenance & production of Management dashboards, data flows, and automated reporting.
  • Manage upstream and downstream impact of all changes on automated reporting/dashboards
  • Independently apply problem-solving ability to identify meaningful insights to business
  • Identify automation opportunities and work with a wide range of stakeholders to implement the same.
  • The ability and self-confidence to work independently and increase the scope of the service line

 

Requirements: 

  • 3+ years of work experience as an Analytics Lead / Senior Analyst / Sr. PBI Developer.
  • Sound understanding and knowledge of PBI Visualization and Data Modeling with DAX queries
  • Experience in leading and mentoring a small team.

 

 

 
