BI Lead

at Rishabh Software

Posted by Baiju Sukumaran
Vadodara, Bengaluru (Bangalore), Ahmedabad, Pune, Kolkata, Hyderabad
6 - 8 yrs
Best in industry
Full time
Skills
Datawarehousing
Microsoft Windows Azure
ETL
Relational Database (RDBMS)
SQL Server Integration Services (SSIS)
PowerBI
OLAP
Informatica
Azure synapse
Azure DevOps
Technical Skills
Mandatory (Minimum 4 years of working experience)
• 3+ years of experience leading data warehouse implementations, covering technical architectures, ETL/ELT, reporting/analytics tools, and scripting (end-to-end implementation)
• Experienced in Microsoft Azure (Azure SQL Managed Instance, Data Factory, Azure Synapse, Azure Monitor, Azure DevOps, Event Hubs, Azure AD security)
• Deep experience with BI tools such as Power BI, Tableau, QlikView, or SAP BusinessObjects
• Experienced with ETL tools such as SSIS, Talend, Informatica, or Pentaho
• Expertise in using RDBMSs such as Oracle and SQL Server as source or target, and in online analytical processing (OLAP)
• Experienced with SQL/T-SQL: DML/DDL statements, stored procedures, functions, triggers, indexes, and cursors
• Expertise in building and organizing advanced DAX calculations and SSAS cubes
• Experience in data/dimensional modelling, analysis, design, testing, development, and implementation
• Experienced with advanced data warehouse concepts using structured, semi-structured, and unstructured data
• Experienced with real-time ingestion, change data capture, and real-time and batch processing
• Good knowledge of metadata management and data governance
• Strong problem-solving skills, with a bias for quality and design excellence
• Experienced in developing dashboards with a focus on usability, performance, flexibility, testability, and standardization
• Familiarity with development in cloud environments such as AWS, Azure, or Google Cloud
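As an illustrative sketch of the SQL/T-SQL items above (DDL, DML, triggers, indexes), here is a minimal Python example. It uses the stdlib sqlite3 module as a stand-in for SQL Server, and the table, index, and trigger names are hypothetical:

```python
import sqlite3

# Illustrative only: sqlite3 stands in for SQL Server / T-SQL here;
# the table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create a small fact table plus an index on the lookup column.
cur.execute("""
    CREATE TABLE sales (
        id INTEGER PRIMARY KEY,
        region TEXT NOT NULL,
        amount REAL NOT NULL,
        updated_at TEXT
    )
""")
cur.execute("CREATE INDEX idx_sales_region ON sales(region)")

# Trigger: stamp a last-modified marker on every update.
cur.execute("""
    CREATE TRIGGER trg_sales_touch AFTER UPDATE ON sales
    BEGIN
        UPDATE sales SET updated_at = datetime('now') WHERE id = NEW.id;
    END
""")

# DML: insert, update (fires the trigger), and aggregate.
cur.executemany("INSERT INTO sales (region, amount) VALUES (?, ?)",
                [("west", 120.0), ("west", 80.0), ("east", 50.0)])
cur.execute("UPDATE sales SET amount = amount * 1.1 WHERE region = 'east'")
conn.commit()

totals = dict(cur.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"))
print(totals)
```

The same pattern (DDL, supporting index, audit trigger, then DML) carries over to T-SQL, where the trigger body would use `inserted` instead of `NEW`.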

Good To Have (1+ years of working experience)
• Experience working with Snowflake or Amazon Redshift
Soft Skills
• Good verbal and written communication skills
• Ability to collaborate and work effectively in a team
• Excellent analytical and logical skills

About Rishabh Software

Rishabh Software, a CMMI Level 3 software development company, provides enterprise solutions and mobile app development services to clients globally.
Founded
2001
Type
Products & Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

DB ETL Developer

at Digital Transformation services provider

Agency job
via HyrHub
ETL
Databases
Informatica
Teradata
Bengaluru (Bangalore), Mumbai
5 - 7 yrs
₹7L - ₹15L / yr
Principal Duties and Responsibilities:
Design, implement, and execute appropriate solutions and enhancements to ensure an improvement in
system reliability and performance.
Ensure project deadlines are met and are in alignment with the needs of the business unit, and coincide
with release management and governance.
Ensure that operational aspects of supported applications are included in architectural standards.
Produce service metrics, analyze trends, and identify opportunities to improve the level of service and
reduce cost as appropriate.
Support implementation activities.
Enable technical knowledge sharing across the team.
Work with vendors on designated areas.
Skills Required:
Strong background with relational databases, primarily Teradata, with 3+ years of experience
3+ years of experience in developing ETL processes using Informatica
3+ years of experience in reporting tools such as Business Objects
Strong understanding of UNIX and shell scripting
Thorough knowledge of SDLC (Software Development Life Cycle)
Excellent interpersonal and communication skills (verbal and written)
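The ETL work described above can be sketched in miniature. This is a hedged, toy example in plain Python (a real implementation would be an Informatica mapping or a Teradata load utility); the CSV source, cleansing rule, and target table are all hypothetical:

```python
import csv
import io
import sqlite3

# A toy extract-transform-load pass. The source data, the rejection rule,
# and the target schema are made up for illustration.
raw_csv = io.StringIO(
    "account_id,balance\n"
    "A-1, 100.50\n"
    "A-2,\n"          # missing balance: rejected by the transform step
    "A-3, 250.00\n"
)

def extract(fh):
    """Read the raw source into dict rows."""
    return list(csv.DictReader(fh))

def transform(rows):
    """Split rows into clean records and rejects (missing balance)."""
    clean, rejects = [], []
    for row in rows:
        bal = row["balance"].strip()
        if bal:
            clean.append((row["account_id"], float(bal)))
        else:
            rejects.append(row["account_id"])
    return clean, rejects

def load(rows, conn):
    """Write clean records to the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (account_id TEXT, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
clean, rejects = transform(extract(raw_csv))
load(clean, conn)
print(len(clean), rejects)  # 2 ['A-2']
```

Separating the three phases and routing bad rows to a reject list mirrors how graphical ETL tools structure mappings.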

Skills Desired:
Exposure to Hadoop ecosystem.
Exposure to programming languages python/java
Exposure to Regulatory Reporting and Credit Risk

Nice to have-
Experience developing in ServiceNow (JavaScript, workflows, update sets)

Angular and Node.js experience a plus
Knowledge of database application concepts, SQL, query optimization
Experience with web application user interface and usability concepts

Understanding of secure software development concepts, especially in a cloud platform
Experience with monitoring, event/alert management and observability concepts a plus.
Exposure to financial industry
Source control (preferably Git) and continuous Integration tools
Job posted by
Ashwitha Naik

Data Analyst

at Extramarks Education India Pvt Ltd

Founded 2007  •  Product  •  1000-5000 employees  •  Profitable
Tableau
PowerBI
Data Analytics
SQL
Python
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
3 - 5 yrs
₹8L - ₹10L / yr

Required Experience

· 3+ years of relevant technical experience in a data analyst role

· Intermediate to expert skills with SQL and basic statistics

· Experience in Advanced SQL

· Python programming is an added advantage

· Strong problem-solving and structuring skills

· Experience automating connections to various data sources and representing the data through dashboards

· Excellent with numbers; able to communicate data points through various reports/templates

· Ability to communicate effectively within and outside the Data Analytics team

· Proactively takes up work responsibilities and handles ad hoc requests as needed

· Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverable

· Strong technical communication skills, both written and verbal

· Ability to understand and articulate the "big picture" and simplify complex ideas

· Ability to identify and learn applicable new techniques independently as needed

· Must have worked with various databases (relational and non-relational) and ETL processes

· Must have experience in handling large volumes of data and adhering to optimization and performance standards

· Should have the ability to analyse and provide relationship views of the data from different angles

· Must have excellent communication skills (written and oral)

· Knowledge of data science is an added advantage

Required Skills

MySQL, Advanced Excel, Tableau, Reporting and dashboards, MS Office, VBA, Analytical skills

Preferred Experience

· Strong understanding of relational databases such as MySQL

· Prior experience working remotely full-time

· Prior experience working in Advanced SQL

· Experience with one or more BI tools, such as Superset or Tableau

· High level of logical and mathematical ability in problem solving

Job posted by
Prachi Sharma

Data Engineer

at Numerator

Founded 2018  •  Product  •  500-1000 employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
Python
SQL
Datawarehousing
Remote, Pune
3 - 9 yrs
₹5L - ₹20L / yr

We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we ingest millions of new data points every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines.

Duties/Responsibilities Include:

  • Develop expertise in the different upstream data stores and systems across Numerator.
  • Design, develop, and maintain data integration pipelines for Numerator's growing data sets and product offerings.
  • Build testing and QA plans for data pipelines.
  • Build data validation testing frameworks to ensure high data quality and integrity.
  • Write and maintain documentation on data pipelines and schemas.
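The data-validation duty above can be made concrete with a small sketch: declarative checks run over each batch before it is loaded. The field names and rules here are hypothetical; a production framework would add reporting, history, and severity levels:

```python
# A minimal data-validation sketch. Checks are named predicates applied
# row by row; failures are collected per check for reporting.
def not_null(field):
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

# Hypothetical rules for a hypothetical purchase feed.
CHECKS = {
    "user_id_present": not_null("user_id"),
    "price_sane": in_range("price", 0.0, 10_000.0),
}

def validate(batch):
    """Return {check_name: [indexes of failing rows]}."""
    failures = {name: [] for name in CHECKS}
    for i, row in enumerate(batch):
        for name, check in CHECKS.items():
            if not check(row):
                failures[name].append(i)
    return failures

batch = [
    {"user_id": 1, "price": 19.99},
    {"user_id": None, "price": 5.00},
    {"user_id": 3, "price": -2.50},
]
print(validate(batch))  # {'user_id_present': [1], 'price_sane': [2]}
```

A pipeline would gate the load on `validate` returning no failures, quarantining failing rows instead of loading them.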
 

Requirements:

  • BS or MS in Computer Science or related field of study
  • 3+ years of experience in the data warehouse space
  • Expert in SQL, including advanced analytical queries
  • Proficiency in Python (data structures, algorithms, object-oriented programming, using APIs)
  • Experience working with a cloud data warehouse (Redshift, Snowflake, Vertica)
  • Experience with a data pipeline scheduling framework (Airflow)
  • Experience with schema design and data modeling
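The scheduling-framework requirement above boils down to running tasks in dependency order over a DAG. A hedged stdlib sketch (the task names are hypothetical; `graphlib`, Python 3.9+, stands in for the scheduler):

```python
from graphlib import TopologicalSorter

# An Airflow-style pipeline is a DAG: each task maps to the set of
# upstream tasks that must finish first. Task names are made up.
dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "join_orders_users": {"extract_orders", "extract_users"},
    "load_warehouse": {"join_orders_users"},
}

# static_order() yields tasks in a valid execution order.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow adds retries, backfills, and parallel execution of independent tasks on top of exactly this ordering guarantee.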

Exceptional candidates will have:

  • Amazon Web Services (EC2, DMS, RDS) experience
  • Terraform and/or Ansible (or similar) for infrastructure deployment
  • Airflow: experience building and monitoring DAGs, developing custom operators, and using script templating solutions
  • Experience supporting production systems in an on-call environment
Job posted by
Ketaki Kambale

Analytics

at ProGrad

Founded 2018  •  Services  •  20-100 employees  •  Profitable
Python
Java
Tableau
SQL
PowerBI
Chennai
1 - 4 yrs
₹3L - ₹8L / yr
Company Name: LatentView Analytics

Job Summary:

Independently handle the delivery of analytics assignments by mentoring a team of 3-10 people and delivering work that exceeds client expectations

Responsibilities:

- Coordinate with onsite company consultants to ensure high-quality, on-time delivery

- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques, etc.)

- Take part in organizational development activities to take the company to the next level

Qualification, Skills & Prior Work Experience:

- Great analytical skills and a detail-oriented approach

- Sound knowledge of MS Office tools like Excel and PowerPoint, and data visualization tools like Tableau or Power BI

- Strong experience in SQL, Python, SAS, SPSS, Statistica, R, MATLAB, or similar tools is preferable

- Ability to adapt and thrive in the fast-paced environment that young companies operate in

- Priority for candidates with analytics work experience

- Programming skills: Java/Python/SQL and OOP-based programming knowledge

Job Location: Chennai; work from home will be provided until the COVID situation improves

Note:

- Minimum one year of experience is needed

- Only 2019 and 2020 pass-outs are eligible

- Only candidates with above 70% aggregate throughout their studies are eligible

- A postgraduate degree is a must
Job posted by
Heruba C

Data Engineer - Azure

at Global consulting company

Agency job
via HyringNinja
SQL Azure
Data migration
Windows Azure
Big Data
PySpark
Relational Database (RDBMS)
ETL
Amazon Web Services (AWS)
Pune
2 - 5 yrs
₹10L - ₹30L / yr

2-4 years of experience in developing ETL activities for Azure: big data, relational databases, and data warehouse solutions.

 

Extensive hands-on experience implementing data migration and data processing using Azure services: ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Analysis Service, Azure Databricks, Azure Data Catalog, ML Studio, AI/ML, Snowflake, etc.

 

Well versed in DevOps and CI/CD deployments

 

Experience with cloud migration methodologies and processes, including tools like Azure Data Factory, Data Migration Service, SSIS, etc.

 

Minimum of 2 years of RDBMS experience

 

Experience with private and public cloud architectures, pros/cons, and migration considerations.

 

Nice-to-Have Skills/Qualifications:

 

- DevOps on an Azure platform

- Experience developing and deploying ETL solutions on Azure

- IoT, event-driven, microservices, Containers/Kubernetes in the cloud

- Familiarity with the technology stack available in the industry for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog etc.

- Multi-cloud experience a plus - Azure, AWS, Google

   

Professional Skill Requirements

Proven ability to build, manage and foster a team-oriented environment

Proven ability to work creatively and analytically in a problem-solving environment

Desire to work in an information systems environment

Excellent communication (written and oral) and interpersonal skills

Excellent leadership and management skills

Excellent organizational, multi-tasking, and time-management skills

Job posted by
Thomas G

Data Engineer

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Hive
Architecture
Data engineering
Java
Python
Scala
ETL
Mumbai
5 - 12 yrs
₹10L - ₹20L / yr
JD of Data Engineer
As a Data Engineer, you are a full-stack data engineer who loves solving business problems.
You work with business leads, analysts, and data scientists to understand the business domain
and engage with fellow engineers to build data products that empower better decision making.
You are passionate about the data quality of our business metrics and about solutions flexible
enough to scale to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a
casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses
on productivity and creativity, and allows you to be part of a world-class team while still being
yourself.

What You’ll Do
● Understand the business problem and translate these to data services and engineering
outcomes
● Explore new technologies and learn new techniques to solve business problems
creatively
● Think big! and drive the strategy for better data quality for the customers
● Collaborate with many teams - engineering and business, to build better data products

What We’re Looking For
● 1-3+ years of experience with:
○ Hands-on experience with at least one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Pig, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
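The Hadoop/Hive items above all rest on the map-shuffle-reduce pattern, which can be sketched in plain Python on a toy corpus (the input lines are made up; a real job would partition these phases across a cluster):

```python
from collections import defaultdict

# Map-shuffle-reduce word count: the model behind Hadoop MapReduce
# and, ultimately, Hive query execution. Toy input data.
lines = ["big data big plans", "data pipelines move data"]

# Map: emit (word, 1) pairs per input line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: sum each group's values.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts["data"], counts["big"])  # 3 2
```

On a cluster the map and reduce phases run in parallel on separate nodes, and the shuffle moves each key's values to a single reducer; the logic per record is identical.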
Job posted by
Reena Bandekar

Big Data Engineer

at Netmeds.com

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Scala
Spark
Datawarehousing
Machine Learning (ML)
Deep Learning
SQL
Data modeling
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS
Chennai
2 - 5 yrs
₹6L - ₹25L / yr

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
  • Responsible for designing and developing distributed, high-volume, high-velocity multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark). Ability to create, manage, and manipulate Spark DataFrames. Expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
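The "complex SQL" item above typically means things like window functions. A hedged sketch: ranking each order within its customer, using stdlib sqlite3 (SQLite >= 3.25 for window functions) as a stand-in for the production database; the table and column names are hypothetical:

```python
import sqlite3

# Top order per customer via ROW_NUMBER() over a per-customer partition.
# Hypothetical schema; sqlite3 stands in for the production database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("acme", 100.0), ("acme", 300.0), ("bolt", 50.0), ("acme", 200.0),
])

top = conn.execute("""
    SELECT customer, amount FROM (
        SELECT customer, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY customer ORDER BY amount DESC
               ) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY customer
""").fetchall()
print(top)  # [('acme', 300.0), ('bolt', 50.0)]
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` idiom appears in Spark SQL and most warehouse dialects, which is why it is a common interview and data-preparation staple.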

 

Qualifications and Experience:

Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

Job posted by
Vijay Hemnath

Talend Developer

at Product based company

Agency job
via Crewmates
ETL
Talend
Coimbatore
4 - 15 yrs
₹5L - ₹20L / yr
Hi Professionals,
Role: Talend Developer
Location: Coimbatore
Experience: 4+ years
Skills: Talend, any DB
Notice period: Immediate to 15 days
Job posted by
Gowtham V

ETL specialist (for a startup hedge fund)

at Prediction Machine

Founded 2021  •  Products & Services  •  20-100 employees  •  Raised funding
ETL
PySpark
Data engineering
Data engineer
athena
Amazon S3
Machine Learning (ML)
Data Science
Python
Apache Kafka
Apache Spark
Data modeling
Predictive analytics
AWS Glue
Remote only
3 - 8 yrs
$24K - $60K / yr
We are a nascent quant hedge fund; we need to stage financial data and make it easy to run and re-run various preprocessing and ML jobs on that data.
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena.

To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build various scripts that will serve as blueprints for additional data ingestion and transforms.

If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.

Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments
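The staging responsibilities above amount to partitioning raw records by a key, the way a Glue/PySpark job writes Parquet to S3 partitioned by date. A hedged stdlib sketch, where the records and the `trade_date` partition key are hypothetical and in-memory CSV buffers stand in for `s3://bucket/trade_date=.../part-0.parquet` objects:

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw trade records.
raw = [
    {"trade_date": "2022-01-03", "symbol": "AAA", "price": "10.5"},
    {"trade_date": "2022-01-04", "symbol": "BBB", "price": "20.0"},
    {"trade_date": "2022-01-03", "symbol": "CCC", "price": "30.1"},
]

def stage(records, partition_key):
    """Group records by partition key and render one CSV 'object' each."""
    parts = defaultdict(list)
    for rec in records:
        parts[rec[partition_key]].append(rec)
    staged = {}
    for key, rows in parts.items():
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
        # Hive-style partition path, as Glue/Athena expect.
        staged[f"trade_date={key}/part-0.csv"] = buf.getvalue()
    return staged

lake = stage(raw, "trade_date")
print(sorted(lake))
```

The Hive-style `key=value/` path convention is what lets Athena prune partitions at query time, so only the requested dates are scanned.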

Requirements
- Previous experience as a data engineer with the above technologies
Job posted by
Nataliia Mediana

Data Engineer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  Profitable
ETL
Big Data
Hadoop
PySpark
SQL
Python
Spark
Microsoft Windows Azure
Datawarehousing
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹14L / yr
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications

Must Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Strong in ETL architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
Job posted by
Sudarshini K