Mandatory (Minimum 4 years of working experience)
3+ years of experience leading data warehouse implementations with technical architectures, ETL/ELT, reporting/analytics tools, and scripting (end-to-end implementation)
Experienced in Microsoft Azure (Azure SQL Managed Instance, Data Factory, Azure Synapse, Azure Monitor, Azure DevOps, Event Hubs, Azure AD security)
Deep experience using BI tools such as Power BI, Tableau, QlikView, or SAP BO
Experienced in ETL tools such as SSIS, Talend, Informatica, or Pentaho
Expertise in using RDBMSs such as Oracle and SQL Server as source or target, and in online analytical processing (OLAP)
Experienced in SQL/T-SQL: DML/DDL statements, stored procedures, functions, triggers, indexes, and cursors
Expertise in building and organizing advanced DAX calculations and SSAS cubes
Experience in data/dimensional modelling, analysis, design, testing, development, and implementation
Experienced in advanced data warehouse concepts using structured, semi-structured, and unstructured data
Experienced with real-time ingestion, change data capture, and real-time & batch processing (see the sketch after this list)
Good knowledge of metadata management and data governance
Great problem-solving skills, with a strong bias for quality and design excellence
Experienced in developing dashboards with a focus on usability, performance, flexibility, testability, and standardization
Familiarity with development in cloud environments like AWS / Azure / Google
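To make the real-time ingestion item above concrete, here is a minimal Python sketch that publishes an event to Azure Event Hubs with the azure-eventhub SDK; the connection string, hub name, and payload are placeholders, not details from this listing.

```python
# Minimal sketch: publish one change event to Azure Event Hubs using the
# azure-eventhub (v5) SDK. Connection string, hub name, and payload are
# placeholders.
import json

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",
    eventhub_name="orders",  # hypothetical hub
)

batch = producer.create_batch()
batch.add(EventData(json.dumps({"order_id": 1, "status": "created"})))
producer.send_batch(batch)
producer.close()
```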
Good To Have (1+ years of working experience)
Experience working with Snowflake, Amazon Redshift
Soft Skills
Good verbal and written communication skills
Ability to collaborate and work effectively in a team.
Excellent analytical and logical skills
Design, implement, and execute appropriate solutions and enhancements to improve system reliability and performance.
Ensure project deadlines are met, are aligned with the needs of the business unit, and coincide with release management and governance.
Ensure that operational aspects of supported applications are included in architectural standards.
Produce service metrics, analyze trends and identify opportunities to improve the level of service and
reduce cost as appropriate.
Support implementation activities.
Enable technical knowledge sharing across the team
Work with vendors on designated areas
Skills Required:
Strong background with relational databases, primarily Teradata, with 3+ years of experience
3+ years of experience in developing ETL processes using Informatica
3+ years of experience in reporting tools such as Business Objects
Strong understanding of UNIX and shell scripting
Thorough knowledge of SDLC (Software Development Life Cycle)
Excellent interpersonal and communication skills (verbal and written)
Skills Desired:
Exposure to the Hadoop ecosystem
Exposure to programming languages such as Python/Java
Exposure to Regulatory Reporting and Credit Risk
Nice to have:
Experience developing in ServiceNow (JavaScript, workflows, update sets)
Angular and Node.js experience a plus
Knowledge of database application concepts, SQL, query optimization
Experience with web application user interface and usability concepts
Understanding of secure software development concepts, especially in a cloud platform
Experience with monitoring, event/alert management and observability concepts a plus.
Exposure to financial industry
Source control (preferably Git) and continuous integration tools
Data Analyst
at Extramarks Education India Pvt Ltd
Required Experience
· 3+ years of relevant technical experience in a data analyst role
· Intermediate/expert skills with SQL and basic statistics
· Experience in advanced SQL
· Python programming: an added advantage
· Strong problem-solving and structuring skills
· Automation in connecting various data sources and representing the data through dashboards (see the sketch after this list)
· Excellent with numbers; able to communicate data points through various reports/templates
· Ability to communicate effectively within and outside the Data Analytics team
· Proactively take up work responsibilities and take on ad hoc requests as needed
· Ability and desire to take ownership of and initiative for analysis; from requirements clarification to deliverable
· Strong technical communication skills; both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Must have worked with various databases (relational and non-relational) and ETL processes
· Must have experience handling large volumes of data and adhering to optimization and performance standards
· Should have the ability to analyse the data and present relationship views of it from different angles
· Must have excellent communication skills (written and oral)
· Knowledge of data science is an added advantage
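As one possible shape of the automation described above (connecting a source to a dashboard extract), here is a hedged Python sketch; the connection string, table, and columns are hypothetical, and it assumes pandas, SQLAlchemy, and PyMySQL are installed.

```python
# Hedged sketch of source-to-dashboard automation: run SQL against MySQL and
# refresh a flat extract that a Tableau/Superset dashboard reads. The DSN,
# table, and columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@host:3306/analytics")

query = """
SELECT order_date, region, SUM(revenue) AS revenue
FROM daily_orders            -- hypothetical table
GROUP BY order_date, region
"""
df = pd.read_sql(query, engine)

# Write the refreshed extract where the dashboard expects it.
df.to_csv("dashboard_extract.csv", index=False)
```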
Required Skills
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
Preferred Experience
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience working in advanced SQL
· Experience with one or more BI tools, such as Superset, Tableau etc.
· High level of logical and mathematical ability in problem solving
We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we have millions of new data points coming into our system every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines.
Duties/Responsibilities Include:
Requirements:
Exceptional candidates will have:
Job Summary:
Independently handle the delivery of analytics assignments by mentoring a team of 3-10 people and delivering to exceed client expectations
Responsibilities:
- Co-ordinate with onsite company consultants to ensure high-quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques, etc.)
- Take part in organizational development activities to take the company to the next level
Qualification, Skills & Prior Work Experience:
- Great analytical skills and a detail-oriented approach
- Sound knowledge of MS Office tools like Excel and PowerPoint, and of data visualization tools like Tableau or Power BI
- Strong experience in SQL, Python, SAS, SPSS, Statistica, R, MATLAB, or similar tools would be preferable (see the sketch after this list)
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- Preference for candidates with analytics work experience
- Programming skills: Java/Python/SQL and OOP-based programming knowledge
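A generic sketch of the Python analytics skills listed above; the input file and column names are hypothetical.

```python
# Summarize revenue by region and month with pandas: the kind of cut an
# Excel/Tableau deliverable typically starts from. "sales.csv" and its
# columns are invented for illustration.
import pandas as pd

df = pd.read_csv("sales.csv")  # hypothetical input

summary = (
    df.assign(month=pd.to_datetime(df["order_date"]).dt.to_period("M"))
      .groupby(["region", "month"], as_index=False)["revenue"]
      .sum()
)
print(summary.head())
```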
Job Location: Chennai; work from home will be provided until the COVID situation improves
Note:
- Minimum one year of experience needed
- Only 2019 and 2020 pass-outs are eligible
- A minimum aggregate of 70% throughout studies is required
- A postgraduate degree is a must
2-4 years of experience developing ETL activities for Azure: big data, relational databases, and data warehouse solutions.
Extensive hands-on experience implementing data migration and data processing using Azure services: ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Analysis Services, Azure Databricks, Azure Data Catalog, ML Studio, AI/ML, Snowflake, etc. (see the sketch after this list)
Well versed in DevOps and CI/CD deployments
Cloud migration methodologies and processes including tools like Azure Data Factory, Data Migration Service, SSIS, etc.
Minimum of 2 years of RDBMS experience
Experience with private and public cloud architectures, pros/cons, and migration considerations.
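Since several items above center on Azure Data Factory, here is a hedged Python sketch of triggering an ADF pipeline run with the azure-identity and azure-mgmt-datafactory SDKs; the subscription, resource group, factory, pipeline name, and parameter are all placeholders.

```python
# Hedged sketch: trigger an Azure Data Factory pipeline run from Python.
# All resource names and the parameter below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf_client.pipelines.create_run(
    resource_group_name="rg-dataplatform",  # hypothetical
    factory_name="adf-migration",           # hypothetical
    pipeline_name="CopySqlToAdls",          # hypothetical
    parameters={"loadDate": "2024-01-01"},
)
print(f"Started pipeline run: {run.run_id}")
```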
Nice-to-Have Skills/Qualifications:
- DevOps on an Azure platform
- Experience developing and deploying ETL solutions on Azure
- IoT, event-driven, microservices, Containers/Kubernetes in the cloud
- Familiarity with the technology stack available in the industry for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog etc.
- Multi-cloud experience a plus - Azure, AWS, Google
Professional Skill Requirements
Proven ability to build, manage and foster a team-oriented environment
Proven ability to work creatively and analytically in a problem-solving environment
Desire to work in an information systems environment
Excellent communication (written and oral) and interpersonal skills
Excellent leadership and management skills
Excellent organizational, multi-tasking, and time-management skills
As a Data Engineer, you are a full-stack engineer who loves solving business problems.
You work with business leads, analysts, and data scientists to understand the business domain,
and you engage with fellow engineers to build data products that empower better decision making.
You are passionate about the data quality of our business metrics and about the flexibility of
your solutions, which scale to answer broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a
casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses
on productivity and creativity, and allows you to be part of a world-class team while still being
yourself.
What You’ll Do
● Understand the business problem and translate these to data services and engineering
outcomes
● Explore new technologies and learn new techniques to solve business problems
creatively
● Think big and drive the strategy for better data quality for the customers
● Collaborate with many teams - engineering and business, to build better data products
What We’re Looking For
● 1-3 years of experience with:
○ Hands-on experience in any one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Pig, Presto)
○ Data pipeline & scheduler tools (Oozie, Airflow, NiFi); see the Airflow sketch after this list
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any Relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
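As a concrete illustration of the pipeline/scheduler line above, a minimal Airflow DAG might look like the sketch below; the task logic and names are invented for illustration.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> load pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def load():
    print("load the transformed data into the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
```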
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will collaborate closely with the Data Science team and assist it in building and deploying machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements and foster data-driven decision making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Deliver high operational excellence, guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python and Spark): the ability to create, manage, and manipulate Spark DataFrames, plus expertise in Spark query tuning and performance optimization (see the sketch after this list).
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on both on-premises and cloud-based infrastructure.
- A good understanding of the machine learning landscape and its concepts.
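To ground the PySpark item above, a minimal sketch of creating and manipulating a Spark DataFrame follows; the data and column names are made up, and the tuning step shown is one common lever rather than a full optimization strategy.

```python
# Small PySpark sketch: create a DataFrame, aggregate it, and apply a simple
# performance lever. Data and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-example").getOrCreate()

events = spark.createDataFrame(
    [("u1", "click", 3), ("u1", "view", 7), ("u2", "click", 2)],
    ["user_id", "event_type", "count"],
)

# Aggregate, then repartition on the downstream join/write key; repartitioning
# and caching are common first steps in Spark query tuning.
totals = (
    events.groupBy("user_id", "event_type")
          .agg(F.sum("count").alias("total"))
          .repartition("user_id")
          .cache()
)
totals.show()
```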
Qualifications and Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps certification
Role: Talend Developer
Location: Coimbatore
Experience: 4+ years
Skills: Talend, any DB
Notice period: Immediate to 15 days
ETL specialist (for a startup hedge fund)
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena (see the sketch below)
To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build a consistent data lake with staged, ready-to-use data, and to write various scripts that will serve as blueprints for additional data ingestion and transforms.
If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.
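A hedged sketch of the staging flow described above: clean raw data with PySpark and land it as partitioned Parquet on S3 so Athena can query it. The bucket, paths, and column names are placeholders, and S3 access is assumed to be configured (e.g., running inside AWS Glue).

```python
# Stage preprocessed data as partitioned Parquet on S3 for Athena.
# Bucket, paths, and column names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stage-to-s3").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/trades/")  # hypothetical source

cleaned = raw.dropDuplicates(["trade_id"]).filter("price > 0")

# Partitioning by date keeps Athena scans (and cost) small.
(cleaned.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("s3://example-bucket/staged/trades/"))
```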
Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments
Requirements
- Previous experience as a data engineer with the above technologies
Must Have Skills:
• Responsible for developing and maintaining applications with PySpark