Required Experience
· 3+ years of relevant technical experience in a data analyst role
· Intermediate to expert skills in SQL and basic statistics
· Experience with advanced SQL
· Python programming is an added advantage
· Strong problem-solving and structuring skills
· Experience automating connections to various data sources and presenting the data through dashboards
· Excellent with numbers; able to communicate data points through various reports and templates
· Ability to communicate effectively within and outside the Data Analytics team
· Proactively takes on work responsibilities, including ad-hoc requests as needed
· Ability and desire to take ownership of, and initiative for, analysis, from requirements clarification to deliverable
· Strong technical communication skills, both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Must have worked with various databases (relational and non-relational) and ETL processes
· Must have experience handling large volumes of data while adhering to optimization and performance standards
· Should have the ability to analyse data and present relational views of it from different angles
· Must have excellent communication skills (written and oral)
· Knowledge of data science is an added advantage
Required Skills
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
Preferred Experience
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience with advanced SQL
· Experience with one or more BI tools, such as Superset or Tableau
· High level of logical and mathematical ability in problem solving
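"Advanced SQL" in lists like the ones above usually means fluency with common table expressions, multi-level aggregation, and filtering on aggregates rather than simple SELECTs. A minimal sketch using Python's built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

# In-memory database with a hypothetical orders table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'alice', 120.0), (2, 'alice', 80.0),
        (3, 'bob',   40.0),  (4, 'bob',   10.0);
""")

# CTE + aggregate: total spend per customer, keeping only totals above 100.
query = """
    WITH totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer, total FROM totals WHERE total > 100 ORDER BY customer;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('alice', 200.0)]
```

The same pattern (aggregate in a CTE, then filter the aggregate) carries over directly to MySQL 8+.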
About Extramarks
Extramarks Education India Private Limited
Extramarks is leading the Education Technology sector in India by providing 360° education support to learners through new-age digital education solutions. These solutions are used by schools in the classroom for imparting education, and by students at home to make learning easy and effective. Keeping pace with globalization and technology in education, Extramarks empowers young learners to step in with the latest technology and have anytime-anywhere access to quality learning. In a very short period, Extramarks has become extremely popular among schools and students. More than 8,000 schools use Extramarks' digital learning and technology solutions across India, Singapore, Kuwait, the UAE, and South Africa.
The Extramarks Learning App allows students to learn at home at their own pace and provides complete educational support, eliminating the need for a tutor. The three-pronged pedagogical approach of Learn, Practice, and Test ensures better learning outcomes for students. All concepts are first explained in an easy-to-learn manner with the help of rich media, and then students are allowed to practice the concept. Virtual practicing modules and Q&A support the retention of knowledge, which is then tested on a robust teaching platform to identify learning gaps.
Extramarks is currently rapidly scaling up its entire technology workforce by hiring experienced professionals with unique skills spanning strategy to execution. Extramarks is driving student learning through data-driven techniques, providing personalized learning recommendations to students. Apart from such customer-facing innovations, we are also implementing internal operational excellence solutions through similar data-driven approaches. It is a once-in-a-lifetime opportunity to join Extramarks and help change the face of education in India.
Similar jobs
Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into data warehouses and data marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as ETL tools on Oracle and SQL Server databases.
Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, together with project scoping, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.
Strong experience in dimensional modeling using Star and Snowflake schemas, identifying facts and dimensions.
Used various transformations such as Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.
Developed mapping parameters and variables to support SQL overrides.
Created mapplets for reuse across different mappings.
Created sessions and configured workflows to extract data from various sources, transform it, and load it into the data warehouse.
Used Type 1 and Type 2 SCD mappings to update Slowly Changing Dimension tables.
Modified existing mappings to accommodate new business requirements.
Involved in performance tuning at the source, target, mapping, session, and system levels.
Prepared migration documents to move mappings from development to testing and then to production repositories.
Extensive experience in developing stored procedures, functions, views, and triggers, and writing complex SQL queries using PL/SQL.
Experience in resolving ongoing maintenance issues and bug fixes; monitoring Informatica/Talend sessions as well as performance tuning of mappings and sessions.
Experience in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.
Extensive experience in writing UNIX shell scripts and automating ETL processes using UNIX shell scripting.
Experience using automation scheduling tools such as Control-M.
Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration, and user acceptance testing.
Build, operate, monitor, and troubleshoot Hadoop infrastructure.
Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.
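The Type 2 SCD pattern mentioned above (expire the current dimension row, then insert a new versioned row) can be sketched in plain SQL via Python's built-in sqlite3; the table, columns, and dates here are all hypothetical, not any specific tool's implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL marks the open-ended current row
        is_current  INTEGER
    );
    INSERT INTO dim_customer VALUES (42, 'Delhi', '2020-01-01', NULL, 1);
""")

def scd2_update(conn, customer_id, new_city, change_date):
    """Type 2 SCD: close out the current row, then insert the new version."""
    cur = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id=? AND is_current=1",
        (customer_id,))
    row = cur.fetchone()
    if row and row[0] != new_city:           # only act on a real change
        conn.execute(
            "UPDATE dim_customer SET valid_to=?, is_current=0 "
            "WHERE customer_id=? AND is_current=1",
            (change_date, customer_id))
        conn.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
            (customer_id, new_city, change_date))

scd2_update(conn, 42, 'Chennai', '2021-06-01')
history = conn.execute(
    "SELECT city, is_current FROM dim_customer WHERE customer_id=42 "
    "ORDER BY valid_from").fetchall()
print(history)  # [('Delhi', 0), ('Chennai', 1)]
```

A Type 1 mapping would instead simply UPDATE the city in place, keeping no history.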
We are looking for a technically driven Full-Stack Engineer for one of our premium clients.
COMPANY DESCRIPTION:
Qualifications
• Bachelor's degree in computer science or a related field; a Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to clearly communicate complex solutions
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience with and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Technical & Business Expertise:
- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level experience writing SQL Server database code
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential.
Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 2,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed around the globe.
We are looking for someone who is creative, analytical, and a thinker to join our Business Intelligence team as a Senior Tableau Developer in Hyderabad, India.
Responsibilities
- Proficiency with all Tableau platforms (Desktop, Server, and web), including Tableau Prep.
- Design, build, maintain, and deploy complex reports in Tableau.
- Identify, analyze, and interpret trends or patterns in complex data sets using standard statistical tools and techniques.
- Consult with and advise senior management and multiple clients on strategies to improve reporting through automation and the creation of dashboards.
- Generate monthly reporting and ad-hoc analysis to support leadership decision-making.
- Gather and document requirements from cross-functional teams and projects, and align on visualization expectations.
- Establish best practices for integrating financial and operational data from a variety of sources, including MS Excel, SQL, and internal forecasting and planning tools.
- Create dynamic dashboards using parameters, filters, calculated fields, and level-of-detail (LOD) functionality.
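The level-of-detail functionality in the last point refers to expressions like Tableau's {FIXED [Region] : AVG([Sales])}, which compute an aggregate at a chosen dimension level and attach it to every row. The same idea in plain Python, on invented row-level data:

```python
from collections import defaultdict

# Hypothetical row-level data: (region, sales) pairs.
rows = [("East", 100.0), ("East", 300.0), ("West", 50.0), ("West", 150.0)]

# Equivalent of {FIXED [Region] : AVG([Sales])}: average sales per region...
totals, counts = defaultdict(float), defaultdict(int)
for region, sales in rows:
    totals[region] += sales
    counts[region] += 1
region_avg = {r: totals[r] / counts[r] for r in totals}

# ...attached back to each row, like an LOD field next to row-level values.
enriched = [(region, sales, region_avg[region]) for region, sales in rows]
print(region_avg)  # {'East': 200.0, 'West': 100.0}
```

Carrying the region-level average on every row is what lets a dashboard compare each sale against its region's average without changing the view's grain.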
Minimum Qualifications
- 5+ years’ experience in data analytics.
- Systems: Proficient with Tableau, MS Excel, Anaplan, SQL (Oracle, Hive, MySQL) and other analytical applications.
- Bachelor’s degree in Computer Science, Statistics, Math, Engineering, Business and/or equivalent specialized education and experience in Data Science, Statistics, Data Analysis or related field
Knowledge, Skills and Abilities
- Expertise in designing and presenting data using best practices in data visualization.
- Ability to present to variety of internal and external stakeholders.
- Strong written and verbal communication skills.
- Knowledge of best practices and experience optimizing Tableau for performance by optimizing queries, extracts and logic.
- Experience performing administration tasks in Tableau Server in a production environment.
- Experience with connecting and blending multiple data sources.
- Strong analytical and critical thinking skills.
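Blending multiple data sources, as mentioned above, essentially means aggregating each source to a common grain and then joining on the shared key. A minimal pure-Python sketch, where the two extracts and all figures are hypothetical:

```python
# Hypothetical extracts from two sources, already reduced to the region grain.
sales_by_region = {"East": 400.0, "West": 200.0}                     # e.g. from SQL
targets_by_region = {"East": 350.0, "West": 250.0, "South": 100.0}   # e.g. from Excel

# Left-join sales onto targets on the shared region key; missing sales -> 0.
blended = {
    region: {"target": target, "sales": sales_by_region.get(region, 0.0)}
    for region, target in targets_by_region.items()
}
for region, row in blended.items():
    row["attainment"] = row["sales"] / row["target"]

print(round(blended["East"]["attainment"], 2))  # 1.14
```

The left join (targets drive the result, sales default to zero) mirrors how BI tools treat the primary versus secondary source in a blend.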
• 3+ years of experience in big data and data warehousing technologies
• Experience in processing and organizing large data sets
• Experience with big data tool sets such as Airflow and Oozie
• Experience working with BigQuery, Snowflake or other MPP systems, Kafka, Azure, GCP, and AWS
• Experience developing in programming languages such as SQL, Python, Java, or Scala
• Experience pulling data from a variety of database systems such as SQL Server, MariaDB, and Cassandra, as well as NoSQL databases
• Experience working with retail, advertising, or media data at large scale
• Experience working with data science engineering and advanced data insights development
• Strong proponent of quality who strives to impress with their work
• Strong problem-solving skills and the ability to navigate complicated database relationships
• Good written and verbal communication skills; demonstrated ability to work with product management and/or business users to understand their needs
Job Description - Sr Azure Data Engineer
Roles & Responsibilities:
- Hands-on programming in C#/.NET.
- Develop serverless applications using Azure Function Apps.
- Writing complex SQL Queries, Stored procedures, and Views.
- Creating Data processing pipeline(s).
- Develop / Manage large-scale Data Warehousing and Data processing solutions.
- Provide clean, usable data and recommend data efficiency, quality, and data integrity.
Skills
- Should have working experience with C#/.NET.
- Proficient with writing SQL queries, Stored Procedures, and Views
- Should have worked on Azure Cloud Stack.
- Should have working experience in developing serverless code.
- Must have worked on Azure Data Factory (mandatory).
Experience
- 4+ years of relevant experience
Work Location : Chennai
Experience Level : 5+ years
Package : Up to 18 LPA
Notice Period : Immediate joiners
It's a full-time opportunity with our client.
Mandatory Skills: Machine Learning, Python, Tableau & SQL
Job Requirements:
- 2+ years of industry experience in predictive modeling, data science, and analysis.
- Experience with ML models including, but not limited to, regression, random forests, and XGBoost.
- Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models.
- Experience writing code in Python and SQL, with documentation for reproducibility.
- Strong proficiency in Tableau.
- Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.
- Experience writing and speaking about technical concepts to business, technical, and lay audiences, and giving data-driven presentations.
- AWS SageMaker experience is a plus, but not required.
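Of the models named above, random forests are usually built with scikit-learn's RandomForestClassifier, but the underlying idea (bagging: train many weak trees on bootstrap resamples, then take a majority vote) can be sketched from scratch. Everything here, from the toy 1-D dataset to the function names, is illustrative, with depth-1 "stumps" standing in for full decision trees:

```python
import random

# Toy 1-D dataset: class 0 clusters low, class 1 clusters high.
X = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
y = [0, 0, 0, 1, 1, 1]

def fit_stump(xs, ys):
    """Best single-threshold split (a depth-1 'tree') on one sample."""
    best = None
    for t in xs:
        for left, right in ((0, 1), (1, 0)):
            errors = sum(
                (left if x <= t else right) != label
                for x, label in zip(xs, ys))
            if best is None or errors < best[0]:
                best = (errors, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump sees a bootstrap resample; majority vote to predict."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: round(sum(s(x) for s in stumps) / len(stumps))

predict = fit_forest(X, y)
print(predict(0.0), predict(10.0))  # 0 1
```

A real random forest additionally subsamples features at each split and grows deep trees; the bootstrap-plus-vote structure is the same.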
- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Java, Python, JavaScript programming languages
- Strong understanding of designing for performance, scalability, and reliability of data-intensive applications
- Hadoop MapReduce, Spark, and Pig; understanding of database fundamentals and advanced SQL knowledge
- In-depth understanding of object-oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
2. Should understand the importance, and have the know-how, of taking machine-learning-based solutions to the consumer.
3. Hands-on experience with statistical, machine-learning tools and techniques
4. Good exposure to deep learning libraries such as TensorFlow and PyTorch.
5. Experience implementing deep learning techniques, computer vision, and NLP. The candidate should be able to develop solutions from scratch, with the code available on GitHub.
6. Should be able to read research papers and pick ideas to quickly reproduce research in the most comfortable Deep Learning library.
7. Should be strong in data structures and algorithms. Should be able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert level coding experience in Python.
9. Technologies: Backend - Python (Programming Language)
10. Should have the ability to think about long-term solutions, and the modularity and reusability of components.
11. Should be able to work in a collaborative way. Should be open to learning from peers as well as constantly bring new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, and able to take it positively. Ready to be held accountable for the responsibilities undertaken.