Assistant Manager TS (Data Governance)
at a global business process management company
Job Description
We are looking for a senior resource with analyst skills and knowledge of IT projects to support the delivery of risk mitigation activities and automation in Aviva’s Global Finance Data Office. The successful candidate will bring structure to this new role in a developing team and will need excellent communication, organisational and analytical skills. The candidate will play the primary role of supporting data governance project/change activities and should be comfortable with ambiguity in a fast-paced, ever-changing environment. Preferred skills include knowledge of Data Governance, Informatica Axon, SQL and AWS. In our team, success is measured by results, and we encourage flexible working where possible.
Key Responsibilities
- Engage with stakeholders to drive delivery of the Finance Data Strategy
- Support data governance project/change activities in Aviva’s Finance function.
- Identify opportunities for automation and implement them to enhance the team’s performance
Required profile
- Relevant work experience in at least one of the following: business/project analysis, project/change management and data analytics.
- Proven track record of successful communication of analytical outcomes, including an ability to effectively communicate with both business and technical teams.
- Ability to manage multiple, competing priorities and hold the team and stakeholders to account on progress.
- Contribute to the planning and execution of an end-to-end data governance framework.
- Basic knowledge of IT systems/projects and the development lifecycle.
- Experience gathering business requirements and producing reports.
- Advanced experience of MS Excel data processing (VBA macros); see the illustrative sketch after this list.
- Good communication skills.
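The posting pairs Excel (VBA) data processing with automation; as a minimal illustration, here is the kind of summary such automation might produce, sketched in Python with pandas rather than VBA. The file name and column names are hypothetical placeholders, not details from the posting.

```python
# Hypothetical automation sketch: summarise open data-quality issues
# from an Excel log. File and column names are placeholders.
import pandas as pd

def summarise_open_issues(path: str) -> pd.DataFrame:
    """Count unresolved data-quality issues per owner."""
    df = pd.read_excel(path)                    # needs openpyxl for .xlsx files
    open_issues = df[df["status"] != "closed"]  # keep unresolved rows only
    return (open_issues.groupby("owner").size()
            .rename("open_count").reset_index())

if __name__ == "__main__":
    print(summarise_open_issues("dq_issues.xlsx"))
```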
Additional Information
Degree in a quantitative or scientific field (e.g. Engineering, MBA Finance, Project Management) and/or experience in data governance/quality/privacy
Knowledge of Finance systems/processes
Experience in analysing large data sets using dedicated analytics tools
Designation – Assistant Manager TS
Location – Bangalore
Shift – 11 AM – 8 PM
Work closely with the Kinara management team to investigate strategically important business questions.
Lead a team through the entire analytical and machine learning model life cycle (a compact sketch follows this list):
- Define the problem statement
- Build and clean datasets
- Exploratory data analysis
- Feature engineering
- Apply ML algorithms and assess performance
- Code for deployment
- Code testing and troubleshooting
- Communicate analysis to stakeholders
Manage Data Analysts and Data Scientists
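For illustration, here is a compact sketch of that life cycle, assuming a hypothetical loans dataset; the CSV path and column names are placeholders, not from the posting.

```python
# Hypothetical life-cycle sketch: load and clean data, engineer a feature,
# fit a model, and assess performance. All names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv").dropna(subset=["income", "amount", "defaulted"])
df["debt_to_income"] = df["amount"] / df["income"]  # feature engineering

X = df[["income", "amount", "debt_to_income"]]
y = df["defaulted"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```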
Data Engineer
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources that expose data via different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies (see the PySpark sketch after this list)
- Data processing/transformation using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring in industry best practices
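As a rough illustration of the ingestion and data-quality bullets above, here is a minimal PySpark sketch that reads flat files from S3, applies a simple quality gate, and lands the result as Parquet in a data-lake path. The bucket paths and column names are assumptions, not details from the posting.

```python
# Hypothetical ingestion sketch: CSV from S3 -> quality gate -> Parquet lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Simple data-quality check: abort the load if any primary key is missing.
null_keys = raw.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing order_id; aborting load")

(raw.withColumn("ingest_date", F.current_date())
    .write.mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://lake-bucket/curated/orders/"))
```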
QUALIFICATIONS
- 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g. SAS, SQL, R, Python)
  - Database technologies (e.g. PostgreSQL, Redshift, Snowflake, Greenplum)
  - Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
About Us:-
The fastest-rising startup in the EdTech space, focused on engineering and government job exams, with an eye to capturing UPSC, PSC, and international exams, Testbook is poised to revolutionize the industry. With a registered user base of over 2.2 crore students, more than 450 crore questions solved on the web app, and a knockout Android app, Testbook has raced to the front and is ideally placed to capture bigger markets.
Testbook is the perfect incubator for talent. You come, you learn, you conquer. You train under the best mentors and become an expert in your field in your own right. That said, you are given flexibility in the projects you choose, how and when you work on them, and what you add to them: you are the sole master of your work.
The IIT pedigree of the co-founders has attracted some of the brightest minds in the country to Testbook. A team that is quickly swelling in ranks, it now stands at 500+ in-house employees and hundreds of remote interns and freelancers. And the number is rocketing weekly. Now is the time to join the force.
In this role you will get to:-
- Work with state-of-the-art data frameworks and technologies like Dataflow (Apache Beam), Dataproc (Apache Spark & Hadoop), Apache Kafka, Google Pub/Sub, Apache Airflow, and others.
- You will work cross-functionally with various teams, creating solutions that deal with large volumes of data.
- You will work with the team to set and maintain standards and development practices.
- You will be a keen advocate of quality and continuous improvement.
- You will modernize the current data systems to develop Cloud-enabled Data and Analytics solutions
- Drive the development of cloud-based data lake, hybrid data warehouses & business intelligence platforms
- Improve upon the data ingestion models, ETL jobs, and alerts to maintain data integrity and data availability
- Build data pipelines to ingest structured and unstructured data (see the sketch after this list).
- Gain hands-on experience with new data platforms and programming languages
- Analyze and provide data-supported recommendations to improve product performance and customer acquisition
- Design, Build and Support resilient production-grade applications and web services
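To make the pipeline work concrete, here is a minimal Apache Beam sketch (Beam is the SDK behind Dataflow) that reads raw events, drops malformed lines, and writes the events of interest back out. The paths and event schema are hypothetical.

```python
# Hypothetical batch pipeline: read JSON-lines events, keep well-formed
# records of one type, write them out. Runs on Beam's local runner.
import json
import apache_beam as beam

def parse(line: str):
    """Yield parsed events, silently dropping malformed JSON lines."""
    try:
        yield json.loads(line)
    except json.JSONDecodeError:
        return

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("raw_events.jsonl")
     | "Parse" >> beam.FlatMap(parse)
     | "KeepAnswers" >> beam.Filter(lambda e: e.get("type") == "question_answered")
     | "Serialize" >> beam.Map(json.dumps)
     | "Write" >> beam.io.WriteToText("answered_events"))
```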
Who you are:-
- 1+ years of work experience in Software Engineering and development.
- Very strong understanding of Python and the pandas library. Good understanding of Scala, R, and other related languages
- Experience with data transformation & data analytics in both batch & streaming mode using cloud-native technologies.
- Strong experience with big data technologies like Hadoop, Spark, BigQuery, Dataproc, and Dataflow
- Strong analytical and communication skills.
- Experience working with large, disconnected, and/or unstructured datasets.
- Experience building and optimizing data pipelines, architectures, and data sets using cloud-native technologies.
- Hands-on experience with any cloud tech like GCP/AWS is a plus.
The responsibilities of a Tableau developer include creating technical solutions, creating data storage tools, and conducting tests. To be successful as a Tableau developer, you should have a broad understanding of the business technology landscape, the ability to design reports, and strong analytical skills.
About Us:
GreedyGame is looking for a Business Analyst to join its clan: an enthusiastic analyst who likes to play with data. You'll be building insights from data, creating analytical dashboards, and monitoring KPI values. You will also coordinate with teams working on different layers of the infrastructure.
Job details:
Seniority Level: Associate
Industry: Marketing & Advertising
Employment Type: Full Time
Job Location: Bangalore
Experience: 1-2 years
WHAT ARE WE LOOKING FOR?
- Excellent planning, organizational, and time management skills.
- Exceptional analytical and conceptual thinking skills.
- Previous experience working closely with Operations and Product teams.
- Competency in Excel and SQL is a must.
- Experience with a programming language like Python is required.
- Knowledge of Marketing Tools is preferable.
WHAT WILL BE YOUR RESPONSIBILITIES?
- Evaluating business processes, anticipating requirements, uncovering areas for improvement, developing and implementing solutions.
- Ability to generate meaningful insights to help the marketing and product teams enhance the user experience for mobile and web apps.
- Leading ongoing reviews of business processes and developing optimization strategies.
- Performing requirements analysis from a user and business point of view
- Combining data from multiple sources like SQL tables, Google Analytics, and in-house analytical signals, and deriving relevant insights (see the sketch after this list).
- Deciding the success metrics and KPIs for different products and features and making sure they are achieved.
- Act as quality assurance liaison prior to the release of new data analysis or application.
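As a small illustration of the multi-source bullet above, here is a hypothetical pandas sketch that joins SQL data with an analytics export and computes one KPI; the database, file, table, and column names are placeholders.

```python
# Hypothetical multi-source join: signups from SQL + sessions from a CSV
# export, then an example activation KPI. All names are placeholders.
import sqlite3  # stand-in for the production SQL database

import pandas as pd

conn = sqlite3.connect("app.db")
signups = pd.read_sql("SELECT user_id, signup_date FROM users", conn)
sessions = pd.read_csv("analytics_export.csv")  # columns: user_id, sessions

merged = signups.merge(sessions, on="user_id", how="left").fillna({"sessions": 0})
merged["activated"] = merged["sessions"] >= 3  # example KPI definition
print("Activation rate:", merged["activated"].mean())
```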
Skills and Abilities:
- Python
- SQL
- Business Analytics
- BigQuery
WHAT'S IN IT FOR YOU?
- An opportunity to be a part of a fast scaling start-up in the AdTech space that offers unmatched services and products.
- To work with a team of young enthusiasts who are always upbeat and self-driven to achieve bigger milestones in shorter time spans.
- A workspace that is wide open, in line with the company's open-door policy, located in the most happening part of Bangalore.
- A well-fed stomach makes the mind work better, so we provide free lunch with a wide variety every day of the week, a stocked-up pantry to satiate your want for munchies, a foosball table to burst stress, and above all a great working environment.
- We believe that we grow as you grow. Once you are part of our team, your growth becomes essential to us, and to make sure that happens, timely formal and informal feedback is given.
- 4-6 years of total experience in data warehousing and business intelligence
- 3+ years of solid Power BI experience (Power Query, M-Query, DAX, aggregates)
- 2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
- Strong experience building visually appealing UI/UX in Power BI
- Understanding of how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
- Experience building Power BI using large data in direct query mode
- Expert SQL background (query building, stored procedures, optimizing performance)
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
We empower healthcare payers, providers and members to quickly process medical data to make informed decisions and reduce healthcare costs. You will focus on research, development, strategy, operations, people management, and being a thought leader for team members based out of India. You should have professional healthcare experience using both structured and unstructured data to build applications. These applications include but are not limited to machine learning, artificial intelligence, optical character recognition, natural language processing, and integrating processes into the overall AI pipeline to mine healthcare and medical information with high recall and other relevant metrics. The results will be used both in real-time operational processes, with automated as well as human-based decision making, and to help reduce healthcare administrative costs. We work with all major cloud and big data vendors' offerings (Azure, AWS, Google, IBM, etc.) to achieve our goals in healthcare and support.
The Director, Data Science will have the opportunity to build a team and shape team culture and operating norms as a result of the fast-paced nature of a new, high-growth organization.
• Strong communication and presentation skills to convey progress to a diverse group of stakeholders
• Provide mentoring and career development to data scientists and machine learning engineers
• Meet regularly with project team members on individual needs related to project/product deliverables
• Provide training and guidance for team members when required
• Provide performance feedback when required by leadership
The Experience You’ll Need (Required):
• MS/M.Tech degree or PhD in Computer Science, Mathematics, Physics or related STEM fields
• Significant healthcare data experience including but not limited to usage of claims data
• Delivered multiple data science and machine learning projects over 8+ years, with values exceeding $10 million, on platforms covering more than 10 million member lives
• 9+ years of industry experience in data science, machine learning, and artificial intelligence
• Strong expertise in data science, data engineering, software engineering, cloud vendors, big data technologies, real time streaming applications, DevOps, and product delivery
• Knows how to solve and launch real artificial intelligence and data science problems and products, along with managing and coordinating business process change and IT/cloud operations, while meeting production-level code standards
• Ownership of key workflows in the data science life cycle, such as data acquisition, data quality, and results delivery
• Experience building stakeholder trust and confidence in deployed models, especially via attention to algorithmic bias, interpretable machine learning, data integrity, data quality, reproducible research, and reliable engineering for 24x7x365 product availability and scalability
• Expertise in healthcare privacy, federated learning, continuous integration and deployment, DevOps support
• 3+ years of experience directly managing five (5) or more senior-level data scientists and machine learning engineers with advanced degrees, with direct responsibility for staffing decisions
• Very strong understanding of mathematical concepts including but not limited to linear algebra, advanced calculus, partial differential equations, and statistics, including Bayesian approaches, at master’s degree level and above
• 6+ years of programming experience in C++, Java or Scala and in data science programming languages like Python and R, including a strong understanding of concepts like data structures, algorithms, compression techniques, high-performance computing, distributed computing, and various computer architectures
• Very strong understanding of and experience with traditional data science approaches like sampling techniques, feature engineering, classification, regression, SVMs, trees, and model evaluation, with several projects over 3+ years (a brief sketch follows this list)
• Very strong understanding of and experience in natural language processing, reasoning and understanding, information retrieval, text mining, and search, with 3+ years of hands-on experience
• Experience developing and deploying several products in production, with experience in two or more of the following languages: Python, C++, Java, Scala
• Strong Unix/Linux background and experience with at least one of the following cloud vendors: AWS, Azure, or Google
• Three plus (3+) years of hands-on experience with a MapR / Cloudera / Databricks Big Data platform with Spark, Hive, Kafka, etc.
• Three plus (3+) years of experience with high-performance computing like Dask, CUDA distributed GPU, TPU, etc.
• Presented at major conferences and/or published materials
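To make the classical-modelling expectations concrete, here is a brief, self-contained sketch comparing an SVM and a decision tree with cross-validation on a bundled scikit-learn dataset; this is an illustration only, not the team's actual workflow.

```python
# Illustrative model evaluation: SVM vs. decision tree, 5-fold CV AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for name, model in [
    ("SVM", make_pipeline(StandardScaler(), SVC())),  # scaling matters for SVMs
    ("Decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```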
Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.
What are we looking for:
- 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL or MSSQL, and experience working with Big Data frameworks/platforms/data stores like Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos, etc.
- Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud
- 2+ years of experience with public cloud services such as AWS, Azure, or GCP and languages like Java/Python, etc.
- 2+ years of development experience in Amazon Redshift, Google BigQuery or Azure data warehouse platforms preferred
- Knowledge of statistical analysis tools like R, SAS, etc.
- Familiarity with any data visualization software
- A growth mindset and a passion for building things from the ground up; most importantly, you should be fun to work with
As a data engineer at Recko, you will:
- Create and maintain optimal data pipeline architecture (a minimal orchestration sketch follows this list).
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
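As a minimal sketch of the pipeline-orchestration responsibility above, here is an Airflow DAG skeleton with stubbed extract and load tasks; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Hypothetical orchestration skeleton: daily extract -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull yesterday's transactions from the source systems")

def load():
    print("write reconciled rows to the warehouse")

with DAG(
    dag_id="reconciliation_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run load only after extract succeeds
```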
About Recko:
Recko was founded in 2017 to organise the world’s transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko Platform, businesses can build, integrate and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo, Acommerce, etc. use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive, time-consuming day-to-day operational tasks.
Recko is a Series A funded startup backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures. Traditionally, enterprise software has been built around functionality; we believe software is an extension of one’s capability and should be delightful and fun to use.
Working at Recko:
We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts and business folks on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 60+ members strong, with stellar experience across fintech, e-commerce and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle, etc. We are growing aggressively across verticals.