Assistant Manager TS (Data Governance)

at a global business process management company

Agency job
Bengaluru (Bangalore)
2 - 8 yrs
₹4L - ₹10L / yr
Full time
Skills
Data governance
Data security
Data Analytics
Informatica
SQL
Amazon Web Services (AWS)
Excel VBA
Macros
Finance

Job Description

We are looking for a senior resource with analyst skills and knowledge of IT projects to support the delivery of risk mitigation activities and automation in Aviva’s Global Finance Data Office. The successful candidate will bring structure to this new role in a developing team, with excellent communication, organisational and analytical skills. The candidate will play the primary role of supporting data governance project/change activities. Candidates should be comfortable with ambiguity in a fast-paced and ever-changing environment. Preferred skills include knowledge of Data Governance, Informatica Axon, SQL and AWS. In our team, success is measured by results, and we encourage flexible working where possible.

Key Responsibilities

  • Engage with stakeholders to drive delivery of the Finance Data Strategy.
  • Support data governance project/change activities in Aviva’s Finance function.
  • Identify opportunities for automation and implement them to enhance the team’s performance.

Required profile

  • Relevant work experience in at least one of the following: business/project analysis, project/change management and data analytics.
  • Proven track record of successful communication of analytical outcomes, including an ability to effectively communicate with both business and technical teams.
  • Ability to manage multiple, competing priorities and hold the team and stakeholders to account on progress.
  • Ability to contribute to, plan and execute an end-to-end data governance framework.
  • Basic knowledge of IT systems/projects and the development lifecycle.
  • Experience gathering business requirements and building reports.
  • Advanced experience of MS Excel data processing (VBA macros).
  • Good communication skills.

 

Additional Information

Degree in a quantitative or scientific field (e.g. Engineering, MBA Finance, Project Management) and/or experience in data governance/quality/privacy
Knowledge of Finance systems/processes
Experience in analysing large data sets using dedicated analytics tools

 

Designation – Assistant Manager TS

Location – Bangalore

Shift – 11 AM – 8 PM

Similar jobs

Data Science - Risk

at Rupifi

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Risk Management
Risk analysis
Data Science
Machine Learning (ML)
Python
SQL
Data Visualization
Big Data
Tableau
Data Structures
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹50L / yr

Data Scientist (Risk)/Sr. Data Scientist (Risk)


As part of the Data Science/Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up, working with passionate, high-performing individuals in a very fast-paced environment.


You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers and Business Leaders, helping the team make informed, data-driven decisions and deliver high business impact.


Preferred Skills & Responsibilities: 

  1. Analyze data to better understand potential risks, concerns and outcomes of decisions.
  2. Aggregate data from multiple sources to provide a comprehensive assessment.
  3. Past experience of working with business users to understand and define inputs for risk models.
  4. Ability to design and implement best-in-class risk models in the Banking & Fintech domain.
  5. Ability to quickly understand changing market trends and incorporate them into model inputs.
  6. Expertise in statistical analysis and modeling.
  7. Ability to translate complex model outputs into understandable insights for business users.
  8. Collaborate with other team members to effectively analyze and present data.
  9. Conduct research into potential clients and understand the risks of accepting each one.
  10. Monitor internal and external data points that may affect the risk level of a decision.

Tech skills: 

  • Hands-on experience in Python & SQL.
  • Hands-on experience in any visualization tool, preferably Tableau.
  • Hands-on experience in the Machine Learning & Deep Learning area.
  • Experience in handling complex data sources.
  • Experience in modeling techniques in the fintech/banking domain.
  • Experience working with Big Data and distributed computing.

Preferred Qualifications: 

  • A BTech/BE/MSc degree in Math, Engineering, Statistics, Economics, ML, Operations  Research, or similar quantitative field.
  • 3 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
  • Strong analytical skills with good problem-solving ability
  • Strong presentation and communication skills
  • Experience in working on advanced machine learning techniques
  • Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts.
Job posted by
Richa Tiwari

AGM Data Engineering

at ACT FIBERNET

Founded 2008  •  Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Hadoop
Informatica
Qlikview
Datapipeline
Bengaluru (Bangalore)
9 - 14 yrs
₹20L - ₹36L / yr

Key  Responsibilities :

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the org
  • Manage technical resources around data technologies, including relational databases, NoSQL DBs, business intelligence databases, scripting languages, big data tools and technologies, and visualization tools.
  • Creation of a project plan, including timelines and critical milestones, in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective.

 

Skill Requirement :

  • Experience with data pipeline processes and tools
  • Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an established ETL tool, e.g. Informatica or Ab Initio
  • Deep understanding of big data systems like Hadoop, Spark, YARN, Hive, Ranger, Ambari
  • Deep knowledge of the Qlik ecosystem: QlikView, Qlik Sense and NPrinting
  • Proficiency in Python or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes :

  • Independence: Must have the ability to work on his/her own without constant direction or supervision. He/she must be self-motivated and possess a strong work ethic, continually putting forth extra effort.
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. You must be a strategic thinker/solution seller and should be able to think of integrated solutions (with field-force apps, customer apps, CCT solutions, etc.), approaching each unique situation/challenge in different ways using the same tools.
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as an incentive to move forward toward fulfilling commitments and achieving personal and team goals.
Job posted by
Sumit Sindhwani

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouses.

  • Implement data warehouse entities using common, re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Prior experience working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experience formulating ideas, building proofs-of-concept (POCs) and converting them into production-ready projects

  • Experience building and deploying applications on both on-premise and cloud-based infrastructure (AWS or Google Cloud)

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Job posted by
Newali Hazarika

Data Analyst

at a modern ayurvedic nutrition brand

Agency job
via Jobdost
Data Analytics
Data Analyst
MS-Excel
SQL
Python
R Language
Bengaluru (Bangalore)
1.5 - 3 yrs
₹5L - ₹5L / yr
About the role:
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join us as part of the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website. This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six. It represents a high-growth, important part of our overall e-commerce growth strategy.
The mandate here is to run an end-to-end sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting-edge product (website) that optimizes the customer’s journey as well as increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights to drive customer growth, retention and engagement. They will be required to set up and manage reports, test hypotheses and coordinate with stakeholders on a day-to-day basis.


Job Responsibilities:
Strategy and planning:
  • Work with the D2C functional leads and support analytics planning on a quarterly/annual basis
  • Identify reports and analyses to be run on a daily/weekly/monthly frequency
  • Drive planning for hypothesis-led testing of key metrics across the customer funnel
Analytics:
  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Analyze large amounts of information to discover trends and patterns
  • Work with business teams to prioritize business and information needs
  • Collaborate with engineering and product development teams to set up data infrastructure as needed

Reporting and communication:
  • Prepare reports/presentations that present actionable insights to drive business objectives
  • Set up live dashboards reporting key cross-functional metrics
  • Coordinate with various stakeholders to collect useful and required data
  • Present findings to business stakeholders to drive action across the organization
  • Propose solutions and strategies to business challenges

Requirements sought:
Must haves:
  • Bachelor’s/Master’s in Mathematics, Economics, Computer Science, Information Management, Statistics or a related field
  • High proficiency in MS Excel and SQL
  • Knowledge of one or more programming languages like Python/R; adept at queries, report writing and presenting findings
  • Strong analytical skills with the ability to collect, organize, analyze and disseminate significant amounts of information with attention to detail and accuracy; working knowledge of statistics and statistical methods
  • Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
  • Exceptional English written/verbal communication
  • A penchant for understanding consumer traits and behavior and a keen eye for detail

Good to have:
  • Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
  • Experience using business intelligence tools like Metabase, Tableau or Power BI is a plus
  • Experience in developing predictive models and machine learning algorithms
Job posted by
Sathish Kumar

Senior Data Engineer

at Balbix India Pvt Ltd

Founded 2015  •  Product  •  100-500 employees  •  Profitable
NOSQL Databases
MongoDB
Elastic Search
API
SQL
Python
Scala
Time series
InfluxDB
Gurgaon, Bengaluru (Bangalore)
3 - 10 yrs
₹15L - ₹25L / yr
WHO WE ARE
Balbix is the world's leading platform for cybersecurity posture automation. Using Balbix, organizations can discover, prioritize and mitigate unseen risks and vulnerabilities at high velocity. With seamless data collection and petabyte-scale analysis capabilities, Balbix is deployed and operational within hours, and helps to decrease breach risk immediately. 
 
Balbix counts many Global 1000 companies among its rapidly growing customer base. We are backed by John Chambers (the former CEO and Chairman of Cisco), top Silicon Valley VCs and global investors. We have been called magical, and have received rave reviews, customer testimonials, numerous industry awards, and recognition by Gartner as a Cool Vendor and by Frost & Sullivan.

ABOUT THIS ROLE
As a senior data engineer you will work on problems related to storing, analyzing, and manipulating very large cybersecurity and IT data sets. You will collaborate closely with our data scientists, threat researchers and network experts to solve real-world problems plaguing cybersecurity. This role requires excellent architecture, design, testing and programming skills as well as experience in large-scale data engineering.

You will:
  • Architect and implement modules for ingesting, storing and manipulating large data sets for a variety of cybersecurity use-cases.
  • Write code to provide backend support for data-driven UI widgets, web dashboards, workflows, search and API connectors.
  • Design and implement high-performance APIs between our frontend and backend components, and between different backend components.
  • Build production-quality solutions that balance complexity and performance.
  • Participate in the engineering life-cycle at Balbix, including designing high-quality UI components, writing production code, conducting code reviews and working alongside our backend infrastructure and reliability teams.
  • Stay current on the ever-evolving technology landscape of web-based UIs and recommend new systems for incorporation in our technology stack.
You are:
  • Product-focused and passionate about building truly usable systems
  • Collaborative and comfortable working across teams including data engineering, front end, product management, and DevOps
  • Responsible and like to take ownership of challenging problems
  • A good communicator, able to facilitate teamwork via good documentation practices
  • Comfortable with ambiguity and able to iterate quickly in response to an evolving understanding of customer needs
  • Curious about the world and your profession, and a constant learner 
You have:
  • BS in Computer Science or related field
  • At least 3 years of experience in the backend web stack (Node.js, MongoDB, Redis, Elastic Search, Postgres, Java, Python, Docker, Kubernetes, etc.)
  • SQL and NoSQL database experience
  • Experience building APIs (development experience using GraphQL is a plus)
  • Familiarity with issues of web performance, availability, scalability, reliability, and maintainability 
Job posted by
Garima Saraswat
Data Scientist / Sr Data Scientist

at Fragma Data

Data Science
Data Scientist
R Programming
Python
SQL
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹28L / yr
  • Banking Domain
  • Assist the team in building Machine Learning/AI/Analytics models on an open-source stack using Python and the Azure cloud stack.
  • Be part of the internal data science team at Fragma Data, which provides data science consultation to large organizations such as banks, e-commerce companies and social media companies on their scalable AI/ML needs on the cloud, and help build POCs and develop production-ready solutions.
  • Candidates will be provided with opportunities for training and professional certifications on the job in these areas: Azure Machine Learning services, Microsoft Customer Insights, Spark, Chatbots, DataBricks, NoSQL databases, etc.
  • Assist the team in conducting AI demos, talks and workshops occasionally for large audiences of senior stakeholders in the industry.
  • Work on large enterprise-scale projects end-to-end, involving domain-specific projects across banking, finance, e-commerce, social media, etc.
  • Keen interest in learning new technologies and the latest developments, and in applying them to assigned projects.
Desired Skills
  • Professional hands-on coding experience in Python: over 1 year for Data Scientist, and over 3 years for Sr Data Scientist.
  • This is primarily a programming/development-oriented role, so strong programming skills in writing object-oriented, modular code in Python and experience pushing projects to production are important.
  • Strong foundational knowledge and professional experience in:
  • Machine Learning (compulsory)
  • Deep Learning (compulsory)
  • Strong knowledge of at least one of: Natural Language Processing, Computer Vision, Speech Processing or Business Analytics
  • Understanding of database technologies and SQL (compulsory)
  • Knowledge of the following frameworks:
  • Scikit-learn (compulsory)
  • Keras/TensorFlow/PyTorch (at least one is compulsory)
  • API development in Python for ML models (good to have)
  • Excellent communication skills are necessary to succeed in this role, as it has high external visibility and offers multiple opportunities to present data science results to large external audiences, including VPs, Directors and CXOs. Communication skills will therefore be a key consideration in the selection process.
Job posted by
Harpreet kour

Big Data Engineer

at a Chennai-based product company

Big Data
Hadoop
Kafka
Spark
Amazon Web Services (AWS)
Remote only
4 - 8 yrs
₹10L - ₹15L / yr
  • Hands-on programming expertise in Java OR Python
  • Strong production experience with Spark (minimum of 1-2 years)
  • Experience building data pipelines using Big Data technologies (Hadoop, Spark, Kafka, etc.) on large-scale unstructured data sets
  • Working experience and good understanding of public cloud environments (AWS OR Azure OR Google Cloud)
  • Experience with IAM policy and role management is a plus
Job posted by
Ramya D
Data Engineer

at Recko

Big Data
Hadoop
Spark
Apache Hive
Data engineering
Google Cloud Platform (GCP)
Microsoft Windows Azure
Amazon Web Services (AWS)
Apache Spark
Bengaluru (Bangalore)
3 - 7 yrs
₹16L - ₹40L / yr

Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

 

What are we looking for:

  1. 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL or MSSQL, and experience working with Big Data frameworks/platforms/data stores like Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos, etc.

  2. Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud

  3. 2+ years of experience with public cloud services such as AWS, Azure or GCP, and languages like Java/Python, etc.

  4. 2+ years of development experience with Amazon Redshift, Google BigQuery or Azure data warehouse platforms preferred

  5. Knowledge of statistical analysis tools like R, SAS, etc.

  6. Familiarity with any data visualization software

  7. A growth mindset and a passion for building things from the ground up, and most importantly, you should be fun to work with

As a data engineer at Recko, you will:

  1. Create and maintain optimal data pipeline architecture.

  2. Assemble large, complex data sets that meet functional / non-functional business requirements.

  3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

  5. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

  6. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

  7. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

  8. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

  9. Work with data and analytics experts to strive for greater functionality in our data systems.

 

About Recko: 

Recko was founded in 2017 to organise the world’s transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko Platform, businesses can build, integrate and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo, Acommerce, etc. use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive and time-consuming tasks around day-to-day operations.

 

Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures. Traditionally, enterprise software has always been built around functionality. We believe software is an extension of one’s capability, and it should be delightful and fun to use.

 

Working at Recko: 

We believe that great companies are built by amazing people. At Recko, we are a group of young Engineers, Product Managers, Analysts and Business folks who are on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 60+ members strong, with stellar experience across fintech, e-commerce and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle, etc. We are growing aggressively across verticals.

Job posted by
Chandrakala M

Data Engineer

at a service-based company

pandas
PySpark
Big Data
Data engineering
Performance optimization
OO concepts
SQL
Python
Remote only
3 - 8 yrs
₹8L - ₹13L / yr
Data pre-processing, data transformation, data analysis and feature engineering, plus performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.). The candidate must have expertise in ADF (Azure Data Factory) and be well versed with Python.

Required skills:
  • Bachelor’s in Computer Science, Data Science, Computer Engineering, IT or equivalent
  • Fluency in Python (Pandas), PySpark, SQL, or similar
  • Azure Data Factory experience (min 12 months)
  • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
  • Experience in production optimization and end-to-end performance tracing (technical root-cause analysis)
  • Ability to work independently, with demonstrated experience in project or program management
  • Azure experience, with the ability to translate data scientists’ Python code and make it efficient (production-ready) for cloud deployment
Job posted by
Sonali Kamani

Data Visualization

at TechUnity Software Systems India Pvt Ltd

Founded 1996  •  Services  •  20-100 employees  •  Profitable
Data Visualization
SQL
Stackless Python
R Programming
matplotlib
ggplot2
seaborn
Shiny
Dash
Coimbatore
2 - 5 yrs
₹3L - ₹4L / yr

We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions.
Responsibilities:

  • Identify valuable data sources and automate collection processes
  • Undertake preprocessing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams

Requirements:

  • Proven experience as a Data Scientist or Data Analyst
  • Experience in data mining
  • Understanding of machine learning and operations research
  • Knowledge of SQL, Python, R, ggplot2, matplotlib, seaborn, Shiny and Dash; familiarity with Scala, Java or C++ is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks
  • Analytical mind and business acumen
  • Strong math skills in statistics and algebra
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • BSc/BE in Computer Science, Engineering or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
Job posted by
Prithivi s