Business Intelligence Lead

at Kaleidofin

Posted by Poornima B
Chennai, Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Full time
Skills
Business Intelligence (BI)
PowerBI
Python
SQL
R Language
Tableau
Data management
We are looking for a leader to design, develop and deliver strategic, data-centric insights leveraging next-generation analytics and BI technologies. We want someone who is data- and insight-centric rather than report-centric; someone wishing to make an impact by enabling innovation and growth, with passion for what they do and a vision for the future.

Responsibilities:

  • Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
  • Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
  • Become an expert on data and trends, both internal and external to Kaleidofin.
  • Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
  • Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
  • Automate scheduling and distribution of reports and support auditing and value realization.
  • Partner with enterprise architects to define and ensure that proposed Business Intelligence solutions adhere to an enterprise reference architecture.
  • Design robust data-centric solutions and architecture that incorporate technology and strong BI solutions to scale up and eliminate repetitive tasks.
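To illustrate the dashboards-and-metrics work described above, here is a minimal, hypothetical sketch (all table and column names are invented, not Kaleidofin's actual schema) of the kind of recurring metric query a BI lead might automate:

```python
import sqlite3

# Hypothetical sketch (invented table/column names): the kind of recurring
# "state of the business" query behind an automated metrics dashboard.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER, disbursed_on TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(1, "2023-01-15", 5000.0), (2, "2023-01-20", 7500.0), (3, "2023-02-03", 4000.0)],
)

# Aggregate disbursements by month -- a scheduled job would run this on a
# cadence and push the result to a dashboard or report distribution list.
rows = conn.execute(
    """
    SELECT substr(disbursed_on, 1, 7) AS month, SUM(amount) AS total
    FROM loans
    GROUP BY month
    ORDER BY month
    """
).fetchall()

for month, total in rows:
    print(month, total)
```

In practice the same query would live in a BI tool (PowerBI, Tableau, Quicksight) with scheduled refresh rather than in a script.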

Requirements:

  • Experience leading development efforts through all phases of SDLC.
  • 5+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
  • Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
  • Hands on experience in SQL, data management, and scripting (preferably Python).
  • Strong data visualisation design skills, data modeling and inference skills.
  • Hands-on, with experience managing small teams.
  • Financial services experience preferred, but not mandatory.
  • Strong knowledge of architectural principles, tools, frameworks, and best practices.
  • Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
  • Team-handling experience preferred for candidates with 5+ years of experience.
  • Notice period less than 30 days.

About Kaleidofin

Kaleidofin is a neobank for the informal sector, providing solutions that are tailored to the customer’s goals and intuitive to use. We are working towards creating fair and transparent financial solutions that can reach millions of customers and enterprises in India that don’t have easy access to formal financial planning. Our name “kaleidofin” is inspired by the power of financial solutions to enable beautiful possibilities of a future life for each customer.


We believe that everyone deserves and requires access to financial solutions that are intuitive and easy to use, flexible and personalised to real goals that can make financial progress and financial freedom possible for everyone.

We believe financial solutions can provide customers powerful tools that solve their real life goals and challenges. For too long, the financial services industry has been a manufacturer producing products and fitting customers to their products. At kaleidofin, we want to flip this around, keep the customer at the centre and provide mass tailored solutions that are best suited to meet the customer’s own goals/challenges.

The demand for financial services is, therefore, a demand derived from an underlying goal and aspiration of an individual. There is an urgent need to make financial services and solutions intuitive for customers and embed them in their everyday life.

kaleidofin will leverage the full India stack, existing networks, analytics, structuring and user-centred design to drive outcomes for customers; in the process, we will also help enrich the digital assets of each such customer.

Our approach is
  • To combine distinct financial products (credit, investment, insurance, savings) to form a solution that actually both resonates with and works for the customer.
  • To build customer profiling, underwriting, solution design and machine learning suitability algorithms to solve this gigantic customer problem.
  • To leverage networks such as agents, cooperatives, self-help groups, temp agencies, MFIs to deliver suitable solutions at enormous scale.

In a very short time span, global investors such as Oiko Credit, Flourish, Omidyar Network and Blume Ventures have supported Kaleidofin’s well thought out business model with $8 million in seed and Series A funding.

The company won the Amazon AI Conclave award for Fintech, was one of only ten startups chosen for the Google LaunchPad Accelerator program in 2019, was recognized as India’s Most Innovative Wealth, Asset and Investment Management Service/Product by the Internet & Mobile Association of India (IAMAI), and was selected to present at a United Nations General Assembly Special Task Force event.

With its focus to harness mobile technology to deliver a paperless experience as well as its focus to harness technology and analytics to predict the right product for the right customer, Kaleidofin aims to become a leading FinTech player bringing financial solutions to everyone.

Founded
2018
Type
Products & Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

Data Engineer

at Series 'A' funded Silicon Valley based BI startup

Agency job
via Qrata
Data engineering
Data Engineer
Scala
Data Warehouse (DWH)
Big Data
Spark
SQL
Python
Apache Spark
Amazon Web Services (AWS)
ETL
Linux/Unix
Bengaluru (Bangalore)
4 - 6 yrs
₹30L - ₹45L / yr
It is the leader in capturing technographics-powered buying intent, helping companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.

Role – Data Engineer

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
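As a toy illustration of the extract-transform-load responsibilities above (not this company's actual stack; a real pipeline would use Spark, Airflow or Glue as listed in the requirements), a minimal Python ETL step might look like:

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, validate/cast them, load the
# clean subset into a warehouse table. All names are illustrative.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE raw_events (user TEXT, value TEXT)")
source.executemany("INSERT INTO raw_events VALUES (?, ?)",
                   [("a", "10"), ("b", "not-a-number"), ("a", "5")])

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE events (user TEXT, value INTEGER)")

# Extract
rows = source.execute("SELECT user, value FROM raw_events").fetchall()

# Transform: drop rows that fail validation, cast string values to int
clean = []
for user, value in rows:
    try:
        clean.append((user, int(value)))
    except ValueError:
        continue  # in production, route bad rows to a dead-letter store

# Load
warehouse.executemany("INSERT INTO events VALUES (?, ?)", clean)

total = warehouse.execute("SELECT SUM(value) FROM events").fetchone()[0]
print(total)  # 15
```

The same extract/transform/load shape scales up when the connections are swapped for Spark dataframes or Glue jobs.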

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Read more
Job posted by
Prajakta Kulkarni

GCP Data Engineer, WFH

at Multinational Company

Agency job
via Telamon HR Solutions
Data engineering
Google Cloud Platform (GCP)
Python
Remote only
5 - 15 yrs
₹27L - ₹30L / yr

• The incumbent should have hands-on experience in data engineering and GCP data technologies.

• Should work with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.

• Should work with Agile and DevOps techniques and implementation approaches in the delivery.

• Showcase your GCP data engineering experience when communicating with clients on their requirements, turning these into technical data solutions.

• Build and deliver data solutions using GCP products and offerings.

• Have hands-on experience with Python.

• Experience with SQL or MySQL; experience with Looker is an added advantage.

Job posted by
Praveena Sagar

Data Science

at Amagi Media Labs

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Data Science
Machine Learning (ML)
Python
SQL
Artificial Intelligence (AI)
Chennai
10 - 12 yrs
Best in industry
Job Title: Data Science Manager
Job Location: India
Job Summary
We at Condé Nast are looking for a data science manager, primarily for the content intelligence workstream, although there might be some overlap with other workstreams. The position is based out of Chennai and shall report to the head of the data science team, Chennai.
Responsibilities:
1. Ideate new opportunities within the content intelligence workstream where data science can be applied to increase user engagement
2. Partner with business and translate business and analytics strategies into multiple short-term and long-term projects
3. Lead data science teams to build quick prototypes to check feasibility and value to business, and present to business
4. Formulate the business problem into a machine learning/AI problem
5. Review and validate models, and help improve the accuracy of models
6. Socialize and present the model insights in a manner that business can understand
7. Lead and own the entire value chain of a project/initiative life cycle - interface with business, understand the requirements/specifications, gather data, prepare it, train, validate and test the model, create business presentations to communicate insights, monitor/track the performance of the solution and suggest improvements
8. Work closely with ML engineering teams to deploy models to production
9. Work closely with data engineering/services/BI teams to help develop data stores and intuitive visualizations for the products
10. Set up career paths and learning goals for reportees, and mentor them
Required Skills:
1. 5+ years of experience in leading Data Science & Advanced Analytics projects with a focus on building recommender systems, and 10-12 years of overall experience
2. Experience in leading data science teams to implement recommender systems using content-based, collaborative filtering and embedding techniques
3. Experience in building propensity models, churn prediction, NLP - language models, embeddings, recommendation engines, etc.
4. Master's degree with an emphasis in a quantitative discipline such as statistics, engineering, economics or mathematics, or a degree program in data science/machine learning/artificial intelligence
5. Exceptional communication skills - verbal and written
6. Moderate-level proficiency in SQL and Python
7. Demonstrated continuous learning through external certifications or degree programs in machine learning and artificial intelligence
8. Knowledge of machine learning algorithms and understanding of how they work
9. Knowledge of reinforcement learning
Preferred Qualifications
1. Expertise in libraries for data science - PySpark (Databricks), scikit-learn, pandas, NumPy, matplotlib, PyTorch/TensorFlow/Keras, etc.
2. Working knowledge of deep learning models
3. Experience in ETL/data engineering
4. Prior experience in the e-commerce, media & publishing domain is a plus
5. Experience in digital advertising is a plus
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right move
to invest heavily in understanding this data and formed a whole new Data team entirely
dedicated to data processing, engineering, analytics, and visualization. This team helps drive
engagement, fuel process innovation, further content enrichment, and increase market
revenue. The Data team aimed to create a company culture where data was the common
language and facilitate an environment where insights shared in real-time could improve
performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team
at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and
boost online revenue. We are broadly divided into four groups, Data Intelligence, Data
Engineering, Data Science, and Operations (including Product and Marketing Ops, Client
Services) along with Data Strategy and monetization. The teams built capabilities and products
to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are Condé
Nast, and It Starts Here.
Job posted by
Rajesh C

Data Engineer - Python

at Rudhra Info Solutions

Founded 2010  •  Products & Services  •  20-100 employees  •  Profitable
Data engineering
Python
Django
SQL
Bengaluru (Bangalore), Chennai
5 - 6 yrs
₹7L - ₹15L / yr
  • Analyze and organize raw data 
  • Build data systems and pipelines
  • Evaluate business needs and objectives
  • Interpret trends and patterns
  • Conduct complex data analysis and report on results 
  • Build algorithms and prototypes
  • Combine raw information from different sources
  • Explore ways to enhance data quality and reliability
  • Identify opportunities for data acquisition
  • Should have experience as a senior Python/Django microservices developer, with a Financial Services/Investment Banking background.
  • Develop analytical tools and programs
  • Collaborate with data scientists and architects on several projects
  • Should have 5+ years of experience as a data engineer or in a similar role
  • Technical expertise with data models, data mining, and segmentation techniques
  • Should have experience with programming languages such as Python
  • Hands-on experience with SQL database design
  • Great numerical and analytical skills
  • Degree in Computer Science, IT, or similar field; a Master’s is a plus
  • Data engineering certification (e.g. IBM Certified Data Engineer) is a plus
Job posted by
Monica Devi
SQL
Data engineering
Java
ETL
ELT
Python
Scala
Big Data
Hadoop
Spark
Hyderabad
4 - 9 yrs
₹20L - ₹26L / yr

Responsibilities and Tasks:

  • Understand the Business Problem and the Relevant Data

  • Maintain an intimate understanding of company and department strategy

  • Translate analysis requirements into data requirements

  • Identify and understand the data sources that are relevant to the business problem

  • Develop conceptual models that capture the relationships within the data

  • Define the data-quality objectives for the solution

  • Be a subject matter expert in data sources and reporting options

 

Architect Data Management Systems:

  • Use understanding of the business problem and the nature of the data to select the appropriate data management system (Big Data, Cloud DW, Cloud (GCP/AWS/Azure), OLTP, OLAP, etc.)

  • Design and implement optimum data structures in the appropriate data management system (Cloud DW, Cloud (GCP/AWS/Azure), Hadoop, SQL Server/Oracle, etc.) to satisfy the data requirements

  • Plan methods for archiving/deletion of information

 

Develop, Automate, and Orchestrate an Ecosystem of ETL Processes for Varying Volumes of Data:

  • Identify and select the optimum methods of access for each data source (real-time/streaming, delayed, static)

  • Determine transformation requirements and develop processes to bring structured and unstructured data from the source to a new physical data model

  • Develop processes to efficiently load the transformed data into the data management system

 

Prepare Data to Meet Analysis Requirements

  • Work with the data scientist to implement strategies for cleaning and preparing data for analysis (e.g., outliers, missing data, etc.)

  • Develop and code data extracts

  • Follow standard methodologies to ensure data quality and data integrity

  • Ensure that the data is fit to use for data science applications
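A minimal sketch of the cleaning strategies mentioned above, on invented data (real work would use pandas or Spark at scale): impute missing readings with the median, then flag outliers with a robust median-absolute-deviation (MAD) rule rather than a plain standard-deviation cutoff, which a single extreme point can inflate.

```python
import statistics

# Invented sensor-style readings with missing values and one extreme point.
raw = [12.0, 14.0, None, 13.0, 15.0, 250.0, 14.5, None]

# Impute missing values with the median of observed values.
observed = [x for x in raw if x is not None]
median = statistics.median(observed)  # 14.25
filled = [median if x is None else x for x in raw]

# MAD-based outlier rule: robust because the extreme point barely moves
# the median, unlike the mean/stdev it would dominate.
mad = statistics.median(abs(x - median) for x in filled)
outliers = [x for x in filled if abs(x - median) > 10 * mad]
clean = [x for x in filled if abs(x - median) <= 10 * mad]

print(outliers)  # [250.0]
```

The 10×MAD threshold here is an illustrative choice; the right cutoff is a judgment call made with the data scientist, as the bullet above describes.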

 

Qualifications and Experience:

  • 5 - 9 years of experience developing, delivering, and/or supporting data engineering, advanced analytics or business intelligence solutions

  • Experienced in developing ETL/ELT processes using Apache NiFi and Snowflake

  • Significant experience with big data processing and/or developing applications and data sources/pipelines via Hadoop, Yarn, Hive, Spark, Pig, Sqoop, MapReduce, HBASE, Flume, etc.

  • Data Engineering and Analytics on Google Cloud Platform using BigQuery, Cloud Storage, Cloud SQL, Cloud Pub/Sub, Cloud DataFlow, Cloud Composer, etc., or an equivalent cloud platform

  • Familiarity with software architecture (data structures, data schemas, etc.)

  • Strong working knowledge of databases (Oracle, MSSQL, etc.) including SQL and NoSQL.

  • Strong mathematics background, analytical, problem solving, and organizational skills

  • Strong communication skills (written, verbal and presentation)

  • Experience working in a global, multi-functional environment

  • Minimum of 2 years’ experience in any of the following: at least one high-level, object-oriented language (e.g. Java/Python/Perl/Scala, etc.); one or more web programming languages (PHP, MySQL, Python, Perl, JavaScript, etc.); one or more data extraction tools (Apache NiFi/Informatica/Talend, etc.)

  • Software development using programming languages like Python/Java/Scala

  • Ability to travel as needed

Job posted by
sameer N

Sr Business Analyst

at Carsome

Founded 2015  •  Product  •  1000-5000 employees  •  Raised funding
SQL
Python
Business Analysis
Statistical Modeling
MS-Office
Tableau
Tier 1
Stakeholder management
Startups
Problem solving
Market Research
Kuala Lumpur
3 - 5 yrs
₹20L - ₹25L / yr

Your Day-to-Day

  1. Derive Insights and drive major strategic projects to improve Business Metrics and take responsibility for cost efficiency and Revenue management across the country
  2. Perform Market research, Post Mortem analyses on competitor expansion and Market Penetration patterns. 
  3. Provide in-depth business analysis and data insights for internal stakeholders to help improve business. Derive and launch projects in order to reduce the gaps between targeted and projected business metrics
  4. Responsible for optimizing Carsome’s C2B and B2C customer acquisition and Dealer retention funnel. Work closely with Marketing and Tech teams to create, produce and implement creative digital marketing campaigns and drive CRM initiatives and strategies 
  5. Analyse revenue flows and process large datasets to gather process insights and propose process improvement ideas for Carsome across SE Asia
  6. Lead commercial projects & process mapping, from conceptualization to completion, to build or re-engineer business models, tools and processes.
  7. Experience in analysis and insights dealing with Unit Economics, COGS and P&L is preferred, but not mandatory
  8. Use Business Intelligence and Data Science tools to answer the appropriate business problems using SQL, Tableau or Python.
  9. Coordinate with HQ Data Insights Team and manage internal stakeholders across departments to ensure the smooth delivery of strategic projects
  10. Work across different departments/functions (BI,DE, tech, pricing, finance, operations, marketing, CS,CX) and also on high impact projects and support business expansion initiatives
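Funnel analysis of the kind described above (optimizing acquisition and retention funnels) can be sketched in plain Python; the stage names and counts below are invented for illustration:

```python
# Illustrative acquisition funnel (invented stage counts): compute the
# conversion rate between consecutive stages to find the biggest leak.
funnel = [("visited", 10000), ("inspected", 2500), ("offered", 1000), ("sold", 400)]

rates = []
for (prev_stage, prev_n), (stage, n) in zip(funnel, funnel[1:]):
    rates.append((f"{prev_stage}->{stage}", n / prev_n))

for step, rate in rates:
    print(f"{step}: {rate:.0%}")
```

In the role itself this calculation would typically live in SQL or Tableau against production data rather than in a script.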





Your Know-How


  • At least a Bachelor's Degree in Accounting/Finance/Business or the equivalent. 
  • 3-5 years of experience in strategy / consulting / analytical / project management roles; experience in e-commerce, start-ups or unicorns (CARS24, OLA, SWIGGY, FLIPKART, OYO) or entrepreneurial experience preferred, plus at least 2 years of experience leading a team
  • Top-notch academics from a Tier 1 college (IIM / IIT/ NIT)
  • Must have SQL/PostgreSQL/Tableau Experience. 
  • Excellent Market Research, reporting and analytical skills, including carrying out weekly and monthly reporting
  • Holds experience in working with Data/Business Intelligence Team
  • Analytical mindset with ability to present data in a structured and informative way
  • Enjoy a fast-paced environment and can align business objectives with product priorities
  • Good to have : Financial modelling, Developing financial forecasts , development of Financial - strategic plan/framework
Job posted by
Piyush Palkar

Tableau developer

at IT Consulting, System Integrator & Software Services Company

Agency job
via Jobdost
Tableau
SQL
PL/SQL
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
₹5L - ₹12L / yr
Responsibilities


In this role, candidates will be responsible for developing Tableau reports, should be able to write effective and scalable code, and will improve the functionality of existing reports/systems.

·       Design stable, scalable code.

·       Identify potential improvements to the current design/processes.

·       Participate in multiple project discussions as a senior member of the team.

·       Serve as a coach/mentor for junior developers.


Minimum Qualifications

·       3 - 8 Years of experience

·       Excellent written and verbal communication skills

 

Must have skills

·       Meaningful work experience

·       Extensively worked on BI Reporting tool: Tableau for development of reports to fulfill the end user requirements.

·       Experienced in interacting with business users to analyze the business process and requirements and redefining requirements into visualizations and reports.

·       Must have knowledge of the selection of appropriate data visualization strategies (e.g., chart types) for specific use cases. Ability to showcase complete dashboard implementations that demonstrate visual standard methodologies (e.g., color themes, visualization layout, interactivity, drill-down capabilities, filtering, etc.).

·       You should be an Independent player and have experience working with senior leaders.

·       Able to explore options and suggest new solutions and visualization techniques to the customer.

·       Experience crafting joins, including joins with custom SQL, blending data from different data sources using Tableau Desktop.

·       Using sophisticated calculations in Tableau Desktop (Aggregate, Date, Logical, String, Table, LOD expressions).

·       Working with relational data sources (like Oracle / SQL Server / DB2) and flat files.

·       Optimizing user queries and dashboard performance.

·       Knowledge in SQL, PL/SQL.

·       Knowledge in crafting DB views and materialized views.

·       Excellent verbal and written communication skills and interpersonal skills are required.

·       Excellent documentation and presentation skills; should be able to build business process mapping document; functional solution documents and own the acceptance/signoff process from E2E

·       Ability to make the right graph choices, use the data blending feature, and connect to several DB technologies.

·       Must stay up to date on new and coming visualization technologies. 

 

Pref location: Chennai (priority)/ Bengaluru  

Job posted by
Ankitha Vyas

Data Engineer

at Surplus Hand

Agency job
via SurplusHand
Apache Hadoop
Apache Hive
PySpark
Big Data
Java
Spark
SQL
Apache HBase
Remote, Hyderabad
3 - 5 yrs
₹10L - ₹14L / yr
Tech Skills:
• Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Should have good hands-on experience with Spark (Spark with Java/PySpark)
• Hive
• Must be good with SQL (Spark SQL / HiveQL)
• Application design, software development and automated testing
Environment Experience:
• Experience with implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit.
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / shell scripting
• Web services, open API development, and REST concepts
Job posted by
Anju John

Data Scientist

at Pluto Seven Business Solutions Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Statistical Modeling
Data Science
TensorFlow
Python
Machine Learning (ML)
Deep Learning
Data Analytics
Google Cloud Storage
Scikit-Learn
Regression analysis
Bengaluru (Bangalore)
2 - 7 yrs
₹4L - ₹20L / yr
Data Scientist: Pluto7 is a services and solutions company focused on building ML, AI, Analytics, and IoT tailored solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We are a Google premium partner in AI & ML, which means you'll have the opportunity to work and collaborate with folks from Google. Are you an innovator with a passion to work with data and find insights, and an inquisitive mind with a constant yearning to learn new ideas? Then we are looking for you. As a Pluto7 Data Scientist, you will be one of the key members of our innovative artificial intelligence and machine learning team. You are expected to be unfazed by large volumes of data, love to apply various models, and use technology to process and filter data for analysis.

Responsibilities:
  • Build and optimize machine learning models.
  • Work with large/complex datasets to solve difficult and non-routine analysis problems, applying advanced analytical methods as needed.
  • Build and prototype data pipelines for analysis at scale.
  • Work cross-functionally with Business Analysts and Data Engineers to help develop cutting-edge and innovative artificial intelligence and machine learning models.
  • Make recommendations for selections of machine learning models.
  • Drive accuracy levels to the next stage for the given ML models.
  • Experience in developing visualisations; good exposure to exploratory data analysis.
  • Strong experience in statistics and ML algorithms.

Minimum qualifications:
  • 2+ years of relevant work experience in ML and advanced data analytics (e.g., as a Machine Learning Specialist / Data Scientist).
  • Strong experience using machine learning and artificial intelligence frameworks such as TensorFlow, scikit-learn and Keras using Python.
  • Good Python/R/SAS programming skills.
  • Understanding of cloud platforms like GCP, AWS, or others.

Preferred qualifications:
  • Work experience in building data pipelines to ingest, cleanse and transform data.
  • Applied experience with machine learning on large datasets, and experience translating analysis results into business recommendations.
  • Demonstrated skills in selecting the right statistical tools given a data analysis problem.
  • Demonstrated effective written and verbal communication skills.
  • Demonstrated willingness to both teach others and learn new techniques.

Work location: Bangalore
Job posted by
Sindhu Narayan

Data ETL Engineer

at Chargebee

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
ETL
Python
Relational Database (RDBMS)
RESTful APIs
Big Data
MySQL
Data Warehouse (DWH)
Chennai
1 - 3 yrs
₹5L - ₹12L / yr
Responsibilities:
  • Design and develop ETL frameworks and data pipelines in Python 3.
  • Orchestrate complex data flows from various data sources (like RDBMS, REST APIs, etc.) to the data warehouse and vice versa.
  • Develop app modules (in Django) for enhanced ETL monitoring.
  • Devise technical strategies for making data seamlessly available to the BI and Data Science teams.
  • Collaborate with engineering, marketing, sales, and finance teams across the organization and help Chargebee develop complete data solutions.
  • Serve as a subject-matter expert for available data elements and analytic capabilities.

Qualification:
  • Expert programming skills with the ability to write clean and well-designed code.
  • Expertise in Python, with knowledge of at least one Python web framework.
  • Strong SQL knowledge, and high proficiency in writing advanced SQL.
  • Hands-on experience in modeling relational databases.
  • Experience integrating with third-party platforms is an added advantage.
  • Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
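A toy sketch of the API-to-warehouse flow described above, with an inlined stand-in for a REST response (the payload fields and table schema are invented, not Chargebee's actual API):

```python
import json
import sqlite3

# Toy API-to-warehouse flow (hypothetical payload shape). A real pipeline
# would page through a REST API; here the JSON response is inlined.
api_response = json.loads(
    '[{"id": 1, "plan": "basic", "mrr_cents": 900},'
    ' {"id": 2, "plan": "pro", "mrr_cents": 4900}]'
)

warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE subscriptions (id INTEGER PRIMARY KEY, plan TEXT, mrr REAL)"
)

# Transform (cents -> currency units) and load with named parameters.
warehouse.executemany(
    "INSERT INTO subscriptions VALUES (:id, :plan, :mrr)",
    [{"id": r["id"], "plan": r["plan"], "mrr": r["mrr_cents"] / 100}
     for r in api_response],
)

total_mrr = warehouse.execute("SELECT SUM(mrr) FROM subscriptions").fetchone()[0]
print(total_mrr)  # 58.0
```

The ETL-monitoring Django modules the listing mentions would sit around flows like this, recording row counts and failures per run.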
Job posted by
Vinothini Sundaram