The Client is the world’s largest media investment company. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce, and traditional channels. We are currently looking for a Manager Analyst – Analytics to join us. In this role, you will work on various projects for the in-house team across data management, reporting, and analytics.
Responsibilities:
• Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
• Develop data extraction and manipulation code based on business rules
• Design and construct data stores and the procedures for their maintenance
• Develop and maintain strong relationships with stakeholders
• Write high-quality code per prescribed standards
• Participate in internal projects as required
Requirements:
• 2-5 years of strong experience working with SQL, Python, and ETL development
• Strong experience writing complex SQL queries
• Good communication skills
• Good experience working with a BI tool such as Tableau or Power BI
• Familiarity with various cloud technologies and their data and data-warehousing offerings
• Snowflake and AWS experience are good to have
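As a rough illustration of the kind of "complex SQL" this role calls for, here is a small sketch using Python's built-in sqlite3 module. The table, columns, and figures are hypothetical sample data, not anything from the actual role:

```python
import sqlite3

# Hypothetical campaign-spend data; all names and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (channel TEXT, campaign TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO spend VALUES (?, ?, ?)",
    [
        ("search", "brand", 120.0),
        ("search", "generic", 80.0),
        ("social", "awareness", 50.0),
        ("social", "retargeting", 30.0),
    ],
)

# A moderately complex query: per-channel totals, restricted to channels
# whose total spend exceeds the average channel total (a nested aggregate).
rows = conn.execute(
    """
    SELECT channel, SUM(amount) AS total
    FROM spend
    GROUP BY channel
    HAVING SUM(amount) > (
        SELECT AVG(chan_total) FROM (
            SELECT SUM(amount) AS chan_total FROM spend GROUP BY channel
        )
    )
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # search totals 200.0, above the 140.0 average; social (80.0) is not
```

Real work in this role would run queries like this against a warehouse such as Snowflake rather than SQLite, but the query shape carries over.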
Minimum qualifications:
• B. Tech./MCA or equivalent preferred
• At least 2 years of hands-on experience with big data, ETL development, and data processing
Job Title: Credit Risk Analyst
Company: FatakPay FinTech
Location: Mumbai, India
Salary Range: INR 8 - 15 Lakhs per annum
Job Description:
FatakPay, a leading player in the fintech sector, is seeking a dynamic and skilled Credit Risk Analyst to join our team in Mumbai. This position is tailored for professionals who are passionate about leveraging technology to enhance financial services. If you have a strong background in engineering and a keen eye for risk management, we invite you to be a part of our innovative journey.
Key Responsibilities:
- Conduct thorough risk assessments by analyzing borrowers' financial data, including financial statements, credit scores, and income details.
- Develop and refine predictive models using advanced statistical methods to forecast loan defaults and assess creditworthiness.
- Collaborate in the formulation and execution of credit policies and risk management strategies, ensuring compliance with regulatory standards.
- Monitor and analyze the performance of loan portfolios, identifying trends, risks, and opportunities for improvement.
- Stay updated with financial regulations and standards, ensuring all risk assessment processes are in compliance.
- Prepare comprehensive reports on credit risk analyses and present findings to senior management.
- Work closely with underwriting, finance, and sales teams to provide critical input influencing lending decisions.
- Analyze market trends and economic conditions, adjusting risk assessment models and strategies accordingly.
- Utilize cutting-edge financial technologies for more efficient and accurate data analysis.
- Engage in continual learning to stay abreast of new tools, techniques, and best practices in credit risk management.
Qualifications:
- Minimum qualification: B.Tech or Engineering degree from a reputed institution.
- 2-4 years of experience in credit risk analysis, preferably in a fintech environment.
- Proficiency in data analysis, statistical modeling, and machine learning techniques.
- Strong analytical and problem-solving skills.
- Excellent communication skills, with the ability to present complex data insights clearly.
- A proactive approach to work in a fast-paced, technology-driven environment.
- Up-to-date knowledge of financial regulations and compliance standards.
We look forward to discovering how your expertise and innovative ideas can contribute to the growth and success of FatakPay. Join us in redefining the future of fintech!
About Databook:
- Great salespeople let their customers’ strategies do the talking.
Databook’s award-winning Strategic Relationship Management (SRM) platform uses advanced AI and NLP to empower the world’s largest B2B sales teams to create, manage, and maintain strategic relationships at scale. The platform ingests and interprets billions of financial and market data signals to generate actionable sales strategies that connect the seller’s solutions to a buyer’s financial pain and urgency.
The Opportunity
We're seeking Junior Engineers to support and develop Databook’s capabilities. Working closely with our seasoned engineers, you'll contribute to crafting new features and ensuring our platform's reliability. If you're eager about playing a part in building the future of customer intelligence, with a keen eye towards quality, we'd love to meet you!
Specifically, you'll:
- Participate in various stages of the engineering lifecycle alongside our experienced engineers.
- Assist in maintaining and enhancing features of the Databook platform.
- Collaborate with various teams to comprehend requirements and aid in implementing technology solutions.
Please note: As you progress and grow with us, you might be introduced to on-call rotations to handle any platform challenges.
Working Arrangements:
- This position offers a hybrid work mode, allowing employees to work both remotely and in-office as mutually agreed upon.
What we're looking for
- 1-2+ years of experience as a Data Engineer
- Bachelor's degree in Engineering
- Willingness to work across different time zones
- Ability to work independently
- Knowledge of cloud (AWS or Azure)
- Exposure to distributed systems such as Spark, Flink or Kafka
- Fundamental knowledge of data modeling and optimizations
- Minimum of one year of experience using Python as a Software Engineer
- Knowledge of SQL (Postgres) databases would be beneficial
- Experience building analytics dashboards
- Familiarity with RESTful APIs and/or GraphQL is welcomed
- Hands-on experience with NumPy, Pandas, and spaCy would be a plus
- Exposure to or working experience with GenAI (LLMs in general) and LLMOps would be a plus
- Highly fluent in both spoken and written English
Ideal candidates will also have:
- Self-motivated with great organizational skills.
- Ability to focus on small and subtle details.
- Willingness to learn and adapt in a rapidly changing environment.
- Excellent written and oral communication skills.
Join us and enjoy these perks!
- Competitive salary with bonus
- Medical insurance coverage
- 5 weeks leave plus public holidays
- Employee referral bonus program
- Annual learning stipend to spend on books, courses or other training materials that help you develop skills relevant to your role or professional development
- Complimentary subscription to Masterclass
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions for product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years of experience developing scripts with Python.
● Know your way around RESTful APIs (able to integrate; publishing not necessary).
● Familiar with pulling and pushing files to/from SFTP and AWS S3.
● Experience with any cloud solution, including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data in relational databases.
● Familiarity with Linux (and a Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL)
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)
● Hands-on experience in Airflow
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing, and indexing data
● Experience reading data from Kafka topics (both live streams and offline)
● Experience with PySpark and DataFrames
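The pipelining and ETL items above can be sketched end-to-end in plain Python. This is a minimal illustration with hypothetical sample data, not a production pipeline; real work here would orchestrate steps like these as Airflow tasks and load into Hadoop/BigQuery-scale storage rather than an in-memory SQLite database:

```python
import csv
import io
import sqlite3

# --- Extract: parse raw CSV input (hypothetical sample data) ---
raw = "user_id,event,value\n1,click,2\n2,click,3\n1,view,\n"
records = list(csv.DictReader(io.StringIO(raw)))

# --- Transform: drop rows with missing values, cast types ---
clean = [
    {"user_id": int(r["user_id"]), "event": r["event"], "value": int(r["value"])}
    for r in records
    if r["value"]  # the 'view' row has no value and is filtered out
]

# --- Load: write into an internal database (in-memory SQLite here) ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, value INTEGER)")
conn.executemany("INSERT INTO events VALUES (:user_id, :event, :value)", clean)

total = conn.execute("SELECT SUM(value) FROM events").fetchone()[0]
print(total)  # 5
```

The same extract/transform/load shape applies whether the source is an SFTP drop, an S3 object, or a Kafka topic; only the extract step changes.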
Responsibilities:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software, particularly SPSS, and comfort working with large data sets
• R, Python, SAS, and SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
Qualifications and Experience:
• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent
• Strong track record of work experience in business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare a plus
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional business requirements.
● Building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identifying trends, doing follow-up analysis, preparing visualizations.
● Creating daily, weekly and monthly reports of product KPIs.
● Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement.
Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Experience with Google Cloud Data Analytics Products such as BigQuery, Dataflow, Dataproc etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
• Solid technical / data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
• Data-driven and outcome-focused
• Good business judgment with a demonstrated ability to think creatively and strategically
• An intuitive, organized analytical thinker with the ability to perform detailed analysis
• Takes personal ownership; self-starter; able to drive projects with minimal guidance and to focus on high-impact work
• Learns continuously; seeks out knowledge, ideas, and feedback
• Looks for opportunities to build own skills, knowledge, and expertise
• Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Big Data Engineer: 5+ yrs.
Immediate Joiner
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing functions with AWS Lambda
- Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
- Hadoop and Hive are good to have; a working understanding is enough rather than deep expertise
Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs. Regular creation, modification, and validation with testing of stored procedures, views, and functions on MS SQL Server.
Responsibilities and Duties
Understanding the ERP Software and use cases.
Regular creation, modification, and testing of:
- Stored Procedures
- Views
- Functions
- Nested Queries
- Table and Schema Designs
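On MS SQL Server the objects above are written in T-SQL; as a lightweight, runnable analogue, this sketch uses Python's sqlite3 module, which supports views and Python-registered scalar functions (though not true stored procedures). All table, view, and function names here are illustrative, not from the actual ERP schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, qty INTEGER, unit_price REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 2, 10.0), (2, 1, 99.5), (3, 4, 2.25)],
)

# View: encapsulates a derived column, like a simple ERP reporting view.
conn.execute(
    "CREATE VIEW order_totals AS "
    "SELECT id, qty * unit_price AS total FROM orders"
)

# Scalar function registered from Python, sqlite3's stand-in for a
# user-defined function (18% tax rate is an arbitrary example value).
conn.create_function("with_tax", 1, lambda amount: round(amount * 1.18, 2))

rows = conn.execute(
    "SELECT id, with_tax(total) FROM order_totals ORDER BY id"
).fetchall()
print(rows)  # [(1, 23.6), (2, 117.41), (3, 10.62)]
```

In the actual role these would be `CREATE PROCEDURE` / `CREATE VIEW` / `CREATE FUNCTION` statements in T-SQL, validated with tests before release.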
Qualifications and Skills
MS SQL
- Procedural Language
- Datatypes
- Objects
- Databases
- Schema