Metrics management Jobs in Bangalore (Bengaluru)


Apply to 11+ Metrics management Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Metrics management Job opportunities across top companies like Google, Amazon & Adobe.

Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr
SQL
Python
Metrics management
Data Analytics

Responsibilities

  • Work with large and complex blockchain data sets and derive investment relevant metrics in close partnership with financial analysts and blockchain engineers.
  • Apply knowledge of statistics, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to the development of fundamental metrics needed to evaluate various crypto assets.
  • Build a strong understanding of existing metrics used to value various decentralized applications and protocols.
  • Build customer facing metrics and dashboards.
  • Work closely with analysts, engineers, Product Managers and provide feedback as we develop our data analytics and research platform.

Qualifications

  • Bachelor's degree in an analytical field (e.g., Mathematics, Statistics, Computer Science, Engineering, Operations Research, Management Science) or equivalent practical experience
  • 3+ years of experience with data analysis and metrics development
  • 3+ years of experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results to stakeholders
  • 2+ years of experience writing SQL queries
  • 2+ years of experience scripting in Python
  • Demonstrated curiosity in and excitement for Web3/blockchain technologies
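To make the SQL and Python expectations concrete, here is a minimal sketch using Python's built-in sqlite3 module; the transfers table, its columns, and the daily-active-addresses metric are hypothetical illustrations, not details from the posting:

```python
import sqlite3

# Hypothetical example: compute a simple "daily active addresses" metric
# from a table of blockchain transfers, using SQL driven from Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (day TEXT, sender TEXT, receiver TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transfers VALUES (?, ?, ?, ?)",
    [
        ("2024-01-01", "a", "b", 10.0),
        ("2024-01-01", "b", "c", 5.0),
        ("2024-01-02", "a", "c", 2.5),
    ],
)

# Daily active addresses: distinct addresses appearing as sender or receiver per day.
rows = conn.execute(
    """
    SELECT day, COUNT(DISTINCT addr) AS active_addresses
    FROM (
        SELECT day, sender AS addr FROM transfers
        UNION
        SELECT day, receiver AS addr FROM transfers
    )
    GROUP BY day
    ORDER BY day
    """
).fetchall()

print(rows)  # [('2024-01-01', 3), ('2024-01-02', 2)]
```

The same query shape (dedupe, group, count) generalizes to most activity-style metrics once the raw data is loaded into a warehouse.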
Bengaluru (Bangalore), Gurugram
1 - 7 yrs
₹4L - ₹10L / yr
Python
R Programming
SAS
Surveying
Data Analytics

Desired Skills & Mindset:


We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.


• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions

• Experience with statistical programming software such as SPSS, and comfort working with large data sets

• R, Python, SAS, and SQL are preferred but not mandatory

• Excellent time management skills

• Good written and verbal communication skills; understanding of both written and spoken English

• Strong interpersonal skills

• Ability to act autonomously, bringing structure and organization to work

• Creative and action-oriented mindset

• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged

• Ability to work under pressure and deliver on tight deadlines


Qualifications and Experience:


• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics, or an MBA with a strong quantitative background, or equivalent

• Strong track record of work experience in business intelligence, market research, and/or advanced analytics

• Knowledge of data collection methods (focus groups, surveys, etc.)

• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)

• Strong analytical and critical thinking skills

• Industry experience in Consumer Experience/Healthcare is a plus

Kaleidofin
Posted by Poornima B
Chennai, Bengaluru (Bangalore)
2 - 4 yrs
Best in industry
Machine Learning (ML)
Python
SQL
Customer Acquisition
Big Data
Responsibilities
  • Partner with internal business owners (product, marketing, edit, etc.) to understand needs and develop custom analyses to optimize user engagement and retention
  • Develop a good understanding of the underlying business and the workings of cross-functional teams to ensure successful execution
  • Design and develop analyses based on business requirements and challenges
  • Apply statistical analysis to consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, and predictive modeling
  • Provide statistical analysis on custom research projects and consult on A/B testing and other statistical analyses as needed; deliver other reports and custom analyses as required
  • Identify and use appropriate investigative and analytical technologies to interpret and verify results
  • Apply and learn a wide variety of tools and languages to achieve results
  • Use best practices to develop statistical and/or machine learning models that address business needs
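As a concrete sketch of the A/B-testing consultation mentioned above, a two-proportion z-test can be written with only the standard library; the conversion counts below are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. variant A's 100/1000.
z, p = two_proportion_z_test(100, 1000, 120, 1000)
print(round(z, 2), round(p, 3))
```

In practice a library routine (e.g., from statsmodels) would replace the hand-rolled math, but the decision logic — compare the p-value against a pre-chosen significance level — is the same.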

Requirements
  • 2-4 years of relevant experience in data science
  • Preferred education: Bachelor's degree in a technical field, or equivalent experience
  • Experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms
  • Machine learning algorithms: crystal-clear understanding and hands-on coding, implementation, error-analysis, and model-tuning knowledge of linear regression, logistic regression, SVMs, shallow neural networks, clustering, decision trees, random forests, XGBoost, recommender systems, ARIMA, and anomaly detection; feature selection, hyperparameter tuning, model selection, boosting, and ensemble methods
  • Strong programming skills in Python, data processing using SQL or equivalent, and the ability to experiment with newer open-source tools
  • Experience normalizing data to ensure it is homogeneous and consistently formatted, enabling sorting, querying, and analysis
  • Experience designing, developing, implementing, and maintaining databases and programs to manage data analysis efforts
  • Experience with big data and cloud computing, e.g., Spark and Hadoop (MapReduce, Pig, Hive)
  • Experience in the risk and credit score domains preferred
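Hyperparameter tuning and model selection, as listed in these requirements, reduce to a simple loop: score each candidate setting on held-out data and keep the best. A toy sketch (the single-threshold "model" and the data are purely illustrative):

```python
# Toy sketch of hyperparameter tuning by grid search: score every candidate
# setting on a validation set and keep the best one. The "model" predicts
# positive when x >= threshold; the threshold is the hyperparameter.
validation = [(0.1, 0), (0.5, 1), (0.7, 1), (0.35, 0)]  # (feature, label) pairs

def accuracy(threshold, data):
    """Fraction of examples the thresholded model classifies correctly."""
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

grid = [0.3, 0.5, 0.7]  # candidate hyperparameter values
best = max(grid, key=lambda t: accuracy(t, validation))
print(best, accuracy(best, validation))  # best threshold and its validation accuracy
```

With a real library the loop becomes something like scikit-learn's GridSearchCV, but the principle — never select hyperparameters on the training data itself — is unchanged.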
CSS Corp
Agency job via staff hire solutions by Purvaja Patidar
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹11L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Recommendation algorithms
Design and implement cloud solutions and build MLOps pipelines on cloud (GCP). Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools. Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and quality monitoring. Test and validate data science models, and automate those tests. Communicate with a team of data scientists, data engineers, and architects, and document the processes.

Required Qualifications:
• Ability to design and implement cloud solutions and build MLOps pipelines on cloud (GCP)
• Experience with MLOps frameworks such as Kubeflow, MLflow, DataRobot, and Airflow; experience with Docker and Kubernetes/OpenShift
• Programming languages such as Python, Go, Ruby, or Bash; a good understanding of Linux; knowledge of frameworks such as scikit-learn, Keras, PyTorch, and TensorFlow
• Ability to understand the tools used by data scientists, plus experience with software development and test automation
• Fluent English, good communication skills, and the ability to work in a team

Desired Qualifications:
• Bachelor's degree in Computer Science or Software Engineering
• Experience using GCP services
• Google Cloud Certification is good to have
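The CI/CD orchestration described above can be sketched as a minimal GitHub Actions workflow; the file name, job layout, and commands below are illustrative assumptions rather than this team's actual pipeline:

```yaml
# .github/workflows/ml-ci.yml -- hypothetical CI for a data science repo:
# run model tests first, then build a container image only if they pass.
name: ml-ci
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest tests/          # model validation / test automation
  build:
    needs: test                     # gate containerization on passing tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-model:${{ github.sha }} .   # containerization
```

GitLab CI and CircleCI equivalents follow the same shape: a test stage gating a containerization/deployment stage.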
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology, leveraging data as an asset to make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple technology stacks, hosted either on Amazon Web Services (AWS) infrastructure or on-premises in Merck's own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and connectivity (e.g., JDBC, MySQL, Microsoft SQL Server); not all types required

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various structured and unstructured data sources into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst when developing requirements for Foundry pipelines
• Review code developed by other data engineers against platform-specific standards, cross-cutting concerns, coding and configuration standards, and the functional specification of the pipeline
• Document technical work in a professional and transparent way; create high-quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implement changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• Set up DevOps projects following Agile principles (e.g., Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems; debug problems across the full Foundry stack and code based on Python, PySpark, and Java
• Work closely with business users and data scientists/analysts to design physical data models
Novo
Posted by Dishaa Ranjan
Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹25L - ₹35L / yr
SQL
Python
pandas
Scikit-Learn
TensorFlow

About Us: 

Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business. 


We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.


At Novo, we’re here to help entrepreneurs, freelancers, startups, and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions, and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.



We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?


About the Role:


  • Build and manage predictive models focused on credit risk, fraud, conversions, churn, consumer behaviour, etc.
  • Provide best practices and direction for data analytics and business decision making across multiple projects and functional areas
  • Implement performance optimizations and best practices for scalable data models, pipelines, and modelling
  • Resolve blockers and help the team stay productive
  • Take part in building the team and iterating on hiring processes

Requirements for the Role:


  • 4+ years of experience in data science roles focused on managing data processes, modelling, and dashboarding
  • Strong experience in Python and SQL, and an in-depth understanding of modelling techniques
  • Experience working with pandas, scikit-learn, and visualization libraries such as Plotly and Bokeh
  • Prior experience with credit risk modelling preferred
  • Deep knowledge of Python for writing scripts to manipulate data and generate automated reports

How We Define Success:


  • Expand access to data-driven decision making across the organization
  • Solve problems in risk, marketing, growth, and customer behaviour through analytics models that increase efficacy

Nice To Have, but Not Required:

  • Experience with dashboarding libraries such as Python Dash, and exposure to CI/CD
  • Exposure to big data tools like Spark, and some core tech knowledge around APIs, data streaming, etc.


Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer, indiscriminate of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information, or any other protected characteristic.

Marktine
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark

Basic Qualifications

- Working knowledge of AWS Redshift.

- Minimum of 1 year designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python.

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions.

- Expertise in and an excellent understanding of Snowflake internals, and of integrating Snowflake with other data processing and reporting technologies.

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements

Wissen Technology
Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and performing enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing using Python, DB2, Greenplum, Autosys, and other technologies

 

Skills/Expertise Required:

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience writing stored procedures, integrating database processing, and tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading, and data processing.
Good to have: hands-on experience working with Perl or Python.
Hands-on development using the Spark / KDB / Greenplum platforms will be a strong plus.
Designing, developing, maintaining, and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB, and Autosys.
Coming up with system architecture/redesign proposals for greater efficiency and ease of maintenance, and developing software to turn those proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills

Agency job via Talent folks by Rijooshri Saikia
Bengaluru (Bangalore)
15 - 25 yrs
₹10L - ₹15L / yr
Java
Python
Big Data

Job Description

  • Design, develop, and deploy highly available and fault-tolerant enterprise business software at scale.

  • Demonstrate technical expertise by going very deep or broad in solving classes of problems, or by creating broadly leverageable solutions.

  • Execute large-scale projects: provide technical leadership in architecting and building product solutions.

  • Collaborate across teams to deliver results, from hardworking team members within your group to smart technologists across lines of business.

  • Be a role model on acting with good judgment and responsibility, helping teams to commit and move forward.

  • Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences in opinion in a mature and fair way.

  • Raise the bar by improving standard methodologies, producing best-in-class efficient solutions, code, documentation, testing, and monitoring.

Qualifications

      • 15+ years of relevant engineering experience.

  • Proven record of building and productionizing highly reliable products at scale.

  • Experience with Java and Python

  • Experience with Big Data technologies is a plus.

  • Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision

  • Can collaborate well with several other engineering orgs to articulate requirements and system design

Additional Information

Professional Attributes:
• Team player!

• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.

• Excellent written and verbal communication skills, including the ability to write detailed technical documents.

• Passionate about helping teams grow by inspiring and mentoring engineers.



Graphene Services Pte Ltd
Posted by Swetha Seshadri
Remote, Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
PyTorch
Deep Learning
Natural Language Processing (NLP)
Python
Machine Learning (ML)
ML Engineer
WE ARE GRAPHENE

Graphene is an award-winning AI company, developing customized insights and data solutions for corporate clients. With a focus on healthcare, consumer goods and financial services, our proprietary AI platform is disrupting market research with an approach that allows us to get into the mind of customers to a degree unprecedented in traditional market research.

Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore Government & universities in creating cutting edge technology. We are gaining traction with many Fortune 500 companies globally.

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the few start-ups which are self-funded, yet profitable and debt free.

We already have a strong bench of leaders in place. Now, we are looking to groom more talent for our expansion into the US. Join us and take your growth, and ours, to the next level!

 

WHAT WILL THE ENGINEER-ML DO?

 

  • Primary Purpose: As part of a highly productive and creative AI (NLP) analytics team, optimize algorithms/models for performance and scalability, and engineer and implement machine learning algorithms into services and pipelines consumed at web scale
  • Daily Grind: Interface with data scientists, project managers, and the engineering team to achieve sprint goals on the product roadmap, and ensure healthy models, endpoints, and CI/CD
  • Career Progression: Senior ML Engineer, ML Architect

 

YOU CAN EXPECT TO

  • Work in a product-development team capable of independently authoring software products.
  • Guide junior programmers, set up the architecture, and follow modular development approaches.
  • Design and develop code that is well documented.
  • Optimize the application for maximum speed and scalability.
  • Adhere to the best information security and DevOps practices.
  • Research and develop new approaches to problems.
  • Design and implement schemas and databases with respect to the AI application.
  • Cross-pollinate with other teams.

 

HARD AND SOFT SKILLS

Must Have

  • Problem-solving abilities
  • Extremely strong programming background: data structures and algorithms
  • Advanced machine learning: TensorFlow, Keras
  • Python, spaCy, NLTK, Word2Vec, graph databases, knowledge graphs, BERT (and derived models), hyperparameter tuning
  • Experience with OOP and design patterns
  • Exposure to RDBMS/NoSQL
  • Test-driven development methodology

 

Good to Have

  • Working in cloud-native environments (preferably Azure)
  • Microservices
  • Enterprise Design Patterns
  • Microservices Architecture
  • Distributed Systems
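The test-driven development methodology listed under must-haves can be illustrated with a minimal unittest sketch; the tokenize function is a hypothetical stand-in for real NLP code, not part of Graphene's stack:

```python
import unittest

def tokenize(text):
    """Lowercasing whitespace tokenizer -- a hypothetical stand-in for the code under test."""
    return text.lower().split()

class TokenizeTest(unittest.TestCase):
    # Under TDD these tests are written first and drive the implementation.
    def test_lowercases_and_splits(self):
        self.assertEqual(tokenize("Hello World"), ["hello", "world"])

    def test_empty_input_yields_no_tokens(self):
        self.assertEqual(tokenize(""), [])

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TokenizeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

The cycle is the same at any scale: write a failing test, make it pass with the simplest implementation, then refactor with the tests as a safety net.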
Fragma Data Systems
Posted by Sudarshini K
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹14L / yr
ETL
Big Data
Hadoop
PySpark
SQL
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ETL architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
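The transformation-and-aggregation pattern described in these skills can be sketched in plain Python (PySpark's groupBy/sum follows the same shape); the event records and the minimum-amount business rule are invented for illustration:

```python
from collections import defaultdict

# Hypothetical raw events extracted from a data lake.
events = [
    {"region": "south", "product": "A", "amount": 120.0},
    {"region": "south", "product": "B", "amount": 80.0},
    {"region": "north", "product": "A", "amount": 200.0},
    {"region": "north", "product": "A", "amount": 50.0},
]

# Transform: apply a business rule (drop small transactions).
filtered = [e for e in events if e["amount"] >= 75.0]

# Aggregate: total amount per (region, product) -- analogous to
# df.groupBy("region", "product").sum("amount") on a Spark DataFrame.
totals = defaultdict(float)
for e in filtered:
    totals[(e["region"], e["product"])] += e["amount"]

print(dict(totals))
```

In PySpark the same filter-then-aggregate pipeline is expressed declaratively and executed in parallel across partitions, which is where the executor sizing and partition tuning mentioned above come in.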