MicroStrategy

at Response Informatics

Posted by Swagatika Sahoo
Remote, Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹25L / yr
Full time
Skills
MicroStrategy
SQL
Business Intelligence (BI)
Expertise in MicroStrategy, with working knowledge of databases such as SQL, an understanding of BI concepts, and strong communication skills.

About Response Informatics

Founded: 2018
Type: Services
Size: employees
Stage: Bootstrapped

Similar jobs

Data Scientist

at Propellor.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Python
SQL
Spark
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Linear regression
Image processing
Forecasting
Time series
Object Oriented Programming (OOPs)
Apache Spark
Cluster analysis
Databricks
Remote only
2 - 5 yrs
₹5L - ₹15L / yr

Job Description: Data Scientist

At Propellor.ai, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science, and Business Logic; combining these, we show our clients a 360-degree view of their business. In this role, the Data Scientist will be expected to work on Procurement problems along with a team based across the globe.

We are a Remote-First Company.

Read more about us here: https://www.propellor.ai/consulting


What will help you be successful in this role

  • Articulate
  • High Energy
  • Passion to learn
  • High sense of ownership
  • Ability to work in a fast-paced and deadline-driven environment
  • Loves technology
  • Highly skilled at Data Interpretation
  • Problem solver
  • Ability to narrate the story to business stakeholders
  • Ability to generate insights and turn them into actions and decisions

 

Skills to work in a challenging, complex project environment

  • Natural curiosity and a passion for understanding consumer behavior
  • A high level of motivation, passion, and a strong sense of ownership
  • Excellent communication skills, needed to manage an incredibly diverse slate of work, clients, and team personalities
  • Flexibility to work on multiple projects in a deadline-driven, fast-paced environment
  • Ability to work through ambiguity and manage the chaos

 

Key Responsibilities

  • Analyze data to unlock insights: Ability to identify relevant insights and actions from data.  Use regression, cluster analysis, time series, etc. to explore relationships and trends in response to stakeholder questions and business challenges.   
  • Bring in experience for AI and ML:  Bring in Industry experience and apply the same to build efficient and optimal Machine Learning solutions.
  • Exploratory Data Analysis (EDA) and Generate Insights: Analyse internal and external datasets using analytical techniques, tools, and visualization methods. Ensure pre-processing/cleansing of data and evaluate data points across the enterprise landscape and/or external data points that can be leveraged in machine learning models to generate insights. 
  • DS and ML Model Identification and Training: Identify, test, and train machine learning models that need to be leveraged for business use cases. Evaluate models based on interpretability, performance, and accuracy as required. Experiment with and identify features from datasets that will help influence model outputs. Determine which models need to be deployed and which data points need to be fed into them, and aid in the deployment and maintenance of models.


Technical Skills

We are looking for an enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work, are fast learners, and are team players.

  • Strong experience with machine learning and AI, including regression, forecasting, time series, cluster analysis, classification, image recognition, NLP, text analytics, and computer vision (a brief illustrative sketch follows this list).
  • Strong experience with advanced analytics tools for object-oriented/object-function scripting using languages such as Python or similar.
  • Strong experience with popular database programming languages, including SQL.
  • Strong experience in Spark/PySpark.
  • Experience working in Databricks.
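Purely as an illustration of the regression and cluster-analysis skills named above (not part of the role description), here is a minimal scikit-learn sketch on synthetic data; the feature names, values, and model choices are hypothetical.

```python
# Illustrative sketch only: synthetic data, hypothetical features and models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                      # e.g. price, lead time, order volume
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Regression: quantify how the (hypothetical) features relate to the target
reg = LinearRegression().fit(X, y)
print("coefficients:", reg.coef_)

# Cluster analysis: segment the observations (e.g. suppliers) into groups
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```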

 

What benefits do you get when you join us?

  • Permanent Work from Home Opportunity
  • Opportunity to work with Business Decision Makers and an internationally based team
  • A work environment that offers limitless learning
  • A culture free of bureaucracy and hierarchy
  • A culture of being open, direct, and with mutual respect
  • A fun, high-caliber team that trusts you and provides the support and mentorship to help you grow
  • The opportunity to work on high-impact business problems that are already defining the future of Marketing and improving real lives

To know more about how we work: https://bit.ly/3Oy6WlE

Whom will you work with?

You will work closely with other Senior Data Scientists and Data Engineers.

Candidates who can join immediately or within 15 days will be preferred.

 

Job posted by
Anila Nair

Data Engineer

at Startup Focused on simplifying Buying Intent

Agency job
via Qrata
Big Data
Apache Spark
Spark
Hadoop
ETL
Python
Scala
MongoDB
Cassandra
Data engineering
SQL
athena
Bengaluru (Bangalore)
4 - 9 yrs
₹28L - ₹56L / yr
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), and familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow (an illustrative sketch follows this list).
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
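For illustration only, the sketch below shows a minimal Apache Airflow DAG of the kind of scheduled ETL work listed above, assuming Airflow 2.4 or later; the DAG id, task names, and callables are hypothetical.

```python
# Illustrative sketch only: hypothetical DAG id, tasks, sources, and targets.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw orders from MySQL / S3")        # placeholder extract step

def transform_and_load():
    print("clean with Spark, load into Redshift")   # placeholder transform/load step

with DAG(
    dag_id="orders_etl",                            # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                              # the 'schedule' argument assumes Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)
    extract_task >> load_task
```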
Job posted by
Blessy Fernandes

Team Lead- Analytics

at Porter.in

Founded 2014  •  Services  •  100-1000 employees  •  Profitable
Python
SQL
Data modeling
Statistical Modeling
Predictive modelling
Data Visualization
Bengaluru (Bangalore)
5 - 7 yrs
₹20L - ₹28L / yr
Bangalore | Full Time
Responsibilities
This role requires a person to support business charters and accompanying products by aligning with the Analytics Manager’s vision, understanding tactical requirements, and helping ensure successful execution. The split would be approximately 70% management and 30% individual contribution. Responsibilities include:

Project Management
- Understand business needs and objectives.
- Refine use cases and plan iterations and deliverables - able to pivot as required.
- Estimate efforts and conduct regular task updates to ensure timeline adherence.
- Set and manage stakeholder expectations as required

Quality Execution
- Help BA and SBA resources with requirement gathering and final presentations.
- Resolve blockers regarding technical challenges and decision-making.
- Check final deliverables for correctness and review codes, along with Manager.

KPIs and metrics
- Orchestrate metric building, maintenance, and performance monitoring.
- Own and manage data models, data sources, and the data definition repo.
- Make low-level design choices during execution.

Team Nurturing
- Help Analytics Manager during regular one-on-ones + check-ins + recruitment.
- Provide technical guidance whenever required.
- Improve benchmarking and decision-making skills at execution-level.
- Train and get new resources up-to-speed.
- Knowledge building (methodologies) to better position the team for complex problems.

Communication
- Upstream to document and discuss execution challenges, process inefficiencies, and feedback loops.
- Downstream and parallel for context-building, mentoring, stakeholder management.

Analytics Stack
- Analytics : Python / R + SQL + Excel / PPT, Colab notebooks
- Database : PostgreSQL, Amazon Redshift, DynamoDB, Aerospike
- Warehouse : Amazon Redshift
- ETL : Lots of Python + custom-made
- Business Intelligence / Visualization : Metabase + Python/R libraries (location data)
- Deployment pipeline : Docker, Git, Jenkins, AWS Lambda
Job posted by
Satyajit Mittra

SQL Developer

at NextG Apex India PvtLtd

Founded 2020  •  Products & Services  •  100-1000 employees  •  Bootstrapped
SQL
MySQL
MS SQLServer
Javascript
Microsoft Office
AngularJS (1.x)
HTML/CSS
JSON
Ionic
SQL server
Microsoft SQL Server
Mumbai, Navi Mumbai
5 - 10 yrs
₹6L - ₹10L / yr

The SQL Server DBA will be responsible for the implementation, configuration, maintenance, and performance of critical SQL Server RDBMS systems, to ensure the availability and consistent performance of our corporate applications. This is a “hands-on” position requiring solid technical skills, as well as excellent interpersonal and communication skills.

The successful candidate will be responsible for the development and sustainment of the SQL Server Warehouse, ensuring its operational readiness (security, health and performance), executing data loads, and performing data modeling in support of multiple development teams. The data warehouse supports an enterprise application suite of program management tools. Must be capable of working independently and collaboratively.


Job posted by
Muskan Chourasia

Data Scientist

at Analytics Consulting Company | REMOTE

Agency job
via Unnati
Data Science
R Programming
Python
MongoDB
SQL
Predictive analytics
Regression analysis
Deep Learning
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹18L - ₹20L / yr
Do you want your software skills to contribute meaningfully to technology-driven solutions for various businesses while you grow your career? Then read on.
 
Our client provides data-based process optimization and analytics solutions to businesses worldwide. Their innovative algorithms and customized IT solutions cater to complex problems in every field or industry, through non-standard tools backed by extensive research. They serve startups as well as large, medium, and small enterprises, with a majority of their clients being industry leaders.
 
With registered offices in India, the USA, and the UAE, their projects support sectors and functions such as logistics, IT, retail, e-commerce, and healthcare, across Asia, America, and Europe. The founder holds a Master’s degree from IIT and a PhD in Operations Research from the USA, with rich experience in optimization and analytics for various industries. His team of top scientists and pedagogy experts focuses on innovative revenue-generation ideas with minimal operational costs.
 
As a Data Scientist, you will apply expertise in machine-learning, data mining and statistical methods to design, prototype, and build the next-generation analytics engines and services.
 
What you will do:
  • Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
  • Performing data exploration, cleaning, preparation, and feature engineering, in addition to executing tasks such as building a POC and validation/A-B testing (a short illustrative example follows this list)
  • Collaborating with data engineers & architects to implement and deploy scalable solutions
  • Communicating results to diverse audiences with effective writing and visualizations
  • Identifying and executing on high impact projects, triage external requests, and ensure timely completion for the results to be useful
  • Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders
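As a hedged illustration of the validation/A-B testing mentioned above, here is a minimal two-sample t-test sketch using SciPy on synthetic data; the metric, group sizes, and effect size are hypothetical.

```python
# Illustrative sketch only: synthetic control/variant samples, hypothetical metric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=500)   # baseline group metric
variant = rng.normal(loc=10.4, scale=2.0, size=500)   # treatment group metric

# Welch's t-test: does the variant shift the metric significantly?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```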

 

 

What you need to have:
  • 2-4 years of experience with machine learning algorithms, predictive analytics, and demand forecasting in real-world projects
  • Strong statistical background in descriptive and inferential statistics, regression, and forecasting techniques
  • Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL, and MongoDB
  • Preferred exposure to optimization and meta-heuristic algorithms and related applications
  • Background in a highly quantitative field such as Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar
  • Should have 2-4 years of experience in Data Science algorithm design and implementation, and data analysis across different applied problems
  • DS mandatory skills: Python, R, SQL, deep learning, predictive analytics, applied statistics
Job posted by
Veena Salian

Data Engineer

at Top Management Consulting Company

Python
SQL
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Gurugram, Bengaluru (Bangalore)
2 - 9 yrs
Best in industry
Greetings!!

We are looking for a technically driven "Full-Stack Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. 

Qualifications
• Bachelor's degree in computer science or a related field; a Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to clearly communicate complex solutions
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Job posted by
Naveed Mohd

SQL Developer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
SQL
SSIS
Remote only
5 - 7 yrs
₹10L - ₹18L / yr
SQL Developer with around 7 years of relevant experience and strong communication skills.
 
Key responsibilities:
 
 
  • Create, design, and develop data models (a small illustrative sketch follows this list)
  • Prepare plans for all ETL (Extract/Transform/Load) procedures and architectures
  • Validate results and create business reports
  • Monitor and tune data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimizations
  • Administer all requirements and design various functional specifications for data
  • Provide support across the Software Development Life Cycle
  • Prepare various code designs and ensure their efficient implementation
  • Evaluate all code and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject-matter expertise
  • Hands-on BI practices, data structures, data modeling, and SQL skills
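For illustration of the data-modelling and SQL work described above, here is a tiny self-contained sketch using Python's built-in sqlite3 module; the star-schema tables, columns, and rows are hypothetical stand-ins for a real warehouse model.

```python
# Illustrative sketch only: hypothetical tables, columns, and data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, sale_date TEXT, amount REAL);
    INSERT INTO dim_product VALUES (1, 'hardware'), (2, 'software');
    INSERT INTO fact_sales  VALUES (1, '2024-01-05', 100.0),
                                   (2, '2024-01-06', 250.0),
                                   (1, '2024-02-01',  75.0);
""")

# A typical reporting query: monthly revenue by product category
rows = conn.execute("""
    SELECT d.category, substr(f.sale_date, 1, 7) AS month, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category, month
    ORDER BY month, d.category
""").fetchall()
print(rows)
```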
 
 

Experience
Experience Range

5 Years - 10 Years

Function: Information Technology
Desired Skills
Must have Skills:  SQL

Hard Skills for a Data Warehouse Developer:
 
  • Hands-on experience with ETL tools e.g., DataStage, Informatica, Pentaho, Talend
  • Sound knowledge of SQL
  • Experience with SQL databases such as Oracle, DB2, and SQL
  • Experience using Data Warehouse platforms e.g., SAP, Birst
  • Experience designing, developing, and implementing Data Warehouse solutions
  • Project management and system development methodology
  • Ability to proactively research solutions and best practices
 
Soft Skills for Data Warehouse Developers:
 
  • Excellent Analytical skills
  • Excellent verbal and written communications
  • Strong organization skills
  • Ability to work on a team, as well as independently
Job posted by
Sandhya JD

Data Engineer- SQL+PySpark

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Spark
PySpark
Big Data
Python
SQL
Windows Azure
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (an illustrative sketch follows this list)
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good at ELT architecture: business-rule processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
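As a rough illustration of the PySpark DataFrame and Spark SQL skills listed above, here is a minimal sketch on made-up data (it assumes a local PySpark installation); the view, column names, and business rule are hypothetical.

```python
# Illustrative sketch only: hypothetical columns, values, and business rule.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demo-aggregation").getOrCreate()

df = spark.createDataFrame(
    [("retail", 120.0), ("retail", 80.0), ("wholesale", 300.0)],
    ["channel", "amount"],
)

# DataFrame API: a simple business-rule filter plus an aggregation
agg_df = (df.filter(F.col("amount") > 50)
            .groupBy("channel")
            .agg(F.sum("amount").alias("total_amount")))

# The same transformation expressed in Spark SQL
df.createOrReplaceTempView("transactions")
agg_sql = spark.sql(
    "SELECT channel, SUM(amount) AS total_amount "
    "FROM transactions WHERE amount > 50 GROUP BY channel"
)

agg_df.show()
```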
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure (available in HDInsight and Databricks)
Job posted by
Evelyn Charles

Product Analyst

at Magicpin (Samast Technologies)

Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Data Analytics
Product Analyst
SQL
MS-Excel
Gurugram
1 - 5 yrs
₹7L - ₹15L / yr

What you will get to do:

 

  • Understand the business, define problems and design structured approaches to solve them.
  • Perform exploratory analysis on large volumes of data to validate/disregard hypotheses.
  • Identify opportunities using insights for product/process improvement.
  • Create dashboards and automated reports to track KPIs.
  • Manage the reporting of KPIs, present analysis & findings to steer the team’s strategic vision.
  • Partner with cross-functional stakeholders (engineering, design, sales and operations teams) on a regular basis to drive product/process changes and improve business intelligence.
  • Analyze rich user and transaction data to surface patterns and trends.
  • Perform analytical deep-dives to identify problems, opportunities and actions required.
  • Process data from disparate sources using SQL, R, Python, or other scripting and statistical tools (a brief illustrative sketch follows this list)
  • Perform ad hoc data analysis to remove roadblocks and ensure operations are running smoothly.
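For illustration of the kind of KPI tracking and exploratory analysis described above, here is a small pandas sketch on made-up data; the columns, cities, and metrics are hypothetical.

```python
# Illustrative sketch only: hypothetical columns, cities, and metrics.
import pandas as pd

orders = pd.DataFrame({
    "city": ["Gurugram", "Gurugram", "Bengaluru"],
    "week": ["2024-W01", "2024-W02", "2024-W01"],
    "gmv": [12000.0, 15000.0, 9000.0],
    "orders": [120, 140, 90],
})

# A typical KPI cut: weekly GMV and average order value per city
kpis = (orders.groupby(["city", "week"])
              .agg(gmv=("gmv", "sum"), orders=("orders", "sum"))
              .assign(aov=lambda d: d["gmv"] / d["orders"])
              .reset_index())
print(kpis)
```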

 

What you will need to apply:

  • Flair for numbers, strong analytical skills and structured process thinking, attention to detail
  • Basic business sense and an understanding of common statistics/analytics techniques
  • Advanced Excel Proficiency and SQL (experience of working in Python/R is preferable)
  • Strong ownership, drive and the experience of working independently in unstructured environments
  • Ability to work closely with cross-functional teams within tight timelines to execute on decisions
  • An appreciation for the connection between your work and the outcome (the impact it has on the organization and the experience it delivers to the customers)
  • Interview Focus: Puzzles, Guesstimates, Problem Solving, Analytical tool proficiency, Data Interpretation
Job posted by
Sonali kataria
Data engineering
Python
SQL
Spark
PySpark
Cassandra
Groovy
Amazon Web Services (AWS)
Amazon S3
Windows Azure
Foundry
Good Clinical Practice
E2
R
palantir
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
  1. Sr. Data Engineer:

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL, and Python

Candidates with a prior Palantir Cloud Foundry or Clinical Trial Data Model background are preferred.

Major accountabilities:

  • Responsible for Data Engineering, Foundry data pipeline creation, Foundry analysis and reporting, Slate application development, reusable code development and management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Have a good understanding of the Foundry Platform landscape and its capabilities.
  • Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
  • Define company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
  • Design data integrations and the data quality framework.
  • Design and implement integrations with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte agent.
  • Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participate in agile work practices.
  • Coordinate with the Quality Engineer to ensure that all quality controls, naming conventions, and best practices have been followed.

Desired Candidate Profile :

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years IT experience, 2+ years’ experience in Palantir Foundry Platform, 4+ years’ experience in Big Data platform
  • 5+ years of Python and Pyspark development experience
  • Strong troubleshooting and problem solving skills
  • B.Tech or Master's degree in Computer Science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry Platform and Foundry custom app development
  • Able to design and implement data integrations between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java, and Unix shell scripts
  • Hands-on experience with AWS / Azure cloud platforms and stacks
  • Strong in API based architecture and concept, able to do quick PoC using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.

 Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

Job posted by
RAHUL BATTA