Similar jobs
Data Analytics
Data Visualization
Business Intelligence (BI)
SQL
Apache Hive
Spark
Power BI
Informatica
OLAP
Oracle BI
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Main Responsibilities:

• Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance, and HR, on all data collection and reporting use cases.
• Collaborate with Business and Technology teams to understand enterprise data and create an innovative narrative to explain, engage, and enlighten regular staff members as well as executive leadership with data-driven storytelling.
• Solve data consumption and visualization through a data-as-a-service distribution model.
• Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions, and visualizations.
• Perform ad hoc / automated report generation tasks using Power BI, Oracle BI, and Informatica.
• Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP/OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.).
• Provide support and maintain the availability of BI applications irrespective of the hosting location.
• Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability, and provide incident-related communications promptly.
• Work with strict deadlines on high-priority regulatory reports.
• Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning.
• Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris).

General Skills:
• Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and Big Data technologies like Hive and Spark would be a plus.
• Experience with industrialized reporting on BI tools like Power BI and Informatica.
• Knowledge of data-related industry best practices in the highly regulated CIB industry, and experience with regulatory report generation for financial institutions.
• Knowledge of industry-leading data access, data security, Master Data, and Reference Data Management practices, and of establishing data lineage.
• 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles.
• Ability to multi-task and manage various projects simultaneously.
• Attention to detail.
• Ability to present to Senior Management and ExCo; excellent written and verbal communication skills.
Job posted by Naveen Taalanki
Founded 2020  •  Services  •  20-100 employees  •  Bootstrapped
Python
ETL
SSIS
SQL Server Integration Services (SSIS)
Microsoft Azure
SQL
ADF
Apache Spark
Apache Kafka
Data engineering
Remote only
3 - 8 yrs
₹10L - ₹18L / yr

Who are we?

 

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their ideas efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments, with the aim of bringing a product mindset into services.

 

What we are looking for

 

We’re looking to hire software craftspeople: people who are proud of the way they work and the code they write, people who believe in and are evangelists of Extreme Programming principles, and high-quality, motivated, passionate people who make great teams. We firmly believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only with programming languages but also with infrastructure technologies in the cloud.

 

What you’ll be doing

 

First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.

 

You will work in a product team, building products and rapidly rolling out new features and fixes.

 

You will be responsible for all aspects of development – from understanding requirements and writing stories, to analyzing the technical approach and writing test cases, through development, deployment, and fixes. You will own the entire stack, from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!

 

Skills you need in order to succeed in this role

Most Important: Integrity of character, diligence and the commitment to do your best

  • Technologies:
    • Azure Data Factory
    • MongoDB
    • SSIS/Apache NiFi (Good to have)
    • Python/Java
    • SOAP/REST Web Services
    • Test Driven Development
  • Experience with:
    • Data warehousing and data lake initiatives on the Azure cloud
    • Cloud DevOps solutions and cloud data and application migration
    • Database concepts and optimization of complex queries
    • Database versioning, backups, restores and migration, and automation of the same
    • Data security and integrity
Job posted by Lifi Lawrance
Founded 2015  •  Product  •  20-100 employees  •  Raised funding
SQL
Snowflake schema
Java
MongoDB
NoSQL Databases
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹30L / yr

The Nitty-Gritties

Location: Bengaluru 

About the Role:

Freight Tiger is growing exponentially, and technology is at the centre of it. Our Engineers love solving complex industry problems by building modular and scalable solutions using cutting-edge technology. Your peers will be an exceptional group of Software Engineers, Quality Assurance Engineers, DevOps Engineers, and Infrastructure and Solution Architects.

This role is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. This role works with leads, analysts, and data scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of the data engineering solutions.

This role provides the opportunity to directly impact business outcomes for the sales, underwriting, claims, and operations functions across multiple use cases by providing them with data for their analytical modelling needs.

Key Responsibilities

  • Create and maintain data pipelines.
  • Build and deploy ETL infrastructure for optimal data delivery.
  • Work with various teams, including the product, design, and executive teams, to troubleshoot data-related issues.
  • Create tools for data analysts and scientists to help them build and optimise the product.
  • Implement systems and processes for data access controls and guarantees.
  • Distill knowledge from experts in the field outside the organisation and optimise internal data systems.




Preferred Qualifications/Skills

  • Should have 5+ years of relevant experience.
  • Strong analytical skills.
  • Degree in Computer Science, Statistics, Informatics, or Information Systems.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • SQL guru with hands-on experience with various databases.
  • Experience with NoSQL databases like Cassandra and MongoDB.
  • Experience with Snowflake and Redshift.
  • Experience with tools like Airflow and Hevo.
  • Experience with Hadoop, Spark, Kafka, and Flink.
  • Programming experience in Python, Java, or Scala.
Job posted by Vineeta Bajaj
Founded 2015  •  Product  •  100-500 employees  •  Raised funding
Python
SQL
Tableau
Bengaluru (Bangalore), Mumbai
2 - 5 yrs
₹14L - ₹20L / yr
 

About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, Entrepreneurship, and more. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and relevant, and to help build the careers of tomorrow.

  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19.

  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: the 25 most sought-after startups in India.

  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.

  • We were also covered by the Financial Times along with other disruptors in Ed-Tech.

  • upGrad is the official education partner for the Government of India's Startup India program.

  • Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning.

Role Summary

We are looking for an analytically inclined, insights-driven Data Analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards that drive insights for product and business teams, empowering them in everything from day-to-day decisions to long-term impact assessment and measuring the efficacy of different products or teams. The growing nature of the team will require you to be in touch with all of the teams at upGrad. If you are the "go-to" person everyone looks to for data, then this role is for you.

Roles & Responsibilities

  • Lead and own the analysis of highly complex data sources, identifying trends and patterns in the data, and provide insights/recommendations based on the analysis results.

  • Build, maintain, own, and communicate detailed reports to assist Marketing, Growth/Learning Experience, and other Business/Executive teams.

  • Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

  • Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.

  • Facilitate review sessions with management, business users, and other team members.

  • Design and create visualizations to present actionable insights related to the data sets and business questions at hand.

  • Develop intelligent models around channel performance, user profiling, and personalization.

Skills Required

  • 3-5 years of hands-on experience with product-related analytics and reporting.

  • Experience building dashboards in Tableau or other data visualization tools such as D3.

  • Strong data, statistics, and analytical skills with a good grasp of SQL.

  • Programming experience in Python is a must.

  • Comfortable managing large data sets.

  • Good Excel/data management skills.

Job posted by Priyanka Muralidharan
Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Artificial Intelligence (AI)
Python
Natural Language Processing (NLP)
Deep Learning
Machine Learning (ML)
Big Data
NoSQL Databases
SQL
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 8 yrs
₹15L - ₹20L / yr
About ORI:-
- ORI is an end-to-end provider of AI-powered conversational tools that help enterprises simplify their customer experience, improve conversions, and get better ROI on their marketing spend. ORI is focused on automating the customer journey through its AI-powered self-service SaaS platform, built by applying design thinking principles and Machine Learning.
- ORI's cognitive solutions provide non-intrusive customer experience for Sales, Marketing, Support & Engagement across IoT devices, sensors, web, app, social media & messaging platforms as well as AR and VR platforms.
- Founded in 2017, we've changed the way AI conversational tools are built and trained, providing a revolutionary experience. Clients who have bet on us include Tata Motors, DishTV, Vodafone, Idea, Lenskart.com, Royal Enfield, IKEA, and many more.
- At ORI, you’ll be part of an environment that’s fast-paced, nurturing, collaborative, and challenging. We believe in 100% ownership and flexibility in how and where you work. You’ll be given complete freedom to get your creative juices flowing and implement your ideas to deliver solutions that bring about revolutionary change. We are a team that believes in working smarter and partying hard, and we are looking for A-players to hop aboard a rocket ship that’s locked, loaded, and ready to blast off!

Job Profile:-
We are looking for applicants who have a demonstrated research background in AI, Deep Learning, and NLP, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within ORI, in particular with researchers in data science and with development teams, to develop advanced NLP and AI solutions, working with massive amounts of data collected from various sources.

Key Attributes you need to possess:-
- Communication Skills - Written and verbal communication skills are a must-have. You will be required to explain advanced statistical content to clients and relevant stakeholders. Therefore, you must be able to translate and tailor this technical content into business-applicable material with clear recommendations and insights relevant to the audience at hand.
- Technological Savvy/Analytical Skills - Must be technologically adept, demonstrate exceptionally good computer skills, and show a passion for research, statistics, and data analysis, as well as a demonstrated ability and passion for designing and implementing successful data analysis solutions within a business.
- Business Understanding - Someone who can understand the business's needs and develop analytics that meet those objectives, whether through enhanced customer engagement, automation resulting in cost optimization, or business process optimization that saves time and labor. Real value comes from delivering results that match the actual business need.
- Innovation - Someone who is always looking for the next big thing that will distinguish their offering from others already in the market, and who can differentiate great from not-so-great analytics.

A typical work week looks like:-
1. Work with product/business owners to map business requirements into products / productized solutions and/or working prototypes of NLP & ML algorithms.
2. Evaluate and compare algorithm performance based on large, real-world data sets.
3. Mine massive amounts of data from various sources to gain insights and identify patterns using machine learning techniques and complex network analysis methods.
4. Design and implement ML algorithms and models through in-depth research and experiment with neural network models, parameter optimization, and optimization algorithms.
5. Work to accelerate the distributed implementation of existing algorithms and models.
6. Conduct research to advance the state of the art in deep learning and provide technical solutions at scale for real world challenges in various scenarios.
7. Establish scalable, efficient, automated processes for model development, model validation, model implementation and large scale data analysis.
8. Optimize pre-existing algorithms for accuracy and speed.

Our ideal candidate should have:-
- Ph.D. / Master's degree / B.Tech / B.E. from an accredited college/university in Computer Science, Statistics, Mathematics, Engineering, or related fields (strong mathematical/statistics background with the ability to understand algorithms and methods from a mathematical and intuitive viewpoint)
- 4+ years of professional experience in Artificial Intelligence, Machine Learning, Deep Learning, Natural Language Processing/Text mining or related fields.
- Technical ability and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL, and SQL.
- Preference for candidates with prior experience in deep learning tools and techniques such as Keras, TensorFlow, BERT, Transformers, LSTM, Python, topic modeling, text classification, NER, SVM, KNN, Reinforcement Learning, summarisation, etc.
- Self-starter and able to manage multiple research projects with a flexible approach and ability to develop new skills.
- Strong knowledge/experience of data extraction and data processing in a distributed cloud environment.

What you can expect from ORI:-
- Passion & happiness in the workplace with great people & open culture with amazing growth opportunities.
- An ecosystem where leadership is fostered which builds an environment where everyone is free to take necessary actions to learn from real experiences.
- Chance to work on the cutting edge of technology.
- Freedom to pursue your ideas and tinker with multiple technologies, which a techie would definitely enjoy!!

If you have outstanding programming skills and a great passion for developing beautiful, innovative applications, then you will love this job!!
Job posted by Vaishali Vishwakarma
Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Data Warehousing
Scala
Machine Learning (ML)
Deep Learning
SQL
Data modeling
Hadoop
Spark
Apache Hive
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS
Chennai
2 - 5 yrs
₹6L - ₹25L / yr

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will collaborate closely with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision-making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Collaborate closely with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/NoSQL, schema design, and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack, such as HBase, Hadoop, Hive, and MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Should have strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on both on-premises and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and its concepts.

 

Qualifications and Experience:

Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the certifications listed here:

    AZ-900 - Azure Fundamentals

    DP-200, DP-201, DP-203, AZ-204 - Data Engineering

    AZ-400 - DevOps Certification

Job posted by Vijay Hemnath
Founded 2018  •  Products & Services  •  Profitable
Power BI
MS-Excel
SQL
DAX
SSIS
Tableau
ETL
Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹5L / yr
Required Skills and Experience
• General or strong IT background, with at least 2 to 4 years of working experience.
• Strong understanding of data integration and ETL methodologies.
• Demonstrated ability to multi-task.
• Excellent English communication skills.
• A desire to be part of a growing company. You'll have two core responsibilities (client work and company building), and we expect dedication to both.
• Willingness to learn and work on new technologies.
• Should be a quick self-learner.

Tools:
1. Good knowledge of Power BI and Tableau.
2. Good experience in handling data in Excel.
Job posted by Jerrin Thomas
Founded 2020  •  Services  •  100-1000 employees  •  Profitable
SQL
SAP
Data Analytics
Hyderabad
3 - 7 yrs
₹5L - ₹8L / yr
Our growing technology firm is looking for an experienced Data Analyst who can turn project requirements into custom-formatted data reports. The ideal candidate for this position can handle complete life-cycle data generation and outline critical information for each Project Manager. We also need someone who can analyze business procedures and recommend specific types of data that can be used to improve them.
Job posted by Samarth Patel
Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Python
PySpark
Snowflake schema
SQL
PL/SQL
Microsoft Azure
Amazon Web Services (AWS)
Data Warehouse (DWH)
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of experience designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python.

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise in and an excellent understanding of Snowflake internals, and experience integrating Snowflake with other data processing and reporting technologies.

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements

Job posted by Vishal Sharma
Founded 2014  •  Product  •  20-100 employees  •  Bootstrapped
Data Science
R Programming
Python
Data Structures
Data Analytics
Data Visualization
Remote only
4 - 7 yrs
₹15L - ₹40L / yr

PriceLabs (chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is cloud-based software for vacation and short-term rentals that helps them dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.

We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, and who can help us design and develop long-lasting solutions that improve on them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.

 

Responsibilities:

  • Design, develop and enhance our pricing algorithms to enable new capabilities.
  • Process, analyze, model, and visualize findings from our market level supply and demand data.
  • Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
  • Take ownership of product ideas and design discussions.
  • Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.

Requirements:

  • Bachelor's, Master's, or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science, or other quantitative/engineering fields.
  • Strong understanding of analysis of algorithms, data structures and statistics.
  • Solid programming experience, including the ability to quickly prototype an idea and test it out.
  • Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
  • Experience with relational databases and strong knowledge of SQL.
  • Experience building data heavy analytical models in the travel industry.
  • Experience in the vacation rental industry.
  • Experience developing dynamic pricing models.
  • Prior experience working in a fast-paced environment.
  • Willingness to wear many hats.
Job posted by Shareena Fernandes