Data Analyst

at Graphene Services Pte Ltd

Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Full time
Skills
Python
MySQL
SQL
NoSQL Databases
PowerBI
Systems Development Life Cycle (SDLC)
Cosmos DB

About Graphene  

Graphene is a Singapore-headquartered AI company that has been recognized as Singapore’s Best Start-Up by Switzerland’s Seedstars World and awarded Best AI Platform for Healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM DeepTech Club. We are developing an AI platform that is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods and financial services.

  

Graphene was founded by corporate leaders from Microsoft and P&G, and works closely with the Singapore government and universities to create cutting-edge technology that is gaining traction with many Fortune 500 companies in India, Asia and the USA.

Graphene’s culture is grounded in delivering customer delight by recruiting high-potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.

  

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded yet profitable and debt-free. We have already built a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.

  

Job Title: Data Analyst

Job Description  

The Data Analyst is responsible for data storage, enrichment and transformation, gathering data based on data requests, and testing and maintaining data pipelines.

Responsibilities and Duties  

  • Managing the end-to-end data pipeline, from data source to visualization layer
  • Ensuring data integrity and pre-empting data errors
  • Organized management and storage of data
  • Providing quality assurance of data, working with quality assurance analysts where necessary
  • Commissioning and decommissioning data sets
  • Processing confidential data and information according to guidelines
  • Helping develop reports and analyses
  • Troubleshooting the reporting database environment and reports
  • Managing and designing the reporting environment, including data sources, security and metadata
  • Supporting the data warehouse in identifying and revising reporting requirements
  • Supporting initiatives for data integrity and normalization
  • Evaluating changes and updates to source production systems
  • Training end users on new reports and dashboards
  • Initiating data gathering based on data requirements
  • Analysing raw data to check whether requirements are satisfied

 

Qualifications and Skills   

  

  • Technologies required: Python, SQL/NoSQL databases (Cosmos DB)
  • 2 – 5 years of experience, including experience in data analysis using Python
  • Understanding of the software development life cycle
  • Plan, coordinate, develop, test and support data pipelines and the automation steps needed to transform and enrich data; document and support reporting dashboards (PowerBI) (a minimal sketch of this kind of pipeline step follows below)
  • Communicate issues, risks and concerns proactively to management; document the process thoroughly to allow peers to assist with support as needed
  • Excellent verbal and written communication skills
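
To make these requirements concrete, here is a minimal, illustrative Python sketch of the kind of pipeline step the role describes: gathering records from an Azure Cosmos DB container, running basic integrity checks, and producing a small dataset a PowerBI report could refresh from. Every specific name in it (endpoint, key, database, container and field names) is a hypothetical assumption for illustration, not a detail from this posting.

```python
# Minimal illustrative sketch only -- the connection details, database/
# container names and field names below are hypothetical, not from the posting.
import pandas as pd
from azure.cosmos import CosmosClient  # pip install azure-cosmos

COSMOS_URL = "https://example-account.documents.azure.com:443/"  # hypothetical
COSMOS_KEY = "<primary-key>"                                     # hypothetical

client = CosmosClient(COSMOS_URL, credential=COSMOS_KEY)
container = (
    client.get_database_client("analytics")      # hypothetical database
          .get_container_client("survey_items")  # hypothetical container
)

# Gather raw records for a data request.
records = list(
    container.query_items(
        query="SELECT c.id, c.source, c.score, c.captured_at FROM c",
        enable_cross_partition_query=True,
    )
)
df = pd.DataFrame(records)

# Basic integrity checks: pre-empt data errors before they reach reports.
assert df["id"].is_unique, "duplicate ids in source data"
df = df.dropna(subset=["score"])                     # drop incomplete rows
df["captured_at"] = pd.to_datetime(df["captured_at"])

# A simple transformation/enrichment step for the visualization layer.
daily = df.groupby([df["captured_at"].dt.date, "source"])["score"].mean()
daily.to_csv("daily_scores.csv")  # e.g. a file a PowerBI dataset can pick up
```

In practice a step like this would live inside a scheduled, tested pipeline rather than a standalone script, but it shows the storage-to-visualization flow the responsibilities above describe.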

About Graphene Services Pte Ltd

Founded: 2014
Type: Products & Services
Size: 100-1000 employees
Stage: Profitable

Similar jobs

Senior Big Data Engineer

at 6sense

Founded 2013  •  Product  •  100-500 employees  •  Raised funding
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Spark
Data Structures
Python
Remote only
5 - 9 yrs
Best in industry

It’s no surprise that 6sense is named a top workplace year after year — we have industry-leading technology developed and taken to market by a world-class team. 6sense is Top Rated on Glassdoor with a 4.9/5, and our CEO Jason Zintak was recognized as the #1 CEO in the small & medium business category by Glassdoor’s 2021 Top CEO Employees’ Choice Awards.

In 2021, the company was recognized for having the Best Company for Diversity, Best Company for Women, Best CEO, Best Company Culture, Best Company Perks & Benefits and Happiest Employees from the employee feedback platform Comparably. In addition, 6sense has also won several accolades that demonstrate its reputation as an employer of choice including the Glassdoor Best Place to Work (2022), TrustRadius Tech Cares (2021) and Inc. Best Workplaces (2022, 2021, 2020, 2019).

6sense reinvents the way organizations create, manage, and convert pipeline to revenue. The 6sense Revenue AI captures anonymous buying signals, predicts the right accounts to target at the ideal time, and recommends the channels and messages to boost revenue performance. Removing guesswork, friction and wasted sales effort, 6sense empowers sales, marketing, and customer success teams to significantly improve pipeline quality, accelerate sales velocity, increase conversion rates, and grow revenue predictably.

 

6sense is seeking a Data Engineer to become part of a team designing, developing, and deploying its customer centric applications.  

A Data Engineer at 6sense will have the opportunity to 

  • Create, validate and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements.
  • Improve our current data pipelines: boost their performance, remove redundancy, and establish a way to test before vs. after a change before rolling it out.
  • Debug any issues that arise from data pipelines, especially performance issues.
  • Experiment with new tools and new versions of Hive/Presto, etc.

Required qualifications and must have skills 

  • Excellent analytical and problem-solving skills
  • 6+ years work experience showing growth as a Data Engineer.
  • Strong hands-on experience with Big Data Platforms like Hadoop / Hive / Spark / Presto
  • Experience with writing Hive / Presto UDFs in Java
  • Strong experience in writing complex, optimized SQL queries across large data sets (a rough sketch follows this list)
  • Experience with optimizing queries and underlying storage
  • Comfortable with Unix / Linux command line
  • BE/BTech/BS or equivalent 
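
For a rough flavor of the Spark work implied above, here is a minimal PySpark sketch of an optimized aggregation over a large table; the table and column names are invented for illustration and are not from this posting.

```python
# Illustrative sketch only; the table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.table("warehouse.events")  # hypothetical Hive table

# Filter before the shuffle and aggregate in a single pass, rather than
# re-scanning or joining -- the kind of query-level optimization the
# requirements above ask about.
daily_rollup = (
    events
    .where(F.col("event_date") >= "2022-01-01")  # prunes date partitions
    .groupBy("event_date", "account_id")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("active_users"),
    )
)

daily_rollup.write.mode("overwrite").saveAsTable("warehouse.daily_rollup")
```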

Nice to have Skills 

  • Used key-value stores or NoSQL databases
  • Good understanding of Docker and container platforms like Mesos and Kubernetes
  • Security-first architecture approach 
  • Application benchmarking and optimization  
Job posted by
Shrutika Dhawan

Data Scientist

at Disruptive Fintech Startup

Agency job
via Unnati
Data Science
Data Analytics
R Programming
Python
Investment analysis
credit rating
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹12L / yr
If you are interested in joining a purpose-driven community dedicated to creating ambitious and inclusive workplaces, then be a part of a high-growth startup with a world-class team, building a revolutionary product!
 
Our client is a vertical fintech play focused on solving industry-specific financing gaps in the food sector through the application of data. The platform provides skin-in-the-game growth capital to much-loved F&B brands. Founded in 2019, they are VC-funded and based out of Singapore and Bangalore, India.
 
The founders are alumni of IIT-D, IIM-B and Wharton. They have 12+ years of experience spanning venture capital and corporate entrepreneurship at DFJ, Vertex and InMobi, a VP role at Snyder UAE, investment banking at Unitus Capital (leading the financial services practice), and institutional equities at Kotak. They have a team of high-quality professionals coming together for this mission to disrupt convention.
 
 
As a Data Scientist, you will develop a first-of-its-kind risk engine for revenue-based financing in India and automate investment appraisals for the company's different revenue-based financing products.

What you will do:
 
  • Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
  • Automating appraisal mechanisms for all newly launched products and revisiting the same for existing products
  • Back-testing investment appraisal models at regular intervals to improve them
  • Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
  • Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
  • Identifying relevant sub-sector criteria to score and rate investment opportunities internally

 


Candidate Profile

What you need to have:
 
  • Bachelor’s degree with relevant work experience of at least 3 years with CA/MBA (mandatory)
  • Experience in working in lending/investing fintech (mandatory)
  • Strong Excel skills (mandatory)
  • Previous experience in credit rating or credit scoring or investment analysis (preferred)
  • Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
  • Proficiency in data analysis (preferred)
  • Good verbal and written skills
Job posted by
Sarika Tamhane

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experience formulating ideas, building proofs-of-concept (POCs) and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Job posted by
Newali Hazarika

Data Engineer (Azure)

at Scry Analytics

Founded 2015  •  Product  •  100-500 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
NoSQL Databases
Apache Kafka
Remote only
3 - 8 yrs
₹15L - ₹20L / yr

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills, who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing scalable smart data lakes, data ingestion platforms, machine learning and NLP based analytics platforms, hyper-scale processing clusters, data mining and search engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end data solutions, optimal data processing pipelines and architecture dealing with large-volume, big data sets of varied data types.
  • Proficiency in Python, Linux and shell scripting.
  • Strong knowledge of working with PySpark dataframes and Pandas dataframes for writing efficient pre-processing and other data manipulation tasks.
  • Strong experience in developing the infrastructure required for data ingestion, and for optimal extraction, transformation and loading of data from a wide variety of data sources, using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).

  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.

  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
  • Strong analytic skills related to working with unstructured datasets.

 

Good to have (to filter or prioritize candidates)

  • Experience with testing libraries such as pytest for writing unit tests for the developed code (see the sketch after this list).
  • Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.

  • Knowledge and experience of data lakes, Docker and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.

 

  • Having experience with model versioning (MLflow) and data versioning will be beneficial.
  • Having experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great.
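
As a minimal illustration of the pytest point above, the sketch below unit-tests a small pandas pre-processing helper; the helper function, its behavior and the file name are invented for the example.

```python
# test_cleaning.py -- minimal pytest sketch; clean_scores() is a hypothetical
# pre-processing helper invented for this illustration.
import pandas as pd
import pytest


def clean_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing scores and clip scores into [0, 100]."""
    out = df.dropna(subset=["score"]).copy()
    out["score"] = out["score"].clip(lower=0, upper=100)
    return out


def test_clean_scores_drops_missing_and_clips():
    raw = pd.DataFrame({"score": [105.0, None, -3.0, 42.0]})
    cleaned = clean_scores(raw)
    assert len(cleaned) == 3                       # the None row is dropped
    assert cleaned["score"].between(0, 100).all()  # values clipped into range


def test_clean_scores_rejects_missing_column():
    # dropna(subset=...) raises KeyError when the column is absent.
    with pytest.raises(KeyError):
        clean_scores(pd.DataFrame({"other": [1]}))
```

Running `pytest test_cleaning.py` exercises both the happy path and the failure mode, which is the habit the bullet above is pointing at.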
Job posted by
Siddarth Thakur

Data Engineer

at Srijan Technologies

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
PySpark
SQL
Data modeling
Data Warehouse (DWH)
Informatica
ETL
Python
Remote only
2 - 6 yrs
₹8L - ₹13L / yr
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UIDs, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Track record of strong problem-solving, requirement gathering, and leading by example
  • Ability to thrive in a flexible and collaborative environment
  • Track record of completing projects successfully on time, within budget and as per scope
Job posted by
Priya Saini

Data Engineer (Only Immediate)

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Python
Big Data
Spark
Scala
Remote only
2 - 15 yrs
₹2L - ₹70L / yr
  • Proficiency in engineering practices and writing high-quality code, with expertise in at least one of Java, Scala or Python
  • Experience in big data technologies (Hadoop/Spark/Hive/Presto/HBase) and streaming platforms (Kafka/NiFi/Storm)
  • Experience in distributed search (Solr/Elasticsearch), in-memory data grids (Redis/Ignite), cloud-native apps and Kubernetes is a plus
  • Experience in building REST services and APIs following best practices of service abstraction and microservices; experience in orchestration frameworks is a plus
  • Experience in Agile methodology and CI/CD: tool integration, automation and configuration management
  • Added advantage: being a committer on one of the open-source big data technologies (Spark, Hive, Kafka, YARN, Hadoop/HDFS)
Job posted by
Alex P

Big Data Engineer

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Hive
Spark
Data engineering
Pig
Data Warehouse (DWH)
SQL
Pune
2.5 - 6 yrs
₹1L - ₹8L / yr
Job Title/Designation: Big Data Engineers - Hadoop, Pig, Hive, Spark
Employment Type: Full Time, Permanent

Job Description:
 
Work Location - Pune
Work Experience - 2.5 to 6 Years
 
Note - Candidates with short notice periods will be given preference.
 
Mandatory Skills:
  • Working knowledge and hands-on experience of Big Data / Hadoop tools and technologies.
  • Experience working with Pig, Hive, Flume, Sqoop, Kafka, etc.
  • Database development experience with a solid understanding of core database concepts, relational database design, ODS and DWH.
  • Expert-level knowledge of SQL and scripting, preferably UNIX shell scripting or Perl.
  • Working knowledge of data integration solutions and well-versed in an ETL tool (Informatica/DataStage/Ab Initio/Pentaho, etc.).
  • Strong problem-solving and logical reasoning ability.
  • Excellent understanding of all aspects of the software development lifecycle.
  • Excellent written and verbal communication skills.
  • Experience in Java will be an added advantage.
  • Knowledge of object-oriented programming concepts.
  • Exposure to ISMS policies and procedures.
Job posted by
Nikita Aher

Consulting Staff Engineer - Machine Learning

at Thinkdeeply

Founded 2014  •  Products & Services  •  20-100 employees  •  Raised funding
Machine Learning (ML)
R Programming
TensorFlow
Deep Learning
Python
Natural Language Processing (NLP)
PyTorch
Hyderabad
5 - 15 yrs
₹5L - ₹35L / yr

Job Description

Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.

 

Seeking an M.L. Engineer with high aptitude toward development. We will also consider coders with high aptitude in M.L. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, as well as having a significant amount of responsibility.

 

Experience

10+ Years

 

Location

Bozeman/Hyderabad

 

Skills

Required Skills:

Bachelor's/Master's or PhD in Computer Science, or related industry experience

3+ years of Industry Experience in Deep Learning Frameworks in PyTorch or TensorFlow

7+ Years of industry experience in scripting languages such as Python, R.

7+ years in software development doing at least some level of Researching / POCs, Prototyping, Productizing, Process improvement, Large-data processing / performance computing

Familiarity with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, random forests, etc.

Some experience in setting up large scale training data pipelines.

Some experience in using Cloud services such as AWS, GCP, Azure

Desired Skills:

Experience in building deep learning models for Computer Vision and Natural Language Processing domains

Experience in productionizing/serving machine learning in industry setting

Understand the principles of developing cloud native applications

 

Responsibilities

 

Collect, Organize and Process data pipelines for developing ML models

Research and develop novel prototypes for customers

Train, implement and evaluate shippable machine learning models

Deploy and iterate improvements of ML Models through feedback

Job posted by
Aditya Kanchiraju

Senior Data Engineer

at Noon Academy

Founded 2014  •  Product  •  100-500 employees  •  Bootstrapped
Python
Scala
Data engineering
Bengaluru (Bangalore)
3 - 7 yrs
₹15L - ₹35L / yr
Job Description

Be a part of the team that develops and maintains the analytics and data science platform. Perform functional, technical and architectural roles and play a key part in evaluating and improving data engineering, data warehouse design and BI systems. Develop technical architecture designs that support a robust solution, lead full-lifecycle availability of real-time Business Intelligence (BI) and enable the Data Scientists.

Responsibilities
  • Construct, test and maintain data infrastructure and data pipelines to meet business requirements
  • Develop process workflows for data preparation, modelling and mining
  • Manage configurations to build reliable datasets for analysis
  • Troubleshoot services, system bottlenecks and application integration
  • Design, integrate and document technical components and dependencies of the big data platform
  • Ensure best practices that can be adopted in the Big Data stack and share them across teams
  • Work hand in hand with application developers and data scientists to help build software that scales in terms of performance and stability

Skills
  • 3+ years of experience managing large-scale data infrastructure and building data pipelines/data products
  • Proficient in any data engineering technologies; proficiency in AWS data engineering technologies is a plus
  • Languages: Python, Scala or Go
  • Experience working with real-time streaming systems and handling millions of events per day
  • Experience developing and deploying data models on the cloud
  • Bachelor's/Master's in Computer Science or equivalent experience; ability to learn and use skills in new technologies
Job posted by
Sudha BR

Backend Developer

at Poker Yoga

Founded  •  Product  •  0-20 employees  •  Bootstrapped
Elastic Search
MongoDB
NoSQL Databases
Redis
Relational Database (RDBMS)
Bengaluru (Bangalore)
2 - 4 yrs
₹13L - ₹18L / yr
At Poker Yoga, we aim to make poker a tool for self-transformation: providing players the tools to improve their skill, the learning framework to bring skill to the core of their approach to the game, and experiences to enhance their perception. We are looking for passionate coders who love building products that speak for themselves. It's an invitation to join a family, not a company. Looking forward to working with you!
Job posted by
Anuj Kumar Kodam