About Graphene
Graphene is a Singapore-headquartered AI company that has been recognized as Singapore's Best Startup by Switzerland's Seedstars World and awarded Best AI Platform for Healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM DeepTech club. We are developing an AI platform that is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods, and financial services.
Graphene was founded by corporate leaders from Microsoft and P&G, and works closely with the Singapore government and universities to create cutting-edge technology that is gaining traction with many Fortune 500 companies in India, Asia, and the US.
Graphene's culture is grounded in delivering customer delight by recruiting high-potential talent and providing an intense learning and collaborative atmosphere; many ex-employees have gone on to be hired by large companies across the world.
Graphene has a six-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded yet profitable and debt-free. We have already built a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.
Job Title: Data Analyst
Job Description
The Data Analyst is responsible for data storage, enrichment, transformation, and gathering based on data requests, as well as testing and maintaining data pipelines.
Responsibilities and Duties
- Manage the end-to-end data pipeline, from data source to visualization layer
- Ensure data integrity and pre-empt data errors
- Maintain organized management and storage of data
- Provide quality assurance of data, working with quality assurance analysts where necessary
- Commission and decommission data sets
- Process confidential data and information according to guidelines
- Help develop reports and analyses
- Troubleshoot the reporting database environment and reports
- Manage and design the reporting environment, including data sources, security, and metadata
- Support the data warehouse in identifying and revising reporting requirements
- Support initiatives for data integrity and normalization
- Evaluate changes and updates to source production systems
- Train end users on new reports and dashboards
- Initiate data gathering based on data requirements
- Analyse the raw data to check whether requirements are satisfied
Qualifications and Skills
- Technologies required: Python, SQL/NoSQL databases (e.g., Cosmos DB)
- 2–5 years of experience, including data analysis using Python
- Understanding of the software development life cycle
- Plan, coordinate, develop, test, and support data pipelines; document and support reporting dashboards (Power BI)
- Automate the steps needed to transform and enrich data
- Communicate issues, risks, and concerns proactively to management; document processes thoroughly so that peers can assist with support as needed
- Excellent verbal and written communication skills
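The core of the role above is moving data from source to a validated, enriched form. A minimal sketch of one such gather → enrich → validate step, using only the Python standard library (the field names and rules are illustrative, not from any actual Graphene pipeline):

```python
import csv
import io

# Illustrative raw input; a real pipeline would read from a database or file.
RAW = "patient_id,visits\np001,3\np002,\np003,5\n"

def gather(source: str) -> list:
    """Read raw rows from a CSV source."""
    return list(csv.DictReader(io.StringIO(source)))

def enrich(rows: list) -> list:
    """Fill missing values and add a derived field."""
    out = []
    for row in rows:
        visits = int(row["visits"] or 0)       # treat missing visits as 0
        out.append({**row, "visits": visits, "is_frequent": visits >= 3})
    return out

def validate(rows: list) -> bool:
    """Pre-empt data errors: every row needs an id and non-negative visits."""
    return all(r["patient_id"] and r["visits"] >= 0 for r in rows)

rows = enrich(gather(RAW))
assert validate(rows)
```

In practice each stage would also log row counts and rejects, which is what "ensure data integrity" amounts to day to day.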

The Design Engineer is responsible for planning, developing, and improving product designs by converting concepts and requirements into detailed engineering drawings and specifications. The role involves collaboration with production, quality, and project teams to ensure designs are functional, cost-effective, and manufacturable.
Key Responsibilities:
- Create 2D and 3D designs using CAD software such as AutoCAD, SolidWorks, CATIA, or Creo
- Prepare detailed drawings, BOMs, and technical documentation
- Analyze design feasibility, materials, tolerances, and manufacturing methods
- Coordinate with production, quality, and vendors to resolve design-related issues
- Modify existing designs to improve performance, quality, or cost
- Ensure compliance with industry standards and customer specifications
- Support prototyping, testing, and validation activities
- Participate in design reviews and continuous improvement initiatives
- Strong understanding of core Python, data structures, OOP, exception handling, and logical problem-solving.
- Experience in at least one Python framework (FastAPI preferred, Flask/Django acceptable).
- Good knowledge of REST API development and API authentication (JWT/OAuth).
- Experience with SQL databases (MySQL/PostgreSQL) & NoSQL databases (MongoDB/Firestore).
- Basic understanding of cloud platforms (GCP or AWS).
- Experience with Git, branching strategies, and code reviews.
- Solid understanding of performance optimization and writing clean, efficient code.
- Develop, test, and maintain high-quality Python applications using FastAPI (or Flask/Django).
- Design and implement RESTful APIs with strong understanding of request/response cycles, data validation, and authentication.
- Work with SQL (MySQL/PostgreSQL) and NoSQL (MongoDB/Firestore) databases, including schema design and query optimization.
- Experience with Google Cloud (BigQuery, Dataflow, Notebooks) will be a strong plus.
- Work with cloud environments (GCP/AWS) for deployments, storage, logging, etc.
- Use version control tools such as Git/BitBucket for collaborative development.
- Support and build data pipelines using Dataflow/Beam and BigQuery if required.
- Experience with GCP services like BigQuery, Dataflow (Apache Beam), Cloud Functions, Notebooks, etc.
- Good to have: exposure to microservices architecture.
- Familiarity with Redis, Elasticsearch, or message queues (Pub/Sub, RabbitMQ, Kafka).
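The API-authentication requirement above (JWT/OAuth) boils down to signing and verifying tokens. A hedged sketch of HS256 JWT-style signing using only the standard library — a real service would use a maintained library such as PyJWT, and the secret and claims here are purely illustrative:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict, secret: bytes) -> str:
    """Produce a header.payload.signature token signed with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-1"}, b"demo-secret")
assert verify(token, b"demo-secret")
assert not verify(token, b"wrong-secret")
```

In a FastAPI service this verification would typically sit behind a dependency so that every protected route receives the decoded claims.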
Platform they will work on: building multiple tech offerings for customers.
Job Description: Should have experience in iOS mobile development with React Native, a startup mindset, and the ability to build for scale. The ideal candidate has founding-engineer or similar experience.
Years of experience: 4–6
ABOUT EPISOURCE:
Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.
The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question: how can data be "deployed"? Our analytics platforms and data lakes ingest huge quantities of data daily to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these form a foundation of tools and practices used by quantitative staff across the company.
What's our poison, you ask? We work with most of the popular frameworks and technologies, like Spark, Airflow, Ansible, Terraform, Docker, and ELK. For machine learning and NLP, we are big fans of Keras, spaCy, scikit-learn, pandas, and NumPy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.
ABOUT THE ROLE:
We're looking to hire someone to help scale machine learning and NLP efforts at Episource. You'll work with the team that develops the models powering Episource's product focused on NLP-driven medical coding. The problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting, and information extraction from clinical notes.
This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a large range of tools, algorithms, and languages. Most importantly, they need to be able to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.
During the course of a typical day with our team, expect to work on one or more projects around the following:
1. Create and maintain optimal data pipeline architectures for ML
2. Develop a strong API ecosystem for ML pipelines
3. Build CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform, and Ansible
4. Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems
5. Apply software engineering best practices across the development lifecycle: coding standards, code reviews, source management, build processes, testing, and operations
6. Deploy data pipelines in production using Infrastructure-as-Code platforms
7. Design scalable implementations of the models developed by our Data Science teams
8. Work on big data and distributed ML with PySpark on AWS EMR, and more!
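The multi-threaded event processing mentioned in item 4 follows a familiar producer/worker shape. An illustrative sketch with the standard library's `queue` and `threading` modules — a production system would add batching, retries, and monitoring, and the event payload here is invented:

```python
import queue
import threading

def worker(events: queue.Queue, results: list, lock: threading.Lock) -> None:
    """Pull events until a None sentinel arrives, then shut down."""
    while True:
        event = events.get()
        if event is None:
            events.task_done()
            break
        processed = {"id": event["id"], "value": event["value"] * 2}
        with lock:                      # results list is shared across workers
            results.append(processed)
        events.task_done()

events: queue.Queue = queue.Queue()
results: list = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(events, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for i in range(100):                    # enqueue 100 events
    events.put({"id": i, "value": i})
for _ in threads:
    events.put(None)                    # one sentinel per worker
events.join()
for t in threads:
    t.join()
assert len(results) == 100
```

The same shape scales out when the in-process queue is swapped for Kafka or SQS and the workers become separate processes or containers.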
BASIC REQUIREMENTS
- Bachelor's degree or greater in Computer Science, IT, or related fields
- Minimum of 5 years of experience in cloud, DevOps, MLOps, and data projects
- Strong experience with bash scripting, Unix environments, and building scalable/distributed systems
- Experience with automation/configuration management using Ansible, Terraform, or equivalent
- Very strong experience with AWS and Python
- Experience building CI/CD systems
- Experience with containerization technologies like Docker, Kubernetes, ECS, EKS, or equivalent
- Ability to build and manage application and performance monitoring processes
PYTHON DEVELOPER
We are seeking a skilled and experienced Python Developer with expertise in FastAPI to join our dynamic and innovative development team. As a Python Developer, you will be responsible for designing, developing, and maintaining high-quality Python applications and APIs using the FastAPI framework. Your primary focus will be on building efficient, scalable, and reliable backend systems that power our web and mobile applications.
Responsibilities:
1. Designing and developing robust Python applications using the FastAPI framework.
2. Collaborating with cross-functional teams, including front-end developers, product managers, and designers, to understand project requirements and translate them into technical specifications.
3. Optimizing application performance and scalability by implementing efficient coding practices and utilizing appropriate caching mechanisms.
4. Writing clean, maintainable, and testable code following software development best practices.
5. Conducting thorough testing and debugging of applications to ensure high-quality deliverables.
6. Integrating external services and third-party APIs into the application ecosystem.
7. Collaborating with team members to continuously improve development processes and tools.
8. Keeping up to date with the latest industry trends, technologies, and best practices related to Python development and FastAPI.
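The caching mechanisms mentioned in item 3 can be as simple as memoizing expensive lookups. A minimal sketch using the standard library's `functools.lru_cache` — the function and counter here are illustrative, and a real FastAPI service would layer web-tier caches (Redis, HTTP caching headers) on top:

```python
from functools import lru_cache

CALLS = {"count": 0}   # tracks how often the underlying work actually runs

@lru_cache(maxsize=128)
def expensive_lookup(key: str) -> str:
    """Stand-in for a slow computation or database query."""
    CALLS["count"] += 1
    return key.upper()

expensive_lookup("user-1")
expensive_lookup("user-1")   # second call is served from the cache
assert CALLS["count"] == 1
```

Choosing what to cache (and for how long) is the real design work; `lru_cache` is only appropriate for pure, per-process lookups.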
Requirements:
1. Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
2. At least 2 years of proven experience as a Python Developer, with a strong understanding of Python language fundamentals.
3. Solid knowledge of the FastAPI framework and its features, including routing, dependency injection, validation, and asynchronous programming, with one or more years of experience using it.
4. Experience building RESTful APIs using frameworks like FastAPI, Flask, or Django.
5. Familiarity with database systems such as MySQL, PostgreSQL, or MongoDB.
6. Strong problem-solving and analytical skills with the ability to troubleshoot and debug complex applications.
If you were looking for a rocket-ship, this is it!
* 1+ years of experience with web development building good quality production software
* Proficiency in at least one of: Ruby/Rails or Elixir/Phoenix (Good working knowledge of Elixir, OTP or OO programming)
* Good knowledge of working with APIs and of performance optimisation: identifying bottlenecks and bugs, and devising solutions to these problems
* Familiar with data stores like PostgreSQL & Redis
* You write clean code, automate and continuously deploy it in fast development cycles.
* Experience in designing and building scalable and distributed systems
* Experience with CI/CD using GitLab or AWS CodePipeline/CodeDeploy (exposure to AWS is highly desirable)
* You are proactive and pay attention to the security, scalability, performance, availability and usability of systems.
This is a contract-based role only, not a full-time opportunity.
Role Summary:
The Robotics Process Automation (RPA) Senior Developer is responsible for designing and coding the automation process components. The RPA Senior Developer develops the automation design to ensure it meets the specifications. The design must handle concurrency, scalability, restart and recovery, auditing, and object reuse for all the automations that are designed and developed. The senior developer also validates the automation by performing appropriate unit testing, and ensures configuration control is maintained at all times. The senior developer mentors junior developers and performs QC checks on the code components they develop.
DUTIES / ACCOUNTABILITIES
Responsibilities include, but may not be restricted to:
- Designs technical specifications for RPA that meet the requirements and handle all the non-functional requirements of concurrency, scalability, security, and restart and recovery.
- Develops and configures automation processes as per the technical design document to meet the defined requirements. Works on coding the more complicated automations or reusable components, and delegates to and mentors junior developers for the less complex components.
- Develops new processes/tasks/objects using core workflow principles that are efficient, well-structured, maintainable, and easy to understand.
- Complies with and helps to enforce design and coding standards, policies and procedures.
- Ensures documentation is well maintained.
- Ensures quality of coded components by performing thorough unit testing.
- Works collaboratively with test teams during the Product test and UAT phases to fix assigned bugs with quality.
- Reports status, issues and risks to tech leads on a regular basis
- Improves skills in automation products by completing automation certification.
- Mentors junior developers and performs code reviews for quality control.
Qualifications
EDUCATION
- Bachelor's degree in Engineering/Computer Science.
KEY SKILLS
- 5–8 years of IT experience and a good understanding of programming concepts. Should come from a programming background in a coding language (.NET, Java).
- A minimum of 2 years of working experience in RPA, with project experience on a minimum of three RPA implementations.
- Understands development methodology and lifecycle
- Should be trained on RPA tools (Blue Prism/Automation Anywhere).
- Self-motivated, team player, action and results oriented.
- Well organized, good communication and reporting skills.
Desirable:
- Prior experience in UI automation or robotics process automation is an added advantage.
- Certification in Robotics