Senior Data Engineer

at SkyPoint Cloud

Posted by Suraj Pattanaik
Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Full time
Skills
Scala
Azure Data Factory
Spark
PySpark
Apache Spark
Data engineering
Big Data
Hadoop
Data modeling
Logical data model
Physical data model
Who we are:
 
SkyPoint’s mission is to bring people and data together.
 
We are the industry's first Modern Data Stack Platform with built-in data lakehouse, account 360, customer 360, entity resolution, data privacy vault, ELT / Reverse ETL, data integration, privacy compliance automation, data governance, analytics, and managed services for organizations in several industries, including healthcare, life sciences, senior living, retail, hospitality, business services, and financial services.
 
Here is what you can expect to work on in this critical role:
 
You will lead efforts to extract maximum value from our data. Our platform processes billions of rows of data every month on behalf of millions of users.
 
How do our Senior Data Engineers spend their time?
 
You can expect to spend about 50% of your time building and scaling the SkyPoint Lakehouse and its data pipelines, and about 20% of your time defining and implementing DataOps methodologies.
 
Additionally, 20% of your time will be spent writing and optimizing queries and algorithms. Lastly, you’ll spend about 10% of your time supporting and monitoring pipelines.
 
Our team values collaboration, a passion for learning and a desire to become a master of your craft. We thrive in asynchronous communication. You will have a lot of support from leadership when you communicate proactively with detailed information about any roadblocks you may encounter.
 
Qualities of Senior Data Engineers Who Thrive in This Role
 
🔥 You are a driven, self-starter type of person who isn’t afraid to dig for answers, stays up-to-date on industry trends and is always looking for ways to enhance your knowledge (yes, Databricks-related podcasts count! 🎧)
 
💡Your skill set includes a blend of Databricks-related technologies in Azure or AWS
 
🖥️ Experience with Scala is a must! (you’ve got a software engineering hat)
 
💡 Working with Scala and Spark (Databricks), interacting with the Delta Lakehouse and Unity Catalog (a short sketch follows)
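
To make that concrete, here is a minimal Scala sketch of the kind of Spark-on-Databricks work described above, assuming a cluster with Unity Catalog enabled; the three-level table names and the entity_id/updated_at columns are illustrative placeholders, not SkyPoint's actual schema.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object LakehouseSketch {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession already exists; getOrCreate reuses it.
    val spark = SparkSession.builder().appName("lakehouse-sketch").getOrCreate()

    // Read a Delta table via its three-level Unity Catalog name (placeholder).
    val events = spark.read.table("main.analytics.raw_events")

    // Keep only the latest record per entity (illustrative dedup logic).
    val w = Window.partitionBy(col("entity_id")).orderBy(col("updated_at").desc)
    val latest = events
      .withColumn("rn", row_number().over(w))
      .filter(col("rn") === 1)
      .drop("rn")

    // Write the result back as a managed Delta table registered in the catalog.
    latest.write.format("delta").mode("overwrite")
      .saveAsTable("main.analytics.latest_events")
  }
}
```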
 
Skills & Experience Required:
 
💡3+ years of industry experience
🔥 Spark (Scala), Databricks
💡 Strong backend programming skills for data processing, with practical knowledge of availability, scalability, clustering, microservices, multi-threaded development and performance patterns.
- Experience with a wide array of algorithms and data structures
- Experience with workflow orchestration platforms such as ADF, Glue, or Airflow
- Strong distributed-systems fundamentals
- Strong command of REST APIs
- Experience with NoSQL databases
- Most recent work experience MUST include work on Scala and Spark (Databricks)
 
Educational Level:
 
🔥 BS / BE / MS in Computer Science from a top-tier school.

Perks of working with us:
 
•  Professional development and training opportunities
•  Company happy hours and fun team-building activities
•  Flexible work hours, plus the benefit of a home workstation
•  Internet reimbursement within the company's permissible limits
•  Opportunity to work with a US-based SaaS start-up on new tech stacks, including Azure Cloud
•  Meal cards, gift hampers, and other incentives
•  Awards and recognition programs
•  Industry-focused certifications and ongoing training opportunities
•  Competitive total compensation package (salary + equity) and performance-based bonus plans

About SkyPoint Cloud

Founded: 2019
Stage: Profitable
About

SkyPoint Cloud:


Our platform enables organizations to take control of their customer data, deliver unmatched customer experiences and build brand loyalty.

 

Industry leaders and over 10 million end-users currently use SkyPoint.


SkyPoint delivers unmatched customer experiences with world-class AI and analytics. We offer consumption-based pricing and a wealth of features that drive the best outcomes for customers, employees, and brands. Using 200+ built-in connectors, SkyPoint helps you manage large volumes of data and gain actionable insights by unifying disparate systems and providing a single source of truth.


Trust is the most important part of your business and trust is what matters most to the SkyPoint Cloud team. Together, we solve complexities with transparency and turn your business into a trusted brand.


Specialties

Modern Data Stack Platform, Machine Learning, Identity Resolution, Data Privacy, CCPA, GDPR, Privacy Compliance, Consent Management, Big Data, HIPAA, PCI, Sensitive Data Vault, Privacy API, Data Lake, Data Warehouse, Data Lakehouse, AI, Data as a Product, Semantic Layer, Customer 360, Data Apps, and Data Transformation


Similar jobs

Agency job via Talent500 by ANSR (posted by Raghu R)
Bengaluru (Bangalore)
3 - 9 yrs
₹10L - ₹30L / yr
Informatica MDM
Data modeling
IDQ

Primary Duties and Responsibilities 

  • Experience with the Informatica Multidomain MDM 10.4 tool suite preferred
  • Partner with data architects and engineers to ensure an optimal data model design and implementation for each MDM domain, in accordance with industry and MDM best practices
  • Work with data governance and business steward(s) to design, develop, and configure business rules for data validation, standardization, matching, and merging
  • Implement data quality policies, procedures, and standards along with the Data Governance Team for maintenance of customer, location, product, and other data domains; experience with the Informatica IDQ tool suite preferred
  • Perform data analysis and source-to-target mapping for ingest and egress of data
  • Maintain compliance with change control, SDLC, and development standards
  • Champion the creation of and contribution to technical documentation and diagrams
  • Establish a technical vision and strategy with the team and work with the team to turn it into reality
  • Emphasize coaching and training to cultivate skill development of team members within the department
  • Keep up with industry best practices and trends
  • Monitor, troubleshoot, maintain, and continuously improve the MDM ecosystem

Secondary Duties and Responsibilities

  • May participate in off-hours on-call rotation.
  • Attends and is prepared to participate in team, department and company meetings.
  • Performs other job-related duties and special projects as assigned.

Supervisory Responsibilities

This is a non-management role

Education and Experience

  • Bachelor's degree in MIS, Computer Sciences, Business Administration, or related field; or High School Degree/General Education Diploma and 4 years of relevant experience in lieu of Bachelor's degree.
  • 5+ years of experience in implementing MDM solutions using Informatica MDM.
  • 2+ years of experience in data stewardship, data governance, and data management concepts.
  • Professional working knowledge of the Customer 360 solution.
  • Professional working knowledge of multi-domain MDM data modeling.
  • Strong understanding of company master data sets and their application in complex business processes; able to support data profiling, extraction, and cleansing activities using Informatica Data Quality (IDQ).
  • Strong knowledge in the installation and configuration of the Informatica MDM Hub.
  • Familiarity with real-time, near real-time and batch data integration.
  • Strong experience and understanding of Informatica toolsets including Informatica MDM Hub, Informatica Data Quality (IDQ), Informatica Customer 360, Informatica EDC, Hierarchy Manager (HM), Business Entity Service Model, Address Doctor, Customizations & Composite Services
  • Experience with event-driven architectures (e.g. Kafka, Google Pub/Sub, Azure Event Hub, etc.).
  • Professional working knowledge of CI/CD technologies such as Concourse, TeamCity, Octopus, Jenkins, and CircleCI.
  • Team player that exhibits high energy, strategic thinking, collaboration, direct communication and results orientation.

Physical Requirements

  • Visual requirements include: ability to see detail at near range with or without correction. Must be physically able to perform sedentary work: occasionally lifting or carrying objects of no more than 10 pounds, and occasionally standing or walking, reaching, handling, grasping, feeling, talking, hearing and repetitive motions.

Working Conditions

  • The duties of this position are performed through a combination of an open office setting and remote work options. Full remote work options available for employees that reside outside of the Des Moines Metro Area. There is frequent pressure to meet deadlines and handle multiple projects in a day.

Equipment Used to Perform Job

  • Windows or Mac computer and various software solutions.

Financial Responsibility

  • Responsible for company assets including maintenance of software solutions.

Contacts

  • Has frequent contact with office personnel in other departments related to the position, as well as occasional contact with users and customers. Engages stakeholders from other areas of the business.

Confidentiality

  • Has access to confidential information including trade secrets, intellectual property, various financials, and customer data.
Posted by Nelson Xavier
Bengaluru (Bangalore), Pune, Hyderabad
4 - 8 yrs
₹10L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

Job responsibilities

- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges

- You will pair to write clean, iterative code based on TDD (a minimal sketch follows this list)

- Leverage various continuous delivery practices to deploy, support and operate data pipelines

- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

- Create data models and speak to the tradeoffs of different modeling approaches

- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

- Encourage open communication and advocate for shared outcomes
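
A minimal sketch of the TDD-friendly style referenced above: keep Spark transformations as pure functions over DataFrames so they can be unit-tested against a local SparkSession. The object, columns, and toy rows below are hypothetical, not from any client project.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object Transformations {
  // Pure transformation: no I/O inside, so it is trivially testable.
  def dailyRevenue(orders: DataFrame): DataFrame =
    orders
      .filter(col("status") === "COMPLETED")
      .groupBy(col("order_date"))
      .agg(sum(col("amount")).as("revenue"))
}

object TransformationsSpec {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("tdd-sketch").getOrCreate()
    import spark.implicits._

    // Arrange: an in-memory fixture standing in for a real source table.
    val input = Seq(
      ("2024-01-01", "COMPLETED", 10.0),
      ("2024-01-01", "CANCELLED", 99.0),
      ("2024-01-01", "COMPLETED", 5.0)
    ).toDF("order_date", "status", "amount")

    // Act + assert: the cancelled row is excluded, totals are summed.
    val result = Transformations.dailyRevenue(input).collect()
    assert(result.length == 1 && result(0).getDouble(1) == 15.0)
    spark.stop()
  }
}
```

In a real codebase the assertion would live in a ScalaTest suite rather than a main method; the point is the red-green loop over a pure function.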

 

Technical skills

- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop

- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

- You're genuinely excited about data infrastructure and operations, and are familiar with working in cloud environments

 



Professional skills

- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

- An interest in coaching, sharing your experience and knowledge with teammates

- You enjoy influencing others and always advocate for technical excellence while being open to change when needed

- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more

Posted by Varnisha Sethupathi
Chennai
5 - 8 yrs
₹15L - ₹25L / yr
SQL
Python
Analytical Skills
Data modeling
Data Visualization
+1 more

Role : Senior Customer Scientist 

Experience : 6-8 Years 

Location : Chennai (Hybrid) 
 
 

Who are we? 
 
 

A young, fast-growing AI and big data company, with an ambitious vision to simplify the world's choices. Our clients are top-tier enterprises in the banking, e-commerce and travel spaces. They use our core AI-based choice engine, maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You'll find Crayon Boxes in Chennai and Singapore. But you'll find Crayons in every corner of the world. Especially where our client projects are: UAE, India, SE Asia and, pretty soon, the US.
 
 

Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start. 
 

 
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all. 
 

 
 

Can you say “Yes, I have!” to the below? 
 
 

  1. Experience with exploratory analysis, statistical analysis, and model development
  2. Knowledge of advanced analytics techniques, including predictive modelling (logistic regression), segmentation, forecasting, data mining, and optimization (a brief sketch follows this list)
  3. Knowledge of software packages such as SAS, R, and RapidMiner for analytical modelling and data management
  4. Strong experience in SQL/Python/R, working efficiently at scale with large data sets
  5. Experience in using Business Intelligence tools such as Power BI, Tableau, and Metabase for business applications

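As referenced in item 2, a brief illustration of predictive modelling with logistic regression. It is written in Scala with Spark MLlib to keep all sketches on this page in one language (the posting itself lists SAS, R, and Python); the feature names and toy rows are invented.

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("lr-sketch").getOrCreate()
    import spark.implicits._

    // Toy training data: (label, tenure in months, has-autopay flag).
    val df = Seq(
      (0.0, 12.0, 1.0),
      (1.0, 2.0, 0.0),
      (0.0, 30.0, 1.0),
      (1.0, 1.0, 0.0)
    ).toDF("label", "tenure_months", "has_autopay")

    // Assemble raw columns into the single feature vector MLlib expects.
    val features = new VectorAssembler()
      .setInputCols(Array("tenure_months", "has_autopay"))
      .setOutputCol("features")
      .transform(df)

    // Fit a logistic regression and inspect the learned coefficients.
    val model = new LogisticRegression().setMaxIter(10).fit(features)
    println(s"Coefficients: ${model.coefficients}, intercept: ${model.intercept}")
    spark.stop()
  }
}
```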
 
 

 

Can you say “Yes, I will!” to the below? 
 
 

  1. Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
  2. Develop creative solutions and build prototypes for business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
  3. Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess relative impact and extract trends.
  4. Coordinate individual teams to fulfil client requirements and manage deliverables.
  5. Communicate and present complex concepts to business audiences.
  6. Travel to client locations when necessary.

 

 

Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications, and professional competence. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions.
 
 

More about Crayon: https://www.crayondata.com/

More about maya.ai: https://maya.ai/

 

 

Technology service company
Agency job
via Jobdost by Riya Roy
Remote only
5 - 10 yrs
₹10L - ₹20L / yr
Relational Database (RDBMS)
NOSQL Databases
NOSQL
Performance tuning
SQL
+10 more

Preferred Education & Experience:

  • Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience; at least 3 years of relevant experience in lieu of the above if from a different stream of education.

  • Well-versed in, with 5+ years of hands-on, demonstrable experience in:
    ▪ Data Analysis & Data Modeling
    ▪ Database Design & Implementation
    ▪ Database Performance Tuning & Optimization
    ▪ PL/pgSQL & SQL

  • 5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).

  • 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views (a brief sketch follows this list).

  • Demonstrable working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.

  • Demonstrable working experience in database read and write performance tuning and optimization.

  • Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are an added advantage.

  • Knowledge and working experience in Oil & Gas, Financial, & Automotive Domains is a plus

  • Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
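
As flagged in the PL/pgSQL item above, a small sketch pairing database-side logic with backend code: define a PostgreSQL function in PL/pgSQL and call it over JDBC. Scala is used to keep the sketches on this page in one language; the connection string, credentials, and the order_lines/order_total names are placeholders, and the PostgreSQL JDBC driver is assumed to be on the classpath.

```scala
import java.sql.DriverManager

object PgSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder connection details.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/appdb", "app_user", "secret")
    try {
      // A simple PL/pgSQL function (CREATE OR REPLACE keeps it idempotent).
      conn.createStatement().execute(
        """CREATE OR REPLACE FUNCTION order_total(p_order_id INT)
          |RETURNS NUMERIC LANGUAGE plpgsql AS $$
          |BEGIN
          |  RETURN (SELECT COALESCE(SUM(amount), 0)
          |          FROM order_lines WHERE order_id = p_order_id);
          |END;
          |$$;""".stripMargin)

      // Call it with a parameterized statement to avoid SQL injection.
      val ps = conn.prepareStatement("SELECT order_total(?)")
      ps.setInt(1, 42)
      val rs = ps.executeQuery()
      if (rs.next()) println(s"Total for order 42: ${rs.getBigDecimal(1)}")
    } finally conn.close()
  }
}
```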

Bengaluru (Bangalore), Chennai, Pune, Gurugram
4 - 8 yrs
₹9L - ₹14L / yr
ETL
Data Warehouse (DWH)
Data engineering
Data modeling
BRIEF JOB RESPONSIBILITIES:

• Responsible for designing, deploying, and maintaining an analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modeling and data warehousing at cloud scale using cloud-native solutions
• Perform development, QA, and DevOps roles as needed to ensure total end-to-end responsibility of solutions

COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, shell scripting, SQL, Java, or similar languages and tools appropriate for large-scale data processing
• Experience with acquiring data from varied sources such as APIs, data queues, flat files, and remote databases (a short batch-ETL sketch follows this list)
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
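
As noted in the list above, a short illustrative batch-ETL sketch in Spark (Scala): extract from a flat-file landing zone, apply simple cleansing transforms, and load partitioned Parquet for downstream analytics engines. The bucket paths and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BatchEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("batch-etl-sketch").getOrCreate()

    // Extract: schema-on-read from a CSV drop zone (placeholder S3 path;
    // an ADLS path such as abfss://... would work the same way).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://landing-zone/sales/*.csv")

    // Transform: deduplicate, normalize timestamps, stamp the load date.
    val cleaned = raw
      .dropDuplicates("order_id")
      .withColumn("order_ts", to_timestamp(col("order_ts")))
      .withColumn("ingest_date", current_date())

    // Load: partitioned Parquet for downstream query engines.
    cleaned.write.mode("append")
      .partitionBy("ingest_date")
      .parquet("s3://warehouse/sales")
  }
}
```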
AppsTek Corp
Agency job
via Venaatics Consulting by Mastanvali Shaik
Gurugram, Chennai
6 - 10 yrs
Best in industry
Data management
Data modeling
PostgreSQL
SQL
MySQL
+3 more

Function: Sr. DB Developer

Location: India / Gurgaon / Tamil Nadu

 

>> THE INDIVIDUAL

  • Have a strong background in data platform creation and management.
  • Possess in-depth knowledge of data management, data modelling, and ingestion; able to develop data models and ingestion frameworks based on client requirements and advise on system optimization.
  • Hands-on experience with SQL databases (PostgreSQL) and NoSQL databases (MongoDB).
  • Hands-on experience in database performance tuning.
  • Good to have: knowledge of database setup in cluster nodes.
  • Well versed in data security aspects and data governance frameworks.
  • Hands-on experience with Spark, Airflow, and ELK.
  • Good to have: knowledge of a data cleansing tool such as Apache Griffin.
  • Preferably involved during project implementation, bringing background in both business knowledge and technical requirements.
  • Strong analytical and problem-solving skills; exposure to data analytics and knowledge of advanced data analytical tools will be an advantage.
  • Strong written and verbal communication skills (presentation skills).
  • Certifications in the above technologies are preferred.

 

>> Qualification

 

  1. B.Tech / B.E. / MCA / M.Tech from a reputed institute.

More than 4 years of experience in data management, data modelling, and ingestion; 8-10 years of total experience.

Pune
1 - 5 yrs
₹3L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Data Structures
Algorithms
+17 more
 
SD (ML and AI) job description:

Advanced degree in computer science, math, statistics, or a related discipline (a master's degree is required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important): linear algebra; discrete math; differential equations (ODEs and numerical); theory of statistics 1; numerical analysis 1 (numerical linear algebra) and 2 (quadrature); abstract algebra; number theory; real analysis; complex analysis; intermediate analysis (point-set topology)
Strong written and verbal communications
Hands on experience on NLP and NLG
Experience in advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and experience applying them
 
Remote only
4 - 13 yrs
₹5L - ₹15L / yr
PostgreSQL
Relational Database (RDBMS)
Data modeling
Software Development
Big Data

Qentelli is seeking a Solution Architect to untangle and redesign a huge, aging monolithic legacy system. The interesting part is that the new system will be commissioned module by module, with the legacy system phasing off accordingly, so your design will have both a cutting-edge future state and a transition state to get there. The implementation today is all Microsoft tech stack and will continue to be on a newer Microsoft tech stack. There is also a critical component of API management to be introduced into the solution. Performance and scalability will be at the center of your solution architecture, and data modelling is of especially high importance.

 

You'll have a distributed team, with onshore in the US and offshore in India. As a Solution Architect, you should be able to wear multiple hats: working with the client on solutioning and getting it implemented by engineering and infrastructure teams both onshore and offshore. The right candidate will be excellent at fleshing out and documenting every finer detail of the solution, articulate in communicating with their teams, disciplined at getting it implemented, and passionate about client success.

 

TECHNOLOGIES YOU’LL NEED TO KNOW

Greetings from Qentelli Solutions Private Limited!

 

We are hiring for PostgreSQL Developer

Experience: 4 to 12 years

Job Location: Hyderabad

 

Job Description:

  • Experience in RDBMS (PostgreSQL preferred), Database Backend development, Data Modelling, Performance Tuning, exposure to NoSQL DB, Kubernetes or Cloud (AWS/Azure/GCS)

 

Skillset for Developer-II:

  • Experience with any big data tools (NiFi, Kafka, Spark, Sqoop, Storm, Snowflake), database backend development, Python, NoSQL DBs, API exposure, and cloud or Kubernetes exposure (a brief streaming sketch follows)
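
As flagged above, a minimal sketch of the Kafka-plus-Spark pairing in this skillset: a Spark Structured Streaming job (Scala) that reads a topic and lands the raw records as Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object KafkaIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest-sketch").getOrCreate()

    // Read a Kafka topic as an unbounded streaming DataFrame.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land raw records as Parquet; the checkpoint enables recovery on restart.
    stream.writeStream
      .format("parquet")
      .option("path", "/data/raw/events")
      .option("checkpointLocation", "/chk/events")
      .start()
      .awaitTermination()
  }
}
```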

 

Skillset for API Developer:

  • API development with extensive knowledge of any RDBMS (PostgreSQL preferred) and exposure to cloud or Kubernetes
Posted by Shipra Agrawal
NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹7L - ₹9L / yr
Power BI
Business Intelligence (BI)
DAX
Data modeling
+3 more

 

Job Title: Power BI Developer (Onsite)

Location: Park Centra, Sec 30, Gurgaon

CTC: 8 LPA

Time: 1:00 PM - 10:00 PM

  

Must Have Skills: 

  • Power BI Desktop Software
  • DAX queries
  • Data modeling
  • Row-level security
  • Visualizations
  • Data Transformations and filtering
  • SSAS and SQL

 

Job description:

 

We are looking for a Power BI Analytics Lead responsible for efficient data visualization, DAX queries, and data modeling. The candidate will work on creating complex Power BI reports, including complex M and DAX queries, data modeling, row-level security, visualizations, data transformations, and filtering, and will work closely with the client team to provide solutions and suggestions on Power BI.

 

Roles and Responsibilities:

 

  • Accurate, intuitive, and aesthetic Visual Display of Quantitative Information: We generate data, information, and insights through our business, product, brand, research, and talent teams. You would assist in transforming this data into visualizations that represent easy-to-consume visual summaries, dashboards and storyboards. Every graph tells a story.
  • Understanding Data: You would be performing and documenting data analysis, data validation, and data mapping/design. You would be mining large datasets to determine their characteristics and select appropriate visualizations.
  • Project Owner: You would develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions, and would be continuously reviewing and improving existing systems and collaborating with teams to integrate new systems. You would also contribute to the overall data analytics strategy by knowledge sharing and mentoring end users.
  • Perform ongoing maintenance & production of Management dashboards, data flows, and automated reporting.
  • Manage upstream and downstream impact of all changes on automated reporting/dashboards
  • Independently apply problem-solving ability to identify meaningful insights to business
  • Identify automation opportunities and work with a wide range of stakeholders to implement the same.
  • The ability and self-confidence to work independently and increase the scope of the service line

 

Requirements: 

  • 3+ years of work experience as an Analytics Lead / Senior Analyst / Sr. PBI Developer.
  • Sound understanding and knowledge of PBI Visualization and Data Modeling with DAX queries
  • Experience in leading and mentoring a small team.

 

 

 

IdeaSpark
Agency job
via work-o-hire by sayli shiralkar
Remote, Hyderabad
0 - 3 yrs
₹1L - ₹6L / yr
Tableau
Dashboard
Data modeling
Essential Responsibilities

  • Techno-functional role: set up new reports and manage business reporting, templated analytics, and automation of reports
  • Build a reporting framework within the bank for a seamless reporting experience for stakeholders, from frontline branch personnel to top management
  • Design and build robust, scalable models/architecture which can be leveraged for reporting and analytics
  • Develop and build highly collaborative, close working relationships with business partners and senior leadership stakeholders
  • Test reports for formatting and data correctness, and publish workbooks on the Tableau server for scheduled refreshes
  • Document dashboard needs, the critical success factors of the business, related KPIs and measures in the dataset, and the designs