Data Platform Engineer

at Hypersonix Inc

Posted by Manu Panwar
Remote, Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹30L / yr
Full time
Skills
Python
Java
Scala
Apache Kafka
Data Warehousing
Data Warehouse (DWH)
Hadoop
Data migration
API
Spark
NoSQL Databases
Data Engineer
At Hypersonix, our platform technology aims to solve recurring and persistent problems in the data platform domain. We’ve established ourselves as a leading developer of innovative software solutions. We’re looking for a highly skilled Data Platform Engineer to join our program and platform design team. Our ideal candidate will have expert knowledge of software development processes and solid experience in designing, developing, evaluating, and troubleshooting data platforms and data-driven applications. If finding issues and fixing them with beautiful, meticulous code is among the talents that make you tick, we’d like to hear from you.

Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to shape design and development approaches that let the application and platform scale and serve evolving needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute full software development life cycles (SDLC) for each assigned project, adhering to company standards and expectations

Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management and monitoring of the company’s 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns; write code and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations, while leveraging the global technology stack and making localized improvements
• Manage large volumes of data and process them in real-time and batch modes as needed
• Track, document, and maintain software system functionality, both internally and externally, leveraging opportunities to improve engineering productivity
• Perform code reviews, Git operations and CI/CD; mentor junior team members and assign them tasks

Responsibilities:
• Writing reusable, testable, and efficient code
• Design and implementation of low-latency, high-availability, and performant applications
• Integration of user-facing elements developed by front-end developers with server-side logic
• Implementation of security and data protection
• Integration of data storage solutions

Skills and Qualifications
• Bachelor’s degree in software engineering or information technology
• 5-7 years’ experience engineering software and networking platforms
• 5+ years of professional experience with Python, Java or Scala
• Strong experience in API development and API integration
• Proven knowledge of data migration, platform migration, CI/CD processes, and orchestration workflows such as Airflow, Luigi or Azkaban (see the sketch after these lists)
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment
• Proven ability to document design processes, including development, tests, analytics, and troubleshooting
• Experience with rapid development cycles in a web-based/multi-cloud environment
• Strong scripting and test automation abilities

Good to have Qualifications
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker and Kubernetes
• Proven experience designing interactive applications and large-scale platforms
• Desire to continue to grow professional capabilities with ongoing training and educational opportunities.
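
For illustration, a minimal sketch of the kind of orchestration workflow named above, assuming Apache Airflow 2.x; the DAG id, schedule and task bodies are hypothetical placeholders, not Hypersonix's actual pipeline:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull a batch from an upstream source.
        print("extracting batch")

    def load():
        # Placeholder: write the transformed batch to the warehouse.
        print("loading batch")

    # One daily DAG with two dependent tasks: extract, then load.
    with DAG(
        dag_id="example_platform_pipeline",  # hypothetical name
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task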

About Hypersonix Inc

Hypersonix offers a unified, AI-Powered Intelligent Enterprise Platform designed to help leading enterprises drive profitable revenue growth.
Founded
2018
Type
Product
Size
100-500 employees
Stage
Profitable

Similar jobs

Analytics Head

at Brand Manufacturer for Bearded Men

Agency job
via Qrata
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
Relational Database (RDBMS)
Data architecture
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics Head

Technical must haves:

● Extensive exposure to at least one Business Intelligence platform (if possible, QlikView/Qlik Sense); if not Qlik, then ETL tool knowledge, e.g. Informatica/Talend
● At least one data query language – SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, Data Architecture/Schemas, Data Integrations, Data Models and Data Flows is a must
● A technical degree like BE/B.Tech is a must

Technical Ideal to have:

● Exposure to our tech stack – PHP
● Microsoft workflows knowledge

Behavioural Pen Portrait:

● Must Have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command
over spoken and written English
● Ideal: Ability to Collaborate

The preferred location is Ahmedabad; however, for exemplary talent we are open to a remote working model (to be discussed).
Job posted by
Prajakta Kulkarni

Game Optimisation Analyst

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Analytics
Data Science
SQL
NoSQL Databases
Python
Bengaluru (Bangalore)
1 - 8 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Analyst (Games Optimisation) you will be responsible for optimising in-game features and design, utilising A/B testing and multivariate testing of in-game components.


What you will be doing 

  • Investigate how millions of players interact with Kwalee games.

  • Perform statistical analysis to quantify the relationships between game elements and player engagement.

  • Design experiments which extract the most valuable information in the shortest time.

  • Develop testing plans which reveal complex interactions between game elements. 

  • Collaborate with the design team to come up with the most effective tests.

  • Regularly communicate results with development, management and data science teams.

 
How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.

  • You'll think creatively, be motivated by challenges and constantly strive for the best.

  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and requirements

  • A degree in a numerically focussed discipline such as Maths, Physics, Economics, Chemistry, Engineering or Biological Sciences

  • A record of outstanding problem solving ability in a commercial or academic setting

  • Experience using Python for data analysis and visualisation.

  • An excellent knowledge of statistical testing and experiment design (see the sketch after this list).

  • Experience manipulating data in SQL and/or NoSQL databases.
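
As a flavour of the statistical testing mentioned in this list, here is a minimal sketch assuming SciPy; the completion data are made-up illustrative numbers, not Kwalee data:

    from scipy import stats

    # Completion flags for players under each game variant (hypothetical data).
    control = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    variant = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]

    # Two-sample t-test comparing engagement between the A and B groups.
    t_stat, p_value = stats.ttest_ind(control, variant)
    print(f"t={t_stat:.3f}, p={p_value:.3f}")  # a small p suggests a real difference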


We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment

  • In addition to a competitive salary we also offer private medical cover and life assurance

  • Creative Wednesdays! (Design and make your own games every Wednesday)

  • 20 days of paid holidays plus bank holidays 

  • Hybrid model available depending on the department and the role

  • Relocation support available 

  • Great work-life balance with flexible working hours

  • Quarterly team building days - work hard, play hard!

  • Monthly employee awards

  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Software Architect/CTO

at Blenheim Chalcot IT Services India Pvt Ltd

SQL Azure
ADF
Azure data factory
Azure Datalake
Azure Databricks
ETL
PowerBI
Apache Synapse
Data Warehouse (DWH)
API
SFTP
JSON
Java
Python
C#
Javascript
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Mumbai
5 - 8 yrs
₹25L - ₹30L / yr
As a hands-on Data Architect, you will be part of a team responsible for building enterprise-grade
Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types
including text, video and audio through to live stream and IoT in an agile project delivery
environment with a focus on DataOps and Data Observability. You will work with Azure SQL
Databases, Synapse Analytics, Azure Data Factory, Azure Datalake Gen2, Azure Databricks, Azure
Machine Learning, Azure Service Bus, Azure Serverless (LogicApps, FunctionApps), Azure Data
Catalogue and Purview among other tools, gaining opportunities to learn some of the most
advanced and innovative techniques in the cloud data space.
You will be building Power BI based analytics solutions to provide actionable insights into customer
data, and to measure operational efficiencies and other key business performance metrics.
You will be involved in the development, build, deployment, and testing of customer solutions, with
responsibility for the design, implementation and documentation of the technical aspects, including
integration to ensure the solution meets customer requirements. You will be working closely with
fellow architects, engineers, analysts, team leads and project managers to plan, build and roll
out data-driven solutions.
Expertise:
• Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Synapse Analytics)
• Demonstrated expertise in data modelling and data warehouse methodologies and best practices
• Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools
• Integration of data feeds utilising both structured (e.g. XML/JSON) and flat schemas (e.g. CSV, TXT, XLSX) across a wide range of electronic delivery mechanisms (API/SFTP/etc.) (see the sketch after this list)
• Azure DevOps knowledge essential for CI/CD of data ingestion pipelines and integrations
• Experience with object-oriented/object function scripting languages such as Python, Java, JavaScript, C#, Scala, etc. is required
• Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must
• Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage
• Expertise with big data tools (Hadoop, Spark, Kafka, NoSQL databases, stream-processing systems) is a plus
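
As an illustration of the feed-integration point above, a minimal sketch in Python that normalises one structured (JSON) and one flat (CSV) feed into a common row shape; file and field names are hypothetical:

    import csv
    import json

    def rows_from_json(path):
        # Structured feed: a JSON array of records.
        with open(path) as f:
            for record in json.load(f):
                yield {"id": record["id"], "amount": float(record["amount"])}

    def rows_from_csv(path):
        # Flat feed: delimited rows with a header line.
        with open(path, newline="") as f:
            for record in csv.DictReader(f):
                yield {"id": record["id"], "amount": float(record["amount"])}

    def ingest(paths):
        # Merge both feed types into one normalised list ready for staging.
        rows = []
        for path in paths:
            reader = rows_from_json if path.endswith(".json") else rows_from_csv
            rows.extend(reader(path))
        return rows
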
Essential Experience:
• 5 or more years of hands-on experience in a data architect role covering the development of ingestion, integration, data auditing, reporting, and testing with the Azure SQL tech stack
• Full data and analytics project lifecycle experience (including costing and cost management of data solutions) in an Azure PaaS environment is essential
• Microsoft Azure and Data certifications, at least fundamentals, are a must
• Experience using agile development methodologies, version control systems and repositories is a must
• A good, applied understanding of the end-to-end data process development life cycle
• A good working knowledge of data warehouse methodology using Azure SQL
• A good working knowledge of the Azure platform, its components, and the ability to leverage its resources to implement solutions is a must
• Experience working in the public sector or in an organisation servicing the public sector is a must
• Ability to work to demanding deadlines, keep momentum and deal with conflicting priorities in an environment undergoing a programme of transformational change
• The ability to contribute and adhere to standards, with excellent attention to detail, strongly driven by quality
Desirables:
• Experience with AWS or Google Cloud platforms will be an added advantage
• Experience with Azure ML services will be an added advantage

Personal Attributes:
• Articulate and clear in communications to mixed audiences: in writing, through presentations and one-to-one
• Ability to present highly technical concepts and ideas in business-friendly language
• Ability to effectively prioritise and execute tasks in a high-pressure environment
• Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment
• Extensive experience working in a team-oriented, collaborative environment as well as working independently
• Comfortable with a multi-project, multi-tasking consulting Data Architect lifestyle
• Excellent interpersonal skills with teams and in building trust with clients
• Ability to support and work with cross-functional teams in a dynamic environment
• A passion for achieving business transformation; the ability to energise and excite those you work with
• Initiative; the ability to work flexibly in a team, working comfortably without direct supervision
Job posted by
VIJAYAKIRON ABBINENI

Senior Big Data Engineer

at 6sense

Founded 2013  •  Product  •  100-500 employees  •  Raised funding
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Spark
Data Structures
Python
Remote only
5 - 9 yrs
Best in industry

It’s no surprise that 6sense is named a top workplace year after year — we have industry-leading technology developed and taken to market by a world-class team. 6sense is Top Rated on Glassdoor with a 4.9/5 and our CEO Jason Zintak was recognized as the #1 CEO in the small & medium business category by Glassdoor’s 2021 Top CEO Employees Choice Awards.

In 2021, the company was recognized for having the Best Company for Diversity, Best Company for Women, Best CEO, Best Company Culture, Best Company Perks & Benefits and Happiest Employees from the employee feedback platform Comparably. In addition, 6sense has also won several accolades that demonstrate its reputation as an employer of choice including the Glassdoor Best Place to Work (2022), TrustRadius Tech Cares (2021) and Inc. Best Workplaces (2022, 2021, 2020, 2019).

6sense reinvents the way organizations create, manage, and convert pipeline to revenue. The 6sense Revenue AI captures anonymous buying signals, predicts the right accounts to target at the ideal time, and recommends the channels and messages to boost revenue performance. Removing guesswork, friction and wasted sales effort, 6sense empowers sales, marketing, and customer success teams to significantly improve pipeline quality, accelerate sales velocity, increase conversion rates, and grow revenue predictably.

 

6sense is seeking a Data Engineer to become part of a team designing, developing, and deploying its customer centric applications.  

A Data Engineer at 6sense will have the opportunity to 

  • Create, validate and maintain optimal data pipelines, assembling large, complex data sets that meet functional and non-functional business requirements.
  • Improve our current data pipelines: improve their performance, remove redundancy, and establish before-vs-after testing ahead of rollouts.
  • Debug any issues that arise from data pipelines, especially performance issues.
  • Experiment with new tools and new versions of Hive/Presto.

Required qualifications and must have skills 

  • Excellent analytical and problem-solving skills
  • 6+ years work experience showing growth as a Data Engineer.
  • Strong hands-on experience with Big Data Platforms like Hadoop / Hive / Spark / Presto
  • Experience with writing Hive / Presto UDFs in Java
  • Strong experience in writing complex, optimized SQL queries across large data sets (see the sketch after this list)
  • Experience with optimizing queries and underlying storage
  • Comfortable with Unix / Linux command line
  • BE/BTech/BS or equivalent 
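
As a flavour of the SQL/Spark optimisation work above, a minimal PySpark sketch of one common technique (broadcasting a small dimension table to avoid shuffling the large side of a join); table and column names are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("events_by_account").getOrCreate()

    events = spark.table("events")        # large fact table (hypothetical)
    accounts = spark.table("accounts")    # small dimension table (hypothetical)

    # Broadcasting the small side avoids a full shuffle of the large table.
    joined = events.join(broadcast(accounts), "account_id")
    joined.groupBy("account_name").count().show()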

Nice to have Skills 

  • Used key-value stores or NoSQL databases
  • Good understanding of Docker and container platforms like Mesos and Kubernetes
  • Security-first architecture approach 
  • Application benchmarking and optimization  
Job posted by
Shrutika Dhawan

Senior Data Engineer

at Curl Analytics

Agency job
via wrackle
ETL
Big Data
Data engineering
Apache Kafka
PySpark
Python
Pipeline management
Spark
Apache Hive
Docker
Kubernetes
MongoDB
SQL server
Oracle
Machine Learning (ML)
BigQuery
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without AI component
    • programmatically ingesting data from several static and real-time sources, incl. web scraping (see the sketch after this list)
    • rendering results through dynamic interfaces incl. web / mobile / dashboard with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
  • Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
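
As an illustration of the real-time ingestion described in the first point, a minimal PySpark Structured Streaming sketch reading from Kafka; the broker address and topic are hypothetical placeholders:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("realtime_ingest").getOrCreate()

    # Subscribe to the raw event stream from a Kafka topic.
    stream = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka delivers bytes; cast the payload to text before parsing downstream.
    events = stream.selectExpr("CAST(value AS STRING) AS payload")

    query = events.writeStream.format("console").start()
    query.awaitTermination()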

You should be

  • Expert in structured and unstructured data in traditional and Big Data environments – Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Have excellent knowledge of Python programming both in traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
  • Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
Job posted by
Naveen Taalanki

Data Engineer (Fresher)

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
SQL
Data engineering
Data Engineer
Python
Big Data
PySpark
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
Strong programmer with expertise in Python and SQL

● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework, like Django, Flask or Pyramid (see the sketch after this list)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Write effective, scalable code
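
As a flavour of the framework experience listed above, a minimal sketch assuming Flask; the route and payload are illustrative only:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # A trivial endpoint returning JSON.
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(port=5000)
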
Job posted by
Evelyn Charles

Data Engineer

at NeenOpal Intelligent Solutions Private Limited

Founded 2016  •  Services  •  20-100 employees  •  Bootstrapped
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from 3rd party data sources by writing custom automated ETL jobs using Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.
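
For illustration, a minimal sketch of the kind of custom automated ETL job described above; the API URL is a hypothetical placeholder and sqlite3 stands in for the client data warehouse:

    import sqlite3

    import requests

    def extract(url):
        # Pull raw records from a 3rd-party source.
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return response.json()

    def transform(records):
        # Keep only the fields the warehouse schema expects.
        return [(r["id"], r["name"]) for r in records]

    def load(rows, db_path="warehouse.db"):
        # Append the cleaned rows to the target table.
        with sqlite3.connect(db_path) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS customers (id TEXT, name TEXT)")
            conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)

    if __name__ == "__main__":
        load(transform(extract("https://api.example.com/customers")))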

 

Requirements
  • 2+ Years experience as an ETL developer with strong data architecture knowledge around data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL/Data Processes jobs.
  • Design and develop ETL and data processing solutions using data integration tools, python scripts, and AWS / Azure / On-Premise Environment.
  • Experience / Willingness to learn AWS Glue / AWS Data Pipeline / Azure Data Factory for Data Integration.
  • Develop and create transformation queries, views, and stored procedures for ETL processes, and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
  • Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points such as validating control totals at intake and then upon transformation, and transparently build lessons learned into future data quality assessments
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience with NoSQL / PostgreSQL preferred
  • Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
Job posted by
Pavel Gupta

Machine Learning Engineer

at Centime

Agency job
via FlexAbility
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Java
Python
Hyderabad
8 - 14 yrs
₹15L - ₹35L / yr

Required skills

  • Around 6-8.5 years of experience, with around 4+ years in the AI / machine learning space
  • Extensive experience in designing large-scale machine learning solutions for ML use cases, large-scale deployments, and establishing continuous automated improvement / retraining frameworks
  • Strong experience in Python and Java is required
  • Hands-on experience with Scikit-learn, Pandas and NLTK
  • Experience in handling time-series data and associated techniques like Prophet and LSTM (see the sketch after this list)
  • Experience in regression, clustering and classification algorithms
  • Extensive experience in building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feedforward) is required
  • Experience with AutoML tools like TPOT or others
  • Must have strong hands-on experience in deep learning frameworks like Keras, TensorFlow or PyTorch
  • Knowledge of Capsule Networks, reinforcement learning, or SageMaker is a desirable skill
  • Understanding of the financial domain is a desirable skill
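
As a flavour of the time-series techniques mentioned above, a minimal sketch assuming the prophet package; the series here is synthetic, not Centime data:

    import pandas as pd
    from prophet import Prophet

    # Prophet expects a frame with a 'ds' (date) and 'y' (value) column.
    df = pd.DataFrame({
        "ds": pd.date_range("2022-01-01", periods=90, freq="D"),
        "y": range(90),
    })

    model = Prophet()
    model.fit(df)

    # Forecast 30 days beyond the end of the observed series.
    future = model.make_future_dataframe(periods=30)
    forecast = model.predict(future)
    print(forecast[["ds", "yhat"]].tail())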

 Responsibilities 

  • Design and implement solutions for ML use cases
  • Productionize systems and maintain them
  • Lead and implement the data acquisition process for ML work
  • Learn new methods and models quickly and utilize them in solving use cases
Job posted by
srikanth voona

Data Scientist

at India's largest conglomerate

Data Science
Data Analytics
R Programming
Python
Statistical Modeling
Machine Learning (ML)
Logistic regression
Remote only
3 - 7 yrs
₹6L - ₹15L / yr
Key Responsibilities:
• Apply Data Mining / Data Analysis methods using a variety of data tools, building and implementing models using algorithms and creating/running simulations to drive optimisation and improvement across business functions
• Assess accuracy of new data sources and data gathering techniques
• Perform Exploratory Data Analysis and detailed analysis of business problems and technical environments in designing the solution
• Apply Supervised, Unsupervised, Reinforcement Learning and Deep Learning algorithms
• Apply advanced Machine Learning algorithms and statistics:
  o Regression, Simulation, Scenario Analysis
  o Time Series Modelling
  o Classification - Logistic Regression (see the sketch below), Decision Trees, SVM, KNN, Naive Bayes
  o Clustering - K-Means, Apriori
  o Ensemble Models - Random Forest, Boosting, Bagging
  o Neural Networks
• Lead and manage Proofs of Concept and demonstrate the outcomes quickly
• Document use cases, solutions and recommendations
• Work analytically in a problem-solving environment
• Work in a fast-paced agile development environment
• Coordinate with different functional teams to implement models and monitor outcomes
• Work with stakeholders throughout the organization to identify opportunities for leveraging organisation data and apply Predictive Modelling techniques to gain insights across business functions - Operations, Products, Sales, Marketing, HR and Finance teams
• Help program and project managers in the design, planning and governance of implementing Data Science solutions
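
As an illustration of one classification technique from the list above (logistic regression), a minimal sketch assuming scikit-learn and its bundled sample data:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit on the training split, then report held-out accuracy.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
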
Job posted by
Vidushi Singh

Data Scientist

at Monexo FinTech (P) Ltd

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Python
R Programming
Mumbai, Chennai
1 - 3 yrs
₹3L - ₹5L / yr
The candidate should have:
• A good understanding of statistical concepts
• Worked on data analysis and model building for 1 year
• The ability to implement data warehouse and visualisation tools (IBM, Amazon or Tableau)
• Used ETL tools
• An understanding of scoring models

The candidate will be required to:
• Build models for approval or rejection of loans
• Build various reports (standard for monthly reporting) to optimise business
• Implement a data warehouse

The candidate should be a self-starter who can work without supervision. You will be the first and only employee in this role for the next 6 months.
Job posted by
Mukesh Bubna