
11+ ITL Jobs in India

Apply to 11+ ITL Jobs on CutShort.io. Find your next job, effortlessly. Browse ITL Jobs and apply today!

Encubate Tech Private Ltd
Mumbai
5 - 6 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
Data modeling
ITL
Agile/Scrum
+7 more

Roles and Responsibilities

Seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. Must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques.

  • Extensive experience in providing AWS Cloud solutions to various business use cases.
  • Creating star schema data models, performing ETLs and validating results with business representatives (see the sketch after this list).
  • Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues.
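
To make the star-schema and ETL expectation concrete, here is a hedged sketch that splits a flat extract into dimension and fact tables with pandas; the dataset, column names and surrogate-key scheme are invented for illustration and pandas simply stands in for whatever ETL tooling the team uses.

```python
# A minimal sketch: splitting a flat sales extract into one fact and two dimension
# tables (a star schema). All data and column names here are illustrative.
import pandas as pd

flat = pd.DataFrame({
    "order_id": [1001, 1002, 1003],
    "customer": ["Asha", "Ravi", "Asha"],
    "city":     ["Mumbai", "Pune", "Mumbai"],
    "product":  ["Laptop", "Mouse", "Laptop"],
    "category": ["Electronics", "Accessories", "Electronics"],
    "amount":   [55000.0, 700.0, 52000.0],
})

# Dimension tables: one row per distinct customer / product, with surrogate keys.
dim_customer = flat[["customer", "city"]].drop_duplicates().reset_index(drop=True)
dim_customer["customer_key"] = dim_customer.index + 1

dim_product = flat[["product", "category"]].drop_duplicates().reset_index(drop=True)
dim_product["product_key"] = dim_product.index + 1

# Fact table: measures plus foreign keys into the dimensions.
fact_sales = (
    flat.merge(dim_customer, on=["customer", "city"])
        .merge(dim_product, on=["product", "category"])
        [["order_id", "customer_key", "product_key", "amount"]]
)

# A simple validation step of the kind you would review with business representatives.
assert fact_sales["amount"].sum() == flat["amount"].sum()
print(fact_sales)
```
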

Job Description:

This position is responsible for the successful delivery of business intelligence information to the entire organization and is experienced in BI development and implementations, data architecture and data warehousing.

Requisite Qualification

Essential: AWS Certified Database Specialty or AWS Certified Data Analytics
Preferred: Any other Data Engineer certification

Requisite Experience

Essential: 4 - 7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines

Skills Required

Special Skills Required

  • AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch, etc. (a data-loading sketch follows this list)
  • Big data: Databricks, Spark, Glue and Athena
  • Expertise in Lake Formation, Python programming, Spark, shell scripting
  • Minimum Bachelor's degree with 5+ years of experience in designing, building, and maintaining AWS data components
  • 3+ years of experience in data component configuration, related roles and access setup
  • Expertise in Python programming
  • Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD
  • Hands-on ETL development experience, preferably using SSIS
  • SQL Server experience required
  • Strong analytical skills to solve and model complex business requirements
  • Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques
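
As a hedged illustration of the AWS data-loading work listed above, the sketch below loads staged Parquet files from S3 into Redshift with a COPY command. The cluster endpoint, credentials, table, S3 path and IAM role ARN are all placeholders, and psycopg2 is only one of several clients that could be used.

```python
# A minimal sketch of loading staged Parquet files from S3 into Redshift via COPY.
# Host, credentials, table, S3 path and IAM role below are placeholders.
import psycopg2

COPY_SQL = """
    COPY analytics.fact_sales
    FROM 's3://example-bucket/staging/fact_sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    FORMAT AS PARQUET;
"""

def load_fact_sales() -> None:
    conn = psycopg2.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        port=5439,
        dbname="warehouse",
        user="etl_user",
        password="change-me",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # Redshift pulls the files directly from S3
    finally:
        conn.close()

if __name__ == "__main__":
    load_fact_sales()
```
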

Preferred Skills

  • Experience working in a Scrum environment.
  • Experience in administration (Windows/Unix/Network/Database/Hadoop) is a plus.
  • Experience in SQL Server, SSIS, SSAS, SSRS.
  • Comfortable with creating data models and visualizations using Power BI.
  • Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools.
  • Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members.

An AI-based company
Agency job
via Qrata by Prajakta Kulkarni
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹25L - ₹70L / yr
Computer Vision
OpenCV
Python
TensorFlow
PyTorch
Job Title: Lead Computer Vision Engineer
Location: Gurgaon

About the company:
The company is changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. The company has built the world's leading image-editing AI software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent founders and internet company operators, who believe that there is a more intelligent and efficient way of doing digital production than how the world operates currently.

Job Description:
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of e-commerce, automobile, food and real-estate raw images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that enable internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of businesses every day to transform their images and videos.

Job Requirements:
- Min 3+ years of work experience in Computer Vision, with 5-10 years of work experience overall
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an Ivy League-calibre institute
- Exposure to Deep Learning techniques and TensorFlow/PyTorch
- Prior expertise in building image-processing applications using GANs, CNNs and diffusion models (a minimal inference sketch follows this list)
- Expertise with image-processing Python libraries such as OpenCV
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large-scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
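
To ground the image-processing requirement, here is a hedged sketch of the basic OpenCV-to-PyTorch loop such a role revolves around; the tiny CNN and the image path are toy stand-ins, not the company's actual model.

```python
# A toy sketch: OpenCV preprocessing feeding a small PyTorch CNN.
# The network and the image path are placeholders, not a production model.
import cv2
import torch
import torch.nn as nn

class TinyImageNet(nn.Module):
    """A deliberately small CNN standing in for a real image-transformation model."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def preprocess(path: str) -> torch.Tensor:
    """Read an image with OpenCV and convert it to a normalized NCHW float tensor."""
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    rgb = cv2.resize(rgb, (224, 224))
    tensor = torch.from_numpy(rgb).float() / 255.0
    return tensor.permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW with batch dim

if __name__ == "__main__":
    model = TinyImageNet().eval()
    with torch.no_grad():
        logits = model(preprocess("product.jpg"))  # "product.jpg" is a placeholder
    print(logits.shape)  # torch.Size([1, 4])
```
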
Hyderabad
4 - 8 yrs
₹5L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  • Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
  • Experience in developing Lambda functions with AWS Lambda
  • Expertise with Spark/PySpark: the candidate should be hands-on with PySpark code and should be able to do transformations with Spark (a minimal transformation sketch follows this list)
  • Should be able to code in Python and Scala
  • Snowflake experience will be a plus
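
A hedged sketch of the kind of PySpark transformation such a pipeline might run between raw S3 data and Athena: read raw CSV, clean it, and write partitioned Parquet that an Athena/Glue table can point at. The bucket paths and column names are assumptions, and S3 access presumes the usual Hadoop AWS connector is configured.

```python
# A minimal PySpark transformation sketch: raw CSV in S3 -> partitioned Parquet for Athena.
# The S3 paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_to_parquet").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("s3a://example-raw-bucket/orders/")  # assumes hadoop-aws is on the classpath
)

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropna(subset=["order_id", "amount"])
       .dropDuplicates(["order_id"])
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated-bucket/orders/")  # point an Athena/Glue table here
)
```
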
xoxoday

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹15L / yr
MySQL
MongoDB
Data modeling
API
Apache Kafka
+2 more

What is the role?

You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.

Key Responsibilities

  • Design and develop resilient data pipelines.
  • Write efficient queries to fetch data from the report database.
  • Work closely with application backend engineers on data requirements for their stories.
  • Designing and developing report APIs for the front end to consume (a minimal endpoint sketch follows this list).
  • Focus on building highly available, fault-tolerant report systems.
  • Constantly improve the architecture of the application by clearing the technical backlog. 
  • Adopt a culture of learning and development to constantly keep pace with and adopt new technologies.
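
To make the report-API responsibility concrete, here is a hedged sketch of one possible endpoint shape using Flask and SQLAlchemy; the report_daily_activity table, its columns and the connection string are assumptions, not part of the actual product.

```python
# A minimal report-API sketch: Flask + SQLAlchemy over a hypothetical report table.
# The table name, columns and connection URL are illustrative assumptions.
from flask import Flask, jsonify, request
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("mysql+pymysql://report_user:change-me@localhost/reports")

@app.route("/api/reports/daily-activity")
def daily_activity():
    """Return per-day activity counts for a tenant, newest first."""
    tenant_id = request.args.get("tenant_id", type=int)
    limit = request.args.get("limit", default=30, type=int)
    query = text(
        """
        SELECT activity_date, active_users, events
        FROM report_daily_activity
        WHERE tenant_id = :tenant_id
        ORDER BY activity_date DESC
        LIMIT :limit
        """
    )
    with engine.connect() as conn:
        rows = conn.execute(query, {"tenant_id": tenant_id, "limit": limit}).mappings().all()
    return jsonify([dict(row) for row in rows])

if __name__ == "__main__":
    app.run(debug=True)
```
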

What are we looking for?

An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.

  • Education - BE/MCA or equivalent
  • Overall 8+ years of experience
  • Expert-level understanding of database concepts and BI.
  • Well versed in databases such as MySQL and MongoDB, with hands-on experience in creating data models.
  • Must have designed and implemented low-latency data warehouse systems.
  • Must have a strong understanding of Kafka and related systems.
  • Experience with the ClickHouse database is preferred.
  • Must have good knowledge of APIs and should be able to build interfaces for frontend engineers.
  • Should be innovative and communicative in approach.
  • Will be responsible for the functional/technical track of a project.

Whom will you work with?

You will work with a top-notch tech team, working closely with the CTO and product team.  

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

Xoxoday is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, Xoxoday offers a suite of three products - Plum, Empuls, and Compass. Xoxoday works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, Xoxoday is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, New Delhi.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

 

Pune
5 - 8 yrs
₹1L - ₹15L / yr
Informatica
Informatica PowerCenter
Spark
Hadoop
Big Data
+6 more

Technical/Core skills

  1. Minimum 3 yrs of experience as an Informatica Big Data Developer (BDM) in a Hadoop environment.
  2. Knowledge of Informatica PowerExchange (PWX).
  3. Minimum 3 yrs of experience with big data querying tools like Hive and Impala.
  4. Ability to design and develop complex mappings using Informatica Big Data Developer.
  5. Create and manage Informatica PowerExchange and CDC real-time implementations.
  6. Strong Unix knowledge and skills for writing shell scripts and troubleshooting existing scripts.
  7. Good knowledge of big data platforms and their frameworks.
  8. Good to have experience with Cloudera Data Platform (CDP).
  9. Experience with building stream-processing systems using Kafka and Spark (a minimal streaming sketch follows this list).
  10. Excellent SQL knowledge.
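
A hedged sketch of the Kafka + Spark stream-processing pattern mentioned in point 9: a Structured Streaming job that reads a topic, parses JSON events and maintains a windowed aggregate. The broker address, topic name and event schema are assumptions, and the job expects the spark-sql-kafka package on the classpath.

```python
# A minimal Spark Structured Streaming sketch: Kafka topic -> parsed events -> console.
# Broker address, topic name and event schema are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "orders")                        # placeholder topic
    .load()
)

events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
       .select(F.from_json("json", event_schema).alias("e"))
       .select("e.*")
)

# Running total of order value in 5-minute windows, with a watermark for late data.
totals = (
    events.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"))
          .agg(F.sum("amount").alias("total_amount"))
)

query = totals.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```
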

 

Soft skills:

  1. Ability to work independently
  2. Strong analytical and problem-solving skills
  3. An attitude of learning new technologies
  4. Regular interaction with vendors, partners and stakeholders
Remote only
1 - 8 yrs
₹8L - ₹18L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Data Structures
Data modeling
+3 more

This person MUST have:

  • B.E. in Computer Science or equivalent
  • 5 years of experience with the Django framework
  • Experience with building APIs (REST or GraphQL); a minimal endpoint sketch follows this list
  • Strong troubleshooting and debugging skills
  • React.js knowledge would be an added bonus
  • Understanding of how to use a database like Postgres (preferred choice), SQLite, MongoDB or MySQL.
  • Sound knowledge of object-oriented design and analysis.
  • A strong passion for writing simple, clean and efficient code.
  • Proficient understanding of code versioning tools such as Git.
  • Strong communication skills.
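
Since the role centres on Django and REST-style APIs, here is a hedged sketch of a minimal JSON endpoint; the `catalog` app and `Book` model are hypothetical, invented only for illustration, and the snippet assumes it lives inside a configured Django project.

```python
# A minimal sketch of a REST-style JSON endpoint in Django.
# `catalog.models.Book` (fields: title, author, created_at) is a hypothetical model.
from django.http import JsonResponse
from django.urls import path

from catalog.models import Book  # hypothetical app/model, not part of the job post

def book_list(request):
    """Return the 20 most recently added books as JSON."""
    books = (
        Book.objects.order_by("-created_at")
        .values("id", "title", "author")[:20]
    )
    return JsonResponse({"results": list(books)})

urlpatterns = [
    path("api/books/", book_list, name="book-list"),
]
```
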

Experience:

  • Min 5 years of experience
  • Startup experience is a must.

Location:

  • Remote developer

Timings:

  • 40 hours a week, with 4 hours a day overlapping with the client's timezone. Typically clients are in the California (PST) timezone.

Position:

  • Full time/Direct
  • We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives.
  • We don't believe in locking in people with large notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
Python
Scala
Hadoop
Spark
Data Engineer
+4 more
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment and have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has equivalent experience.

 

What are we looking for?

 

  • Experience with big data tools: Hadoop, Spark, Kafka and other alternative tools.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools: Luigi, Airflow (a minimal DAG sketch follows this list).
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala.
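
A hedged sketch of an Airflow DAG of the kind the workflow-management bullet above refers to; the DAG id, schedule and the two placeholder tasks are invented for illustration.

```python
# A minimal Airflow DAG sketch: extract then load, once a day.
# The DAG id and the two callables are placeholders for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    print("extracting orders for", context["ds"])  # 'ds' is the run's logical date

def load_orders(**context):
    print("loading orders for", context["ds"])

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)

    extract >> load  # load runs only after extract succeeds
```
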
Busigence Technologies

Posted by Seema Verma
Bengaluru (Bangalore)
0 - 10 yrs
₹3L - ₹9L / yr
Data Science
Big Data
Machine Learning (ML)
Statistical Analysis
Deep Learning
+3 more
APPLY LINK: http://bit.ly/2yipqSE

Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the text which is irrelevant to the context. You will be required to enter this word while applying, else the application won't be considered submitted.

Aspirant - Data Science & AI
Team: Sciences
Full-Time, Trainee
Bengaluru, India
Relevant Exp: 0 - 10 Years
Background: Top-tier institute
Compensation: Above standards

Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions. We are a scaling, established startup by IIT alumni, innovating and disrupting the marketing domain through artificial intelligence. We bring onboard those people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls, every day. We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI, etc.) while maintaining an awesome culture with a common vision to build great data products. In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ, an emotion intelligence platform. The third offering, H2HData, is an innovation lab where we solve hard problems through data, science, and design. We work extensively and intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.

First-and-Foremost

Before you dive in exploring this opportunity and press Apply, we wish you to evaluate yourself: we are looking for the right candidate, not the best candidate. We love to work with someone who can mandatorily gel with our vision, beliefs, thoughts, methods, and values, which are aligned with what can be expected in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, you yourself probably don't know since when. A book can be written on what an applicant must have before joining a startup. For brevity, in a nutshell, we need these three in you:

1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA etc. should have been your done stuff. Can you relate solution 1 to problem 2? Or do you get confused even when you have solved a similar problem in the past? Are you able to grasp a problem statement in one go, or do you get hung up?)
2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, both in terms of complexity or extra hours to put in? What comes first in your mind: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work - tired?)
3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team and their customers, not for this role. Are you honest with your work? Intrinsically, with yourself first?)

You know yourself the best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset - not at all.

This is an immediate requirement. We shall have an accelerated interview process for fast closure - you would be required to be proactive and responsive.

Real ROLE

We are looking for students, graduates, and experienced folks with a real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics.

Mandatory
R1. Must know functional programming (https://docs.python.org/2/howto/functional.html) in Python inside-out, with a strong flair for data structures, linear algebra, and algorithms implementation. Knowing only OOP will not be accepted.
R2. Must have soiled hands on methods, functions, and workarounds in NumPy, Pandas, Scikit-learn, SciPy, Statsmodels - collectively you should have implemented at least 100 different techniques (we averaged out this figure with our past aspirants who have worked on this role).
R3. Must have implemented complex mathematical logic through a functional map-reduce framework in Python (a minimal sketch appears at the end of this post).
R4. Must have an understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, predictions, clustering, and associations - at an essential level.
R5. Must have solved at least five problems through data science and machine learning. Mere Coursera learning and/or offline Kaggle attempts shall not be accepted.

Preferred
R6. Good to have the required calibre to learn PySpark within four weeks once you join us.
R7. Good to have the required calibre to grasp the underlying business for a problem to be solved.
R8. Good to have an understanding of CNNs, RNNs, MLPs, Auto-Encoders - at a basic level.
R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or offline Kaggle attempts shall not be accepted.
R10. Good to have worked on pre-processing techniques for images, audio, and text - OpenCV, Librosa, NLTK.
R11. Good to have used pre-trained models - VGGNet, Inception, ResNet, WaveNet, Word2Vec.

Ideal YOU
Y1. Degree in engineering, or any other data-heavy field, at Bachelors level or above from a top-tier institute.
Y2. Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup.
Y3. You are a fanatical implementer who loves to spend time with content, code, and workarounds, more than with your loved ones.
Y4. You are a true believer that human intelligence can be augmented through computer science and mathematics, and your survival vinaigrette depends on getting the most from the data.
Y5. You have an entrepreneurial mindset with ownership, intellectuality, and creativity as your way to work. These are not fancy words; we mean it.

Actual WE
W1. Real startup with meaningful products.
W2. Revolutionary, not just disruptive.
W3. Rule creators, not followers.
W4. Small teams with real brains, not a herd of blockheads.
W5. Completely trust us and should be trusted back.

Why Us

In addition to the regular stuff which every good startup offers - lots of learning, food, parties, open culture, flexible working hours, and what not - we offer you: you shall be working on our revolutionary products which are pioneers in their respective categories. This is a fact. We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at the most happening workplace.

How to Apply

You should apply online by clicking "Apply Now". For queries regarding an open position, please write to [email protected]. For more information, visit http://www.busigence.com
Careers: http://careers.busigence.com
Research: http://research.busigence.com
Jobs: http://careers.busigence.com/jobs/data-science

If you feel you are the right fit for the position, mandatorily attach a PDF resume highlighting your:
A. Key Skills
B. Knowledge Inputs
C. Major Accomplishments
D. Problems Solved
E. Submissions - GitHub / StackOverflow / Kaggle / Project Euler etc. (if applicable)

If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here. Referrals are more than welcome. Keep us in the loop.
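
A hedged illustration of the functional map-reduce style that requirement R3 refers to; the records and field names are invented, and the point is only a map step feeding a pure reduce step without mutating shared state.

```python
# A small illustration of the functional map-reduce style referenced in R3.
# The records and field names below are made up for the example.
from functools import reduce

records = [
    {"category": "books", "price": 250.0, "qty": 2},
    {"category": "toys", "price": 499.0, "qty": 1},
    {"category": "books", "price": 120.0, "qty": 5},
]

# Map step: turn each record into a (key, value) pair.
mapped = map(lambda r: (r["category"], r["price"] * r["qty"]), records)

# Reduce step: fold the pairs into a per-category total without mutating shared state.
def combine(acc, pair):
    key, value = pair
    return {**acc, key: acc.get(key, 0.0) + value}

revenue_by_category = reduce(combine, mapped, {})
print(revenue_by_category)  # {'books': 1100.0, 'toys': 499.0}
```
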
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
+2 more
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java (a minimal schema-harmonization sketch follows this list).
  • The applicant must be familiar with and have programmed ETL interfaces (OAuth, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
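
A hedged sketch of the data-mapping task described above, done here with pandas; the source column names, target schema and dtypes are invented for illustration.

```python
# A sketch of data mapping: harmonizing a source extract onto a target schema with pandas.
# Column names and types are illustrative assumptions.
import pandas as pd

# Mapping from source column names to the target schema, plus the target dtypes.
COLUMN_MAP = {"cust_id": "customer_id", "ord_dt": "order_date", "amt": "amount"}
TARGET_TYPES = {"customer_id": "int64", "amount": "float64"}

def harmonize(source: pd.DataFrame) -> pd.DataFrame:
    """Rename source columns, keep only mapped ones, and coerce types and dates."""
    df = source.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    return df.astype(TARGET_TYPES)

if __name__ == "__main__":
    raw = pd.DataFrame(
        {"cust_id": ["101", "102"], "ord_dt": ["2024-01-05", "2024-01-06"], "amt": ["19.99", "5"]}
    )
    print(harmonize(raw).dtypes)
```
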
Rely

Posted by Hizam Ismail
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
+2 more

Intro

Our data and risk team is the core pillar of our business that harnesses alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia to be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

  • Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional and many other sources (an incremental-ingestion sketch follows this list).

  • Create and maintain optimal data pipeline architecture and ETL processes
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Develop data pipeline and infrastructure to support real-time decisions
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
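
A hedged sketch of one way to automate incremental ingestion from an S3 source, as the first responsibility above describes; the bucket, prefix, newline-delimited JSON layout and file-based watermark are all assumptions made for the example.

```python
# A sketch of incremental ingestion from S3, tracking the last processed key as a watermark.
# Bucket, prefix and the event format are illustrative assumptions.
import json
import boto3

BUCKET = "example-clickstream-bucket"   # hypothetical bucket
PREFIX = "events/2024/"                 # hypothetical prefix
WATERMARK_FILE = "last_ingested_key.txt"

def load_watermark() -> str:
    """Return the last ingested key, or an empty string on the first run."""
    try:
        with open(WATERMARK_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""

def ingest_new_objects() -> None:
    s3 = boto3.client("s3")
    last_key = load_watermark()
    newest = last_key
    paginator = s3.get_paginator("list_objects_v2")
    # StartAfter skips keys we have already processed (keys list lexicographically).
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX, StartAfter=last_key):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            for line in body.decode("utf-8").splitlines():
                event = json.loads(line)  # one JSON event per line, by assumption
                # ...hand the event to the downstream pipeline here...
            newest = max(newest, obj["Key"])
    if newest:
        with open(WATERMARK_FILE, "w") as f:
            f.write(newest)

if __name__ == "__main__":
    ingest_new_objects()
```
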


What will you need
  • 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
  • Experience dealing with large-scale data
  • Proficiency in writing and debugging complex SQL queries
  • Experience working with AWS big data tools
  • Ability to lead a project and implement best data practices and technologies

Data Pipelining

  • Strong command of building & optimizing data pipelines, architectures and data sets
  • Strong command of relational SQL & NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS etc
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, Flink etc.
  • Message queuing: RabbitMQ, Spark etc

Software Development & Debugging

  • Strong experience in object-oriented programming/object function scripting languages: Python, Java, C++, Scala, etc
  • Strong hold on data structures & algorithms

What would be a bonus

  • Prior experience working in a fast-growth Startup
  • Prior experience in payments, fraud, lending or advertising companies dealing with large-scale data
LatentView Analytics
Bengaluru (Bangalore), Chennai
9 - 14 yrs
₹9L - ₹14L / yr
Data Structures
Business Development
Data Analytics
Regression Testing
Machine Learning (ML)
+4 more
Required Skill Set:

  • 5+ years of hands-on experience in delivering results-driven analytics solutions with proven business value
  • Great consulting and quantitative skills, a detail-oriented approach, and proven expertise in developing solutions using SQL, R, Python or similar tools
  • A background in Statistics / Econometrics / Applied Math / Operations Research would be considered a plus
  • Exposure to working with globally dispersed teams based out of India or other offshore locations

Role Description / Responsibilities:

  • Be the face of LatentView in the client's organization and help define analytics-driven consulting solutions to business problems
  • Translate business problems into analytic solution requirements and work with the LatentView team to develop high-quality solutions
  • Communicate effectively with the client / offshore team to manage client expectations and ensure timeliness and quality of insights
  • Develop expertise in the client's business and help translate that into increasingly high-value-added advisory solutions for the client
  • Oversee project delivery to ensure the team meets the quality, productivity and SLA objectives
  • Grow the account in terms of revenue and the size of the team

You should apply if you want to:

  • Change the world with math and models: at the core, we believe that analytics can help drive business transformation and lasting competitive advantage. We work with a heavy mix of algorithms, analysis, large databases and ROI to positively transform many a client's business performance.
  • Make a direct impact on business: your contribution to delivering results-driven solutions can potentially lead to millions of dollars of additional revenue or profit for our clients.
  • Thrive in a fast-paced environment: you work in small teams, in an entrepreneurial environment, and a meritorious culture that values speed, growth, diversity and contribution.
  • Work with great people: our selection process ensures that we hire only the very best, while more than 50% of our analysts and 90% of our managers are alumni/alumnae of prestigious global institutions.