Senior Computer Vision Engineer
Posted by Neha T
4 - 7 yrs
₹8L - ₹22L / yr
Mumbai, Delhi
Skills
OpenCV
Image Processing
Image segmentation
Deep Learning
Python
OpenGL
C++
Computer Vision
TensorFlow
Keras
Machine Learning (ML)
OCR

Who Are We

 

Orbo is a research-oriented company with expertise in computer vision and artificial intelligence. At its core, Orbo is a comprehensive platform of AI-based visual enhancement tools, so companies can find a product suited to their needs, where deep-learning-powered technology automatically improves their imagery.

 

ORBO's solutions support digital transformation in the BFSI, beauty and personal care, and e-commerce image retouching industries in multiple ways.

 

WHY US

  • Join a top AI company
  • Grow with your best companions
  • Continuous pursuit of excellence, equality, respect
  • Competitive compensation and benefits

You'll be part of the core team, working directly with the founders to build and iterate on the core products that make cameras intelligent and images more informative.

 

To learn more about how we work, please visit https://www.orbo.ai/.

 

Description:

We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This is a fast-paced role, and the person will get the opportunity to develop an industrial-grade solution from concept to deployment.

 

Responsibilities:

  • Research and develop computer vision solutions for industries such as BFSI, beauty and personal care, e-commerce, and defence
  • Lead a team of ML engineers in developing an industrial AI product from scratch
  • Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation and deployment (a sketch of such a pipeline follows this list)
  • Tune models to achieve high accuracy and minimal latency
  • Deploy computer vision models on edge devices, optimizing them to meet customer requirements
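For illustration only, the sketch below shows the general shape of such a training pipeline using TensorFlow/Keras (both named in the skills above). The dataset path, image size and number of classes are placeholder assumptions rather than details from this posting.

```python
# Illustrative end-to-end image-classification training sketch (TensorFlow/Keras).
# DATA_DIR, IMG_SIZE and NUM_CLASSES are placeholder assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 4          # assumption: four visual-quality classes
DATA_DIR = "data/train"  # assumption: one sub-folder of images per class

# Data ingestion and preparation
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Model: a frozen pretrained backbone with a small classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Training and validation
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Save for downstream conversion (e.g. to ONNX/OpenVINO/TensorRT for edge deployment)
model.save("cv_model.h5")
```

In practice the saved model would then be converted and optimized for the target edge runtime.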

 

 

Requirements:

  • Bachelor’s degree
  • Deep and broad understanding of computer vision and deep learning algorithms
  • 4+ years of industrial experience in computer vision and/or deep learning
  • Experience taking an AI product from scratch to commercial deployment
  • Experience with image enhancement, object detection, image segmentation and image classification algorithms
  • Experience in deployment with OpenVINO, ONNX Runtime and TensorRT (an illustrative inference sketch follows this list)
  • Experience deploying computer vision solutions on edge devices such as Intel Movidius and NVIDIA Jetson
  • Experience with machine/deep learning frameworks such as TensorFlow and PyTorch
  • Proficient understanding of code versioning tools, such as Git
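As a flavour of the deployment experience asked for above, here is a minimal ONNX Runtime inference sketch; the model file, input size and preprocessing are placeholder assumptions, and an analogous flow applies to OpenVINO or TensorRT.

```python
# Minimal ONNX Runtime inference sketch; model path, input size and layout are assumptions.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Preprocess: resize to the assumed input size and convert HWC uint8 -> NCHW float32
image = cv2.imread("sample.jpg")
blob = cv2.resize(image, (224, 224)).astype(np.float32) / 255.0
blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]

outputs = session.run(None, {input_name: blob})
print("Predicted class index:", int(np.argmax(outputs[0])))
```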

Our ideal candidate is someone who:

  • is proactive and an independent problem solver
  • is a constant learner. We are a fast-growing start-up and want you to grow with us!
  • is a team player and a good communicator

 

What We Offer:

  • You will have fun working with a fast-paced team on a product that can impact the business model of the e-commerce and BFSI industries. As the team is small, you will easily see the direct impact of what you build on our customers (trust us, it is extremely fulfilling!)
  • You will be in charge of what you build and be an integral part of the product development process
  • Technical and financial growth!

About Orboai

Founded: 2017
Stage: Raised funding
About
Orbo is a disruptive visual intelligence company that harnesses computer vision and deep learning as its core technologies. We are dedicated to spearheading breakthrough research and development in deep learning, so as to provide unique solutions for artificial intelligence applications and transformational visual perception.
Connect with the team
Danish Jamil
Neha T
Hardika Bhansali
Mina Prajapat
Company social profiles
LinkedIn | Twitter | Facebook

Similar jobs

Acrivision Technologies Pvt Ltd
Posted by vinayak patil
Pune
2 - 10 yrs
₹2L - ₹15L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
  • Lead the data science, ML, product analytics, and insights functions by translating sparse and decentralized datasets to develop metrics, standardize processes, and lead the path from data to insights.
  • Building visualizations, models, pipelines, alerts/insights systems, and recommendations in Python/Java to support business decisions and operational experiences.
  • Advising executives on calibration strategy, DEI, and workforce planning. 
Miracle Software Systems, Inc
Posted by Ratnakumari Modhalavalasa
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
+9 more
Position: Data Engineer

Duration: Full Time

Location: Visakhapatnam, Bangalore, Chennai

Years of experience: 3+ years

Job Description:

- 3+ years of experience working as a Data Engineer, with a thorough understanding of data frameworks that collect, manage, transform and store data to derive business insights.

- Strong communication skills (written and verbal), along with being a good team player.

- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)

- 2+ years of strong experience with SQL and Python (Data Engineering focused).

- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred.

- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
Propellor.ai
Posted by Anila Nair
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
SQL
API
Python
Spark

Job Description - Data Engineer

About us
Propellor is aimed at bringing marketing analytics and other business workflows to the cloud ecosystem. We work with international clients to make their analytics ambitions come true, deploying the latest tech stack and data science and engineering methods to make their business data insightful and actionable.

 

What is the role?
This team is responsible for building a data platform for many different units. The platform will be built on the cloud, so in this role you will organize and orchestrate different data sources and recommend the services that best fulfil each goal based on the type of data.

Qualifications:

• Experience with Python, SQL, Spark
• Knowledge/notions of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• Building and maintaining APIs
• Strong soft skills and communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
Key Responsibilities
• Design and develop platform based on microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Designing and developing APIs for the front end to consume.
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meeting both technical and consumer needs.
• Staying abreast of developments in web applications and programming languages.

What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building APIs.
• Worked on serverless technologies.
• Efficient in building microservices and combining server & front-end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar DB.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

Hiring Process includes : 

a. Written Test on Python and SQL

b. 2 - 3 rounds of Interviews

Immediate Joiners will be preferred

8om Internet
Posted by Harsh Maur
Remote only
1 - 3 yrs
₹2.4L - ₹3.5L / yr
Web Scraping
Python
Selenium
Scrapy
Job Description

1. Use Python Scrapy to crawl the website (see the illustrative sketch below)
2. Work on dynamic websites and solve crawling challenges
3. Work in a fast-paced startup environment
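Purely as an illustration of the kind of crawling work described above, a minimal Scrapy spider might look like the sketch below; the start URL, CSS selectors and item fields are placeholder assumptions.

```python
# Minimal Scrapy spider sketch; the URL, selectors and item fields are placeholder assumptions.
import scrapy


class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com/catalogue/"]  # assumption: a paginated catalogue

    def parse(self, response):
        # One item per product card on the page
        for card in response.css("div.product"):
            yield {
                "title": card.css("h2::text").get(),
                "price": card.css("span.price::text").get(),
                "url": response.urljoin(card.css("a::attr(href)").get()),
            }

        # Follow pagination, if present
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Such a spider could be run with `scrapy runspider spider.py -o items.json`; dynamic, JavaScript-heavy pages typically need an extra rendering layer on top of this.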
Srijan Technologies
Posted by PriyaSaini
Remote only
2 - 6 yrs
₹8L - ₹13L / yr
PySpark
SQL
Data modeling
Data Warehouse (DWH)
Informatica
+2 more
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark (a small PySpark sketch follows this list)
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UIDs, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Track record of strong problem-solving, requirement gathering, and leading by example
  • Ability to thrive in a flexible and collaborative environment
  • Track record of completing projects successfully on time, within budget and as per scope
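As a purely illustrative sketch of the PySpark-style data quality checks and transformations mentioned above (the table paths, column names and rules are placeholder assumptions):

```python
# Illustrative PySpark data-quality-check and transform sketch;
# table paths, column names and thresholds are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-quality-demo").getOrCreate()

orders = spark.read.parquet("s3://bucket/raw/orders/")  # assumption: raw orders data

# Basic data assessment: row count and null counts per key column
total = orders.count()
null_counts = orders.select(
    *[F.sum(F.col(c).isNull().cast("int")).alias(c) for c in ["order_id", "amount", "order_ts"]]
)
null_counts.show()

# Quality checks: drop duplicates and rows failing simple rules
clean = (
    orders.dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0)
          .filter(F.col("order_ts").isNotNull())
)
print(f"Kept {clean.count()} of {total} rows after quality checks")

# Transform: daily revenue aggregate, written back for downstream use
daily = (
    clean.withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_revenue/")
```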
Venture Highway
Posted by Nipun Gupta
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
Python
Data engineering
Data Engineer
MySQL
MongoDB
+5 more
- Experience with Python and data scraping.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.

Preference for candidates working in tech product companies
Bengaluru (Bangalore)
2 - 5 yrs
₹25L - ₹28L / yr
Data Science
Machine Learning (ML)
Data Scientist
Python
Logistic regression
+2 more
  • Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
  • Implement data pipelines, new features, and algorithms that are critical to our production models
  • Create scalable strategies to deploy and execute your models
  • Write well designed, testable, efficient code
  • Identify valuable data sources and automate collection processes.
  • Undertake preprocessing of structured and unstructured data.
  • Analyze large amounts of information to discover trends and patterns.

 

Requirements:

  • 2+ years of experience in applied data science or engineering with a focus on machine learning
  • Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, sklearn, xgboost, lightgbm, logistic regression, random forest classifier, gradient boosting regressor, etc.); a small illustrative sketch follows this list
  • Strong quantitative and programming skills with a product-driven sensibility
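Purely for illustration, here is a minimal sketch of this kind of model building with scikit-learn; the data below is synthetic and the feature and target names are placeholders, not anything from the posting.

```python
# Illustrative credit-risk-style classification sketch with scikit-learn;
# the data is synthetic and the feature/target names are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 5_000),
    "utilization": rng.uniform(0, 1, 5_000),
    "delinquencies": rng.poisson(0.3, 5_000),
})
# Synthetic default flag loosely driven by the features
df["default"] = (
    rng.random(5_000) < 0.05 + 0.4 * df["utilization"] * (df["delinquencies"] > 0)
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="default"), df["default"], test_size=0.2, random_state=42)

models = [
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    GradientBoostingClassifier(),
]
for model in models:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, "AUC:", round(auc, 3))
```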

 

 

Remote only
3 - 7 yrs
₹7L - ₹12L / yr
Transact-SQL
SQL
MySQL
MySQL DBA
NOSQL Databases
+3 more
Roles and responsibilities:
1. Strive for continuous improvement of both technical and business knowledge
2. Engage proactively with stakeholders and other cross-functional team members in the development of IT solutions
3. Ensure new database code meets company standards for readability, reliability and performance
4. Passionate about technology and applying it to business solutions
5. Work closely with the business
The ideal candidate:
1. 3-6 years of development experience with SQL Server / MySQL platforms
2. Strong knowledge of T-SQL
3. A good understanding of data normalization, data modelling and database schema design principles
4. Knowledge of database optimization techniques
5. Monitor database performance, implement changes and apply new patches and versions when required
6. Automate regular processes, track issues and document changes
7. Assist developers with complex query tuning and schema refinement
8. Proven working experience as a database administrator
9. Experience with backups, restores and recovery models
10. Knowledge of High Availability (HA) and Disaster Recovery (DR) options for SQL Server
11. Experience with AWS technologies (RDS, EC2, EBS, S3, Redshift, Glue)
12. Experience in implementation and support of NoSQL technologies (Redis, Memcached, MongoDB, ElastiCache)
13. Hands-on programming experience in Python
14. Ability to multi-task and work independently
15. Proficiency in Waterfall and Agile development methodologies, especially when applied to stateful data environments
Good to have:
1. Experience with financial data, capital markets and the mutual funds industry/data
2. Working experience on Data Warehouse projects, or sound knowledge of the concepts
3. Understanding of distributed systems and parallel processing architecture
One Labs
Posted by Rahul Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹6L / yr
Data Science
Deep Learning
Python
Keras
TensorFlow
+1 more

Job Description


We are looking for a data scientist who will help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

Responsibilities

  • Selecting features, building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Extending company’s data with third party sources of information when needed
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance (a small illustrative sketch follows this list)
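As one possible, purely illustrative shape for such an automated anomaly detection component (synthetic data, with scikit-learn's IsolationForest chosen as an assumption):

```python
# Illustrative anomaly-detection sketch using scikit-learn's IsolationForest;
# the data is synthetic and the metric names are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Mostly "normal" daily metrics plus a handful of injected outliers
normal = pd.DataFrame({
    "requests": rng.normal(1_000, 50, 500),
    "error_rate": rng.normal(0.01, 0.002, 500),
})
outliers = pd.DataFrame({"requests": [2_500, 150], "error_rate": [0.2, 0.15]})
metrics = pd.concat([normal, outliers], ignore_index=True)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(metrics)

# -1 marks anomalies; in production this would run on each new batch of metrics
flags = detector.predict(metrics)
print(metrics[flags == -1])
```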

Skills and Qualifications

  • Excellent understanding of machine learning techniques and algorithms, such as Linear regression, SVM, Decision Forests, LSTM, CNN etc.
  • Experience with Deep Learning preferred.
  • Experience with common data science toolkits, such as R, NumPy, MATLAB, etc. Excellence in at least one of these is highly desirable
  • Great communication skills
  • Proficiency in using query languages such as SQL, Hive, Pig 
  • Good applied statistics skills, such as statistical testing, regression, etc.
  • Good scripting and programming skills 
  • Data-oriented personality
Foghorn Systems
Posted by Abhishek Vijayvargia
Pune
0 - 7 yrs
₹15L - ₹50L / yr
R Programming
Python
Data Science

Role and Responsibilities

  • Execute data mining projects, training and deploying models over a typical duration of 2-12 months.
  • The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution within the time box of the project plan, and execute and deploy the solution.
  • Integrate the data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).

Core Qualifications
Candidates must meet ALL of the following qualifications:

  • Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
  • Three years of industry work experience, developing data mining models which were deployed and used.
  • Programming experience in Python is core, using data-mining-related libraries like Scikit-Learn. Other relevant Python libraries include NumPy, SciPy and Pandas.
  • Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN or other), or Bayesian networks (a small clustering sketch follows this list)
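For illustration only, a minimal clustering sketch using scikit-learn's k-means; the data is synthetic and the number of clusters is an assumption.

```python
# Illustrative k-means clustering sketch with scikit-learn; data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic sensor readings drawn from three operating regimes
readings = np.vstack([
    rng.normal([10, 0.2], 0.5, size=(100, 2)),
    rng.normal([20, 0.8], 0.5, size=(100, 2)),
    rng.normal([15, 0.5], 0.5, size=(100, 2)),
])

scaled = StandardScaler().fit_transform(readings)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centres (scaled):", kmeans.cluster_centers_)
```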

Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:

  • Soft Skills
    • Sets expectations, develops project plans and meets expectations.
    • Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
  • Technical skills
    • Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
    • Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
    • Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
    • Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
    • OpenCV or other image processing tools or libraries
    • Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
    • Experience with decision tree ensembles like XGBoost or Random Forests is helpful.
    • Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
    • Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
    • Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
    • Experience with PMML is of interest (see www.DMG.org).
  • Vertical experience in Industrial Internet of Things (IoT) applications:
    • Energy: Oil and Gas, Wind Turbines
    • Manufacturing: Motors, chemical processes, tools, automotive
    • Smart Cities: Elevators, cameras on population or cars, power grid
    • Transportation: Cars, truck fleets, trains

 

About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.

Press: https://www.foghorn.io/press-room/

Awards: https://www.foghorn.io/awards-and-recognition/

  • 2019 Edge Computing Company of the Year – Compass Intelligence
  • 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
  • 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
  • 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
  • 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
  • 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
  • 2018 The 10 Most Innovative AI Solution Providers – Insights Success
  • 2018 The AI 100 – CB Insights
  • 2017 Cool Vendor in IoT Edge Computing – Gartner
  • 2017 20 Most Promising AI Service Providers – CIO Review

Our Series A round was for $15 million, and our Series B round was for $30 million in October 2017. Investors include Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell and The Hive.

About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9, and we are growing again from 11. We work on revenue-generating projects for clients, such as predictive maintenance, time-to-failure prediction and manufacturing defect detection. About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but also developing vertical solution applications that run on our Lightning platform, with embedded data mining.

Our data scientists like our team because:

  • We care about “best practices”
  • We have a direct impact on the company’s revenue
  • We give and receive mentoring as part of the collaborative process
  • Questioning and challenging the status quo with data is safe
  • Intellectual curiosity is balanced with humility
  • We present papers or projects in our “Thought Leadership” meeting series, to support continuous learning

 
