Computer Vision Engineer
Posted by Vijay Koduri
2 - 10 yrs
₹6L - ₹15L / yr
Full time
Bengaluru (Bangalore)
Skills
Machine Learning (ML)
Data Science
Computer Vision
Artificial Intelligence (AI)
TensorFlow
Python
PyTorch
Neural networks
YOLO

Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer who is well-versed in computer vision and AI technologies for image and video analysis.


You will be responsible for:

  • Developing computer vision algorithms to detect key moments within popular online games
  • Leveraging baseline technologies such as TensorFlow and OpenCV, and building models on top of them
  • Building neural network (CNN) architectures for image and video analysis as they pertain to popular games
  • Specifying exact requirements for training data sets, and working with analysts to create those data sets
  • Training final models, using techniques such as transfer learning and data augmentation to optimize them for use in a production environment (a minimal training sketch follows this list)
  • Working with back-end engineers to get all of the detection algorithms into production and automate highlight creation
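
By way of illustration, here is a minimal transfer-learning sketch in TensorFlow/Keras for classifying extracted stream frames as highlight vs. background. The directory layout, two-class setup, and MobileNetV2 backbone are illustrative assumptions, not Sizzle's actual pipeline.

```python
# Illustrative sketch only: fine-tune a pretrained CNN to classify video
# frames as "highlight" vs. "background". Paths and classes are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Assumed layout: frames/highlight/*.jpg, frames/background/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "frames/", image_size=IMG_SIZE, batch_size=32)

# Light data augmentation, as mentioned in the responsibilities above
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomContrast(0.1),
])

# Transfer learning: freeze an ImageNet-pretrained backbone, train a new head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```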

You should have the following qualities:

  • Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
  • Experience using Python, TensorFlow, OpenCV and other computer vision tools
  • Understanding of the common object detection models in use today, e.g. Inception, R-CNN, YOLO, MobileNet SSD, etc. (see the detection sketch after this list)
  • Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTMs for motion and inter-frame analysis, and others
  • Familiarity with AWS environments
  • Excited about working in a fast-changing startup environment
  • Willingness to learn rapidly on the job, try different things, and deliver results
  • Ideally a gamer or someone interested in watching gaming content online
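
For the detection-model bullet above, a hedged sketch of off-the-shelf YOLO inference with the ultralytics package; the pretrained COCO weights and the frame path are placeholders, not a game-specific model.

```python
# Illustrative only: run a public pretrained YOLO checkpoint on one frame.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # small pretrained COCO model, not game-tuned
results = model("frame.jpg")  # path to an extracted stream frame (assumed)

for result in results:
    for box in result.boxes:
        # class id, confidence, and xyxy pixel coordinates per detection
        print(int(box.cls), float(box.conf), box.xyxy.tolist())
```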

Skills:

Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python


Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.


Salary: Will be commensurate with experience. 


Who Should Apply:

If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.


About Sizzle

Founded: 2018
Stage: Raised funding
At Sizzle, we take all the Fortnite games from top streamers and condense them into 5-minute “sizzles”: just the action, without the boring parts.

Similar jobs

Thoughtworks
Posted by Ramya S
Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Gurugram
2.8 - 5 yrs
Best in industry
Spark
Scala
Python
Amazon Web Services (AWS)
SQL Azure
+6 more

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.


Job Responsibilities


  • You will partner with teammates to create complex data processing pipelines to solve our clients' most complex challenges
  • You will collaborate with Data Scientists to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy, support and operate data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches
  • Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
  • Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes


Job Qualifications


Technical skills


  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting (a minimal PySpark sketch follows this list)
  • Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategies to solve business problems
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • You're genuinely excited about data infrastructure and operations, and are familiar with working in cloud environments
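
As a rough illustration of the pipeline work described above (not a Thoughtworks codebase), here is a minimal PySpark batch job; the bucket paths and schema are invented.

```python
# A self-contained PySpark sketch of a simple batch pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest raw events (the S3 path is an assumption)
orders = spark.read.json("s3://example-bucket/raw/orders/")

# Clean and transform: drop malformed rows, derive daily revenue
daily_revenue = (
    orders
    .dropna(subset=["order_id", "amount", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("order_id").alias("orders"))
)

# Load into columnar storage for downstream consumers
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")
```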


Professional skills


  • You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
  • An interest in coaching, sharing your experience and knowledge with teammates
  • You enjoy influencing others and always advocate for technical excellence while being open to change when needed
  • Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more


Other things to know


Learning & Development


There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.


About Thoughtworks


Thoughtworks is a global technology consultancy that integrates strategy, design and engineering to drive digital innovation. For over 30 years, our clients have trusted our autonomous teams to build solutions that look past the obvious. Here, computer science grads come together with seasoned technologists, self-taught developers, midlife career changers and more to learn from and challenge each other. Career journeys flourish with the strength of our cultivation culture, which has won numerous awards around the world.


Join Thoughtworks and thrive. Together, our extra curiosity, innovation, passion and dedication overcomes ordinary.

Wonder Worth Solutions Pvt Ltd
Vellore
4 - 7 yrs
₹3.5L - ₹5L / yr
Machine Learning (ML)
Data Science
Python
Java
C++
+1 more

As a part of WWS, your expertise in machine learning will help us extract value from our data. You will lead all the processes, from data collection, cleaning, and preprocessing to training models and deploying them to production. The ideal candidate will be passionate about artificial intelligence and stay up-to-date with the latest developments in the field.

What We Expect

  • A Bachelor's/Master's degree in IT, computer science, or an advanced related field is preferred.
  • At least 3 years of experience working with ML libraries and packages.
  • Familiarity with coding and programming languages, including Python, Java, C++, and SAS.
  • Strong experience in programming and statistics.
  • Well-versed in data science and neural networks.
  • Flexibility in shifts is appreciated.

A Machine Learning Engineer’s Ideal Day At WWS

Design and Develop. The primary duties include implementing machine learning algorithms and running experiments and tests on AI systems. Designing and developing machine learning systems, along with performing statistical analyses, falls under the day-to-day activities of the developer.

Algorithm Assertion. The engineers act as critical members of the data science team: their tasks involve researching, asserting, and designing the artificial intelligence responsible for machine learning, as well as maintaining and improving existing artificial intelligence systems.

Research and Development. Analyze large, complex datasets to extract insights and decide on the appropriate techniques. Research and implement best practices to improve the existing machine learning infrastructure, and provide support to engineers and product managers in implementing machine learning in the product.

What You Can Expect

  • Full-time, salaried positions complemented with welfare programs.
  • Competitive salary and tailored training in the core space with recognition potential and annual bonus.
  • Periodic performance appraisals.
  • Attendance Incentives.
  • Working with the best and budding talent in the industry.
  • A conducive intangible environment with dynamic benefits.

Why Consider Machine Learning Engineer as a career with WWS?

WWS offers a very appealing work environment, a setting that makes it easier to build relationships with other staff members and clients. You may also have an opportunity to learn other aspects of office work on the job, which can enhance your experience and qualifications.

Many businesses must react proactively to changing factors, such as patterns of customer behavior or prices. Tracking model performance and retraining models once fresher data is available are key to success. This falls under the MLE's range of responsibilities, for which demand has become crucial in many organizations.
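
A minimal sketch of that monitor-and-retrain loop using scikit-learn; the accuracy threshold, model choice, and data handling are assumptions, not WWS's actual stack.

```python
# Hedged sketch: score the serving model on fresh labeled data and
# retrain on the combined history if performance has drifted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed business threshold, not a WWS figure

def maybe_retrain(model, X_new, y_new, X_hist, y_hist):
    score = accuracy_score(y_new, model.predict(X_new))
    if score >= ACCURACY_FLOOR:
        return model, score  # still healthy, keep serving it
    X_all = np.vstack([X_hist, X_new])
    y_all = np.concatenate([y_hist, y_new])
    retrained = RandomForestClassifier(n_estimators=200, random_state=0)
    retrained.fit(X_all, y_all)
    return retrained, score
```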

Please attach your resume and let us know through email your current address, phone number, and the best time to contact you by phone.


Read more
A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integrations and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7+ years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestration using Airflow.


Technical Experience:

➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.

➢ Experience building data pipelines and applications to stream and process large datasets at low latencies.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark (a minimal Glue job sketch follows this list).

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
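
A minimal AWS Glue job sketch consistent with the responsibilities above; the catalog database, table, and bucket names are placeholders.

```python
# Illustrative Glue ETL job: catalog read, mapping, Parquet write to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (catalog entries assumed to exist)
src = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Simple column mapping / type casting
mapped = ApplyMapping.apply(frame=src, mappings=[
    ("order_id", "string", "order_id", "string"),
    ("amount", "string", "amount", "double"),
    ("order_ts", "string", "order_ts", "timestamp"),
])

# Write curated Parquet back to S3 for Athena / Redshift Spectrum
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```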


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes.

➢ Expert-level skills in writing and optimizing SQL.

➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on Python 3 coding skills (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest; a small pytest sketch follows this list), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
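
For the automated-testing bullet, a small pytest sketch that unit-tests a prediction-API handler; the handler itself is a hypothetical stand-in, not part of any real codebase.

```python
# Illustrative pytest example for API-style unit testing.
import pytest

def predict_handler(payload: dict) -> dict:
    """Hypothetical API handler: validates input and returns a score."""
    if "features" not in payload:
        raise ValueError("missing 'features'")
    features = payload["features"]
    return {"score": sum(features) / max(len(features), 1)}

def test_predict_returns_score():
    assert predict_handler({"features": [1.0, 3.0]})["score"] == 2.0

def test_predict_rejects_bad_payload():
    with pytest.raises(ValueError):
        predict_handler({})
```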
RandomTrees
Posted by Amareswarreddt yaddula
Hyderabad
5 - 16 yrs
₹1L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
SQL
+3 more

We are #hiring an AWS Data Engineer expert to join our team


Job Title: AWS Data Engineer

Experience: 5 to 10 yrs

Location: Remote

Notice: Immediate or Max 20 Days

Role: Permanent Role


Skillset: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.


Job Description:

Able to develop ETL jobs.

Able to help with data curation/cleanup, data transformation, and building ETL pipelines.

Strong Postgres DB experience; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.

SQL, Python, and PySpark are a must.

Good communication skills.





Docsumo
Posted by Vaidehi Tipnis
Remote only
4 - 6 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
OCR

About Us :

Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.

 

As a Senior Machine Learning Engineer, you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.

 

Responsibilities :

  • You will be designing and building systems that help Docsumo process visual data, i.e. PDFs and images of documents.
  • You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
  • You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
  • Should have hands-on experience applying advanced statistical learning techniques to different types of data.
  • Should be able to design, build, and work with RESTful web services in JSON and XML formats (Flask preferred; a minimal Flask sketch follows this list).
  • Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
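
A minimal Flask sketch of the kind of JSON web service mentioned above; the /extract endpoint and response shape are purely illustrative, not Docsumo's API.

```python
# Illustrative Flask JSON service with basic request validation.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/extract", methods=["POST"])
def extract():
    body = request.get_json(silent=True) or {}
    if "document_url" not in body:
        return jsonify({"error": "document_url is required"}), 400
    # Real extraction models would run here; we return a stub result.
    return jsonify({"document_url": body["document_url"],
                    "fields": {"invoice_number": None, "total": None}})

if __name__ == "__main__":
    app.run(port=5000)
```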

 

Skills / Requirements :

  • At least 3 years of experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
  • Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
  • Experience working with OpenCV, TensorFlow, and Keras
  • Experience with the Python data stack: NumPy, scikit-learn, Matplotlib, Pandas
  • Familiarity with Version Control tools such as Git
  • Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
  • Must be self-motivated, flexible, collaborative, with an eagerness to learn
 
Orboai
Posted by Hardika Bhansali
Noida, Mumbai
1 - 3 yrs
₹6L - ₹15L / yr
TensorFlow
OpenCV
OCR
PyTorch
Keras
+10 more

Who Are We

 

A research-oriented company with computer vision and artificial intelligence at its core, Orbo is a comprehensive platform offering an AI-based visual enhancement stack, so companies can find a suitable product for their needs where deep-learning-powered technology automatically improves their imagery.

 

ORBO's solutions are helping industries in multiple ways: digital transformation in BFSI and in beauty and personal care, and image retouching in e-commerce.

 

WHY US

  • Join top AI company
  • Grow with your best companions
  • Continuous pursuit of excellence, equality, respect
  • Competitive compensation and benefits

You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.

 

To learn more about how we work, please check out

https://www.orbo.ai/.

 

Description:

We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This will be a fast-paced role, and the person will get an opportunity to develop an industrial-grade solution from concept to deployment.

 

Responsibilities:

  • Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
  • Lead a team of ML engineers in developing an industrial AI product from scratch
  • Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation, and deployment
  • Tune the models to achieve high accuracy rates and minimum latency
  • Deploy developed computer vision models on edge devices, after optimization, to meet customer requirements

 

 

Requirements:

  • Bachelor’s degree
  • Understanding of the depth and breadth of computer vision and deep learning algorithms
  • Experience taking an AI product from scratch to commercial deployment
  • Experience with image enhancement, object detection, image segmentation, and image classification algorithms
  • Experience in deployment with OpenVINO, ONNX Runtime, and TensorRT (an ONNX Runtime sketch follows this list)
  • Experience deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
  • Experience with machine/deep learning frameworks like TensorFlow and PyTorch
  • Proficient understanding of code versioning tools, such as Git
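
For the deployment bullets above, a hedged sketch of inference with ONNX Runtime; the model path and input shape are assumptions, not a shipped Orbo model.

```python
# Illustrative ONNX Runtime inference on a dummy image batch.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy 224x224 RGB batch; real code would preprocess camera frames
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: frame})
print("output shapes:", [o.shape for o in outputs])
```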

Our perfect candidate is someone that:

  • is proactive and an independent problem solver
  • is a constant learner. We are a fast growing start-up. We want you to grow with us!
  • is a team player and good communicator

 

What We Offer:

  • You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
  • You will be in charge of what you build and be an integral part of the product development process
  • Technical and financial growth!
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python Developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
● Knowledge of object-relational mapping (ORM; a small ORM sketch follows this list)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
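
For the ORM bullet, a small SQLAlchemy sketch against in-memory SQLite; the User model is purely illustrative.

```python
# Minimal ORM example: define a model, create tables, insert, query.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
with Session() as session:
    session.add(User(name="Asha"))
    session.commit()
    print(session.query(User).filter_by(name="Asha").one().id)
```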
Discite Analytics Private Limited
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with at least most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow (an illustrative DAG sketch follows this list).
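
An illustrative Airflow DAG for the workflow-management item above; the tasks and schedule are placeholders, not a real pipeline.

```python
# Minimal two-task Airflow DAG: extract, then load, daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from sources")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```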
A2Tech Consultants
Posted by Dhaval B
Pune
4 - 12 yrs
₹6L - ₹15L / yr
Data engineering
Data Engineer
ETL
Spark
Apache Kafka
+5 more
We are looking for a smart candidate with:
  • Strong Python coding skills and OOP skills
  • Experience with Big Data product architecture
  • Experience with at least one SQL-based database (MySQL, PostgreSQL, etc.) and at least one NoSQL-based database (Cassandra, Elasticsearch, etc.)
  • Hands-on experience with Spark frameworks: RDD, DataFrame, Dataset
  • Experience developing ETL for data products
  • Working knowledge of performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs
  • Working knowledge of file formats: CSV, JSON, XML, PARQUET, ORC, AVRO
  • Good to have: working knowledge of an analytical database such as Druid, MongoDB, or Apache Hive
  • Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have)
Key Skills:
  • Python and Scala (optional), Spark/PySpark, parallel programming