Backend Engineer

at Venture Highway

Posted by Nipun Gupta
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
Full time
Skills
Python
Data engineering
Data Engineer
MySQL
MongoDB
Celery
Apache
Data modeling
RESTful APIs
Natural Language Processing (NLP)
- Experience with Python and data scraping.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.

Preference for candidates working in tech product companies

About Venture Highway

Venture Highway is an India tech-seed fund founded by ex-Googlers. Its leadership includes ex-founders and women, with deep Silicon Valley roots. Venture Highway teams up with the next generation of entrepreneurs by providing guidance and capital to early-stage, nascent ideas in technology.

We are building a unique product that will be used by our investment team to source, qualify & manage early-stage startups. We have a small tech team of 5 people working towards building a robust recommendation engine for startup engagement.

We have an open culture: anyone can question and give suggestions on any feature we are working on. We always try to better understand the business side of things, not just stick to technical architecture and building APIs. We respect individuality & believe that the best ideas should win irrespective of who they come from. We have a flat structure with no hierarchies.

You will get to know, meet & interact with different founders/entrepreneurs and learn about the interesting problem statements they are working on.
Founded
2015
Type
Products & Services
Size
0-20 employees
Stage
Raised funding

Similar jobs

ETL-Database Developer/Lead

at Wissen Technology

Founded 2000  •  Products & Services  •  1000-5000 employees  •  Profitable
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
Databases
Shell Scripting
Perl
Python
KDB
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing using Python, DB2, Greenplum, Autosys, and other technologies.

 

Skills/Expertise Required:

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience in writing stored procedures, integration of database processing, tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading and data processing.
Good to have hands-on experience working with Perl or Python.
Hands on development using Spark / KDB / Greenplum platform will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance and developing software to turn proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills

Job posted by
Lokesh Manikappa

Sr. Data Engineer

at TIGI HR Solution Pvt. Ltd.

Founded 2014  •  Services  •  employees  •  Profitable
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Amazon Web Services (AWS)
AWS Lambda
EMR
Apache Kafka
HiveQL
recommendation algorithm
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr

Roles and Responsibilities

  • Build high-level technical designs for both streaming and batch processing systems
  • Design and build reusable components, frameworks and libraries at scale to support analytics data products
  • Perform POCs on new technologies and architecture patterns
  • Design and implement product features in collaboration with business and Technology stakeholders
  • Anticipate, identify and solve issues concerning data management to improve data quality
  • Clean, prepare and optimize data at scale for ingestion and consumption
  • Drive the implementation of new data management projects and re-structure of the current data architecture
  • Implement complex automated workflows and routines using workflow scheduling tools
  • Build continuous integration, test-driven development and production deployment frameworks
  • Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards
  • Analyze and profile data for the purpose of designing scalable solutions
  • Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues
  • Lead, mentor, and develop other senior and junior data engineers in adopting best practices and delivering data products.
  • Partner closely with product management to understand business requirements and break down Epics.
  • Partner with Engineering Managers to define technology roadmaps, align on design, architecture and enterprise strategy

 

Capabilities Required

  • Expert-level expertise in building big data solutions
  • Hands-on experience building cloud-scalable, real-time, and high-performance data lake solutions using AWS, EMR, S3, Hive & Spark, Athena
  • Hands-on experience in delivering batch and streaming jobs
  • Expertise in an agile and iterative model
  • Expert-level expertise in relational SQL
  • Experience with scripting languages such as Shell, Python
  • Experience with source control tools such as GitHub and related dev processes
  • Experience with workflow scheduling tools like Airflow
  • In-depth understanding of microservices architecture
  • Strong understanding of developing complex data solutions
  • Experience working on end-to-end solution design
  • Able to lead others in solving complex Data and Analytics problems
  • Strong understanding of data structures and algorithms
  • Strong hands-on experience in solution and technical design
  • Has a strong problem solving and analytical mindset
  • Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
  • Able to quickly pick up new programming languages, technologies, and frameworks
Job posted by
Himanshu Chavla

Data Analyst

at Rooter Sports Technologies Pvt Ltd

Founded 2016  •  Product  •  100-500 employees  •  Raised funding
Python
SQL
Data Analytics
Tableau
Google Analytics
CleverTap
Delhi
1 - 3 yrs
₹6L - ₹16L / yr

About Rooter

Rooter is India’s leading game and esports streaming platform. With more than 450 mn gamers playing games like BGMI, Call of Duty, Minecraft, etc., India is one of the largest mobile gaming markets in the world.

 

Rooter, considered the "Twitch of India", has managed to create a strong leadership position with more than 40 mn downloads, 10 mn monthly active users, and 1 mn daily active users. Rooter works closely with top esports teams and esports tournament organizers, providing exclusive gaming content to its users. We also work with more than 10,000 professional streamers, and our unique UGC features have led to 1 mn users creating content on the Rooter app every month. We have been the no. 1 app in the Sports category on Google Play Store in India since Oct 2020.

We recently announced a $25 mn Series A funding with investors like Lightbox, March Capital, Duane Park Ventures, Advantage, Paytm, Goal Ventures, and IeAD.

Gaming is going to be the focal point of Metaverse and we look forward to having you with us as we lead that revolution in India and the rest of the world.

Website: https://web.rooter.gg/

LinkedIn: https://www.linkedin.com/company/hello-rooter/

 

Product- Check our app on Google Playstore, Appstore or https://www.rootersports.com/

 

Job Description: Data Analyst

Location: New Delhi

 

What are we looking for?

We are looking for a data enthusiast who will:

● Work closely with product leaders as a data-driven advisor on strategic issues.

● Work collaboratively with product teams to deliver actionable insights into our product to further increase user engagement and retention.

● Proactively perform a wide range of analyses to identify trends, issues, and opportunities across the product.

● Answer business-related questions through exploratory data analysis and ad-hoc reporting.

 

Skillsets:

● BA/BS in Math, Economics, Statistics, or other quantitative fields

● 2+ years of work experience doing quantitative analysis

● Strong knowledge of SQL and Python

● Ability to synthesize and communicate complex concepts and analyses in easy-to-understand ways

● Expert experience pulling large and complex data using SQL

● Experience with a data visualization tool (e.g. Mode Analytics/Chartio/Tableau)

● Familiarity with web analytics tools (e.g. Google Analytics/CleverTap)

● Excellent verbal and written communication skills

● Self-motivated, detail-oriented, highly organized, and able to learn autonomously

 

What can you expect?

● Access to all possible data development and mining tools

● Translate product strategy into reality backed by strong data trends

● Scope and prioritize activities based on business and customer impact.

● Work closely with product and engineering teams to deliver with quick time-to-market and optimal resources.

 

Qualification:

  • BA/BS in Math, Economics, Statistics, or other quantitative fields

 

Team & Office

We are a strong 100+ member team working out of our Head Office in East of Kailash, New Delhi, with a great working environment built around sports.

Job posted by
Rashleen Kaur

Data Scientist

at Billeasy

Founded 2016  •  Products & Services  •  0-20 employees  •  Raised funding
Python
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
recommendation algorithm
Mumbai
5 - 7 yrs
₹5L - ₹9L / yr

Job Details:-

Designation - Data Scientist

Urgently required (notice period of maximum 15 days).

Location:- Mumbai

Experience:- 5-7 years.

Package Offered:- Rs.5,00,000/- to Rs.9,00,000/- pa.

 

Data Scientist

 

Job Description:-

Responsibilities:

  • Identify valuable data sources and automate collection processes
  • Undertake preprocessing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams

 

Requirements:

  • Proven experience as a Data Scientist or Data Analyst
  • Experience in data mining
  • Understanding of machine-learning and operations research
  • Knowledge of R, SQL and Python; familiarity with Scala, Java is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
  • Analytical mind and business acumen
  • Strong math skills (e.g. statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
Job posted by
Kayum Ansari

Data Analyst (Game Optimisation)

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Analytics
Data Science
SQL
NOSQL Databases
Python
Bengaluru (Bangalore)
1 - 8 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Analyst (Games Optimisation) you will be responsible for optimising in-game features and design, utilising A/B testing and multivariate testing of in-game components.


What you will be doing 

  • Investigate how millions of players interact with Kwalee games.

  • Perform statistical analysis to quantify the relationships between game elements and player engagement.

  • Design experiments which extract the most valuable information in the shortest time.

  • Develop testing plans which reveal complex interactions between game elements. 

  • Collaborate with the design team to come up with the most effective tests.

  • Regularly communicate results with development, management and data science teams.

 
How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.

  • You'll think creatively, be motivated by challenges, and constantly strive for the best.

  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and requirements

  • A degree in a numerically focused discipline such as Maths, Physics, Economics, Chemistry, Engineering, or Biological Sciences

  • A record of outstanding problem solving ability in a commercial or academic setting

  • Experience using Python for data analysis and visualisation.

  • An excellent knowledge of statistical testing and experiment design.

  • Experience manipulating data in SQL and/or NoSQL databases.


We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment

  • In addition to a competitive salary we also offer private medical cover and life assurance

  • Creative Wednesdays! (Design and make your own games every Wednesday)

  • 20 days of paid holidays plus bank holidays 

  • Hybrid model available depending on the department and the role

  • Relocation support available 

  • Great work-life balance with flexible working hours

  • Quarterly team building days - work hard, play hard!

  • Monthly employee awards

  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Scientist

at a software product company working on petabyte scale data

Agency job
via RS Consultants
Data Science
Data Scientist
Python
Java
Apache Kafka
pandas
NumPy
Scikit-Learn
Amazon Web Services (AWS)
Go Programming (Golang)
airflow
Pune
7 - 15 yrs
₹30L - ₹50L / yr

We are looking for an exceptional Data Scientist who is passionate about data and motivated to build large-scale machine learning solutions. This person will contribute to the analytics of data for insight discovery and to the development of a machine learning pipeline to support modelling of terabytes of daily data for various use cases.

 

Typical persona: Data Science Manager / Architect

 

Experience: 8+ years programming/engineering experience (with at least last 4 years in big data, Data science)

 

Must have:

  • Hands-on Python: Pandas, Scikit-Learn
  • Working knowledge of Kafka
  • Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
  • Good analytical and debugging skills
  • Strong communication skills

Desired (in order of priorities):

  • Go (Strong advantage)
  • Airflow (Strong advantage)
  • Familiarity & working experience on more than one type of database: relational, object, columnar, graph and other unstructured databases
  • Data structures, Algorithms
  • Experience with multi-threading and thread synchronization concepts
  • AWS SageMaker
  • Keras
  • Strong experience in Python programming (minimum 4 years)
Job posted by
Rahul Inamdar

Data Engineer

at Servian

Founded 2008  •  Products & Services  •  100-1000 employees  •  Raised funding
Data engineering
ETL
Data Warehouse (DWH)
Powershell
DA
SQL
Python
Cloud Computing
Data modeling
Data migration
Data Visualization
Scripting
Bengaluru (Bangalore)
2 - 8 yrs
₹10L - ₹25L / yr
Who we are
 
We are a consultant-led organisation. We invest heavily in our consultants to ensure they have the technical skills and commercial acumen to be successful in their work.
 
Our consultants have a passion for data and solving complex problems. They are curious, ambitious, and experts in their fields. We have developed a first-rate team, so you will be supported by and learn from the best.

About the role

  • Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.

  • As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud first world and own those integrations end to end working closely with business units. You will design and build for efficiency, reliability, security and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.

Mandatory experience

    • 1-6 years of relevant experience
    • Strong SQL skills and data literacy
    • Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
    • Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
    • Experience in an enterprise data environment
    • Strong communication skills

Desirable experience

    • Ability to work on data architecture, data models, data migration, integration and pipelines
    • Ability to work on data platform modernisation from on-premise to cloud-native
    • Proficiency in data security best practices
    • Stakeholder management experience
    • Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
    • Desire to gain breadth and depth of technologies to support customer's vision and project objectives

What to expect if you join Servian?

    • Learning & Development: We invest heavily in our consultants and offer internal training weekly (both technical and non-technical alike!) and abide by a ‘You Pass, We Pay’ policy.
    • Career progression: We take a longer term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
    • Variety of projects: As a consultant, you will have the opportunity to work across multiple projects across our client base significantly increasing your skills and exposure in the industry.
    • Great culture: Working on the latest Apple MacBook pro in our custom designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks and metro station.
    • Professional development: We invest heavily in professional development both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
Job posted by
Sakshi Nigam

Big Data Engineer

at Netmeds.com

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Scala
Spark
Data Warehousing
Machine Learning (ML)
Deep Learning
SQL
Data modeling
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS
Chennai
2 - 5 yrs
₹6L - ₹25L / yr

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark). Ability to create, manage, and manipulate Spark DataFrames. Expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.

 

Qualifications and Experience:

Engineering and post graduate candidates, preferably in Computer Science, from premier institutions with proven work experience as a Big Data Engineer or a similar role for 3-5 years.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

Job posted by
Vijay Hemnath

Computer Vision Engineer - Machine Learning/Open Source

at Artivatic.ai

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
OpenCV
Machine Learning (ML)
Deep Learning
Python
Artificial Intelligence (AI)
Data Science
Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹12L / yr
About Artivatic:

Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million-plus people to get insurance, financial access and health benefits, using alternative data sources to increase their productivity, efficiency, automation power and profitability, hence improving their way of doing business more intelligently & seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance and more. We have raised US $300K earlier, built products successfully, and done a few PoCs successfully with some top enterprises in the insurance, banking & health sectors. Currently, we are 4 months away from generating continuous revenue.

Skills:

We at Artivatic are seeking a passionate, talented and research-focused computer engineer with a strong machine learning and computer vision background to help build industry-leading technology with a focus on document text extraction and parsing using OCR across different languages.

Qualifications:

- Bachelors or Masters degree in Computer Science, Computer Vision or a related field, with specialization in image processing or machine learning.
- Research experience in deep learning models for image processing or OCR-related fields is preferred.
- A publication record in deep learning models for computer vision conferences/journals is a plus.

Required Skills:

- Excellent skills developing in Python in a Linux environment.
- Programming skills with multi-threaded GPU CUDA computing and API solutions.
- Experience applying machine learning and computer vision principles to real-world data and working with scanned and documented images.
- Good knowledge of computer science, math and statistics fundamentals (algorithms and data structures, meshing, sampling theory, linear algebra, etc.)
- Knowledge of data science technologies such as Python, Pandas, Scipy, Numpy, matplotlib, etc.
- Broad computer vision knowledge: construction, feature detection, segmentation, classification; machine/deep learning: algorithm evaluation, preparation, analysis, modeling and execution.
- Familiarity with OpenCV, Dlib, Yolo, Capsule Network or similar, and with open-source AR platforms and products.
- Strong problem-solving and logical skills.
- A go-getter kind of attitude with a willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Responsibilities:

- Developing novel algorithms and modeling techniques to advance the state of the art in document and text extraction.
- Image recognition, object identification and visual recognition.
- Working closely with R&D and machine learning engineers implementing algorithms that power user- and developer-facing products.
- Being responsible for measuring and optimizing the quality of your algorithms.

Experience: 3 years+

Location: Sony World Signal, Koramangala 4th Block, Bangalore
Job posted by
Layak Singh

Data Scientist

at Woodcutter Film Technologies Pvt. Ltd.

Founded 2018  •  Products & Services  •  0-20 employees  •  Bootstrapped
Data Science
R Programming
Python
icon
Hyderabad
icon
1 - 5 yrs
icon
₹3L - ₹6L / yr
We're an early-stage film-tech startup with a mission to empower filmmakers and independent content creators with data-driven decision-making tools. We're looking for a data person to join the core team. Please get in touch if you would be excited to join us on this super exciting journey of disrupting the film production and distribution business. We are currently collaborating with Rana Daggubati's Suresh Productions and work out of their studio in Hyderabad, so exposure and opportunities to work on real issues faced by the media industry will be plentiful.
Job posted by
Athul Krishnan