AWS Service Developer
Posted by Andleeb Mujeeb
2 - 6 yrs
₹12L - ₹18L / yr
Remote only
Skills
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)
Java
AWS Lambda
ECS
NLB
Amazon S3
Amazon Aurora
Spark
Apache Kafka
Redis
Amazon VPC
Amazon Athena
Amazon EMR
Serverless
Kubernetes
AWS Fargate
ALB
AWS Glue
Amazon CloudWatch
Containers

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • The person will be primarily responsible for developing solutions using AWS services such as Fargate, Lambda, ECS, ALB, NLB and S3.
  • Apply advanced troubleshooting techniques to resolve issues pertaining to service availability, performance and resiliency.
  • Monitor and optimize performance using AWS dashboards and logs.
  • Partner with engineering leaders and peers to deliver technology solutions that meet business requirements.
  • Work with the cloud team in an agile approach and develop cost-optimized solutions.

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3 etc.

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification preferred
  • Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, SNS trust configuration)
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora and other AWS services
  • Knowledge of storage (S3, lifecycle management, event configuration) preferred
  • Strong grounding in data structures and programming (PySpark / Python / Golang / Scala)
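
For a concrete flavor of the monitoring work listed above, here is a minimal boto3 sketch that publishes a custom CloudWatch metric. The namespace, metric name and dimensions are hypothetical examples, not part of the actual role.

```python
# Minimal sketch: publishing a custom CloudWatch metric with boto3.
# Namespace, metric name and dimensions are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_data(
    Namespace="OrderService",            # hypothetical application namespace
    MetricData=[{
        "MetricName": "OrderLatencyMs",  # hypothetical custom metric
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
        "Value": 123.0,
        "Unit": "Milliseconds",
    }],
)
```

A CloudWatch alarm on such a metric could then drive the SNS notifications mentioned above.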

About Angel One

Founded: 1996
Stage: Profitable

We are Angel One (formerly known as Angel Broking), India's most trusted fintech company and an all-in-one financial house. Founded in 1996, Angel One offers a world-class experience across all digital channels, including web, trading software and mobile applications, to help millions of Indians make informed investment decisions.


Certified as a Great Place To Work for six consecutive years, we are driven by technology and a mission to become the No. 1 fintech organization in India. With a 9.2 million+ registered client base and 18+ million app downloads, we are onboarding more than 400,000 new users every month. We are working to build personalized financial journeys for customers via a single app, powered by new-age engineering tech and machine learning.


We are a group of self-driven, motivated individuals who enjoy taking ownership and believe in providing the best value for money to investors through innovative products and investment strategies. We apply and amplify design thinking in our products and solutions.


Ours is a flat structure, with ample opportunity to showcase your talent and a growth path for engineers to the very top. We are remote-first, with people spread across Bangalore, Mumbai and the UAE. Here are some of the perks you'll enjoy as an Angelite:


  • Work with a world-class peer group from leading organizations
  • An exciting, dynamic and agile work environment
  • Freedom to ideate, innovate, express, solve and create customer experiences through #Fintech & #ConsumerTech
  • Cutting-edge technology and products / digital platforms of the future
  • Continuous learning interventions and upskilling
  • An open culture of collaboration where failing fast is encouraged to invent new ways and methods; join our Failure Club to experience it
  • Certified as a Great Place To Work for six consecutive years
  • Highly competitive pay structures, among the best


Come say Hello to ideas and goodbye to hierarchies at Angel One!


Similar jobs

Signdesk
Posted by Anandhu Krishna
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
ETL architecture
MongoDB
Business Intelligence (BI)
Amazon Web Services (AWS)
Snowflake schema

We are seeking a skilled AWS ETL/ELT Data Architect with a specialization in MongoDB to join our team. The ideal candidate will possess comprehensive knowledge and hands-on experience in designing, implementing, and managing ETL/ELT processes within AWS, while also demonstrating proficiency in MongoDB database management. This role requires expertise in data architecture, AWS services, and MongoDB to optimize data solutions effectively.


Responsibilities:


● Design, architect, and implement ETL/ELT processes within AWS, integrating data from various sources into data lakes or warehouses, and utilising MongoDB as part of the data ecosystem.

● Collaborate cross-functionally to assess data requirements, analyze sources, and strategize effective data integration within AWS environments, considering MongoDB's role in the architecture.

● Construct scalable and high-performance data pipelines within AWS while integrating MongoDB for optimal data storage, retrieval, and manipulation.

● Develop comprehensive documentation covering data architecture, flows, and the interplay between AWS services, MongoDB, and ETL/ELT processes from scratch.

● Perform thorough data profiling, validation, and troubleshooting, ensuring data accuracy, consistency, and integrity in conjunction with MongoDB management.

● Stay updated with AWS and MongoDB best practices, emerging technologies, and industry trends to propose innovative data solutions and implementations.

● Provide mentorship to junior team members and foster collaboration with stakeholders to deliver robust data solutions.

● Analyze data issues and identify and articulate the business impact of data problems.

● Perform code reviews and ensure that all solutions align with pre-defined architectural standards, guidelines and best practices, and meet quality standards.


Qualifications:


● Bachelor's or Master’s degree in Computer Science, Information Technology, or related field.

● Minimum 5 years of hands-on experience in ETL/ELT development, data architecture, or similar roles.

● Implementation of at least 3-4 live projects in a similar field is desirable.

● Expertise in designing and implementing AWS-based ETL/ELT processes using tools like AWS Glue, AWS Data Pipeline, etc.
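
As a hedged illustration of the AWS-based ETL work this role describes, below is a minimal PySpark sketch of the kind of job that could run on AWS Glue or EMR. Bucket names, paths and column names are hypothetical placeholders, not a real project pipeline.

```python
# Minimal ETL sketch in PySpark, of the kind that could run on AWS Glue or
# EMR. Bucket names, paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON events landed in S3
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Transform: deduplicate, drop bad rows, derive a partition column
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: columnar output partitioned for downstream analytics
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```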

Mumbai
5 - 10 yrs
₹8L - ₹20L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more


Data Scientist – Delivery & New Frontiers Manager 

Job Description:   

We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate will be above those of a typical data scientist: beyond technical expertise, problem solving in complex set-ups will be key to success in this role.

Responsibilities: 

  • Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities.
  • Map complex business problems to data science problems and design data science solutions using GCP / Azure Databricks platforms.
  • Collect, clean, and preprocess large datasets from various internal and external sources.
  • Streamline the data science process, working with Data Engineering and Technology teams.
  • Manage multiple analytics projects within a function to deliver end-to-end data science solutions, create insights, and identify patterns.
  • Develop and maintain data pipelines and infrastructure to support the data science projects.
  • Communicate findings and recommendations to stakeholders through data visualizations and presentations.
  • Stay up to date with the latest data science trends and technologies, specifically on GCP.

 

Education / Certifications:  

Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics, or Mathematics.

Job specific requirements:  

  • 5+ years of deep data science experience
  • Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure, or AWS (a minimal sketch follows this list)

  • Experience with programming languages such as Python, R, Spark
  • Experience with data visualization tools such as Tableau, Power BI, and D3.js
  • Strong understanding of data structures, algorithms, and software design principles
  • Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage
  • Experience configuring and setting up version control for code, data, and machine learning models using GitHub
  • Self-driven; able to work with cross-functional teams in a fast-paced environment and adapt to changing business needs
  • Strong analytical and problem-solving skills
  • Excellent verbal and written communication skills
  • Working knowledge of application architecture, data security, and compliance
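
As a small, hypothetical sketch of the statistical modeling skills listed above (synthetic data only, not a real project):

```python
# Small illustrative sketch of statistical modeling: synthetic data, a
# logistic regression, and an AUC check. Nothing here is project-specific.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))  # 5 synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```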


India's best Short Video App
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for the Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc. (see the streaming sketch after this list)
3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcache)
Experience with designing and building high-scale app backends and micro-services leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability and fault-tolerance
Experience in analysing and improving the efficiency, scalability and stability of distributed systems and backend micro-services
5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products
Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper, NoSQL systems etc.
Agile methodologies, sprint management, roadmaps, mentoring, documenting, software architecture
Liaison with Product Management, DevOps, QA, Client and other teams
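
To make the Spark streaming / Kafka streaming line above concrete, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic. The broker address and topic name are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Sketch of Spark Structured Streaming reading from Kafka. The broker
# address and topic are hypothetical, and the spark-sql-kafka connector is
# assumed to be available on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "user-events")
    .load()
)

# Kafka delivers raw bytes; cast the payload and count events per minute
counts = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```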
 
Your Experience Across The Years in the Roles You’ve Played
 
Have a total of 5-7 years of experience, with 2-3 years in a startup.
Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred.
Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what's right for the creator is ultimately what will drive our business forward.
Obsessed with quality: Your production code just works and scales linearly.
Team players: You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow and improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it's appropriate to take shortcuts that don't sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Episource
Posted by Ahamed Riaz
Mumbai
5 - 12 yrs
₹18L - ₹30L / yr
Big Data
Python
Amazon Web Services (AWS)
Serverless
DevOps
+4 more

ABOUT EPISOURCE:


Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data and analytics to enable better documentation of care for patients with chronic diseases.


The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question - how can data be “deployed”? Our analytics platforms and datalakes ingest huge quantities of data daily, to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.


What’s our poison you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, ELK. For machine learning and NLP, we are big fans of keras, spacy, scikit-learn, pandas and numpy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.


ABOUT THE ROLE:


We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.


This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionalize them using a large range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.


During the course of a typical day with our team, expect to work on one or more projects around the following:


1. Create and maintain optimal data pipeline architectures for ML

2. Develop a strong API ecosystem for ML pipelines

3. Build CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform and Ansible

4. Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems

5. Apply software engineering best practices across the development lifecycle: coding standards, code reviews, source management, build processes, testing and operations

6. Deploy data pipelines in production using Infrastructure-as-Code platforms

7. Design scalable implementations of the models developed by our Data Science teams

8. Big data and distributed ML with PySpark on AWS EMR, and more! (a rough sketch follows this list)
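
As a rough sketch of item 8, assuming a hypothetical feature table in S3 and the standard pyspark.ml API:

```python
# Rough sketch of distributed ML with PySpark on an EMR-style cluster.
# The S3 path, feature columns and label are hypothetical placeholders.
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("distributed-ml").getOrCreate()

df = spark.read.parquet("s3://example-bucket/features/")  # hypothetical path

# Assemble raw columns into the single vector column pyspark.ml expects
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
train = assembler.transform(df).select("features", "label")

# The fit is distributed across the cluster's executors
model = LogisticRegression(labelCol="label").fit(train)
print(model.coefficients)
```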



BASIC REQUIREMENTS 


  1. Bachelor's degree or greater in Computer Science, IT or related fields

  2. Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects

  3. Strong experience with bash scripting, Unix environments and building scalable/distributed systems

  4. Experience with automation/configuration management using Ansible, Terraform, or equivalent

  5. Very strong experience with AWS and Python

  6. Experience building CI/CD systems

  7. Experience with containerization technologies like Docker, Kubernetes, ECS, EKS or equivalent

  8. Ability to build and manage application and performance monitoring processes

Digital Banking Firm
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
+5 more
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including data platforms, a machine learning platform, and platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention, and many more. We are also the Data Science and Analytics partners for Product, and provide Behavioural Science insights across Jupiter.
 
About the role:

We’re looking for strong Software Engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elasticsearch, TensorFlow, PyTorch and other technologies to build the next-generation Data Platform, ML Platform and Experimentation Platform. If this sounds interesting, we’d love to hear from you!
This role will involve designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

Key Responsibilities:

Participate in, own, and influence the architecting and designing of systems
Collaborate with other engineers, data scientists and product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drive better decision making
 

Required Skills:

Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech like: Hadoop, Apache Spark
Kloud9 Technologies
Remote only
8 - 14 yrs
₹35L - ₹45L / yr
Google Cloud Platform (GCP)
Agile/Scrum
SQL
Python
Apache Kafka
+1 more

Senior Data Engineer


Responsibilities:

●      Clean, prepare and optimize data at scale for ingestion and consumption by machine learning models

●      Drive the implementation of new data management projects and re-structure of the current data architecture

●      Implement complex automated workflows and routines using workflow scheduling tools (a minimal sketch follows this list)

●      Build continuous integration, test-driven development and production deployment frameworks

●      Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards 

●      Anticipate, identify and solve issues concerning data management to improve data quality

●      Design and build reusable components, frameworks and libraries at scale to support machine learning products 

●      Design and implement product features in collaboration with business and Technology stakeholders 

●      Analyze and profile data for the purpose of designing scalable solutions 

●      Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues

●      Mentor and develop other data engineers in adopting best practices 

●      Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
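
A minimal sketch of the workflow-scheduling responsibility noted above, using Airflow; the DAG id, task names and task logic are hypothetical.

```python
# Illustrative Airflow DAG for the workflow-scheduling responsibility above.
# DAG id, task names and logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")   # placeholder step

def transform():
    print("clean and reshape for ML")    # placeholder step

with DAG(
    dag_id="ml_data_prep",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform  # transform runs only after extract succeeds
```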






Qualifications:

●      8+ years of experience developing scalable Big Data applications or solutions on distributed platforms

●      Experience in Google Cloud Platform (GCP) and good to have other cloud platform tools

●      Experience working with Data warehousing tools, including DynamoDB, SQL, and Snowflake

●      Experience architecting data products in Streaming, Serverless and Microservices Architecture and platform.

●      Experience with Spark (Scala/Python/Java) and Kafka

●      Work experience using Databricks (Data Engineering and Delta Lake components)

●      Experience working with Big Data platforms, including Dataproc, Databricks etc

●      Experience working with distributed technology tools including Spark, Presto, Databricks, Airflow

●      Working knowledge of Data warehousing, Data modeling


●      Experience working in Agile and Scrum development process

●      Bachelor's degree in Computer Science, Information Systems, Business, or other relevant subject area


Role: Senior Data Engineer

Total No. of Years: 8+ years of relevant experience

To be onboarded by: Immediate

Notice Period:

Skills (Mandatory / Desirable, with min and max years of project experience):

  • GCP exposure: Mandatory, 3 to 7 years
  • BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, Spark and PySpark: Mandatory, 5 to 9 years
  • Relational SQL: Mandatory, 4 to 8 years
  • Shell scripting language: Mandatory, 4 to 8 years
  • Python / Scala language: Mandatory, 4 to 8 years
  • Airflow/Kubeflow workflow scheduling tool: Mandatory, 3 to 7 years
  • Kubernetes: Desirable, 1 to 6 years
  • Scala: Mandatory, 2 to 6 years
  • Databricks: Desirable, 1 to 6 years
  • Google Cloud Functions: Mandatory, 2 to 6 years
  • GitHub source control tool: Mandatory, 4 to 8 years
  • Machine Learning: Desirable, 1 to 6 years
  • Deep Learning: Desirable, 1 to 6 years
  • Data structures and algorithms: Mandatory, 4 to 8 years

Rakuten
Agency job
via zyoin by RAKESH RANJAN
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
+4 more

Company Overview:

Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan, and the third largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for 'optimism.' It means we believe in the future. It's an understanding that, with the right mind-set, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website: https://www.rakuten.com/

Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds

Company size: 10,001+ employees

Founded: 1997

Headquarters: Tokyo, Japan

Work location: Bangalore (M.G. Road)


Please find the job description below.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, SocketIO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, Golang
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross-functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current-state solutions to identify areas to improve standards, simplify, and enhance functionality, and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team about the same
  • Experience with event-driven architecture and complex event processing (a minimal consumer sketch follows this list)
  • Extensive experience building and owning large-scale distributed backend systems
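
For the event-driven processing experience above, a minimal kafka-python consumer sketch; the broker and topic are hypothetical, and a production system would feed these events into Flink/Spark/Storm jobs rather than printing them.

```python
# Minimal kafka-python consumer sketch for the event-processing experience
# above. Broker and topic are hypothetical; this is not a production setup.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "network-events",                     # hypothetical topic
    bootstrap_servers="broker:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real system would route events into downstream stream-processing
    # jobs; here we just print the payload.
    print(message.topic, message.offset, event)
```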
Ideapoke Technologies
Posted by Divya D
Remote, Bengaluru (Bangalore)
4 - 7 yrs
₹12L - ₹25L / yr
Data Science
Data Scientist
R Programming
Python

Job Description

We are looking for applicants who have a demonstrated research background in machine learning, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ideapoke, in particular with researchers in data collection and development teams to develop advanced data analytics solutions. Work with massive amounts of data collected from various sources.
  • 4 to 5 years of academic or professional experience in Artificial Intelligence and Data Analytics, Machine Learning, Natural Language Processing/Text Mining or a related field
  • Technical ability and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL and SQL
Largest Analytical firm
Bengaluru (Bangalore)
4 - 14 yrs
₹10L - ₹28L / yr
Hadoop
Big Data
Spark
Scala
Python
+2 more

  • Advanced Spark programming skills
  • Advanced Python skills
  • Data engineering ETL and ELT skills
  • Expertise in streaming data
  • Experience in the Hadoop ecosystem
  • Basic understanding of cloud platforms
  • Technical design skills, alternative approaches
  • Hands-on expertise in writing UDFs (see the sketch after this list)
  • Hands-on expertise in streaming data ingestion
  • Ability to independently tune Spark scripts
  • Advanced debugging skills and large-volume data handling
  • Ability to independently break down and plan technical tasks
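
A minimal sketch of a PySpark UDF, assuming a hypothetical ticker-normalization use case:

```python
# Minimal PySpark UDF sketch; the data and normalization rule are
# hypothetical. Prefer built-in functions where one exists, since Python
# UDFs bypass Catalyst optimizations and add serialization overhead.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-demo").getOrCreate()
df = spark.createDataFrame([(" aapl", 182.5), ("TSLA ", 251.1)], ["ticker", "price"])

# The UDF runs arbitrary Python against each row's value
normalize_ticker = udf(lambda s: s.strip().upper(), StringType())

df.withColumn("ticker", normalize_ticker(col("ticker"))).show()
```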

Woodcutter Film Technologies Pvt. Ltd.
Athul Krishnan
Posted by Athul Krishnan
Hyderabad
1 - 5 yrs
₹3L - ₹6L / yr
Data Science
R Programming
Python
We're an early-stage film-tech startup with a mission to empower filmmakers and independent content creators with data-driven decision-making tools. We're looking for a data person to join the core team. Please get in touch if you would be excited to join us on this super exciting journey of disrupting the film production and distribution business. We are currently collaborating with Rana Daggubati's Suresh Productions and work out of their studio in Hyderabad, so exposure and opportunities to work on real issues faced by the media industry will be plentiful.