MediaMelon Inc

Big Data Developer

Posted by Katreddi Kiran Kumar
1 - 7 yrs
₹0L / yr
Bengaluru (Bangalore)
Skills
Scala
Spark Streaming
Aerospike
Cassandra
Apache Kafka
Big Data
Elasticsearch
Develop analytics tools for Big Data and distributed systems:
- Provide technical leadership on developing our core analytics platform
- Lead development efforts on product features using Scala/Java
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns
- Expert in building applications using Spark and Spark Streaming (a sketch of this pattern follows below)
- Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin and Mahout
- Extensive experience with Hadoop and machine learning algorithms
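For context on the Spark Streaming plus Kafka stack named above, a minimal PySpark Structured Streaming sketch of that pattern is shown below. The broker address, topic name and event schema are invented for illustration (not MediaMelon's actual pipeline), and the job assumes the spark-sql-kafka connector package is available.

```python
# Minimal sketch: consume viewer events from Kafka and aggregate them with
# Spark Structured Streaming. Broker, topic and schema are illustrative
# assumptions, not MediaMelon's actual pipeline.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("viewer-analytics").getOrCreate()

event_schema = (StructType()
                .add("session_id", StringType())
                .add("buffer_ms", LongType())
                .add("ts", LongType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "view-events")                # hypothetical topic
       .load())

# Parse the JSON payload into typed columns.
events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

# Average buffering per 1-minute window -- the kind of real-time QoE metric
# the posting alludes to.
agg = (events
       .withColumn("event_time", (col("ts") / 1000).cast("timestamp"))
       .groupBy(window(col("event_time"), "1 minute"))
       .avg("buffer_ms"))

query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```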

About MediaMelon Inc

Founded: 2012
Stage: Raised funding
About
MediaMelon provides content-aware streaming technology for multi-screen and OTT video services, enabling premium viewing experiences while reducing operational costs, increasing revenues, and improving profitability, all without changing the existing video workflow. Its technology optimizes video delivery by analyzing content and viewer behavior in real time, allowing efficient use of network resources and reducing buffering and start-up delays. The company's solutions are used by video service providers, including broadcasters, cable operators, and streaming services, to deliver high-quality video content to their customers across industries such as entertainment, sports, news, and education.
Connect with the team: Ranjana Badyal, Katreddi Kiran Kumar

Similar jobs

Kloud9 Technologies
Remote only
8 - 14 yrs
₹35L - ₹45L / yr
Google Cloud Platform (GCP)
Agile/Scrum
SQL
Python
Apache Kafka
+1 more

Senior Data Engineer


Responsibilities:

● Clean, prepare and optimize data at scale for ingestion and consumption by machine learning models
● Drive the implementation of new data management projects and the restructuring of the current data architecture
● Implement complex automated workflows and routines using workflow scheduling tools (a minimal example follows this list)
● Build continuous integration, test-driven development and production deployment frameworks
● Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers, in support of maintaining data engineering standards
● Anticipate, identify and solve issues concerning data management to improve data quality
● Design and build reusable components, frameworks and libraries at scale to support machine learning products
● Design and implement product features in collaboration with business and technology stakeholders
● Analyze and profile data for the purpose of designing scalable solutions
● Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues
● Mentor and develop other data engineers in adopting best practices
● Influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
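A minimal sketch of the workflow-scheduling responsibility above, using Airflow (which the skills matrix below also names); the DAG id, schedule and task callables are placeholder assumptions, not a known Kloud9 pipeline.

```python
# Minimal Airflow DAG sketch for the workflow-scheduling responsibility.
# The DAG id, schedule and task bodies are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pull raw data from the source systems")  # placeholder step


def transform():
    print("clean and prepare data for ML consumption")  # placeholder step


with DAG(
    dag_id="ml_data_prep",  # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # run transform only after ingest succeeds
```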

Qualifications:

● 8+ years of experience developing scalable Big Data applications or solutions on distributed platforms
● Experience with Google Cloud Platform (GCP); experience with other cloud platforms is a plus
● Experience working with data warehousing tools, including DynamoDB, SQL, and Snowflake
● Experience architecting data products on streaming, serverless and microservices architectures and platforms
● Experience with Spark (Scala/Python/Java) and Kafka
● Work experience using Databricks (Data Engineering and Delta Lake components)
● Experience working with Big Data platforms, including Dataproc, Databricks, etc.
● Experience working with distributed technology tools, including Spark, Presto, Databricks and Airflow
● Working knowledge of data warehousing and data modeling
● Experience working in Agile and Scrum development processes
● Bachelor's degree in Computer Science, Information Systems, Business, or another relevant subject area


Role: Senior Data Engineer
Total No. of Years: 8+ years of relevant experience
To be onboarded by: Immediate
Notice Period:

Skills (Mandatory / Desirable, with min to max years of project experience):

Skill                                    Mandatory/Desirable   Project Exp (years)
GCP exposure                             Mandatory             3 to 7
  (BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep)
Spark and PySpark                        Mandatory             5 to 9
Relational SQL                           Mandatory             4 to 8
Shell scripting                          Mandatory             4 to 8
Python/Scala                             Mandatory             4 to 8
Airflow/Kubeflow workflow scheduling     Mandatory             3 to 7
Kubernetes                               Desirable             1 to 6
Scala                                    Mandatory             2 to 6
Databricks                               Desirable             1 to 6
Google Cloud Functions                   Mandatory             2 to 6
GitHub source control                    Mandatory             4 to 8
Machine Learning                         Desirable             1 to 6
Deep Learning                            Desirable             1 to 6
Data structures and algorithms           Mandatory             4 to 8

Persistent Systems
Agency job
Agency job via Milestone HR Consultancy, posted by Haina Khan
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur
4 - 9 yrs
₹4L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Greetings!

We have an urgent requirement for Big Data Developer profiles in our reputed MNC company.

Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9 yrs

Skills: PySpark, AWS
or Spark, Scala, AWS
or Python, AWS
Pune
5 - 9 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation.
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree holder in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro and messaging systems like Kafka
• Experience with a core compiled language: Java
• Proficiency in Java-related frameworks like Spring, Hibernate and JPA
• Hands-on experience with JDK 1.8 and a strong skillset covering Collections and Multithreading, with experience working on distributed applications
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL) and data modelling
• Understanding of Hadoop architecture and the Scala programming language is good to have
Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
+3 more
  • 3+ years of industry experience administering (including setting up, managing and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, ELK Stack and Fluentd, and streaming databases like Druid
  • Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
  • 2+ years of industry experience developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure or GCP, especially EKS and managed Kafka
  • Experience with scripting languages; Python experience highly desirable
  • 2+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration (Jenkins)
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd and Druid (a rough consumer sketch follows this list)
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Assist in DevOps operations
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
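As a rough sketch of the ingestion side of these responsibilities, the snippet below consumes and batches events with the kafka-python client; the topic, group id, batch size and the stubbed downstream hand-off (e.g. to a Druid ingestion stage) are all assumptions.

```python
# Rough sketch of streaming ingestion with kafka-python. Topic name,
# group id and the downstream write are illustrative assumptions.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "app-logs",                             # hypothetical topic
    bootstrap_servers=["broker:9092"],      # hypothetical broker
    group_id="druid-ingest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,               # commit only after a flush
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # Hand the batch to the downstream store (e.g. a Druid ingestion
        # stage or an ETL step) -- stubbed out here.
        print(f"flushing {len(batch)} events")
        batch.clear()
        consumer.commit()                   # commit offsets after the flush
```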
Service based company
Remote only
3 - 8 yrs
₹8L - ₹13L / yr
pandas
PySpark
Big Data
Data engineering
Performance optimization
+3 more
Data pre-processing, data transformation, data analysis and feature engineering.
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.); a small before/after sketch follows this section.
Required skills:
Bachelor's degree in Computer Science, Data Science, Computer Engineering, IT or equivalent
Fluency in Python (Pandas), PySpark, SQL or similar
Azure Data Factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently, with demonstrated experience in project or program management
Azure experience, with the ability to translate data scientists' Python code into efficient, production-ready code for cloud deployment
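A small before/after sketch of the productionizing work this listing describes, replacing row-wise pandas code with a vectorized equivalent; the column names and data are made up.

```python
# Sketch of a typical script optimization: replace row-wise pandas code
# (common in exploratory notebooks) with a vectorized equivalent.
# Column names and data are invented for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "amount": np.random.rand(1_000_000),
    "qty": np.random.randint(1, 10, 1_000_000),
})

# Slow, row-wise version often found in data-science prototypes:
# df["total"] = df.apply(lambda r: r["amount"] * r["qty"], axis=1)

# Vectorized version -- typically far faster on large frames:
df["total"] = df["amount"] * df["qty"]

# The same logic in PySpark for cluster-scale data would read:
# df.withColumn("total", col("amount") * col("qty"))
```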
Pune
5 - 8 yrs
₹10L - ₹17L / yr
Python
Big Data
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more
  • Must have 5-8 years of experience in handling data
  • Must have the ability to interpret large amounts of data and to multi-task
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting and databases (SQL, etc.)
  • Must have strong analytical and critical thinking to resolve business problems using data and tech
  • Must have familiarity with and interest in cloud technologies (GCP, Microsoft Azure, AWS), open-source technologies and enterprise technologies
  • Must have the ability to collect, organize, analyze and disseminate significant amounts of information with attention to detail and accuracy
  • Must have good communication skills
  • Working knowledge of/exposure to Elasticsearch, PostgreSQL, Athena, PrestoDB and Jupyter Notebook
Rely
Posted by Hizam Ismail
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
+2 more

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains the scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.

  • Optimize and automate ingestion processes for a variety of data sources such as clickstream, transactional and many other sources
  • Create and maintain optimal data pipeline architecture and ETL processes
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Develop data pipelines and infrastructure to support real-time decisions
  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS big data technologies
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs


What will you need
  • 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
  • Experience dealing with data at large scale
  • Proficiency in writing and debugging complex SQL (a compact batch sketch follows this section)
  • Experience working with AWS big data tools
  • Ability to lead the project and implement best data practices and technology

Data Pipelining

  • Strong command of building and optimizing data pipelines, architectures and data sets
  • Strong command of relational SQL and NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: strong experience in big data tools and applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, etc.

Software Development & Debugging

  • Strong experience in object-oriented programming / object-function scripting languages: Python, Java, C++, Scala, etc.
  • Strong grasp of data structures and algorithms
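A compact PySpark batch sketch touching the pipeline-building and complex-SQL bullets above; the S3 paths, view name and columns are invented for illustration.

```python
# Compact batch-ETL sketch: read a raw extract, run a windowed SQL query,
# write a curated, partitioned output. Paths and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date, col

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Register the raw extract as a temporary view (hypothetical path).
spark.read.parquet("s3://example-bucket/raw/transactions/") \
     .createOrReplaceTempView("txns")

# A windowed query of the "complex SQL" flavour: latest transaction per user.
latest = spark.sql("""
    SELECT user_id, amount, event_ts
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY user_id
                                  ORDER BY event_ts DESC) AS rn
        FROM txns
    ) AS t
    WHERE rn = 1
""")

# Partition the curated output by calendar date for downstream consumers.
(latest.withColumn("dt", to_date(col("event_ts")))
       .write.mode("overwrite")
       .partitionBy("dt")
       .parquet("s3://example-bucket/curated/latest_txns/"))
```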

What would be a bonus

  • Prior experience working in a fast-growing startup
  • Prior experience in payments, fraud, lending or advertising companies dealing with large-scale data
Dubai, Anywhere
12 - 18 yrs
₹50L - ₹70L / yr
Big Data
Security architecture
Cyber Security
• Design secure solutions in line with the business strategy and security requirements
• Contribute to the enterprise security architecture by developing strategies, reference architectures, roadmaps, architectural principles, technology standards, security non-functional requirements, architectural decisions and design patterns
• Deliver cyber security architectural artifacts such as high-level designs and solution blueprints
• Ensure the enforcement of security requirements in solution architecture
• Contribute to educating other architects and engineering teams in designing and implementing secure solutions

Technologies
The candidate should have knowledge and experience in designing and implementing the following technologies and related domains:
• Cloud security
• Identity and Access Management
• Encryption, masking and key management
• Data classification, data privacy and data leakage prevention
• Infrastructure security (network/servers/virtualization)
• Application security
• Endpoint security
• SIEM and log management
• Forward and reverse proxy
• Big Data security
• IoT security
• SAP security (preferred)

Architecture Skills
• Solid experience in developing security solution architecture
• Solid experience and knowledge of TOGAF and SABSA or other enterprise architecture frameworks
• Strong experience in developing architectural artifacts, including reference architectures, roadmaps, architectural principles, technology standards, security non-functional requirements, architectural decisions and design patterns
• Strong experience in documenting existing, transition and target architectures

Cyber Security Skills
• Solid experience in performing security risk assessments and controls implementation
• Strong experience in designing and implementing security controls utilizing the technologies mentioned in the Technologies section above
• Strong knowledge of the offensive and defensive aspects of cybersecurity, with a solid understanding of attack techniques and abuse cases
• Strong knowledge and implementation experience of cyber security standards, frameworks and regulations such as ISO 27001, NIST CSF, CSA CCM, PCI-DSS and GDPR
Intelliswift Software
Posted by Pratish Mishra
Chennai
4 - 8 yrs
₹8L - ₹17L / yr
Big Data
Spark
Scala
SQL
Greetings from Intelliswift!

Intelliswift Software Inc. is a premier software solutions and services company headquartered in Silicon Valley, with offices across the United States, India and Singapore. The company has a proven track record of delivering results through its global delivery centers and flexible engagement models for over 450 brands, ranging from Fortune 100 companies to growing companies. Intelliswift provides a variety of services, including Enterprise Applications, Mobility, Big Data/BI, Staffing Services and Cloud Solutions. Growing at an outstanding rate, it has been recognized as the second-largest private IT company in the East Bay.

Domains: IT, Retail, Pharma, Healthcare, BFSI, and Internet & E-commerce
Website: https://www.intelliswift.com/

Experience: 4-8 years
Job Location: Chennai

Job Description:
Skills: Spark, Scala, Big Data, Hive
• Strong working experience in Spark, Scala, Big Data, HBase and Hive
• Good working experience in SQL and Spark SQL
• Good to have knowledge of or experience with Teradata
• Familiarity with general engineering tools: Git, Jenkins, sbt, Maven
zeotap India Pvt Ltd
Posted by Projjol Banerjea
Bengaluru (Bangalore)
6 - 10 yrs
₹5L - ₹40L / yr
Python
Big Data
Hadoop
Scala
Spark