Senior Data Engineer
Responsibilities:
● Clean, prepare and optimize data at scale for ingestion and consumption by machine learning models (a minimal PySpark sketch follows this list)
● Drive the implementation of new data management projects and re-structure of the current data architecture
● Implement complex automated workflows and routines using workflow scheduling tools
● Build continuous integration, test-driven development and production deployment frameworks
● Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards
● Anticipate, identify and solve issues concerning data management to improve data quality
● Design and build reusable components, frameworks and libraries at scale to support machine learning products
● Design and implement product features in collaboration with business and technology stakeholders
● Analyze and profile data for the purpose of designing scalable solutions
● Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues
● Mentor and develop other data engineers in adopting best practices
● Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
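For illustration of the data-preparation responsibility referenced above, a minimal PySpark sketch is shown below. It is not taken from the role itself; the input path, table layout and column names (event_id, user_id, event_ts, amount) are hypothetical.

# Minimal PySpark sketch: clean and prepare event data for ML consumption.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prepare_training_data").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw_events/")  # hypothetical path

clean = (
    raw
    .dropDuplicates(["event_id"])                      # remove duplicate events
    .filter(F.col("user_id").isNotNull())              # drop rows without a user
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .fillna({"amount": 0.0})                           # impute missing amounts
)

# Simple feature table: per-user aggregates for downstream model training.
features = (
    clean.groupBy("user_id")
         .agg(F.count("*").alias("event_count"),
              F.sum("amount").alias("total_amount"))
)

features.write.mode("overwrite").parquet("s3://example-bucket/features/")

The same pattern (deduplicate, enforce types, impute, aggregate into a feature table) generalizes to most ML ingestion pipelines, with only the column-level rules changing per source.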
Qualifications:
● 8+ years of experience developing scalable Big Data applications or solutions on distributed platforms
● Experience with Google Cloud Platform (GCP); exposure to other cloud platforms is good to have
● Experience working with Data warehousing tools, including DynamoDB, SQL, and Snowflake
● Experience architecting data products on streaming, serverless and microservices architectures and platforms
● Experience with Spark (Scala/Python/Java) and Kafka
● Work experience with Databricks (Data Engineering and Delta Lake components)
● Experience working with Big Data platforms, including Dataproc, Databricks, etc.
● Experience working with distributed technology tools including Spark, Presto, Databricks, Airflow
● Working knowledge of Data warehousing, Data modeling
● Experience working in Agile and Scrum development process
● Bachelor's degree in Computer Science, Information Systems, Business, or other relevant subject area
Role:
Senior Data Engineer
Total No. of Years:
8+ years of relevant experience
To be onboarded by:
Immediate
Notice Period:
Skills (Mandatory / Desirable; minimum to maximum years of project experience):
GCP Exposure: Mandatory, 3 to 7 years
BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, Spark and PySpark: Mandatory, 5 to 9 years
Relational SQL: Mandatory, 4 to 8 years
Shell scripting language: Mandatory, 4 to 8 years
Python/Scala language: Mandatory, 4 to 8 years
Airflow/Kubeflow workflow scheduling tool: Mandatory, 3 to 7 years
Kubernetes: Desirable, 1 to 6 years
Scala: Mandatory, 2 to 6 years
Databricks: Desirable, 1 to 6 years
Google Cloud Functions: Mandatory, 2 to 6 years
GitHub source control tool: Mandatory, 4 to 8 years
Machine Learning: Desirable, 1 to 6 years
Deep Learning: Desirable, 1 to 6 years
Data structures and algorithms: Mandatory, 4 to 8 years
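As a small, hedged illustration of the BigQuery skill listed above, the Python snippet below runs a query with the official google-cloud-bigquery client. The dataset and table names are hypothetical, and it assumes application-default credentials are configured.

# Minimal sketch: run a BigQuery query from Python using the official client.
# Dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the project from the environment

query = """
    SELECT user_id, COUNT(*) AS sessions
    FROM `example_dataset.web_events`   -- hypothetical table
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY user_id
    ORDER BY sessions DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.user_id, row.sessions)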
We have an urgent requirement for Big Data Developer profiles in our reputed MNC company.
Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9yrs
Skills: PySpark, AWS
or Spark, Scala, AWS
or Python, AWS
Good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology
covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala development.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding
of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree holder in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro and messaging systems like Kafka (see the streaming sketch after this list)
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, JPA
• Hands-on experience in JDK 1.8 and a strong skillset covering Collections and Multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
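Although this role leans towards Java and Scala, the Spark-plus-Kafka combination it asks for can be sketched briefly in PySpark. The broker address and topic name below are hypothetical, and the snippet assumes the spark-sql-kafka connector package is available on the Spark classpath.

# Minimal PySpark Structured Streaming sketch: consume a Kafka topic and
# count messages per key. Broker and topic are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "trade-events")          # hypothetical topic
         .load()
)

counts = (
    events.selectExpr("CAST(key AS STRING) AS key")
          .groupBy("key")
          .count()
)

query = (
    counts.writeStream
          .outputMode("complete")
          .format("console")                           # console sink for demo
          .start()
)
query.awaitTermination()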
- 3+ years of industry experience in administering (including setting up, managing and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, ELK Stack, Fluentd and streaming databases like Druid
- Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
- 2+ years of industry experience in developing scalable data ingestion processes and ETLs
- Experience with cloud platform services such as AWS, Azure or GCP especially with EKS, Managed Kafka
- Experience with scripting languages. Python experience highly desirable.
- 2+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Demonstrated expertise in building cloud-native applications
- Experience in administering (including setting up, managing, monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, ELK Stack, Fluentd
- Experience in API development using Swagger
- Strong expertise with containerization technologies, including Kubernetes and Docker Compose
- Experience with cloud platform services such as AWS, Azure or GCP.
- Implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration, Jenkins
- Design and implement large-scale data processing pipelines using Kafka, Fluentd and Druid (a minimal consumer sketch follows this list)
- Assist in DevOps operations
- Develop data ingestion processes and ETLs
- Design and Implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution.
- Mentor team members on best practices
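As referenced in the first responsibility above, a minimal sketch of the consuming side of such a Kafka-based pipeline is shown below. The library choice (confluent-kafka), broker address and topic name are assumptions, not details from the posting.

# Minimal Kafka consumer sketch using the confluent-kafka client.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # hypothetical broker
    "group.id": "ingestion-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["app-logs"])              # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        event = json.loads(msg.value())
        # In a real pipeline this record would be enriched and forwarded
        # (e.g. to Druid); here it is simply printed.
        print(event)
finally:
    consumer.close()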
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
Bachelor's in Computer Science, Data Science, Computer Engineering, IT or equivalent
Fluency in Python (Pandas), PySpark, SQL, or similar
Azure Data Factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently with demonstrated experience in project or program management
Azure experience; ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment (a sketch follows below)
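The sketch below shows the kind of translation mentioned above: notebook-style Pandas rewritten as PySpark so the same aggregation can run distributed in production. The file path and column names (order_date, revenue) are hypothetical.

# Hedged sketch: a Pandas aggregation and its PySpark equivalent.
import pandas as pd
from pyspark.sql import SparkSession, functions as F

# Original notebook-style Pandas code (works only while data fits in memory).
def monthly_revenue_pandas(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["order_date"])
    df["month"] = df["order_date"].dt.to_period("M").astype(str)
    return df.groupby("month", as_index=False)["revenue"].sum()

# Equivalent PySpark version suitable for large, partitioned data.
def monthly_revenue_spark(path: str):
    spark = SparkSession.builder.appName("monthly_revenue").getOrCreate()
    df = spark.read.csv(path, header=True, inferSchema=True)
    return (
        df.withColumn("month", F.date_format(F.to_date("order_date"), "yyyy-MM"))
          .groupBy("month")
          .agg(F.sum("revenue").alias("revenue"))
    )

The Pandas version is convenient for exploration; the PySpark version expresses the same logic in a form that scales out on a cluster, which is the essence of the productionizing work described here.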
- Must have 5-8 years of experience in handling data
- Must have the ability to interpret large amounts of data and to multi-task
- Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, databases(SQL, etc)
- Must have strong analytical and critical thinking to resolve business problems using data and tech
- Must have domain familiarity with and interest in cloud technologies (Google GCP, Microsoft Azure, Amazon AWS), open-source technologies and enterprise technologies
- Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Must have good communication skills
- Working knowledge of / exposure to ElasticSearch, PostgreSQL, Athena, PrestoDB, Jupyter Notebook (a small PostgreSQL example follows)
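As a small example of the Python-plus-SQL data handling described above, the snippet below queries PostgreSQL with the psycopg2 driver; the connection details and the orders table are hypothetical.

# Hedged sketch: query PostgreSQL from Python with psycopg2.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="analytics", user="report_user", password="secret"
)
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT customer_id, SUM(amount) AS total_spend
            FROM orders
            WHERE order_date >= %s
            GROUP BY customer_id
            ORDER BY total_spend DESC
            LIMIT 10
            """,
            ("2024-01-01",),
        )
        for customer_id, total_spend in cur.fetchall():
            print(customer_id, total_spend)
finally:
    conn.close()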
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia to be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as click stream, transactional and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Develop data pipeline and infrastructure to support real-time decisions
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (an Athena-based sketch follows this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
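As referenced above, one common shape of an ETL step built on SQL and AWS is running a transformation query in Athena over data already landed in S3. The sketch below uses boto3; the database, table and bucket names are hypothetical, and AWS credentials are assumed to be configured.

# Hedged sketch: run an Athena CTAS transformation and wait for completion.
import time
import boto3

athena = boto3.client("athena", region_name="ap-southeast-1")

response = athena.start_query_execution(
    QueryString="""
        CREATE TABLE analytics.daily_signups AS
        SELECT DATE(created_at) AS signup_date, COUNT(*) AS signups
        FROM raw.users
        GROUP BY DATE(created_at)
    """,
    QueryExecutionContext={"Database": "raw"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

query_id = response["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        print("query finished with state:", state)
        break
    time.sleep(2)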
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with large-scale data
- Proficiency in writing and debugging complex SQL queries
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technology
Data Pipelining
- Strong command of building & optimizing data pipelines, architectures and data sets
- Strong command of relational SQL & NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow DAG sketch follows)
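As noted in the last item above, a minimal Airflow DAG gives a feel for these workflow tools. Task names and the schedule are hypothetical; the snippet assumes Apache Airflow 2.4 or later.

# Minimal Airflow DAG sketch: three dependent tasks run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data from the source system")

def transform():
    print("cleaning and aggregating the extracted data")

def load():
    print("writing results to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load

Azkaban and Luigi express the same idea (tasks plus dependencies) through their own configuration formats and APIs.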
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing: RabbitMQ, Spark, etc.
Software Development & Debugging
- Strong experience in object-oriented programming/object function scripting languages: Python, Java, C++, Scala, etc.
- Strong grasp of data structures & algorithms
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience in payments, fraud, lending or advertising companies dealing with large-scale data