Big Data Developer

at Maveric Systems Limited

Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune  •  4 - 10 yrs  •  ₹8L - ₹15L / yr (ESOP available)  •  Full time
Skills
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
Scala
SQL

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving issues and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • An overall minimum of 4 to 8 years of software development experience, with at least 2 years of Data Warehousing domain knowledge.
  • At least 3 years of hands-on experience with Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming and Scala (a brief illustrative sketch follows this list).
  • Excellent knowledge of SQL and Linux shell scripting.
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organising skills, matched with the ability to manage stress, time and people effectively.
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
  • Ability to manage a diverse and challenging stakeholder community.
  • Broad knowledge and experience of working on Agile deliveries and in Scrum teams.
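
For illustration only (this sketch is not part of the original posting), here is a minimal PySpark Structured Streaming job matching the Spark/Kafka skills above; the broker address, topic name and paths are assumed placeholders, and the spark-sql-kafka connector package must be on the classpath:

```python
# Illustrative sketch: consume a Kafka topic with Spark Structured Streaming
# and land it as Parquet. Broker, topic and paths are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
    # Kafka delivers key/value as binary; cast to strings for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")               # assumed output path
    .option("checkpointLocation", "/chk/events")  # required for recovery
    .start()
)
query.awaitTermination()
```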

 

Responsibilities

  • Work as a senior developer or individual contributor, depending on the situation
  • Take part in Scrum discussions and gather requirements
  • Adhere to Scrum timelines and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Take on L3 activities on a need basis
  • Prepare unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Treat quality delivery and automation as top priorities
  • Coordinate changes and deployments in time
  • Foster healthy harmony within the team
  • Own interaction points with members of the core team (e.g. BA, testing and business teams) and any other relevant stakeholders

About Maveric Systems Limited

Started in 2000, Maveric Systems helps global banking and fintech leaders drive business agility through the effective integration of development, operations and quality engineering initiatives. Our strong banking domain competency, combined with expertise across legacy and new-age technology landscapes, makes us a preferred partner for customers worldwide.

 

We offer Product Implementation, Integration and Quality Engineering services across Digital platforms, Banking solutions and Regulatory systems. Our insight-led engagement approach helps our clients quickly adapt to dynamic technology and competitive landscapes with a sharp focus on quality.

 

We have a dedicated offshore delivery and research center in Chennai, India. Our 1200+ software development life cycle professionals operate across centers in the UK, US, Europe, Middle East and APAC, and provide services to more than 50 financial services organisations across the globe.

Founded 2000  •  Services  •  100-1000 employees  •  Profitable

Similar jobs

Data Engineer

at Product and Service based company

Agency job
via Jobdost
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
Apache Hive
PostgreSQL
Cassandra
ETL
Java
Scala
C#
HDFS
YARN
CI/CD
Jenkins
JIRA
Apache Kafka
Hyderabad, Ahmedabad  •  4 - 8 yrs  •  ₹15L - ₹30L / yr

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages: Python and PySpark (a brief sketch follows)
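
As a loose illustration of these requirements (not taken from the posting itself), a minimal PySpark sketch of file-based ingestion: read Parquet from S3, add an audit column, and append to a curated data-lake prefix. The bucket, path and column names are assumptions:

```python
# Hedged sketch of S3/Parquet ingestion in plain PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-parquet-ingest").getOrCreate()

# Assumed raw-zone location.
raw = spark.read.parquet("s3://example-raw-bucket/transactions/")

curated = (
    raw.filter(F.col("amount") > 0)                  # drop invalid rows
       .withColumn("ingest_date", F.current_date())  # audit/partition column
)

# Append into the curated zone, partitioned by ingestion date.
(curated.write.mode("append")
        .partitionBy("ingest_date")
        .parquet("s3://example-lake-bucket/curated/transactions/"))
```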

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies

  • Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform

  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations (see the sketch after this list)

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring in industry best practices.
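
A minimal sketch of the automated data quality checks mentioned above; the key column name, paths and thresholds are illustrative placeholders, not from the posting:

```python
# Hedged sketch of a data quality gate: null and duplicate checks that
# fail the run before bad data propagates downstream.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()
df = spark.read.parquet("s3://example-lake-bucket/curated/transactions/")

total = df.count()
null_keys = df.filter(F.col("transaction_id").isNull()).count()
duplicates = total - df.dropDuplicates(["transaction_id"]).count()

# Abort the pipeline run if either threshold is breached.
assert null_keys == 0, f"{null_keys} rows have a null transaction_id"
assert duplicates / max(total, 1) < 0.01, (
    f"duplicate ratio too high: {duplicates}/{total}"
)
```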

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one language: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

      • Data mining/programming tools (e.g. SAS, SQL, R, Python)

      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

      • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools

Job posted by
Sathish Kumar

Lead Data Engineer

at Top 3 Fintech Startup

Agency job
via Jobdost
SQL
Amazon Web Services (AWS)
Spark
PySpark
Apache Hive
Bengaluru (Bangalore)  •  6 - 9 yrs  •  ₹16L - ₹24L / yr

We are looking for an exceptionally talented Lead Data Engineer with exposure to implementing AWS services to build data pipelines, API integrations and data warehouse designs. A candidate with both hands-on and leadership capabilities will be ideal for this position.

 

Qualification: at least a bachelor's degree in Science, Engineering or Applied Mathematics; a master's degree is preferred.

 

Job Responsibilities:

• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team

• Have minimum 3 years of AWS Cloud experience.

• Well versed in languages such as Python, PySpark, SQL, NodeJS etc

• Has extensive experience in the real-time Spark ecosystem and has worked on both real-time and batch processing

• Have experience in AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step functions, Airflow, RDS, Aurora etc.

• Experience with modern Database systems such as Redshift, Presto, Hive etc.

• Worked on building data lakes on S3 or Apache Hudi in the past (a brief sketch follows this list)

• Solid understanding of Data Warehousing Concepts

• Good to have experience on tools such as Kafka or Kinesis

• Good to have AWS Developer Associate or Solutions Architect Associate Certification

• Have experience in managing a team
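
As a hedged illustration of the S3 / Apache Hudi data-lake point above, a minimal PySpark upsert into a Hudi table; the table name, key fields and paths are assumptions, and the Hudi Spark bundle must be on the classpath:

```python
# Hedged sketch of an Apache Hudi upsert on S3 from PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-lake-sketch").getOrCreate()
df = spark.read.json("s3://example-raw-bucket/users/")  # assumed source

hudi_options = {
    "hoodie.table.name": "users",
    "hoodie.datasource.write.recordkey.field": "user_id",      # dedup key
    "hoodie.datasource.write.precombine.field": "updated_at",  # latest wins
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert: Hudi keeps one row per user_id, resolved by the precombine field.
(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3://example-lake-bucket/hudi/users/"))
```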

Job posted by
Sathish Kumar

Data Engineer

at Climate Connect Digital

Founded 2010  •  Products & Services  •  20-100 employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
Big Data
PySpark
Apache Hadoop
Apache Hive
Remote only  •  1 - 4 yrs  •  ₹8L - ₹15L / yr

About Climate Connect Digital


Our team is inspired to change the world by making energy greener and more affordable. Established in 2011 in London, UK, and now headquartered in Gurgaon, India, we have grown from unassuming beginnings into a leading energy-AI software player at the vanguard of accelerating the global energy transition.


Today we are a remote-first organization, building digital tools for modern enterprises to reduce their carbon footprint and help the industry get to carbon zero.



About the Role - Data Engineer


As we enter our first strong growth phase, we are looking for a Data Engineer to build the data infrastructure that supports business and product growth.

You are someone who can see projects through from beginning to end, coach others, and self-manage. We’re looking for an eager individual who can guide our data stack using AWS services with technical knowledge, communication skills, and real-world experience.


The data flowing through our platform directly contributes to decision-making by algorithms & all levels of leadership alike. If you’re passionate about building tools that enhance productivity, improve green energy, reduce waste, and improve work-life harmony for a large and rapidly growing finance user base, come join us!


Job Responsibilities

  • Iterate, build, and implement our data model, data warehousing, and data integration architecture using AWS & GCP services
  • Build solutions that ingest data from source and partner systems into our data infrastructure, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes
  • Integrate data from source systems using common ETL tools or programming languages (e.g. Ruby, Python, Scala, AWS Data Pipeline, etc.)
  • Develop tailor-made strategies, concepts and solutions for the efficient handling of our growing amounts of data
  • Work iteratively with our data scientist to build up fact tables (e.g. container ship movements), dimension tables (e.g. weather data), ETL processes, and build the data catalog

Job Requirements


  • Experience designing, building and maintaining data architecture and warehousing using AWS services
  • Authoritative in ETL optimization, designing, coding, and tuning big data processes using Apache Spark, R, Python, C# and/or similar technologies
  • Experience managing AWS resources using Terraform
  • Experience in Data engineering and infrastructure work for analytical and machine learning processes
  • Experience with ETL tooling, migrating ETL code from one technology to another will be a benefit
  • Experience using data visualisation / dashboarding tools for QA/QC of data processes
  • Independent self-starter who thrives in a fast-paced environment

What’s in it for you


We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:


  • Flexible working hours and leave policy
  • Learning and development opportunities
  • Medical insurance/term insurance and gratuity benefits over and above the salary
  • Access to industry and domain thought leaders.

At Climate Connect, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.


Remote-first working is ingrained in our team ethos, and we understand its importance for the success of any next-generation technology company. The team includes passionate and self-driven people with unconventional backgrounds, and we're seeking a similar spirit with the right potential.

 

What it's like to work with us

 

When you join us, you become part of a strong network and an accomplished legacy from leading technology and business schools worldwide, such as the Indian Institute of Technology, Oxford University, University of Cambridge, University College London, and many more.

 

We don’t believe in constrained traditional hierarchies and instead work in flexible teams with the freedom to achieve successful business outcomes. We want more people who can thrive in a fast-paced, collaborative environment. Our comprehensive support system comprises a global network of advisors and experts, providing unparalleled opportunities for learning and growth.

Job posted by
Hrushikesh Mande

Senior Data Engineer

at Curl Analytics

Agency job
via wrackle
ETL
Big Data
Data engineering
Apache Kafka
PySpark
Python
Pipeline management
Spark
Apache Hive
Docker
Kubernetes
MongoDB
SQL server
Oracle
Machine Learning (ML)
BigQuery
Bengaluru (Bangalore)  •  5 - 10 yrs  •  ₹15L - ₹30L / yr
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without AI component
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces incl. web / mobile / dashboards, with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
  • Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).

You should be

  • Expert in structured and unstructured data in traditional and Big Data environments – Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Have excellent knowledge of Python programming both in traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
  • Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
Job posted by
Naveen Taalanki

Applied Data Scientist

at dunnhumby

Founded 2000  •  Products & Services  •  1000-5000 employees  •  Profitable
Python
SQL
Machine Learning (ML)
Forecasting
Gurugram  •  2 - 5 yrs  •  ₹2L - ₹9L / yr

Most companies try to meet expectations, dunnhumby exists to defy them. Using big data, deep expertise and AI-driven platforms to decode the 21st century human experience – then redefine it in meaningful and surprising ways that put customers first. Across digital, mobile and retail. For brands like Tesco, Coca-Cola, Procter & Gamble and PepsiCo.

We’re looking for an Applied Data Scientist who expects more from their career. It’s a chance to apply your expertise to distil complex problems into compelling insights using the best of machine learning and human creativity to deliver effective and impactful solutions for clients. Joining our advanced data science team, you’ll investigate, develop, implement and deploy a range of complex applications and components while working alongside super-smart colleagues challenging and rewriting the rules, not just following them.

What we expect from you 

  • Degree in Statistics, Maths, Physics, Economics or similar field
  • Programming skills (Python and SQL are a must have)
  • Analytical Techniques and Technology
  • Experience with and passion for connecting your work directly to the customer experience, making a real and tangible impact.
  • Logical thinking and problem solving
  • Strong communication skills
  • Statistical Modelling and experience of applying data science into client problems
  • 2 to 5 years of experience required


What you can expect from us

We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not.

Plus, thoughtful perks, like early finish Friday and your birthday off.

You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn.

And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Women’s Network, dh Proud, dh Parent’s & Carer’s, dh One and dh Thrive as the living proof. Everyone’s invited.

Our approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work.

We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.


 

Job posted by
Yamini Rawat
Data Scientist

at Fragma Data

Data Science
Data Scientist
R Programming
Python
SQL
Bengaluru (Bangalore)  •  4 - 7 yrs  •  ₹25L - ₹28L / yr
  • Banking Domain
  • Assist the team in building Machine learning/AI/Analytics models on open-source stack using Python and the Azure cloud stack.
  • Be part of the internal data science team at Fragma Data, which provides data science consultation to large organizations such as banks, e-commerce and social media companies on their scalable AI/ML needs on the cloud, and help build POCs and develop production-ready solutions.
  • Candidates will be provided with opportunities for training and professional certifications on the job in these areas - Azure Machine learning services, Microsoft Customer Insights, Spark, Chatbots, DataBricks, NoSQL databases etc.
  • Assist the team in conducting AI demos, talks, and workshops occasionally to large audiences of senior stakeholders in the industry.
  • Work on large enterprise-scale projects end-to-end, involving domain-specific projects across banking, finance, ecommerce, social media, etc.
  • A keen interest in learning new technologies and the latest developments, and applying them to the projects assigned.
Desired Skills
  • Professional hands-on coding experience in Python: over 1 year for Data Scientist, and over 3 years for Senior Data Scientist.
  • This is primarily a programming/development-oriented role; hence, strong programming skills in writing object-oriented and modular code in Python, and experience pushing projects to production, are important.
  • Strong foundational knowledge and professional experience in:
      • Machine Learning (compulsory)
      • Deep Learning (compulsory)
      • At least one of: Natural Language Processing, Computer Vision, Speech Processing or Business Analytics
  • Understanding of database technologies and SQL (compulsory)
  • Knowledge of the following frameworks (a brief sketch follows this list):
      • Scikit-learn (compulsory)
      • Keras/TensorFlow/PyTorch (at least one is compulsory)
      • API development in Python for ML models (good to have)
  • Excellent communication skills are necessary to succeed in this role, as it has high external visibility and offers multiple opportunities to present data science results to large external audiences, including VPs, Directors and CXOs; communication skills will therefore be a key consideration in the selection process.
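
By way of illustration of the modular, production-oriented Python the role asks for, a minimal scikit-learn Pipeline that can be fit, scored and pickled as a single object; the dataset here is synthetic and all names are placeholders:

```python
# Hedged sketch: a scikit-learn Pipeline as one deployable model object.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),    # normalise features
    ("clf", LogisticRegression()),  # baseline classifier
])
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")
```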
Job posted by
Harpreet kour

Senior Data Scientist

at Kaleidofin

Founded 2018  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
Machine Learning (ML)
Python
SQL
Natural Language Processing (NLP)
Chennai, Bengaluru (Bangalore)  •  3 - 8 yrs  •  Best in industry
  • 4+ years' experience in advanced analytics, model building and statistical modeling
  • Solid technical / data-mining skills and the ability to work with large volumes of data; able to extract and manipulate large datasets using common tools such as Python and SQL, and other programming/scripting languages, to translate data into business decisions/results
  • Data-driven and outcome-focused
  • Good business judgment, with a demonstrated ability to think creatively and strategically
  • An intuitive, organized analytical thinker, with the ability to perform detailed analysis
  • Takes personal ownership; a self-starter with the ability to drive projects with minimal guidance and a focus on high-impact work
  • Learns continuously; seeks out knowledge, ideas and feedback
  • Looks for opportunities to build own skills, knowledge and expertise
  • Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive)
  • Experience in risk and credit score domains preferred
  • Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Job posted by
Poornima B

Senior Developer - Spark

at jhjkhhk

Agency job
via CareerBabu
Apache Spark
Big Data
Java
Spring
Data Structures
Algorithms
Design patterns
Spark
API
ETL
Bengaluru (Bangalore)  •  2 - 5 yrs  •  ₹10L - ₹40L / yr
  • Own the end-to-end implementation of the assigned data processing components/product features, i.e. the design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices

  • Creation and optimization of data engineering pipelines for analytics projects. 

  • Support data and cloud transformation initiatives 

  • Contribute to our cloud strategy based on prior experience 

  • Independently work with all stakeholders across the organization to deliver enhanced functionalities 

  • Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting 

  • Gather and understand data requirements, work in the team to achieve high-quality data ingestion, and build systems that can process and transform the data

  • Be able to comprehend the application of database indexes and transactions

  • Involve in the design and development of a Big Data predictive analytics SaaS-based customer data platform using object-oriented analysis, design and programming skills, and design patterns 

  • Implement ETL workflows for data matching, data cleansing, data integration, and management 

  • Maintain existing data pipelines, and develop new data pipelines using big data technologies

  • Responsible for leading the effort of continuously improving reliability, scalability, and stability of microservices and platform

Job posted by
Tanisha Takkar

ETL developer

at fintech

Agency job
via Talentojcom
ETL
Druid Database
Java
Scala
SQL
Tableau
Python
Remote only  •  2 - 6 yrs  •  ₹9L - ₹30L / yr
● Education in a science, technology, engineering, or mathematics discipline, preferably a bachelor's degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts such as windowing functions
● Knowledge of popular scripting languages for data processing such as Python, as well as familiarity with common frameworks such as Pandas
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache Beam, Google Cloud Dataflow, DBT and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, DBT, or custom scripts (a brief sketch follows this list)
● Experience working with messaging systems such as Apache Kafka (and hosted equivalents such as Amazon MSK) and Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
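
A hedged sketch of a batch ETL pipeline scheduled with Apache Airflow, one of the tools listed above; the DAG id, schedule and task bodies are placeholders, and real tasks would call out to Spark, DBT or custom scripts:

```python
# Hedged sketch: a three-stage batch ETL DAG in Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("clean, join and reshape the extracted data")

def load():
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="batch_etl_sketch",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform",
                                    python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```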
Job posted by
Raksha Pant

Data Engineer

at Paisabazaar.com

Founded 2014  •  Products & Services  •  100-1000 employees  •  Profitable
Spark
MapReduce
Hadoop
ETL
NCR (Delhi | Gurgaon | Noida)  •  1 - 5 yrs  •  ₹6L - ₹18L / yr
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:

  • Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
  • Experience in architecting data ingestion, storage and consumption models
  • Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
  • Knowledge of various ETL tools & techniques
Job posted by
Amit Gupta