Big Data Developer
Posted by Rashmi Poovaiah
4 - 10 yrs
₹8L - ₹15L / yr (ESOP available)
Bengaluru (Bangalore), Chennai, Pune
Skills
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
Scala
SQL

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • A minimum of 4 to 8 years of overall software development experience, including 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (a brief sketch follows this list)
  • Excellent knowledge of SQL and Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
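For illustration only (not part of the formal requirements): a minimal PySpark Structured Streaming sketch of the kind of Kafka/Spark/Hive work listed above. The broker address, topic name, schema and output paths are hypothetical placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

  from pyspark.sql import SparkSession
  from pyspark.sql.functions import from_json, col
  from pyspark.sql.types import StructType, StructField, StringType, DoubleType

  spark = (SparkSession.builder
           .appName("txn-stream-sketch")      # hypothetical app name
           .enableHiveSupport()
           .getOrCreate())

  # Expected shape of each Kafka message (placeholder fields).
  schema = StructType([
      StructField("account_id", StringType()),
      StructField("amount", DoubleType()),
      StructField("currency", StringType()),
  ])

  # Read JSON events from a Kafka topic and parse them into columns.
  events = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
            .option("subscribe", "transactions")                # placeholder topic
            .load()
            .select(from_json(col("value").cast("string"), schema).alias("txn"))
            .select("txn.*"))

  # Append the parsed records to a Parquet location that Hive can read.
  query = (events.writeStream
           .format("parquet")
           .option("path", "/data/lake/transactions")
           .option("checkpointLocation", "/data/checkpoints/transactions")
           .start())
  query.awaitTermination()

The same job could equally be written in Scala; the structure (read from Kafka, parse, write with a checkpoint) is what the requirement refers to.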

 

Responsibilities

  • Work as a senior developer/individual contributor depending on the situation
  • Take part in Scrum discussions and gather requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Take up L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide the necessary remediation/recommendations in time
  • Treat quality delivery and automation as top priorities
  • Coordinate changes and deployments in time
  • Foster healthy harmony within the team
  • Own interaction points with members of the core team (e.g. the BA, testing and business teams) and any other relevant stakeholders

About Maveric Systems

Founded: 2000
Type:
Size: 100-1000
Stage: Profitable
About
Started in 2000, Maveric Systems is a niche, domain-led BankTech specialist that partners with global banks to solve their business challenges through emerging technology. Our 3000+ technology specialists and proven frameworks help our customers navigate a rapidly changing environment, enabling sharper definition of their goals and the measures to achieve them. We accelerate digital transformation in retail, corporate & wealth management lines of business through:
  • Inherent banking domain strength
  • A customer intimacy-led delivery model
  • Differentiated talent, with layered competency – deep domain and tech leadership, supported by a culture of ownership, energy, and commitment to customer success.
Our global presence spans 15 countries, with regional delivery capabilities in Bangalore, Chennai, Dubai, London, Poland, Riyadh and Singapore. We have specialized competencies across Data, Digital, Core Banking and Quality Engineering. Through our commitment to finding the best solutions for customers, we have established ourselves as a "trusted partner" to many global banks who expect us to deliver their challenging digital transformations.
Connect with the team
Aravinda M
Rashmi Poovaiah
Salitha N P
Similar jobs

Pion Global Solutions LTD
Posted by Sheela P
Mumbai
3 - 100 yrs
₹4L - ₹15L / yr
Spark
Big Data
Hadoop
HDFS
Apache Sqoop
+2 more
Looking for Big Data developers in Mumbai.
Bengaluru (Bangalore), Gurugram
1 - 7 yrs
₹4L - ₹10L / yr
Python
R Programming
SAS
Surveying
Data Analytics
+2 more

Desired Skills & Mindset:


We are looking for candidates who have demonstrated both a strong business sense and a deep understanding of the quantitative foundations of modelling.


• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions

• Experience with statistical programming software such as SPSS, and comfort working with large data sets.

• R, Python, SAS & SQL are preferred but not mandatory

• Excellent time management skills

• Good written and verbal communication skills; understanding of both written and spoken English

• Strong interpersonal skills

• Ability to act autonomously, bringing structure and organization to work

• Creative and action-oriented mindset

• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged

• Ability to work under pressure and deliver on tight deadlines


Qualifications and Experience:


• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background), or equivalent

• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics

• Knowledge of data collection methods (focus groups, surveys, etc.)

• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)

• Strong analytical and critical thinking skills

• Industry experience in Consumer Experience/Healthcare is a plus

IntraEdge
Posted by Poornima V
Remote only
4 - 16 yrs
₹11L - ₹27L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Company Name: Intraedge Technologies Ltd (https://intraedge.com/)

Type: Permanent, Full time

Location: Any

A Bachelor’s degree in computer science, computer engineering, other technical discipline, or equivalent work experience

  • 4+ years of software development experience
  • 4+ years of experience with programming languages and frameworks: Python, Spark, Scala, Hadoop, Hive
  • Demonstrated experience with Agile or other rapid application development methods
  • Demonstrated experience with object-oriented design and coding.

Please mail your resume to poornimak (at) intraedge (dot) com along with your notice period, how soon you can join, ECTC, availability for interview, and location.
Getinz
Posted by kousalya k
Remote only
4 - 8 yrs
₹10L - ₹15L / yr
Penetration testing
Python
Powershell
Bash
Spark
+5 more
- 3+ years of Red Team experience
- 5+ years of hands-on experience with penetration testing would be an added plus
- Strong knowledge of programming or scripting languages, such as Python, PowerShell, Bash
- Industry certifications like OSCP and AWS are highly desired for this role
- Well-rounded knowledge of security tools, software and processes
HL
Bengaluru (Bangalore)
6 - 15 yrs
₹1L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
• 8+ years of experience in developing Big Data applications
• Strong experience working with Big Data technologies like Spark (Scala/Java), Apache Solr, Hive, HBase, Elasticsearch, MongoDB, Airflow, Oozie, etc. (a brief orchestration sketch follows this list)
• Experience working with relational databases like MySQL, SQL Server, Oracle, etc.
• Good understanding of large system architecture and design
• Experience working in an AWS/Azure cloud environment is a plus
• Experience using version control tools such as Bitbucket/Git code repositories
• Experience using tools like Maven/Jenkins and JIRA
• Experience working in an Agile software delivery environment, with exposure to continuous integration and continuous delivery tools
• Passionate about technology and delivering solutions to solve complex business problems
• Great collaboration and interpersonal skills
• Ability to work with team members and lead by example in code, feature development, and knowledge sharing
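For context only: a small sketch of the kind of Airflow orchestration mentioned above, scheduling a Spark batch job. The DAG id, schedule, Spark class and jar path are hypothetical placeholders, and the Airflow 2.x BashOperator import is assumed.

  from datetime import datetime

  from airflow import DAG
  from airflow.operators.bash import BashOperator

  with DAG(
      dag_id="nightly_spark_aggregation",   # hypothetical DAG name
      start_date=datetime(2024, 1, 1),
      schedule_interval="0 2 * * *",        # run daily at 02:00
      catchup=False,
  ) as dag:
      # Submit a Spark job on YARN; the class and jar path are placeholders.
      run_spark_job = BashOperator(
          task_id="spark_submit_aggregation",
          bash_command=(
              "spark-submit --master yarn --deploy-mode cluster "
              "--class com.example.DailyAggregation "
              "/opt/jobs/daily-aggregation.jar {{ ds }}"
          ),
      )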
Sopra Steria
Chennai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 8 yrs
₹2L - ₹12L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
Good hands-on experience with Spark and Scala.
Should have experience in Big Data and Hadoop.
Currently providing WFH.
Immediate joiners or candidates with up to a 30-day notice period.
UAE Client
Agency job
via Fragma Data Systems by Harpreet kour
Dubai, Bengaluru (Bangalore)
4 - 8 yrs
₹6L - ₹16L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Apache Spark
+3 more
• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc. (a brief sketch follows the skills list below)
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
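For illustration only: a minimal PySpark sketch of the DataFrame/Spark SQL work and the executor/partition tuning mentioned above. The app name, paths, column names and all tuning values are hypothetical; real values depend on cluster size and data volume.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = (SparkSession.builder
           .appName("elt-aggregation-sketch")
           .config("spark.executor.instances", "8")        # example executor sizing
           .config("spark.executor.memory", "8g")
           .config("spark.executor.cores", "4")
           .config("spark.sql.shuffle.partitions", "200")  # example partition tuning
           .getOrCreate())

  # Read raw records from a data-lake path (placeholder).
  orders = spark.read.parquet("/datalake/raw/orders")

  # DataFrame API: apply a simple business rule and aggregate.
  daily_revenue = (orders
                   .filter(F.col("status") == "COMPLETED")
                   .withColumn("order_date", F.to_date("created_at"))
                   .groupBy("order_date", "country")
                   .agg(F.sum("amount").alias("revenue"),
                        F.countDistinct("customer_id").alias("customers")))

  # The same logic expressed through Spark SQL.
  orders.createOrReplaceTempView("orders")
  daily_revenue_sql = spark.sql("""
      SELECT to_date(created_at)         AS order_date,
             country,
             SUM(amount)                 AS revenue,
             COUNT(DISTINCT customer_id) AS customers
      FROM orders
      WHERE status = 'COMPLETED'
      GROUP BY to_date(created_at), country
  """)

  # Partition the output by date before writing, to keep downstream reads cheap.
  (daily_revenue
   .write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("/datalake/curated/daily_revenue"))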
Bengaluru (Bangalore)
9 - 20 yrs
₹40L - ₹44L / yr
Python
SQL
Lead Technical Trainer
Trainer
IT Trainer
+1 more

This is the first senior person we are bringing in for this role. This person will start with the training program but will go on to build a team and will eventually also be responsible for the entire training program plus the Bootcamp.

 

We are looking for someone fairly senior who has experience in data + tech. At some level, we have all the technical expertise to teach you the data stack as needed, so it is not super important that you know all the tools. However, basic knowledge of the stack is a requirement. The training program covers two parts: Technology (our stack) and Process (how we work with clients). Both are super important.

  • Full-time flexible working schedule and own end-to-end training
  • Self-starter - who can communicate effectively and proactively
  • Function effectively with minimal supervision.
  • You can train and mentor potential 5x engineers on Data Engineering skillsets
  • You can spend time on self-learning and teaching for new technology when needed
  • You are an extremely proactive communicator, who understands the challenges of remote/virtual classroom training and the need to over-communicate to offset those challenges.

Requirements

  • Proven experience as a corporate trainer, or a passion for teaching/providing training
  • Expertise in the Data Engineering space, with good experience in data collection, data ingestion, data modeling, data transformation, and data visualization technologies and techniques
  • Experience training working professionals on in-demand skills like Snowflake, dbt, Fivetran, Google Data Studio, etc.
  • Training/implementation experience using Fivetran, dbt Cloud, Heap, Segment, Airflow, Snowflake is a big plus

 

netmedscom
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will collaborate closely with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines, and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, and foster data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipelines, Big Data analytics and data warehousing.
  • Experience with SQL/NoSQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Should have strong skills in PySpark (Python & Spark): the ability to create, manage and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization (a brief data-quality sketch follows this list).
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., complex SQL) for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra/MongoDB.
  • Solid experience in all phases of the Software Development Lifecycle: plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
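For illustration only: a small PySpark data-quality check of the sort implied by the responsibilities and skills above (validating data before it reaches downstream consumers). The input path, column names and thresholds are all hypothetical.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

  # Placeholder curated dataset to validate.
  df = spark.read.parquet("/datalake/curated/daily_revenue")
  total = df.count()

  # Null counts for a few critical columns.
  null_counts = df.select([
      F.sum(F.col(c).isNull().cast("int")).alias(c)
      for c in ["order_date", "country", "revenue"]
  ]).collect()[0].asDict()

  # Duplicate keys on the expected grain of the table.
  duplicates = (df.groupBy("order_date", "country")
                  .count()
                  .filter(F.col("count") > 1)
                  .count())

  failures = []
  if total == 0:
      failures.append("dataset is empty")
  for column, nulls in null_counts.items():
      if nulls / max(total, 1) > 0.01:     # example threshold: at most 1% nulls
          failures.append(f"{column} has {nulls} null values")
  if duplicates > 0:
      failures.append(f"{duplicates} duplicate (order_date, country) keys")

  if failures:
      # In a real pipeline this could alert on-call or fail the orchestrating job.
      raise ValueError("Data quality check failed: " + "; ".join(failures))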

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

High-Growth Fintech Startup
Agency job
via Unnati by Ramya Senthilnathan
Remote, Mumbai
3 - 5 yrs
₹7L - ₹10L / yr
Business Intelligence (BI)
PowerBI
Analytics
Reporting
Data management
+5 more
Want to join a trailblazing fintech company that is leveraging software and technology to change the face of short-term financing in India?

Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
 
As a Data Analyst - SQL, you will work on projects in the Analytics function to generate insights for the business, as well as manage reporting to management for all things related to lending.
 
You will be part of a rapidly growing tech-driven organization and will be responsible for generating insights that will drive business impact and productivity improvements.
 
What you will do:
  • Ensuring ease of data availability, with relevant dimensions, using Business Intelligence tools.
  • Providing strong reporting and analytical information support to the management team.
  • Transforming raw data into essential metrics based on the needs of relevant stakeholders.
  • Performing data analysis to generate reports on a periodic basis.
  • Converting essential data into easy-to-reference visuals using data visualization tools (PowerBI, Metabase).
  • Providing recommendations to update the current MIS to improve reporting efficiency and consistency.
  • Bringing fresh ideas to the table and keenly observing trends in the analytics and financial services industry.

 

 

What you need to have:
  • MBA/BE/Graduate, with 3+ years of work experience.
  • B.Tech/B.E.; MBA/PGDM
  • Experience in reporting, data management (SQL, MongoDB) and visualization (PowerBI, Metabase, Data Studio)
  • Work experience in financial services (Indian banks'/NBFCs' in-house analytics units or fintech/analytics start-ups) would be a plus.
Skills:
  • Skilled at writing and optimizing large, complicated SQL queries and MongoDB scripts (a brief sketch follows this list).
  • Strong knowledge of the Banking/Financial Services domain
  • Experience with some of the modern relational databases
  • Self-driven, with the ability to work on multiple projects of a different nature
  • Liaising with cross-functional teams to resolve data issues and build strong reports
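For illustration only: the kind of SQL query and MongoDB aggregation the skills above describe, wrapped in a small Python script. The database file, collection and field names are hypothetical, and sqlite3 stands in for whichever relational database is actually used.

  import sqlite3                      # stand-in for any DB-API relational driver
  from pymongo import MongoClient

  # SQL: monthly disbursal report over a placeholder "loans" table.
  MONTHLY_DISBURSALS = """
      SELECT strftime('%Y-%m', disbursed_at) AS month,
             COUNT(*)                        AS loans,
             SUM(amount)                     AS disbursed_amount
      FROM loans
      WHERE status = 'DISBURSED'
      GROUP BY strftime('%Y-%m', disbursed_at)
      ORDER BY month
  """
  with sqlite3.connect("lending.db") as conn:          # hypothetical database
      monthly = conn.execute(MONTHLY_DISBURSALS).fetchall()

  # MongoDB: repayments aggregated per city from a placeholder collection.
  client = MongoClient("mongodb://localhost:27017")
  repayments = client["lending"]["repayments"]
  per_city = list(repayments.aggregate([
      {"$match": {"status": "PAID"}},
      {"$group": {"_id": "$city", "paid_amount": {"$sum": "$amount"}}},
      {"$sort": {"paid_amount": -1}},
  ]))

  print(monthly[:3], per_city[:3])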

 
