Big Data Developer

at Mirafra Technologies

Posted by Nirmala N S
Remote, Bengaluru (Bangalore)
4 - 7 yrs
₹5L - ₹18L / yr
Full time
Skills
Big Data
Scala
Spark
Spark Streaming
Hadoop
Should have experience in Big Data development
Strong experience in Scala/Spark
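
To give a concrete flavour of the Scala/Spark and Spark Streaming skills listed above, here is a minimal, illustrative Structured Streaming word count in Scala. It is a sketch only, not part of the role description; it assumes a local Spark setup and a socket source fed by something like nc -lk 9999.

  import org.apache.spark.sql.SparkSession

  object StreamingWordCount {
    def main(args: Array[String]): Unit = {
      // Local session for illustration only; a real job would run on a cluster.
      val spark = SparkSession.builder()
        .appName("StreamingWordCount")
        .master("local[*]")
        .getOrCreate()
      import spark.implicits._

      // Read lines from a socket source (e.g. started with `nc -lk 9999`).
      val lines = spark.readStream
        .format("socket")
        .option("host", "localhost")
        .option("port", "9999")
        .load()

      // Split lines into words and keep a running count of each word.
      val counts = lines.as[String]
        .flatMap(_.split("\\s+"))
        .groupBy($"value")
        .count()

      // Print the running counts to the console after each micro-batch.
      val query = counts.writeStream
        .outputMode("complete")
        .format("console")
        .start()

      query.awaitTermination()
    }
  }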

End client: Sapient
Mode of hiring: FTE
Notice period should be less than 30 days

About Mirafra Technologies

Founded: 2005
Type: Services
Size: 100-1000 employees
Stage: Profitable

Similar jobs

Lead Engineer - Data Quality

at Leading Sales Platform

Agency job
via Qrata
Big Data
ETL
Spark
Data engineering
Data governance
Informatica Data Quality
Java
Scala
Python
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Responsibilities:

  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipeline
  • Set up early alerting and escalation processes to quickly identify and remedy quality issues before anything goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mind-set across the company
Required skills and experience:

  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic, and escalations with Slack/email/dashboard integrations
  • Executive-level communication, prioritization, and team leadership skills
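
As a hedged illustration of what an automated data validation framework might check in such a pipeline, the sketch below computes a few basic data-health metrics (row count, null keys, duplicate keys) with Spark in Scala. The column names and sample data are invented for the example and are not taken from the role.

  import org.apache.spark.sql.{DataFrame, SparkSession}
  import org.apache.spark.sql.functions._

  object DataHealthChecks {
    // Returns simple data-health metrics for one batch of a pipeline.
    def healthMetrics(df: DataFrame, keyColumn: String): Map[String, Long] = {
      val totalRows     = df.count()
      val nullKeys      = df.filter(col(keyColumn).isNull).count()
      val duplicateKeys = totalRows - df.dropDuplicates(keyColumn).count()
      Map(
        "total_rows"     -> totalRows,
        "null_key_rows"  -> nullKeys,
        "duplicate_keys" -> duplicateKeys
      )
    }

    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("DataHealthChecks")
        .master("local[*]")
        .getOrCreate()
      import spark.implicits._

      // Tiny in-memory sample standing in for a real pipeline table.
      val orders = Seq(("o1", 100), ("o2", 250), ("o2", 250), (null, 75))
        .toDF("order_id", "amount")

      val metrics = healthMetrics(orders, "order_id")
      metrics.foreach { case (name, value) => println(s"$name = $value") }

      // A real framework would compare these metrics against thresholds
      // and alert (e.g. Slack/email) before data ever goes live.
      spark.stop()
    }
  }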
Job posted by
Blessy Fernandes

Analyst (Research)

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Market Research
Big Data
Bengaluru (Bangalore)
2 - 10 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As an Analyst (Research) in the Mobile Publishing division, you'll use your experience in analysing market trends to pull usable insights from numerous sources and to find trends others might miss.

What you tell your friends you do 

“I provide insights that help guide the direction of Kwalee’s mobile publishing team as they expand their operation”

What you will really be doing 

  • Use our internal and external data sources to generate insights
  • Assess market trends and make recommendations to our publishing team on which opportunities to pursue and which to decline
  • Evaluate market movements and use data to assess new opportunities
  • Create frameworks to predict how successful new content can be and the metrics games are likely to achieve
  • Evaluate business opportunities and conduct due diligence on potential business partners we are planning to work with
  • Be an expert on industry data sets and how we can best use them

How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges and constantly strive for the best.
  • You'll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!

Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.

Skills and Requirements

  • Previous experience of working with big data sets, preferably in a gaming or tech environment
  • An advanced degree in a related field
  • A keen interest in video games and the market, particularly in the mobile space
  • Familiarity with industry tools and data providers
  • A can-do attitude and ability to move projects forward even when outcomes may not be clear 

We offer

  • We want everyone involved in our games to share our success; that's why we have a generous team profit-sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks

Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Scala Developer

at a company that helps its customers successfully navigate their digital transformation

Agency job
via HyrHub
Scala
Java
Spark
Amazon Web Services (AWS)
Amazon EC2
Bengaluru (Bangalore)
5 - 11 yrs
₹15L - ₹25L / yr

Responsibilities:

- Define, implement and validate solution frameworks and architecture patterns for data modeling, data integration, processing, reporting, analytics and visualization using leading cloud, big data, open-source and other enterprise technologies.

- Develop scalable data and analytics solutions leveraging standard platforms, frameworks, patterns and full stack development skills.

- Analyze, characterize and understand data sources, participate in design discussions and provide guidance related to database technology best practices.

- Write tested, robust code that can be quickly moved into production

Requirements:

- Experience with distributed data processing and management systems.

- Experience with cloud technologies including Spark SQL, Java/ Scala, HDFS, AWS EC2, AWS S3, etc.

- Familiarity with leveraging and modifying open source libraries to build custom frameworks.

Primary Technical Skills:
- Spark SQL, Java/Scala, sbt/Maven/Gradle, HDFS, Hive, AWS (EC2, S3, SQS, EMR, Glue scripts, Lambda, Step Functions), IntelliJ IDEA, JIRA, Git, Bitbucket/GitLab, Linux, Oozie.
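
To sketch how the Spark SQL, Scala and AWS S3 pieces of this stack typically fit together, here is a minimal, illustrative batch job. The bucket and paths are placeholders, and reading via s3a assumes the hadoop-aws connector and credentials are already configured.

  import org.apache.spark.sql.SparkSession

  object S3SalesReport {
    def main(args: Array[String]): Unit = {
      // Local master for illustration; on EMR the cluster would provide it.
      val spark = SparkSession.builder()
        .appName("S3SalesReport")
        .master("local[*]")
        .getOrCreate()

      // Placeholder bucket/prefix; real paths depend on the client's data lake layout.
      val sales = spark.read.parquet("s3a://example-bucket/warehouse/sales/")
      sales.createOrReplaceTempView("sales")

      // Spark SQL aggregation over the raw data.
      val revenueByRegion = spark.sql(
        """
          |SELECT region, SUM(amount) AS total_revenue
          |FROM sales
          |GROUP BY region
          |ORDER BY total_revenue DESC
        """.stripMargin)

      // Write results back to S3 for downstream reporting.
      revenueByRegion.write
        .mode("overwrite")
        .parquet("s3a://example-bucket/warehouse/reports/revenue_by_region/")

      spark.stop()
    }
  }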


Notice period: maximum 30-45 days
Job posted by
Shwetha Naik

Big Data Architect

at Agilisium

Agency job
via Recruiting India
Big Data
Apache Spark
Spark
PySpark
ETL
Data engineering
Chennai
10 - 19 yrs
₹12L - ₹40L / yr

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years of experience
  • In-depth understanding of distributed computing with Spark
  • Deep understanding of Spark architecture and internals
  • Proven experience in data ingestion, data integration and data analytics with Spark, preferably PySpark
  • Expertise in ETL processes, data warehousing and data lakes
  • Hands-on with Python for Big Data and analytics
  • Hands-on experience in the Agile Scrum model is an added advantage
  • Knowledge of CI/CD and orchestration tools is desirable
  • Knowledge of AWS S3, Redshift and Lambda is preferred
Thanks
Job posted by
Moumita Santra

Data Engineer

at Lowe’s

Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
Minimum Qualifications:
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Bigdata components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Big Data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NO-SQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, Map Reduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Bigdata components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by
Sanjay Biswakarma

Data Analyst

at Games 24x7

Agency job
via zyoin
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
Data Warehouse (DWH)
SQL
Python
Tableau
Java
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirement
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns (see the brief sketch after this list)
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.
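
For the SQL/Hive part of the responsibilities above, the sketch referenced in the list is shown below: a minimal Spark job (written in Scala, one of several possible stacks) running a Hive-style query. The database, table and column names are invented, and Hive support assumes a configured metastore.

  import org.apache.spark.sql.SparkSession

  object DailyActiveUsersReport {
    def main(args: Array[String]): Unit = {
      // enableHiveSupport() lets Spark read tables registered in the Hive metastore.
      val spark = SparkSession.builder()
        .appName("DailyActiveUsersReport")
        .enableHiveSupport()
        .getOrCreate()

      // Hypothetical events table; real datamart names will differ.
      val dau = spark.sql(
        """
          |SELECT event_date, COUNT(DISTINCT user_id) AS daily_active_users
          |FROM analytics.game_events
          |WHERE event_date >= date_sub(current_date(), 30)
          |GROUP BY event_date
          |ORDER BY event_date
        """.stripMargin)

      // The result could feed a dashboard or be written to a reporting datamart.
      dau.show(30, truncate = false)
      spark.stop()
    }
  }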

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in Data Warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS, Data Warehousing and Data Marts
• Smart, motivated and team oriented
Desirable Requirements
• Sound knowledge of software development in Programming (preferably Java )
• Knowledge of the software development lifecycle (SDLC) and models
Job posted by
Shubha N

Data Platform Engineer (SDE 1/2/3)

at Urbancompany (formerly known as Urbanclap)

Founded 2014  •  Services  •  100-1000 employees  •  Raised funding
Apache Kafka
Spark
NodeJS (Node.js)
Python
Hadoop
Apache Hadoop
PySpark
Data Science
Data Visualization
Big Data
Remote, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
3 - 8 yrs
Best in industry

Why are we building Urban Company?

 

Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) and is expected to double in the next 5 years, there is no billion-dollar company in this segment today.

 

The industry is barely ~20 years old, with a sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying upon word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in user and professional journeys were removed and the experiences made more meaningful and joyful.

 

The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs and transform service commerce the way Amazon transformed product commerce.

 

Urban Company has grown 3x YoY, and so has our tech stack. We have evolved a data-driven approach to solving for the product over the last few years. We deal with around 10TB in data analytics, with around 50Mn/day. We adopted platform thinking at a pretty early stage of UC. We started building central platform teams dedicated to solving core engineering problems around 2-3 years ago, and this has now evolved into a full-fledged vertical. Our platform vertical includes Data Engineering, Service and Core Platform, Infrastructure, and Security. We are looking for Data Engineers: people who love solving for standardization, have strong platform thinking and opinions, and have built Data Engineering, Data Science and analytics platforms.

 

Job Responsibilities

  • Platform-first approach to engineering problems.
  • Creating highly autonomous systems with minimal manual intervention.
  • Frameworks that can be extended to larger audiences through open source.
  • Extending and modifying open-source projects to adapt them to Urban Company use cases.
  • Developer productivity.
  • Highly abstracted and standardized frameworks like microservices, event-driven architecture, etc.

 

Job Requirements/Potential Backgrounds

  • Bachelor's/Master's in Computer Science from a top-tier engineering school.
  • Experience with data pipeline and workflow management tools like Luigi, Airflow, etc.
  • Proven ability to work in a fast-paced environment.
  • Familiarity with server-side development of APIs, databases, DevOps and systems.
  • Fanatic about building scalable, reliable data products.
  • Experience with Big Data tools (Hadoop, Kafka/Kinesis, Flume, etc.) is an added advantage (a brief Kafka sketch follows this list).
  • Experience with relational SQL and NoSQL databases like HBase, Cassandra, etc.
  • Experience with stream processing engines like Spark, Flink, Storm, etc. is an added advantage.
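
The Kafka sketch referenced in the list above is shown here: a bare-bones consumer using the plain kafka-clients API from Scala. The broker address, group id and topic name are placeholders, not Urban Company specifics.

  import java.time.Duration
  import java.util.{Collections, Properties}
  import org.apache.kafka.clients.consumer.KafkaConsumer

  object OrderEventsConsumer {
    def main(args: Array[String]): Unit = {
      // Placeholder connection settings; a real deployment would externalise these.
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("group.id", "platform-demo")
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(Collections.singletonList("order-events")) // hypothetical topic

      try {
        while (true) {
          // Poll for new events and hand them to downstream processing.
          val records = consumer.poll(Duration.ofMillis(500))
          records.forEach(record => println(s"${record.key} -> ${record.value}"))
        }
      } finally {
        consumer.close()
      }
    }
  }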

 

What UC has in store for you

 

  • A phenomenal work environment, with massive ownership and growth opportunities.
  • A high performance, high-velocity environment at the cutting edge of growth.
  • Strong ownership expectation and freedom to fail.
  • Quick iterations and deployments – fail-fast attitude.
  • Opportunity to work on cutting edge technologies.
  • The massive and direct impact of the work you do on people's lives.
Job posted by
Mohit Agrawal

Big Data Engineer

at Leading Digital Marketing Agency

Big Data
Elastic Search
Hadoop
Apache Kafka
Apache Hive
Ukraine
3 - 10 yrs
₹15L - ₹30L / yr
Requirements:

  • Studied Computer Science
  • 5+ years of software development experience
  • Must have experience in Elasticsearch (2+ years preferable)
  • Skills in Java, Python or Scala
  • Passionate about learning big data, data mining and data analysis technologies
  • Self-motivated; independent, organized and proactive; highly responsive, flexible, and adaptable when working across multiple teams
  • Strong SQL skills, including query optimization, are required
  • Experience working with large, complex datasets is required
  • Experience with recommendation systems and data warehouse technologies is preferred
  • You possess an intense curiosity about data and a strong commitment to practical problem-solving
  • Creative in thinking up data-centric products that can be used for online customer behaviour and marketing

Responsibilities:

  • Build systems to pull meaningful insights from our data platform
  • Integrate our analytics platform internally across products and teams
  • Focus on performance, throughput and latency, and drive these throughout our architecture

Bonuses:

  • Experience with big data architectures such as the Lambda Architecture
  • Experience working with big data technologies (like Hadoop, Java MapReduce, Hive, Spark SQL) and real-time processing frameworks (like Spark Streaming, Storm, AWS Kinesis)
  • Proficiency in key-value stores such as HBase/Cassandra, Redis, Riak and MongoDB
  • Experience with AWS EMR
Job posted by
Vidushi Singh

Artificial Intelligence Developers

at Precily Private Limited

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
Artificial Neural Network (ANN)
Artificial Intelligence (AI)
Machine Learning (ML)
Python
TensorFlow
Natural Language Processing (NLP)
Big Data
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹9L / yr
Precily AI: automatic summarization, i.e. shortening a business document or book with our AI to create a summary of the major points of the original. The AI can produce a coherent summary taking into account variables such as length, writing style, and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and Neural Networks in processing data to provide solutions for industries such as Enterprise, Healthcare and Legal.
Job posted by
Bharath Rao

Data Scientist

at mPaani Solutions Pvt Ltd

Founded 2013  •  Product  •  20-100 employees  •  Raised funding
Machine Learning (ML)
Python
Data Science
Big Data
R Programming
Haskell
Hadoop
Mumbai
3 - 7 yrs
₹5L - ₹15L / yr
We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:

  • Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Design and implement machine learning, information extraction and probabilistic matching algorithms and models
  • Care about designing the full machine learning pipeline
  • Extend the company's data with 3rd party sources
  • Enhance data collection procedures
  • Process, clean and verify collected data
  • Perform ad hoc analysis of the data and present clear results
  • Create advanced analytics products that provide actionable insights

The Individual:

We are looking for a candidate with the following skills, experience and attributes.

Required:

  • 2+ years of work experience in machine learning
  • Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
  • Knowledge of machine learning techniques and algorithms
  • Knowledge of languages and toolkits like Python, R, NumPy
  • Knowledge of data visualization tools like D3.js, ggplot2
  • Knowledge of query languages like SQL, Hive, Pig
  • Familiarity with Big Data architecture and tools like Hadoop, Spark, MapReduce
  • Familiarity with NoSQL databases like MongoDB, Cassandra, HBase
  • Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation & Logistics:

This is a full-time opportunity. Compensation will be in line with a startup and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
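
As one hedged illustration of the clustering work mentioned above (for example, user segmentation), the sketch below uses Spark MLlib's KMeans from Scala. The behavioural features and values are invented, and the same idea could equally be expressed with the Python/R toolkits the role lists.

  import org.apache.spark.ml.clustering.KMeans
  import org.apache.spark.ml.feature.VectorAssembler
  import org.apache.spark.sql.SparkSession

  object CustomerSegmentation {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("CustomerSegmentation")
        .master("local[*]")
        .getOrCreate()
      import spark.implicits._

      // Invented behavioural features standing in for real user data.
      val users = Seq(
        (1, 12.0, 340.0), (2, 2.0, 45.0), (3, 15.0, 410.0), (4, 1.0, 30.0)
      ).toDF("user_id", "monthly_visits", "monthly_spend")

      // Assemble raw columns into the single vector column MLlib expects.
      val assembler = new VectorAssembler()
        .setInputCols(Array("monthly_visits", "monthly_spend"))
        .setOutputCol("features")
      val features = assembler.transform(users)

      // Fit a simple 2-cluster model and attach a cluster label to each user.
      val model = new KMeans().setK(2).setSeed(42L).fit(features)
      model.transform(features)
        .select("user_id", "prediction")
        .show()

      spark.stop()
    }
  }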
Job posted by
Julie K