Big Data

at NoBroker

Posted by Noor Aqsa
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
Full time
Skills
Java
Spark
Hadoop
Big Data
Data engineering
PySpark
Selenium
• You will build, set up, and maintain some of the best data pipelines and MPP frameworks for our datasets.
• Translate complex business requirements into scalable technical solutions that meet data design standards. A strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency are expected.
• Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals.
• Collaborate and work with multiple cross-functional teams.

About NoBroker

NoBroker is a new and disruptive force in the Real Estate Industry. We’re a site that’s built to let you buy, sell, rent, find a PG or a flatmate WITHOUT paying any brokerage.

 

Our mission is to lead India’s real estate industry towards an era of convenient, brokerage-free real estate transactions. We currently save our customers over 250 crores per year in brokerage. NoBroker was founded in March 2014 by alumni of IIT Bombay, IIT Kanpur and IIM Ahmedabad, and has since served over 35 lakh customers. As a VC-funded company, we’ve raised 20M+ across a couple of rounds of funding. We’re a team of 350 people driven by passion: the passion to help you fulfil your housing requirement without paying a hefty brokerage.

 

NoBroker has worked tirelessly to remove all information asymmetry caused by brokers. We also enable owners and tenants to interact with each other directly by using our technologically advanced platform. Our world-class services include:

1. Verified brokerage-free properties for buyers and tenants
2. Quick brokerage-free tenants & buyers for property owners
3. Benefit-rich services including online rental agreements and dedicated relationship managers

 

Our app (70 lakh+ downloads) and our website serve 4 cities at present – Bangalore, Mumbai, Pune and Chennai. Our rapid growth means that we will keep expanding to more cities shortly.

 

Are you looking for huge work independence, passionate peers, a steep learning curve, a meritocratic work culture, a massive growth environment with loads of fun, best-in-class salary and ESOPs? Just apply to our jobs below :-)

Founded: 2014
Type: Products & Services
Size: 100-1000 employees
Stage: Raised funding

Similar jobs

Data Engineer

at Fintech Company

Agency job
via Jobdost
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)
DevOps
Git
Selenium
Informatica
ETL
Big Data
Postman
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in its daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior manager to:
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and the implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources
Qualifications:
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehouse, etc.) and using SQL databases

 

Skills
Technical Skills
⮚ Proficient in Python and SQL; familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep curiosity and humility
⮚ Excellent storyteller and communicator
⮚ Design thinking

Job posted by
Sathish Kumar

Data Engineer

at a company building a cutting-edge data science department to serve the older adult community and marketplace

Agency job
via HyrHub
Big Data
Hadoop
Apache Hive
Data Warehouse (DWH)
PySpark
Cloud Computing
Chandigarh
5 - 8 yrs
₹8L - ₹15L / yr

We are currently seeking talented and highly motivated Data Engineers to lead the development of our discovery and support platform. The successful candidate will join a small, global team of data-focused associates that has successfully built and maintained a best-in-class traditional, Kimball-based data warehouse founded on SQL Server. They will lead the conversion of the existing data structure into an AWS-focused big data framework and assist in identifying and pipelining existing and augmented data sets into this environment. The successful candidate must be able to lead and assist in architecting and constructing the AWS foundation and initial data ports.

 

Specific responsibilities will be to:

  • Lead and assist in designing, deploying, and maintaining robust methods for data management and analysis, primarily using the AWS cloud
  • Develop computational methods for integrating multiple data sources to facilitate target and algorithmic
  • Provide computational tools to ensure trustworthy data sources and facilitate reproducible
  • Provide leadership around architecting, designing, and building the target AWS data environment (e.g. data lake and data warehouse).
  • Work with on-staff subject-matter experts to evaluate existing data sources, the DW, ETL ports, existing stovepipe data sources, and available augmentation data sets.
  • Implement methods for execution of high-throughput assays and subsequent acquisition, management, and analysis of the
  • Assist in the communication of complex scientific, software and data concepts and
  • Assist in the identification and hiring of additional data engineer associates.

Job Requirements:

  • Master’s Degree (or equivalent experience) in computer science, data science or a scientific field that has relevance to healthcare in the United States
  • Extensive experience in the use of a high-level programming language (e.g. Python or Scala) and relevant AWS services.
  • Experience in AWS cloud services like S3, Glue, Lake Formation, Athena, and others.
  • Experience in creating and managing Data Lakes and Data Warehouses.
  • Experience with big data tools like Hadoop, Hive, Talend, Apache Spark, Kafka.
  • Advanced SQL scripting.
  • Database Management Systems (for example, Oracle, MySQL or MS SQL Server)
  • Hands-on experience with data transformation tools, data processing and data modeling in a big data environment.
  • Understanding of the basics of distributed systems.
  • Experience working and communicating with subject matter experts
  • The ability to work independently as well as to collaborate, in a startup fashion, on multidisciplinary, global teams with traditional data warehouse-skilled data associates and business teams unfamiliar with data science techniques
  • Strong communication, data presentation and visualization skills
Job posted by
Shwetha Naik

Big Data Engineer

at BDIPlus

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Apache Hive
Spark
Scala
PySpark
Data engineering
Big Data
Hadoop
Java
Python
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
We are looking for big data engineers to join our transformational consulting team serving one of our top US clients in the financial sector. You'd get an opportunity to develop big data pipelines and convert business requirements into production-grade services and products. With less emphasis on prescribing how to do a particular task, we believe in giving people the opportunity to think outside the box and come up with their own innovative solutions to problems.
You will primarily be developing, managing and executing multiple prospect campaigns as part of the Prospect Marketing Journey to ensure the best conversion and retention rates. Below are the roles, responsibilities and skillsets we are looking for; if these resonate with you, please get in touch with us by applying to this role.
Roles and Responsibilities:
• You'd be responsible for the development and maintenance of applications with technologies involving Enterprise Java and distributed technologies.
• You'd collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of software objectives, business requirements, deliverables, and specifications in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products, research and create POCs for possible solutions.
Skillset:
• Bachelor's or Master's degree in a technology-related field preferred.
• Overall experience of 2-3 years with Big Data technologies.
• Hands-on experience with Spark (Java/Scala)
• Hands-on experience with Hive, Shell Scripting
• Knowledge of HBase, Elasticsearch
• Development experience in Java/Python is preferred
• Familiar with profiling, code coverage, logging, common IDEs and other development tools.
• Demonstrated verbal and written communication skills, and ability to interface with Business, Analytics and IT organizations.
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks.
• Ability to identify non-obvious solutions to complex problems
Job posted by
Puja Kumari

Data Scientist

at TVS Credit Services Ltd

Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Job Description:
• Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction with regard to analytics
• Organize and analyze large, diverse data sets across multiple platforms
• Identify key insights and leverage them to inform and influence product strategy
• Handle technical interactions with vendors or partners in a technical capacity for scope/approach & deliverables
• Develop proofs of concept to prove or disprove the validity of a concept
• Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, efficient reporting on those requirements
• Design and implement advanced statistical testing for customized problem solving
• Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations

Desired Candidate Profile:
• MTech / BE / BTech / MSc in CS, Stats, Maths, Operations Research, Statistics, Econometrics or any quantitative field
• Experience in using Python, R, SAS
• Experience in working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
• Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
• Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in the following areas:
• Unsecured Loans & SME Loans analytics (cards, installment loans) - risk-based pricing analytics
• Differential pricing / selection analytics (retail, airlines / travel, etc.)
• Digital product companies or digital eCommerce with a product mindset and experience
• Fraud / Risk from Banks, NBFC / Fintech / Credit Bureau
• Online media with knowledge of media, online ads & sales (agencies) - knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
• Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loans - optional)
• Tractor Loans lending companies (experience in Farm)
• Recovery, Collections analytics
• Marketing Analytics with Digital Marketing, Market Mix modelling, Advertising Technology
Job posted by
Vinodhkumar Panneerselvam

Data Engineer

at the world’s fastest-growing consumer internet company

Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL
Spark
Apache Kafka
Python
Hadoop
Apache Spark
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr

Data Engineer JD:

  • Designing, developing, constructing, installing, testing and maintaining complete data management & processing systems.
  • Building a highly scalable, robust, fault-tolerant & secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring the architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data to provide more insights from it.
  • Proposing ways to improve the data quality, reliability & efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
  • Introducing new data management tools & technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience
  • 5 years of recent experience in Big Data engineering.
  • Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
  • Excellent programming and debugging skills in Java or Python.
  • Apache Spark, Python, and hands-on experience in deploying ML models
  • Has worked on streaming and real-time pipelines
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume or Storm

Focus Area:

R1: Data Structures & Algorithms
R2: Problem Solving + Coding
R3: Design (LLD)

Job posted by
Chandramohan Subramanian

Big Data Developer

at Maveric Systems Limited

Founded 2000  •  Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
Scala
SQL
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming the legacy systems. This role offers an exciting, fast-paced, constantly changing and challenging work environment, and will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • An overall minimum of 4 to 8 years of software development experience and 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
  • Excellent knowledge of SQL & Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
  • Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of SCRUM discussions and take requirements
  • Adhere to SCRUM timelines and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Should take up L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Quality delivery and automation should be a top priority
  • Coordinate changes and deployments in time
  • Should create healthy harmony within the team
  • Own interaction points with members of the core team (e.g. BA team, testing and business teams) and any other relevant stakeholders
Job posted by
Rashmi Poovaiah

Internship- JAVA / Python / AI / ML

at Wise Source

Founded 2014  •  Product  •  20-100 employees  •  Profitable
Artificial Intelligence (AI)
Machine Learning (ML)
Internship
Java
Python
Remote, Guindy
0 - 2 yrs
₹1L - ₹1.5L / yr
Looking out for internship candidates.

Designation: Intern/Trainee
Technology: .NET/Java/Python/AI/ML
Duration: 2-3 months
Job Location: Online internship
Joining: Immediately
Job Type: Internship

Job Description:
- MCA/M.Tech/B.Tech/BE candidates who need a 2-6 month internship project to be done.
- Should be available to join us immediately.
- Should be flexible to work on any skills/technologies.
- Ready to work long working hours.
- Must possess excellent analytical and logical skills.
- Internship experience is provided by experts.
- An internship certificate will be provided at the end of the training.
- The requirement is strictly for an internship and not a permanent job.
- A stipend will be provided only based on performance.
Job posted by
Wise HR

Data Science Engineer (SDE I)

at Couture.ai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Spark
Algorithms
Data Structures
Scala
Machine Learning (ML)
Big Data
Hadoop
Python
Bengaluru (Bangalore)
1 - 3 yrs
₹12L - ₹20L / yr
Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to empower real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data & algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, Big Data Analytics, and handling Unix & production servers.

A Tier-1 college background (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
Job posted by
Shobhit Agarwal

Data Engineer

at Product / Internet / Media Companies

Agency job
via archelons
Big Data
Hadoop
Data processing
Python
Data engineering
HDFS
Spark
Data lake
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹30L / yr

REQUIREMENTS:

  •  Previous experience of working in large-scale data engineering
  •  4+ years of experience working in data engineering and/or backend technologies, with cloud experience (any), is mandatory.
  •  Previous experience of architecting and designing a backend for large-scale data processing.
  •  Familiarity and experience with different technologies related to data engineering – different database technologies, Hadoop, Spark, Storm, Hive, etc.
  •  Hands-on, with the ability to contribute a key portion of the data engineering backend.
  •  Self-inspired and motivated to drive for exceptional results.
  •  Familiarity and experience working with different stages of data engineering – data acquisition, data refining, large-scale data processing, efficient data storage for business analysis.
  •  Familiarity and experience working with different DB technologies and how to scale them.

RESPONSIBILITIES:

  •  End-to-end responsibility for coming up with the data engineering architecture and design, followed by its development and implementation.
  •  Build data engineering workflows for large-scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices to the data engineering workflow.
  •  Develop data set processes for data modelling, mining and production.
  •  Take on additional tech responsibilities for driving an initiative to completion
  •  Recommend ways to improve data reliability, efficiency and quality
  •  Go out of your way to reduce complexity.
  •  Be humble and outgoing - an engineering cheerleader.
Job posted by
Meenu Singh

Senior Software Engineer

at zeotap

Founded 2014  •  Product  •  20-100 employees  •  Raised funding
Python
Big Data
Hadoop
Scala
Spark
Bengaluru (Bangalore)
6 - 10 yrs
₹5L - ₹40L / yr
Job posted by
Projjol Banerjea