● Translate complex business requirements into scalable technical solutions that meet data design standards. Demonstrate a strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
● Build dashboards using self-service tools on Kibana and perform data analysis to support business needs
● Collaborate with multiple cross-functional teams
NoBroker is a new and disruptive force in the Real Estate Industry. We’re a site that’s built to let you buy, sell, rent, find a PG or a flatmate WITHOUT paying any brokerage.
Our mission is to lead India’s real estate industry towards an era of doing real estate transactions in a convenient and brokerage-free manner. We currently save our customers over 250 crores per year in brokerage. NoBroker was founded by alumni of IIT Bombay, IIT Kanpur & IIM Ahmedabad in March 2014 and has since served over 35 lakh customers. As a VC-funded company, we’ve raised over 20M in a couple of rounds of funding. We’re a team of 350 people driven by passion: the passion to help you fulfil your housing requirements without paying a hefty brokerage.
NoBroker has worked tirelessly to remove all information asymmetry caused by brokers. We also enable owners and tenants to interact with each other directly by using our technologically advanced platform. Our world-class services include:
1. Verified brokerage-free properties for buyers and tenants
2. Quick brokerage-free tenants & buyers for property owners
3. Benefit-rich services, including online rental agreements and dedicated relationship managers
Our app (70 lakh+ downloads) and our website serve 4 cities at present: Bangalore, Mumbai, Pune and Chennai. Our rapid growth means that we will keep expanding to more cities shortly.
Are you looking for huge work independence, passionate peers, a steep learning curve, a meritocratic work culture, a massive growth environment with loads of fun, best-in-class salary and ESOPs? Just apply to our jobs below :-)
Purpose of Job:
Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in its daily activities.
We are looking for a smart and experienced Data Engineer 1 who can work with a senior engineer to:
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python
⮚ Manage AWS resources, including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business
and reporting needs
⮚ Deliver on data architecture projects and the implementation of next-generation BI
⮚ Interface with other teams to extract, transform, and load data from a wide variety
of data sources
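As a rough illustration of the unit-testing bullet above, here is a minimal sketch; the `normalize_record` transform and its field names are hypothetical examples, not taken from the role:

```python
# Illustrative sketch only: a hypothetical record-cleaning step from an ETL
# pipeline, with the kind of plain-assert unit test the role involves writing.

def normalize_record(raw: dict) -> dict:
    """Trim whitespace, lower-case keys, and drop empty string values."""
    return {
        key.strip().lower(): value.strip()
        for key, value in raw.items()
        if isinstance(value, str) and value.strip()
    }

def test_normalize_record():
    # Messy input keys/values are normalized; the empty "Rent" field is dropped.
    raw = {" City ": " Bangalore ", "Rent": "", "TYPE": "2BHK"}
    assert normalize_record(raw) == {"city": "Bangalore", "type": "2BHK"}
```

Tests of this shape run directly under pytest with no extra framework code.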
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and
quantitative fields (CS, Engineering, Math, Economics)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data
warehousing, etc.) and using SQL databases
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive,
HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium,
Postman, Airflow, PySpark
⮚ Deep Curiosity and Humility
⮚ Excellent storyteller and communicator
⮚ Design Thinking
We are currently seeking talented and highly motivated Data Engineers to lead the development of our discovery and support platform. The successful candidate will join a small, global team of data-focused associates that has successfully built and maintained a best-in-class traditional data warehouse, Kimball-based and founded on SQL Server. The successful candidate will lead the conversion of the existing data structure into an AWS-focused big data framework and assist in identifying and pipelining existing and augmented data sets into this environment. The successful candidate must be able to lead and assist in architecting and constructing the AWS foundation and initial data ports.
Specific responsibilities will be to:
- Lead and assist in design, deploy, and maintain robust methods for data management and analysis, primarily using the AWS cloud
- Develop computational methods for integrating multiple data sources to facilitate target identification and algorithmic development
- Provide computational tools to ensure trustworthy data sources and facilitate reproducible analyses
- Provide leadership around architecting, designing, and building the target AWS data environment (e.g., data lake and data warehouse).
- Work with on-staff subject-matter experts to evaluate existing data sources, the DW, ETL ports, existing stovepipe data sources, and available augmentation data sets.
- Implement methods for execution of high-throughput assays and subsequent acquisition, management, and analysis of the resulting data.
- Assist in the communication of complex scientific, software, and data concepts and results.
- Assist in the identification and hiring of additional data engineer associates.
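The "integrating multiple data sources" responsibility above can be pictured, in miniature, as a keyed left-join; the `integrate` function and sample fields below are illustrative assumptions, since the actual pipelines would run this kind of join at scale on AWS (e.g., in Glue or Athena):

```python
# Hypothetical sketch of integrating two data sources on a shared key.

def integrate(primary: list, augment: list, key: str) -> list:
    """Left-join augmentation rows onto primary rows by `key`.

    Primary rows with no matching augmentation row pass through unchanged.
    """
    lookup = {row[key]: row for row in augment}
    return [{**row, **lookup.get(row[key], {})} for row in primary]
```

A production version would also handle duplicate keys and schema drift, which this sketch deliberately omits.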
- Master’s Degree (or equivalent experience) in computer science, data science or a scientific field that has relevance to healthcare in the United States
- Extensive experience in the use of a high-level programming language (e.g., Python or Scala) and relevant AWS services.
- Experience in AWS cloud services like S3, Glue, Lake Formation, Athena, and others.
- Experience in creating and managing Data Lakes and Data Warehouses.
- Experience with big data tools like Hadoop, Hive, Talend, Apache Spark, Kafka.
- Advanced SQL scripting.
- Database Management Systems (for example, Oracle, MySQL or MS SQL Server)
- Hands-on experience with data transformation tools, data processing, and data modeling in a big data environment.
- Understanding of the basics of distributed systems.
- Experience working and communicating with subject-matter experts
- The ability to work independently as well as to collaborate on multidisciplinary, global teams in a startup fashion with traditional data warehouse skilled data associates and business teams unfamiliar with data science techniques
- Strong communication, data presentation, and visualization skills
We place less emphasis on enforcing how to do a particular task; we believe in giving people the opportunity to think outside the box and come up with their own innovative solutions to problems.
You will primarily be developing, managing and executing multiple prospect campaigns as part of the Prospect Marketing Journey to ensure the best conversion and retention rates. Below are the roles, responsibilities and skillsets we are looking for; if these resonate with you, please get in touch with us by applying to this role.
Roles and Responsibilities:
• You'd be responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies.
• You'd collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of the software’s objectives, business requirements, deliverables, and specifications in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products; research and create POCs for possible solutions.
• Bachelor's or Master's degree in a technology-related field preferred.
• Overall experience of 2-3 years with Big Data technologies.
• Hands on experience with Spark (Java/ Scala)
• Hands on experience with Hive, Shell Scripting
• Knowledge of HBase and Elasticsearch
• Development experience in Java/Python is preferred
• Familiarity with profiling, code coverage, logging, common IDEs and other development tools
• Demonstrated verbal and written communication skills, and ability to interface with Business, Analytics and IT organizations.
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks.
• Ability to identify non-obvious solutions to complex problems
Data Engineer JD:
- Designing, developing, constructing, installing, testing and maintaining complete data management & processing systems.
- Building a highly scalable, robust, fault-tolerant, & secure user data platform that adheres to data protection laws.
- Taking care of the complete ETL (Extract, Transform & Load) process.
- Ensuring architecture is planned in such a way that it meets all the business requirements.
- Exploring new ways of using existing data, to provide more insights out of it.
- Proposing ways to improve data quality, reliability & efficiency of the whole system.
- Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
- Introducing new data management tools & technologies into the existing system to make it more efficient.
- Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies
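The monitoring-and-alarming bullet above could, for example, include a simple row-count anomaly check on each pipeline run; the `is_anomalous` function and its 50% tolerance threshold below are made up purely for illustration:

```python
# Illustrative sketch, not a prescribed implementation: flag a pipeline load
# whose row count deviates sharply from its recent history.

def is_anomalous(todays_rows: int, history: list, tolerance: float = 0.5) -> bool:
    """Return True if today's row count deviates from the historical mean
    by more than `tolerance` (a fraction of the mean).

    The 0.5 default is an arbitrary example threshold; a real system would
    tune it per table and typically page on-call via an alerting service.
    """
    if not history:
        return False  # no baseline yet, so nothing to compare against
    mean = sum(history) / len(history)
    return abs(todays_rows - mean) > tolerance * mean
```

In practice this kind of check would run as a post-load task (e.g., in an Airflow DAG) and raise to fail the job when it returns True.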
What do we expect from you?
- BS/MS in Computer Science or equivalent experience
- 5 years of recent experience in Big Data Engineering.
- Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
- Excellent programming and debugging skills in Java or Python.
- Hands-on experience with Apache Spark and Python, including deploying ML models
- Experience building streaming and real-time pipelines
- Experience with Apache Kafka, or with any of Spark Streaming, Flume or Storm
- Data structures & algorithms
- Problem solving and coding
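The streaming and real-time pipeline requirements above often reduce to windowed aggregations. Here is a pure-Python sketch of the tumbling-window count that frameworks like Spark Streaming or Kafka consumers compute; the event shape `(timestamp, key)` and the 60-second window are assumptions for illustration:

```python
from collections import defaultdict

# Sketch of a tumbling-window aggregation over timestamped events.

def windowed_counts(events: list, window_s: int = 60) -> dict:
    """Count events per (window_start, key) bucket.

    Each event is a (timestamp_seconds, key) pair; a timestamp falls into
    the window starting at the largest multiple of `window_s` below it.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_s)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real streaming job adds watermarking and late-event handling on top of this core idea.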
We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
- Overall minimum of 4 to 8 years of software development experience, with 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge in SQL & Linux Shell scripting
- Bachelor's/Master's/Engineering degree from a well-reputed university.
- Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
- Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on Agile Deliveries and Scrum teams.
- Should work as a senior developer/individual contributor depending on the situation
- Should be part of Scrum discussions and gather requirements
- Adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for the design, development and implementation of solutions
- Should take up L3 activities on a need basis
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing; gather feedback and provide necessary remediation/recommendations in time.
- Quality delivery and automation should be top priorities
- Coordinate changes and deployments in time
- Should foster healthy harmony within the team
- Owns interaction points with members of the core team (e.g., BA, testing and business teams) and any other relevant stakeholders
- Previous experience working in large-scale data engineering
- 4+ years of experience working in data engineering and/or backend technologies; cloud experience (any) is mandatory.
- Previous experience architecting and designing backends for large-scale data processing.
- Familiarity and experience with different technologies related to data engineering: various database technologies, Hadoop, Spark, Storm, Hive, etc.
- Hands-on and have the ability to contribute a key portion of data engineering backend.
- Self-inspired and motivated to drive for exceptional results.
- Familiarity and experience working with different stages of data engineering – data acquisition, data refining, large scale data processing, efficient data storage for business analysis.
- Familiarity and experience working with different DB technologies and how to scale them.
- End-to-end responsibility for coming up with the data engineering architecture and design, and then implementing it.
- Build data engineering workflow for large scale data processing.
- Discover opportunities in data acquisition.
- Bring industry best practices for data engineering workflow.
- Develop data set processes for data modelling, mining and production.
- Take additional tech responsibilities for driving an initiative to completion
- Recommend ways to improve data reliability, efficiency and quality
- Goes out of their way to reduce complexity.
- Humble and outgoing - engineering cheerleaders.
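The workflow bullets above (data acquisition, refining, processing, storage) can be sketched as a chain of stages; the stage functions and sample records below are entirely hypothetical and only show the shape of such a workflow:

```python
# Hypothetical three-stage data engineering workflow: acquire -> refine -> serve.

def acquire() -> list:
    """Stand-in for data acquisition; real code would pull from a source system."""
    return [{"id": 1, "val": " 10 "}, {"id": 2, "val": ""}]

def refine(rows: list) -> list:
    """Drop records with empty values and cast the rest to integers."""
    return [{**r, "val": int(r["val"])} for r in rows if r["val"].strip()]

def run_pipeline() -> list:
    """Chain the stages; an orchestrator (e.g., Airflow) would schedule these."""
    return refine(acquire())
```

Each stage being a plain function makes the workflow easy to unit-test and to port into an orchestration framework later.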