Why are we building Urban Company?
Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend), expected to double in the next 5 years, there is no billion-dollar company in this segment today.
The industry is barely ~20 years old, with the sub-optimal market architecture typical of an unorganized market - a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying upon word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in customers' and professionals' journeys were removed - and the experiences made more meaningful and joyful.
The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology, and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services - and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs, and transform service commerce the way Amazon transformed product commerce.
Job Description:
Urban Company has grown 3x YoY, and so has our tech stack. We have evolved a data-driven approach to solving for the product over the last few years. We deal with around 10 TB in data analytics, with around 50 Mn events/day. We adopted platform thinking at a very early stage of UC. Around 2-3 years ago we started building central platform teams dedicated to solving core engineering problems, and this has now evolved into a full-fledged vertical. Our platform vertical majorly includes Data Engineering, Service and Core Platform, Infrastructure, and Security. We are currently looking for an Engineering Manager for the Data Engineering team - a person who loves solving for standardization, has strong platform thinking and opinions, and has solved for Data Engineering, Data Science, and analytics platforms.
- Building high-octane teams with strong opinions and strong platform thinking
- Working on complex design and architectural problems.
- Solving funnel analytics, product insights and building a highly scalable data platform
- Experience in building Data Science Platform
- Building highly productive, data-driven models that contribute to product success
- Visioning out the roadmap, and the thought process behind taking the current tech stack to the next level
- Building platform products and maintaining their high NPS of 70%
- Strong decision-maker with hands-on experience
- Think about abstractions, systems and services and write high-quality code.
- Have an understanding of loopholes in current systems/architecture that can potentially break in the future and push towards solving them with other stakeholders.
- Think through complex architecture to build robust platforms that serve all the categories and flows together, solve for scale, and work on internally built services to cater to our growing needs.
- At least 1-2+ years of experience in managing teams
- 5-8 years of experience in the industry solving complex problems from scratch and have graduate/post-graduate degrees from top-tier universities.
- A thinker with strong opinions and the ability to get those opinions into reality
- Prior experience creating complex systems.
- Ability to build scalable, sustainable, reliable, and secure products based on past experience, and to lead teams and projects by themselves.
- Ability to bring new practices, architectural choices, and new initiatives onto the table to make the overall tech stack more robust.
- History and familiarity with server-side architecture based on APIs, databases, infrastructure, and systems.
- Ability to own the technical road map for systems/components.
What can you expect?
- A phenomenal work environment, with massive ownership and growth opportunities.
- A high performance, high-velocity environment at the cutting edge of growth.
- Strong ownership expectation and freedom to fail.
- Quick iterations and deployments – fail-fast attitude.
- Opportunity to work on cutting edge technologies.
- The massive, and direct impact of the work you do on the lives of people.
About Urban Company (formerly known as UrbanClap)
Technical must haves:
● Extensive exposure to at least one Business Intelligence platform (if possible, QlikView/Qlik Sense); if not Qlik, then ETL tool knowledge, e.g. Informatica/Talend
● At least 1 Data Query language – SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, Data Architecture/Schemas, Data Integrations, Data Models and Data Flows is a must
● A technical degree like BE/B.Tech is a must
Technical Ideal to have:
● Exposure to our tech stack – PHP
● Microsoft workflows knowledge
Behavioural Pen Portrait:
● Must Have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command over spoken and written English
● Ideal: Ability to Collaborate
The preferred location is Ahmedabad; however, if we find exemplary talent, we are open to a remote working model - this can be discussed.
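The must-haves above center on ETL tooling and SQL. Stripped of any specific product like Informatica or Talend, an extract-transform-load flow is just three steps; a minimal illustrative sketch in plain Python (the CSV data, table, and column names are all made up for the example):

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV (an inline string standing in for a source file).
raw = "order_id,city,amount\n1,Pune,250\n2,Delhi,400\n3,Pune,150\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and aggregate revenue per city.
totals = {}
for r in rows:
    totals[r["city"]] = totals.get(r["city"], 0) + int(r["amount"])

# Load: write the aggregated fact table into a SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city_revenue (city TEXT PRIMARY KEY, revenue INTEGER)")
conn.executemany("INSERT INTO city_revenue VALUES (?, ?)", totals.items())

print(dict(conn.execute("SELECT city, revenue FROM city_revenue ORDER BY city")))
```

Real ETL tools add scheduling, error handling, and connectors on top, but the extract/transform/load shape is the same.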
4+ years of data analysis experience
Advanced working knowledge of SQL (window functions, CTEs, etc.)
Experience working with a BI tool like Tableau, Chartio, or Looker
Knowledge of Python and/or R
Strong critical thinking and problem-solving skills
Success owning your own projects and driving these projects to completion
Hands-on experience with data pipelines and/or ETL processes
Excellent verbal and written communication skills, with the ability to communicate technical concepts to a non-technical audience
Strong business intuition and an ability to relate analyses to 6sense's goals and objectives
Ability to prioritize and execute tasks in a changing environment
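The advanced SQL called out above (window functions, CTEs) can be exercised in any modern engine; a small sketch using Python's built-in SQLite (which supports window functions from version 3.25; the `events` table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INTEGER, day TEXT, clicks INTEGER);
INSERT INTO events VALUES (1,'2024-01-01',3),(1,'2024-01-02',5),(2,'2024-01-01',2);
""")

# A CTE feeding a window function: running total of clicks per user over time.
query = """
WITH daily AS (
    SELECT user_id, day, SUM(clicks) AS clicks
    FROM events GROUP BY user_id, day
)
SELECT user_id, day,
       SUM(clicks) OVER (PARTITION BY user_id ORDER BY day) AS running_clicks
FROM daily
ORDER BY user_id, day;
"""
for row in conn.execute(query):
    print(row)
```

The same pattern (CTE to pre-aggregate, window function for per-group running metrics) carries over directly to warehouse engines like Snowflake or BigQuery.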
We are looking for a Big Data Engineer who has worked across the entire ETL stack. Someone who has ingested data in both batch and live-stream formats, transformed large volumes of data daily, built a data warehouse to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- You would be required to code in Scala and PySpark daily, on cloud as well as on-prem infrastructure
- Build data models to store the data in the most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implementing the ETL process and optimal data pipeline architecture
- Monitoring performance and advising on any necessary infrastructure changes.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
- Must be able to write quality code and build secure, highly available systems.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Review peers' code and pipelines before deploying to production, checking for optimization issues and code standards
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience in working with batch-processing/real-time systems using various open-source technologies like NoSQL stores, Spark, Pig, Hive, and Apache Airflow.
- Implemented complex projects dealing with considerable data sizes (PB-scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.,
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs for data engineering
- Expert at Python /Scala programming, especially for data engineering/ ETL purposes
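Several requirements above mention creating DAGs for data engineering (e.g. in Apache Airflow). Stripped of any scheduler, the core idea is just tasks executed in dependency order. A minimal pure-Python sketch of that idea using the standard library's `graphlib` (the task names and stub callables are hypothetical; this is not Airflow's API):

```python
from graphlib import TopologicalSorter

# Each task names the tasks it depends on, as in an Airflow-style DAG:
# extract -> transform -> validate -> load.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"transform", "validate"},
}

# Stub callables standing in for real pipeline steps.
tasks = {
    "extract": lambda: "raw",
    "transform": lambda: "clean",
    "validate": lambda: "ok",
    "load": lambda: "stored",
}

# Resolve an execution order that respects every dependency, then run tasks.
order = list(TopologicalSorter(dag).static_order())
results = {name: tasks[name]() for name in order}
print(order)
```

Airflow adds scheduling, retries, and distributed execution on top, but a DAG definition there expresses exactly this kind of dependency graph.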
- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques
- Experience working with business understanding the requirement, creating the problem statement, and building scalable and dependable Analytical solutions
- Must have hands-on and strong experience in Python
- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning
- Strong analytical & algorithm development skills
- Deep knowledge of techniques such as linear regression, gradient descent, logistic regression, forecasting, cluster analysis, decision trees, linear optimization, text mining, etc.
- Ability to collaborate across teams and strong interpersonal skills
- Sound theoretical knowledge of ML algorithms and their applications
- Hands-on experience in statistical modeling tools such as R, Python, and SQL
- Hands-on experience in Machine learning/data science
- Strong knowledge of statistics
- Experience in advanced analytics / Statistical techniques – Regression, Decision trees, Ensemble machine learning algorithms, etc
- Experience in Natural Language Processing & Deep Learning techniques
- Pandas, NLTK, scikit-learn, spaCy, TensorFlow
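Two of the techniques listed above, logistic regression and gradient descent, combine naturally: the model is fit by descending the gradient of the log-loss. A self-contained sketch on a toy 1-D dataset (no ML library; the data and learning rate are purely illustrative):

```python
import math

# Toy dataset: the label is 1 when the feature exceeds roughly 2.
xs = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight and bias by batch gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Predict 1 where the fitted probability crosses 0.5.
predictions = [1 if sigmoid(w * x + b) >= 0.5 else 0 for x in xs]
print(predictions)
```

In practice a library like scikit-learn handles regularization and numerical stability, but the update rule above is the gradient-descent core.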
- Design and develop strong analytics systems and predictive models
- Managing a team of data scientists, machine learning engineers, and big data specialists
- Identify valuable data sources and automate data collection processes
- Undertake pre-processing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
- Proven experience as a seasoned Data Scientist
- Good Experience in data mining processes
- Understanding of machine learning and Knowledge of operations research is a value addition
- Strong understanding of and experience in R, SQL, and Python; knowledge of Scala, Java, or C++ is an asset
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills
- Experience in Natural Language Processing (NLP)
- Strong competitive coding skills
- BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
4-6 years of total experience in data warehousing and business intelligence
3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)
2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
Strong experience building visually appealing UI/UX in Power BI
Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
Experience building Power BI using large data in direct query mode
Expert SQL background (query building, stored procedure, optimizing performance)
at Metadata Technologies, North America
We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that will be used to fuel our Analytics and Machine Learning products. This person will be contributing to the architecture, operation, and enhancement of:
Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insights extraction for various use cases.
About the Organisation:
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom, and India.
- You will gain work experience in a global environment. We speak over 20 different languages, come from more than 16 different nationalities, and over 42% of our staff are multilingual.
Software Developer, Data Engineering team
Location: Pune (initially 100% remote due to COVID-19 for the coming 1 year)
- Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.
- Have at least 4+ years' experience.
- Deep technical understanding of Java or Golang.
- Production experience with Python is a big plus, and an extremely valuable supporting skill
- Exposure to modern Big Data tech: Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid, etc., while at the same time understanding that certain problems may require completely novel solutions.
- Exposure to one or more modern ML tech stacks: Spark MLlib, TensorFlow, Keras, GCP ML stack, AWS SageMaker - is a plus.
- Experience working in an Agile/Lean model
- Experience with supporting and troubleshooting large systems
- Exposure to configuration management tools such as Ansible or Salt
- Exposure to IaaS platforms such as AWS, GCP, Azure, etc.
- Good addition - Experience working with large-scale data
- Good addition - experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines
**** Not looking for a Big Data Developer / Hadoop Developer
This is NOT a remote work position; the selected candidate is expected to commute to the office location in Navi Mumbai.
- Implement and deliver a product in the healthcare domain using machine learning and AI capabilities
- Ability to work on challenging tasks
- Ability to research ways of doing things efficiently
- Keen approach to problem solving
- Strong experience in statistics, Python, SQL, NLP, and predictive modeling
- Bachelor's degree with a total of 9+ years of experience in software development, including 2-3 years of experience in Data Science
- Experienced in working with large and multiple datasets and data warehouses, with the ability to pull data using relevant programs and coding
- Well versed with necessary data preprocessing and feature engineering skills
- Background in healthcare IT space will be preferred
More about us : bit.ly/workatjarapp
Jar is seeking a talented Senior Product Analyst to join our Team. If you are intellectually curious, if you eat/sleep/drink data and are committed to translating data to insights & insights to actionable work items, want new challenges daily and impact the lives of hundreds of thousands of users, this is the role for you!
What You Will Do
- Deliver insight and analysis using statistical tools, data visualization, and business use cases with the Product and Business teams
- Understanding of product analytics and engagement platforms like CleverTap, Amplitude, Apxor, etc.
- Conduct analysis to determine new project pilot settings, new features, user behaviour, and in-app behaviour
- Build & maintain dashboards for tracking business performance and product adoption
- Assist Product Managers and Business teams in creating data-backed decisions
- Collaborate with Consumer Platform's Product and Business teams in identifying new avenues for growth and opportunities, and back their product delivery with experimentation
- Build first cut Machine Learning models based on product requirements
- Automate data extraction by creating de-normalized tables
What You Will Need
- At least 2+ years of work experience dealing with product analytics, data, and statistics
- Expertise in SQL with experience using data visualization and dashboarding tools (e.g. Tableau, Metabase, Google Data Studio, Clevertap, Python)
- Experience in Machine Learning technologies (i.e. forecasting, clustering, statistical significance test, predictive modeling, and text mining)
- Experience in delivering products as end-to-end data solutions (from data pipelining to analysis, presentation, and scalable adoption)
- A strong business sense with the ability to transform ambiguous business and product issues into well-scoped, impactful analysis
- Strong ability to design and conduct simple experiments
- A goal-oriented, critical-thinking mindset with the ability to work equally well within a team and independently with minimal supervision
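Designing and conducting simple experiments, as asked above, usually ends in a significance test; the statistical significance testing mentioned earlier often takes the form of a two-proportion z-test on A/B conversion rates. A minimal plain-Python sketch (the conversion counts are illustrative numbers, not real data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between two groups,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts 120/1000 vs. control A at 100/1000.
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level (two-sided)
```

Here the lift looks promising but the z-statistic stays below 1.96, so with these sample sizes the difference would not be declared significant; in practice, libraries like `scipy.stats` or `statsmodels` provide the same test with p-values.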