Engineering Manager

at Uber

Posted by Swati Singh
Bengaluru (Bangalore)
9 - 15 yrs
₹50L - ₹80L / yr
Full time
Skills
Big Data
Leadership
Engineering Management
Architecture
  • Experience. 5+ years as a manager and 10+ years of overall industry experience in a variety of contexts, during which you've built scalable, robust, and fault-tolerant systems. You have solid knowledge of the whole web stack: front-end, back-end, databases, cache layer, HTTP protocol, TCP/IP, Linux, CPU architecture, etc. You are comfortable jamming on complex architecture and design principles with senior engineers.
  • Bias for action. You believe that speed and quality aren't mutually exclusive. You've shown good judgment about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way.
  • Mentorship/guidance. You know that the most important part of your job is setting the team up for success. Through mentoring, teaching, and reviewing, you help other engineers make sound architectural decisions, improve their code quality, and get out of their comfort zone.
  • Commitment. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You hold yourself personally accountable, jumping in and taking ownership of problems that might not even be in your team's scope.
  • Hiring know-how. You're a thoughtful interviewer who constantly raises the bar for excellence. You believe that what seems amazing one day becomes the norm the next, and that each new hire should significantly improve the team.
  • Design and business vision. You help your team understand requirements beyond the written word, and you thrive in an environment where you can uncover subtle details. Even in the absence of a PM or a designer, you show great attention to the design and product aspects of anything your team ships.

About Uber

We’re finding better ways for cities to move, work, and thrive. Download the app and get a ride in minutes. Or become a driver and earn money on your schedule.
Founded
2012
Type
Product
Size
500-1000 employees
Stage
Raised funding

Similar jobs

Data Science - Risk

at Rupifi

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Risk Management
Risk analysis
Data Science
Machine Learning (ML)
Python
SQL
Data Visualization
Big Data
Tableau
Data Structures
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹50L / yr

Data Scientist (Risk)/Sr. Data Scientist (Risk)


As a part of the Data Science/Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up, working with passionate, high-performing individuals in a very fast-paced environment.


You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers, and Business Leaders, helping the team make informed, data-driven decisions and deliver high business impact.


Preferred Skills & Responsibilities: 

  1. Analyze data to better understand potential risks, concerns, and outcomes of decisions.
  2. Aggregate data from multiple sources to provide a comprehensive assessment.
  3. Experience working with business users to understand and define inputs for risk models.
  4. Ability to design and implement best-in-class risk models in the banking & fintech domain.
  5. Ability to quickly understand changing market trends and incorporate them into model inputs.
  6. Expertise in statistical analysis and modeling.
  7. Ability to translate complex model outputs into understandable insights for business users.
  8. Collaborate with other team members to effectively analyze and present data.
  9. Conduct research into potential clients and understand the risks of accepting each one.
  10. Monitor internal and external data points that may affect the risk level of a decision.

Tech skills: 

  • Hands-on experience in Python & SQL.
  • Hands-on experience in any visualization tool, preferably Tableau.
  • Hands-on experience in machine learning & deep learning.
  • Experience in handling complex data sources.
  • Experience in modeling techniques in the fintech/banking domain.
  • Experience working on Big Data and distributed computing.
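To illustrate the kind of risk modeling and business-facing translation this role describes, here is a minimal Python sketch; the features, weights, and threshold are hypothetical, not a real underwriting model:

```python
import math

def risk_score(utilization: float, late_payments: int, months_on_book: int) -> float:
    """Toy logistic risk score in [0, 1]; the weights are purely illustrative."""
    z = -2.0 + 3.0 * utilization + 0.8 * late_payments - 0.05 * months_on_book
    return 1.0 / (1.0 + math.exp(-z))

def decision(score: float, threshold: float = 0.5) -> str:
    """Translate the model output into an insight a business user can act on."""
    return "review" if score >= threshold else "approve"

# A borrower with high utilization and two late payments scores as risky.
score = risk_score(utilization=0.9, late_payments=2, months_on_book=6)
print(round(score, 3), decision(score))  # prints: 0.881 review
```

In practice the inputs would be aggregated from multiple data sources and the weights fitted with statistical modeling, but the shape of the work (score, threshold, human-readable decision) is the same.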

Preferred Qualifications: 

  • A BTech/BE/MSc degree in Math, Engineering, Statistics, Economics, ML, Operations Research, or a similar quantitative field.
  • 3 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
  • Strong analytical skills with good problem-solving ability.
  • Strong presentation and communication skills.
  • Experience working on advanced machine learning techniques.
  • Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts.
Job posted by
Richa Tiwari

Data Engineer

at Information Solution Provider Company

Agency job
via Jobdost
Spark
Scala
Hadoop
PySpark
Data engineering
Big Data
Machine Learning (ML)
Delhi
3 - 5 yrs
₹3L - ₹10L / yr

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing, and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution designs to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with the data scientists and business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing 'big data' data/ML pipelines, architectures, and data sets.
  • Knowledge of modeling unstructured data into structured designs.
  • Experience with Big Data access and storage techniques.
  • Experience in cost estimation based on design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, and proactive, with the ability to propose the best design solutions.
  • Good time management and multitasking skills to meet deadlines, working independently and as part of a team.

 

Job posted by
Saida Jabbar

Senior consultant

at An IT Services Major, hiring for a leading insurance player.

Agency job
via Indventur Partner
Big Data
Hadoop
Apache Kafka
Apache Hive
Microsoft Windows Azure
Hbase
Chennai
3 - 5 yrs
₹5L - ₹10L / yr

Client: An IT Services Major, hiring for a leading insurance player.

 

 

Position: SENIOR CONSULTANT

 

Job Description:

 

  • Azure Administrator (Senior Consultant) with HDInsight (Big Data)

 

Skills and Experience

 

  • Microsoft Azure Administrator certification.
  • Big Data project experience on the Azure HDInsight stack, with big data processing frameworks such as Spark, Hadoop, Hive, Kafka, or HBase.
  • Preferred: insurance or BFSI domain experience.
  • 3 to 5 years of experience is required.
Job posted by
Vanshika Kaur

Big Data Engineer

at Netmeds.com

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Scala
Spark
Datawarehousing
Machine Learning (ML)
Deep Learning
SQL
Data modeling
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS
Chennai
2 - 5 yrs
₹6L - ₹25L / yr

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipelines, Big Data analytics, and data warehousing.
  • Experience with SQL/NoSQL, schema design, and dimensional data modeling.
  • Strong understanding of the Hadoop architecture and HDFS ecosystem, and experience with the Big Data technology stack, such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, with expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., complex SQL) for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra/MongoDB.
  • Solid experience in all phases of the software development lifecycle: plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development).
  • Experience building and deploying applications on on-premises and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

Job posted by
Vijay Hemnath

Bigdata Engineer

at BDIPlus

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Big Data
Hadoop
Java
Python
PySpark
Kafka
Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹12L / yr

Roles and responsibilities:

 

  1. Responsible for the development and maintenance of applications built on Enterprise Java and distributed technologies.
  2. Experience in Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana, and Python; experience with machine learning and analytics.
  3. Collaborate with developers, product managers, business analysts, and business users in conceptualizing, estimating, and developing new software applications and enhancements.
  4. Collaborate with the QA team to define test cases and metrics, and resolve questions about test results.
  5. Assist in the design and implementation process for new products; research and create POCs for possible solutions.
  6. Develop components based on business and/or application requirements.
  7. Create unit tests in accordance with team policies & procedures.
  8. Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process.
  9. Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to assessment, analysis, and resolution of cross-functional issues.
Job posted by
Silita S

Senior Systems Engineer – Big Data

at Couture.ai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Big Data
Hadoop
DevOps
Apache Spark
Spark
Shell Scripting
Docker
Kubernetes
Chef
Ambari
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
Skills Requirements

  • Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
  • Expertise with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker.
  • Knowledge of Python would be desirable.
  • Experience with HDP Manager/clients and various dashboards.
  • Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
  • Experience with automation/configuration management using Chef, Ansible, or an equivalent.
  • Strong experience with any Linux distribution.
  • Basic understanding of network technologies, CPU, memory, and storage.
  • Database administration is a plus.

Qualifications and Education Requirements

  • 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components and dashboards running on Big Data technologies such as Hadoop/Spark.
  • Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
Job posted by
Rajesh Kumar

Scala Spark Engineer

at Skanda Projects

Founded 2010  •  Services  •  100-1000 employees  •  Profitable
Scala
Apache Spark
Big Data
Bengaluru (Bangalore)
2 - 8 yrs
₹6L - ₹25L / yr
Preferred Skills:

  • Minimum 3 years of experience in software development.
  • Strong experience in Spark and Scala development.
  • Strong experience with AWS cloud platform services.
  • Good knowledge of and exposure to Amazon EMR and EC2.
  • Good knowledge of databases like DynamoDB and Snowflake.
Job posted by
Nagraj Kumar

Data Engineer

at Product / Internet / Media Companies

Agency job
via archelons
Big Data
Hadoop
Data processing
Python
Data engineering
HDFS
Spark
Data lake
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹30L / yr

REQUIREMENT:

  •  Previous experience working in large-scale data engineering.
  •  4+ years of experience in data engineering and/or backend technologies; cloud experience (any) is mandatory.
  •  Previous experience architecting and designing backends for large-scale data processing.
  •  Familiarity and experience with different technologies related to data engineering: database technologies, Hadoop, Spark, Storm, Hive, etc.
  •  Hands-on, with the ability to contribute key portions of the data engineering backend.
  •  Self-motivated and driven to deliver exceptional results.
  •  Familiarity and experience with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
  •  Familiarity and experience with different DB technologies and how to scale them.

RESPONSIBILITY:

  •  End-to-end responsibility for data engineering architecture, design, development, and implementation.
  •  Build data engineering workflows for large-scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices to the data engineering workflow.
  •  Develop data set processes for data modelling, mining, and production.
  •  Take on additional tech responsibilities to drive initiatives to completion.
  •  Recommend ways to improve data reliability, efficiency, and quality.
  •  Go out of your way to reduce complexity.
  •  Humble and outgoing - an engineering cheerleader.
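The acquisition-refining-storage workflow this role describes can be sketched as a minimal batch pipeline in Python; the stage names, sample records, and validation rule are hypothetical stand-ins for real sources and sinks:

```python
from typing import Iterable

# Acquisition: in practice this would read from Kafka, HDFS, or a database.
def extract() -> list[dict]:
    return [{"id": 1, "value": "42"}, {"id": 2, "value": ""}, {"id": 3, "value": "7"}]

# Refining: drop malformed records and cast types for downstream analysis.
def transform(rows: Iterable[dict]) -> list[dict]:
    return [{"id": r["id"], "value": int(r["value"])} for r in rows if r["value"]]

# Storage: an in-memory sink standing in for a warehouse table.
def load(rows: list[dict], sink: list) -> int:
    sink.extend(rows)
    return len(rows)

sink: list = []
loaded = load(transform(extract()), sink)
print(loaded)  # prints 2: record 2 is dropped by the refining stage
```

A production version would replace each stage with the real technologies named above (Hadoop, Spark, a data lake), but the staged, testable shape of the workflow is the point.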
Job posted by
Meenu Singh

Data Engineer

at Pluto Seven Business Solutions Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
MySQL
Python
Big Data
Google Cloud Storage
API
SQL Query Analyzer
Relational Database (RDBMS)
Agile/Scrum
Bengaluru (Bangalore)
3 - 9 yrs
₹6L - ₹18L / yr
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, serving the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to enable data-driven, insightful decisions.

Must have skills:

  • Hands-on experience in database systems (structured and unstructured).
  • Programming in Python, R, SAS.
  • Knowledge of and exposure to architecting solutions on cloud platforms like GCP, AWS, and Microsoft Azure.
  • Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code.
  • Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures.
  • Optimize data structures for efficient querying of those systems.
  • Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable.
  • Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design.
  • Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities.
  • At least 2 years of experience in building applications, solutions, and products based on analytics.
  • Data extraction, data cleansing, and transformation.
  • Strong knowledge of REST APIs, HTTP servers, and MVC architecture.
  • Knowledge of continuous integration/continuous deployment.

Preferred but not required:

  • Machine learning and deep learning experience.
  • Certification on any cloud platform.
  • Experience with data migration from on-prem to cloud environments.
  • Exceptional analytical, quantitative, problem-solving, and critical thinking skills.
  • Excellent verbal and written communication skills.

Work Location: Bangalore
Job posted by
Sindhu Narayan

Big Data Developer

at MediaMelon Inc

Founded 2008  •  Product  •  20-100 employees  •  Raised funding
Scala
Spark Streaming
Aerospike
Cassandra
Apache Kafka
Big Data
Elastic Search
Bengaluru (Bangalore)
1 - 7 yrs
₹0L / yr
Develop analytic tools, working on Big Data and distributed systems.

  • Provide technical leadership on developing our core analytic platform.
  • Lead development efforts on product features using Scala/Java.
  • Demonstrable excellence in innovation, problem solving, analytical skills, data structures, and design patterns.
  • Expert in building applications using Spark and Spark Streaming.
  • Exposure to NoSQL: HBase/Cassandra, Hive and Pig Latin, Mahout.
  • Extensive experience with Hadoop and machine learning algorithms.
Job posted by
Katreddi Kiran Kumar