Key Responsibilities:
- Leverage the batch computation frameworks and our workflow management platform (Airflow) to help build out different data pipelines
- Lower latency and bridge the gap between our production systems and our data warehouse by rethinking and optimizing our core data pipeline jobs
- Work with clients to create and optimize critical batch processing jobs in Spark
- Develop production-grade Scala/Spark and Python/Spark code on Azure Databricks

Skills and Experience:
- Strong engineering background and an interest in data
- Good understanding of data analysis using SQL queries
- Strong command of Python or Scala as a programming language on Azure Databricks
- Experience developing and maintaining distributed systems built with Azure Databricks or native Apache Spark
- Experience building libraries and tooling that provide abstractions for users accessing data
- Experience writing and debugging ETL jobs using a distributed data framework (Spark, Hadoop MapReduce, etc.) on Azure Databricks
- Experience optimizing the end-to-end performance of distributed systems
- Ability to recommend and implement ways to improve data reliability, efficiency, and quality
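The "data analysis using SQL queries" skill above can be illustrated with a small, self-contained sketch. The table and column names below are hypothetical, and SQLite stands in for a warehouse engine such as Databricks SQL:

```python
import sqlite3

# In-memory database standing in for a warehouse table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (city TEXT, fare REAL)")
conn.executemany(
    "INSERT INTO rides VALUES (?, ?)",
    [("Bangalore", 120.0), ("Bangalore", 80.0), ("Pune", 200.0)],
)

# A typical analysis query: average fare per city, highest first.
rows = conn.execute(
    "SELECT city, AVG(fare) AS avg_fare FROM rides "
    "GROUP BY city ORDER BY avg_fare DESC"
).fetchall()
print(rows)  # [('Pune', 200.0), ('Bangalore', 100.0)]
```

The same aggregate-and-order pattern carries over directly to a Spark SQL or warehouse query against real tables.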
About the Role
If you are interested in building large-scale data pipelines that impact how Uber makes decisions about the Rider lifecycle and experience, join the Rider Data Platform team. Uber collects petabyte-scale analytics data from its different ride-booking apps. Help us build the software systems and data models that enable data scientists to reason about user behavior and build models for consumption by different rider-facing program teams.

What You'll Do
- Identify unified data models in collaboration with Data Science teams
- Streamline processing of the original event sources and consolidate them into source-of-truth event logs
- Build and maintain real-time/batch data pipelines that consolidate and clean up usage analytics
- Build systems that monitor data losses from the mobile sources
- Devise strategies to consolidate and compensate for data losses by correlating different sources
- Solve challenging data problems with cutting-edge design and algorithms

What You'll Need
- 4+ years of experience in a competitive engineering environment
- Design: Knowledge of data structures and an eye for design. You can discuss the trade-offs between design choices, both on a theoretical level and on an applied level.
- Strong coding/debugging abilities: You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, and Scala.
- Big data: Experience with distributed systems such as Hadoop, Hive, Spark, and Kafka is preferred.
- Data pipeline: Strong understanding of SQL and databases. Experience in building data pipelines is a great plus. You love getting your hands dirty with the data, implementing custom ETLs to shape it into information.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word.
Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.

About the Team
The Rider Data Platform team is a relatively new team tasked with shaping the future architecture of Uber's Rider Data Stack. We are a bunch of engineers passionate about helping Uber grow by focusing our energy on building the next-gen data platform to provide insights into global Rider data in the most optimal manner. This work will be instrumental in identifying gaps in the current implementation as well as formulating the key strategies for the overall Rider experience.

Uber
At Uber, we ignite opportunity by setting the world in motion. We take on big problems to help drivers, riders, delivery partners, and eaters get moving in more than 600 cities around the world. We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.
Key Responsibilities
● We believe that the role of an engineer at a typical product company in India has to evolve from just working in a request-response mode to something more involved.
● Typically an engineer has little to no connection with the product, its users, the overall success criteria, or the long-term vision of the product that he/she is working on.
● The system is not set up to encourage it. Engineers are evaluated on their tech prowess, and very little attention is given to other aspects of being a successful engineer.
● We don't hold appraisals because we believe that evaluation of work and feedback should be a constant affair rather than something that happens every 6 or 12 months. Besides, there is no better testament to your abilities than the growth of the product.
● We don't have a concept of hierarchy, and hence we don't have promotions. All we have at Udaan are Software Engineers.

Skills & Knowledge:
○ 4-15 years of experience
○ Sound knowledge of programming
○ High ownership and impact orientation
○ Creative thinking and implementation
○ Highly customer-obsessed; always insisting on the highest standards
Position 1: Azure Data Engineer
Exp – 3 to 5 years
Must-have tech stack – Azure Platform + Spark + Scala
Nice to have – Power BI, Azure Certification
Budget – 14 LPA
Notice period – within 30 days only

Position 2: Lead Azure Data Engineer
Exp – 6 to 8.5 years (can go up to 9.5 if it falls within the budget)
Must-have tech stack – Azure Platform + Spark + Scala + Power BI
Other experience – Senior Data Engineer (1+ years) / Lead Engineer / Architect
Nice to have – Azure Certification
Budget – 20 LPA
Notice period – within 30 days only
About the Role
We are looking for a Data Engineer to help us scale the existing data infrastructure and, in parallel, work on building the next-generation data platform for analytics at scale, machine learning infrastructure, and data validation systems. In this role, you will be responsible for communicating effectively with data consumers to fine-tune data platform systems (existing or new), taking ownership of and delivering high-performing systems and data pipelines, and helping the team scale them up to handle ever-growing traffic. This is a growing team, which makes for many opportunities to be involved directly with product management, development, sales, and support teams. Everybody on the team is passionate about their work, and we're looking for similarly motivated "get stuff done" kind of people to join us!

Roles & Responsibilities
- Engineer data pipelines (batch and real-time) that aid in the creation of data-driven products for our platform
- Design, develop, and maintain a robust and scalable data warehouse and data lake
- Work closely alongside product managers and data scientists to bring the various datasets together and cater to our business intelligence and analytics use cases
- Design and develop solutions using data science techniques ranging from statistics and algorithms to machine learning
- Perform hands-on DevOps work to keep the data platform secure and reliable

Skills Required
- Bachelor's degree in Computer Science, Information Systems, or a related engineering discipline
- 6+ years' experience with ETL, data mining, data modeling, and working with large-scale datasets
- 6+ years' experience with an object-oriented programming language such as Python, Scala, Java, etc.
- Extremely proficient in writing performant SQL against large data volumes
- Experience with MapReduce, Spark, Kafka, Presto, and the surrounding ecosystem
- Experience in building automated analytical systems utilizing large datasets
- Experience with designing, scaling, and optimizing cloud-based data warehouses (like AWS Redshift) and data lakes
- Familiarity with AWS technologies preferred

Qualification – B.Tech/M.Tech/MCA (IT/Computer Science)
Years of Exp – 6-9
- Spark/Scala experience should be more than 2 years. A combination of Java and Scala is fine, and we are even fine with a Big Data Developer with strong Core Java concepts.
- Scala/Spark Developer: strong proficiency in Scala on Spark (Hadoop); Scala + Java is also preferred
- Complete SDLC process and Agile methodology (Scrum)
- Version control / Git
· Advanced Spark programming skills
· Advanced Python skills
· Data engineering ETL and ELT skills
· Expertise in streaming data
· Experience in the Hadoop ecosystem
· Basic understanding of cloud platforms
· Technical design skills, alternative approaches
· Hands-on expertise in writing UDFs
· Hands-on expertise in streaming data ingestion
· Able to independently tune Spark scripts
· Advanced debugging skills and large-volume data handling
· Independently break down and plan technical tasks
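The streaming-aggregation skills listed above can be sketched in plain Python. This is a minimal tumbling-window count, not Spark Structured Streaming itself, and the (timestamp, key) event format is an assumption for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per key in fixed (tumbling) windows.

    `events` is an iterable of (timestamp_secs, key) pairs, the shape a
    Kafka consumer might yield after deserialization (assumed format).
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event's timestamp to the start of its window.
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, window_secs=10))
# {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

In a real Spark job the same grouping is expressed declaratively (e.g. a window over an event-time column), with the engine handling late data and state.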
3. Key Result Areas
· Create and maintain optimal data pipelines
· Assemble large, complex data sets that meet functional/non-functional business requirements
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Keep our data separated and secure
· Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
· Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics
· Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
· Work with data and analytics experts to strive for greater functionality in our data systems

4. Knowledge, Skills and Experience
Core Skills: We are looking for a candidate with 7+ years of experience in a Data Engineer role who also has experience with the following software/tools:
· Experience in developing Big Data applications using Spark, Hive, Sqoop, Kafka, and MapReduce
· Experience with stream-processing systems: Spark Streaming, Storm, etc.
· Experience with object-oriented/functional scripting languages: Python, Scala, etc.
· Experience in designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
· Proficiency in writing advanced SQL and expertise in SQL performance tuning; experience with data science and machine learning tools and technologies is a plus
· Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
· Experience with Azure cloud services is a plus
· Financial services knowledge is a plus
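The dimensional-modelling skill listed above can be shown with a toy star schema: a fact table of measures joined to a dimension table of descriptive attributes. Table and column names are hypothetical, and SQLite stands in for the warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Dimension table: descriptive attributes keyed by a surrogate key.
conn.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: measures plus foreign keys into the dimensions.
conn.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "books"), (2, "toys")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 10.0), (1, 5.0), (2, 7.5)])

# Analytical queries join facts to dimensions and aggregate the measures.
rows = conn.execute(
    "SELECT d.category, SUM(f.amount) FROM fact_sales f "
    "JOIN dim_product d USING (product_id) "
    "GROUP BY d.category ORDER BY d.category"
).fetchall()
print(rows)  # [('books', 15.0), ('toys', 7.5)]
```

Keeping measures in narrow fact tables and attributes in small dimensions is what makes such models accessible and efficient for BI queries.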
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
- Working with Big Data tools and frameworks to provide requested capabilities
- Identifying development needs in order to improve and streamline operations
- Developing and managing BI solutions
- Implementing ETL processes and data warehousing
- Monitoring performance and managing infrastructure

Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of the data querying tools SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
- Excellent written and verbal communication skills
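The ETL responsibility above reduces, at its smallest, to an extract-transform-load loop. The sketch below uses an inline CSV and SQLite purely for illustration; the source format and target schema are assumptions:

```python
import csv
import io
import sqlite3

# Extract: read raw records (an inline CSV stands in for a source system).
raw = "user,amount\nalice,10\nbob,oops\nalice,5\n"
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: validate and cast, dropping rows that fail (a common cleansing step).
clean = []
for r in records:
    try:
        clean.append((r["user"], float(r["amount"])))
    except ValueError:
        pass  # in a real pipeline, route bad rows to a dead-letter store instead

# Load: write the cleansed rows into the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user TEXT, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 15.0
```

Frameworks like Spark or Airflow scale out and schedule these same three stages; the shape of the logic stays the same.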
As an engineer you will be responsible for:
• Ownership of a product/feature end-to-end for all phases from development to production.
• Ensuring the developed features are scalable and highly available with no quality concerns.
• Working closely with senior engineers to refine the design and implementation.
• Management and execution against project plans and delivery commitments.
• Assisting directly and indirectly in the continual hiring and development of technical talent.
• Creating and executing appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts.
• Contributing intellectual property through patents.

The ideal candidate is an engineer passionate about delivering experiences that delight customers and creating solutions that are robust. He/she should be able to commit to and own the deliveries end-to-end.

BASIC QUALIFICATIONS
• A Bachelor's degree in Computer Science or a related technical discipline.
• 3+ years of software development experience.
• Strong knowledge of data structures, algorithms, and CS fundamentals.
• Strong coding and problem-solving skills and design (HLD and LLD).

PREFERRED QUALIFICATIONS
• Experience working with service-oriented architectures and web-based solutions.
• Experience in eCommerce and deep hands-on technical expertise.
• Experience with NoSQL and relational databases.
Skill set we are looking for:
- 3-5 years of professional experience in a data engineering role
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Proficient understanding of Java / Scala
- Proficient understanding of distributed computing principles (Hadoop v2, MapReduce, HDFS)
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark, Flink, Kafka Streams
- Experience with NoSQL databases, such as Aerospike and HBase
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of the Lambda Architecture, along with its advantages and drawbacks
- Hands-on experience with AWS Big Data technologies, such as EMR, Redshift, and Elasticsearch

Would be great if you have knowledge of or a strong interest in the following areas:
- Advertising platforms
- SCRUM / Agile software development

Your role will entail:
- Implementing ETL processes and selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Influencing key decisions on the architecture and implementation of scalable data processing and analytics infrastructure
- Working with the Data Science team to bring machine learning models into production
- Building Hadoop MapReduce and Spark processing pipelines using Java and Python
- Building REST APIs for data access by systems across our infrastructure
- Focusing on performance, throughput, and latency, and driving these throughout our architecture
- Writing test automation, conducting code reviews, and taking end-to-end ownership of deployments to production

To learn more visit: www.lifesight.io
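The Lambda Architecture mentioned above serves queries by combining a complete but slow batch view with a fast speed-layer view covering events the batch has not yet absorbed. A minimal sketch, with hypothetical counters:

```python
def serve_count(key, batch_view, speed_view):
    """Merge the two layers at query time: the batch view covers everything
    up to the last batch recompute; the speed layer covers events since then."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

# Batch layer: recomputed periodically from the immutable master dataset
# (e.g. a Spark job over HDFS/S3). Values here are illustrative.
batch_view = {"clicks": 1000, "views": 4000}

# Speed layer: incremental counts from a stream processor (e.g. Spark Streaming).
speed_view = {"clicks": 42}

print(serve_count("clicks", batch_view, speed_view))  # 1042
print(serve_count("views", batch_view, speed_view))   # 4000
```

The advantage is that batch recomputation can correct any speed-layer approximation; the drawback, often cited against Lambda, is maintaining the same logic in two systems.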
Job Description
Be a part of the team that develops and maintains the analytics and data science platform. Perform a functional, technical, and architectural role and play a key part in evaluating and improving data engineering, data warehouse design, and BI systems. Develop technical architecture designs which support a robust solution, lead full-lifecycle availability of real-time Business Intelligence (BI), and enable the data scientists.

Responsibilities
● Construct, test, and maintain data infrastructure and data pipelines to meet business requirements
● Develop process workflows for data preparation, modelling, and mining
● Manage configurations to build reliable datasets for analysis
● Troubleshoot services, system bottlenecks, and application integration
● Design, integrate, and document technical components and dependencies of the big data platform
● Ensure best practices that can be adopted in the Big Data stack and shared across teams
● Work hand in hand with application developers and data scientists to help build software that scales in terms of performance and stability

Skills
● 3+ years of experience managing large-scale data infrastructure and building data pipelines/data products
● Proficient in any data engineering technologies; proficiency in AWS data engineering technologies is a plus
● Languages – Python, Scala, or Go
● Experience working with real-time streaming systems; experience handling millions of events per day; experience developing and deploying data models on the cloud
● Bachelors/Masters in Computer Science or equivalent experience; ability to learn and use skills in new technologies
Who we are?
Searce is a Cloud, Automation & Analytics-led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe?
Best practices are overrated: implementing best practices can only make one 'average'.
Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work?
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don't carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty, and we accept changes well, as that's what life's about.
Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have the time or adequate space for them.
Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity and making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction
When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let's talk.

We are quite keen to meet you if:
You eat, dream, sleep, and play with cloud data stores and engineering your processes on cloud architecture.
You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
You like experimenting, taking risks, and thinking big.

3 things this position is NOT about:
This is NOT just a job; this is a passionate hobby for the right kind.
This is NOT a boxed position. You will code, clean, test, build, and recruit, and you will feel that this is not really 'work'.
This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
Attention to detail matters.
Roles, titles, and ego do not matter; getting things done matters; getting things done quicker and better matters the most.
Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars?

Roles and Responsibilities
Drive and define database design and development of real-time, complex products.
Strive for excellence in customer experience, technology, methodology, and execution.
Define and own end-to-end architecture from the definition phase to the go-live phase.
Define reusable components/frameworks, common schemas, standards, and tools to be used, and help bootstrap the engineering team.
Performance tuning of application and database, and code optimizations.
Define database strategy, database design & development standards and SDLC, database customization & extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies.
Architect and develop database schemas, indexing strategies, views, and stored procedures for Cloud applications.
Assist in defining the scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development.
Contribute to innovation and continuous enhancement of the platform.
Define and implement a strategy for data services to be used by Cloud and web-based applications.
Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements.
Design robust cloud management implementations, including orchestration and catalog capabilities.
Architect and design distributed data processing solutions using big data technologies (an added advantage).
Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers.

Desired Skills
Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for DW platforms.
Hands-on experience in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus).
Knowledge of other transactional Database Management Systems/open database systems and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus.
Good knowledge of data management principles like Data Architecture, Data Governance, Very Large Database (VLDB) Design, Distributed Database Design, Data Replication, and High Availability.
Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems.
Solid knowledge of any one of the industry-leading RDBMSs like Oracle, SQL Server, DB2, MySQL, etc.
Expertise in providing data architecture solutions and recommendations that are technology-neutral.
Experience in architecture consulting engagements is a plus.
Deep understanding of technical and functional designs for Databases, Data Warehousing, Reporting, and Data Mining areas.

Education & Experience
Bachelors in Engineering or Computer Science (preferably from a premier school) or an advanced degree in Engineering, Mathematics, Computer Science, or Information Technology. A highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees! More so if you have been a techie since the age of 12.
2-5 years of experience in database design & development.
0+ years of experience with AWS, Google Cloud Platform, or Hadoop.
Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.
Greetings! Samsung R&D Institute India-Bangalore (SRI-B) is hiring experienced software professionals. Details are as below:

Samsung R&D Institute India-Bangalore (SRI-B) is the largest R&D center outside of South Korea and a key innovation hub in the Samsung group. With the best of talent from India and overseas, our focus is on creating cutting-edge technologies across multiple areas of Samsung's business that transform the experiences of users both globally and in local markets.

Current Opportunities: Qualified engineers will be hired for roles in Artificial Intelligence, Big Data, Machine Learning, Data Science, Analytics, Enterprise & IoT Solutions, Wearable Computing, Multimedia Systems, 3GPP, 4G/5G, Network, Modem, Protocols, RTL, PHY, Android/Tizen Platforms, Healthcare/Medical Solutions, Natural Language Processing, Computer Vision, Image Processing, and Computer Architecture.

EDUCATION – Minimum 60% in BE, B.Tech, ME, M.Tech, PhD, or MCA
WORK EXPERIENCE – Minimum 1 year
PROGRAMMING SKILLS – Any of the following: C, C++, Java, Python, JavaScript, JSON, XML, jQuery, Spring, Struts, Hibernate, iBatis, Node.js, Memcache/Redis, Cassandra/HBase, MongoDB/CouchDB, MapReduce, Hadoop, Spark, Hive, Mahout, Fast Data Processing (Storm), Rules Engine (Drools)
GENERAL – Strong problem-solving, analytical, and troubleshooting skills. Good understanding of algorithms, data structures, and performance optimization techniques. Hands-on with design, coding, debugging, and testing. Excellent communication and interpersonal skills; a team player.

PS: Please do share this opportunity with your colleagues and friends.