Position: Big Data Engineer

What You'll Do
Punchh is seeking a Big Data Engineer at either a senior or tech lead level. Reporting to the Director of Big Data, you will play a critical role in leading Punchh's big data innovations. Leveraging prior industry experience in big data, you will help create cutting-edge data and analytics products for Punchh's business partners. This role requires close collaboration with the data, engineering, and product organizations. Responsibilities include:
- Work with large data sets and implement sophisticated data pipelines handling both structured and unstructured data.
- Collaborate with stakeholders to design scalable solutions.
- Manage and optimize the internal data pipeline that supports marketing, customer success, and data science, among other functions.
- Serve as a technical leader for Punchh's big data platform, which supports AI and BI products.
- Work with the infrastructure and operations teams to monitor and optimize existing infrastructure.
- Occasional business travel is required.

What You'll Need
- 5+ years of experience as a big data engineering professional, developing scalable big data solutions.
- Advanced degree in computer science, engineering, or a related field.
- Demonstrated strength in data modeling, data warehousing, and SQL.
- Extensive knowledge of cloud technologies, e.g., AWS and Azure.
- Excellent software engineering background and high familiarity with the software development life cycle.
- Familiarity with GitHub and Airflow.
- Advanced knowledge of big data technologies, such as programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
- Strong problem-solving skills with demonstrated rigor in building and maintaining complex data pipelines.
- Exceptional communication skills and the ability to articulate complex concepts with thoughtful, actionable recommendations.
Why are we building Urban Company?
Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) and is expected to double in the next 5 years, there is no billion-dollar company in this segment today. The industry is barely ~20 years old, with the sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying on word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in customer and professional journeys were removed and the experiences made more meaningful and joyful.

The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs and transform service commerce the way Amazon transformed product commerce.

Urban Company has grown 3x year over year, and so has our tech stack. We have evolved a data-driven approach to building the product over the last few years. We deal with around 10TB of data in analytics, at a rate of around 50Mn/day. We adopted platform thinking at a fairly early stage of UC: around 2-3 years ago we started building central platform teams dedicated to solving core engineering problems, and this has since evolved into a full-fledged vertical. Our platform vertical primarily includes Data Engineering, Service and Core Platform, Infrastructure, and Security.
We are looking for Data Engineers: people who love solving standardization problems, have strong platform thinking and opinions, and have built Data Engineering, Data Science, and analytics platforms.

Job Responsibilities
- Take a platform-first approach to engineering problems.
- Create highly autonomous systems with minimal manual intervention.
- Build frameworks that can be extended to larger audiences through open source.
- Extend and modify open-source projects to fit Urban Company's use cases.
- Improve developer productivity.
- Build highly abstracted and standardized frameworks, such as microservices and event-driven architecture.

Job Requirements/Potential Backgrounds
- Bachelor's/master's in computer science from a top-tier engineering school.
- Experience with data pipeline and workflow management tools like Luigi, Airflow, etc.
- Proven ability to work in a fast-paced environment.
- Familiarity with server-side development of APIs, databases, DevOps, and systems.
- Fanatical about building scalable, reliable data products.
- Experience with big data tools (Hadoop, Kafka/Kinesis, Flume, etc.) is an added advantage.
- Experience with relational SQL and NoSQL databases like HBase, Cassandra, etc.
- Experience with stream processing engines like Spark, Flink, Storm, etc. is an added advantage.

What UC has in store for you
- A phenomenal work environment, with massive ownership and growth opportunities.
- A high-performance, high-velocity environment at the cutting edge of growth.
- Strong ownership expectations and the freedom to fail.
- Quick iterations and deployments, with a fail-fast attitude.
- The opportunity to work on cutting-edge technologies.
- The massive, direct impact of the work you do on people's lives.
Dear Candidate,

Greetings of the day! As discussed, please find the job description below.

Job Title: Hadoop Developer
Experience: 3+ years
Job Location: New Delhi
Job Type: Permanent

Knowledge and Skills Required
Brief skills: Hadoop, Spark, Scala, and Spark SQL
Main skills:
- Strong experience in Hadoop development
- Experience in Spark
- Experience in Scala
- Experience in Spark SQL

Why OTSI?
Working with OTSI gives you the assurance of a successful, fast-paced career. Exposure to endless opportunities to learn and grow, familiarization with cutting-edge technologies, cross-domain experience, and a harmonious environment are some of the prime attractions for a career-driven workforce. Join us today, as we assure you 2000+ friends and a great career; happiness begins at a great workplace! Feel free to refer this opportunity to your friends and associates.

About OTSI (CMMI Level 3)
Founded in 1999 and headquartered in Overland Park, Kansas, OTSI offers global reach and local delivery to companies of all sizes, from start-ups to Fortune 500s. Through offices across the US and around the world, we provide universal access to exceptional talent and innovative solutions in a variety of delivery models to reduce overall risk while optimizing outcomes and enabling our customers to thrive in a global economy. OTSI's global presence, scalable and sustainable world-class infrastructure, business continuity processes, and ISO 9001:2000 and CMMI Level 3 certifications make us a preferred service provider for our clients. OTSI has expertise in a range of technologies, enhanced by our partnerships and alliances with industry giants such as HP, Microsoft, IBM, Oracle, and SAP, among others.
A highly reputed local company with a proven record of serving the UAE Government's IT needs is seeking to attract, employ, and develop people with exceptional skills who want to make a difference in a challenging environment.

Object Technology Solutions India Pvt Ltd is a leading global Information Technology (IT) services and solutions company offering a wide array of solutions for a range of key verticals. The company is headquartered in Overland Park, Kansas, and has a strong presence in the US, Europe, and Asia-Pacific, with a Global Delivery Center based in India. OTSI offers a broad range of IT application solutions and services, including e-business solutions, Enterprise Resource Planning (ERP) implementation and post-implementation support, application development, application maintenance, and software customization services.

OTSI Partners & Practices: SAP Partner, Microsoft Silver Partner, Oracle Gold Partner, Microsoft CoE, DevOps Consulting, Cloud, Mobile & IoT, Digital Transformation, Big Data & Analytics, Testing Solutions.

OTSI Honors & Awards: #91 in the Inc. 5000 list of fastest-growing IT companies.
JD: Required Skills
- Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/Spark SQL.
- Data structures and algorithms.
- Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on big data technologies.
- Experience with big data tools, such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases like HBase, etc.
- Experience with Linux OS environments (shell scripting, AWK, SED).
- Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables).
About the job:
- You will work with data scientists to architect, code, and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe, and live Python, sklearn, and pandas. Experience with these is good to have but not a necessity, as long as you're super comfortable in a language of your choice
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- You have strong experience working with production environments
- You write code that is clean, readable, and tested
- Instead of doing something a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (Psql, S3, Hadoop, Hive, Presto, Spark, etc.)
- It will be great if you have one of the following to share: a Kaggle or GitHub profile
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have
- Ideally, you have 2+ years of experience in tech and/or data
- Degree in CS/Maths from a Tier-1 institute
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer.
- Experience with big data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience architecting data ingestion, storage, and consumption models.
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques.
Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become well known in the industry and gain visibility. Host regular sessions on big data-related technologies and get paid to learn.