Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

About Recko:

Recko was founded in 2017 to organise the world's transactional information and provide intelligent applications that help finance and product teams make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks and financial institutions are finding it difficult to keep track of the money flowing across their systems. We are building products that enable them to handle and monitor massive volumes of transactional data without writing a single line of code, and to ensure the right amounts flow between the right beneficiaries, with the right deductions, at the right time. Over the last few months, we have grown to the point where we process more than 25 million transactions monthly for our customers. Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures.

Traditionally, enterprise software has been built around functionality. We are reimagining enterprise software to be built around the user. We believe software is an extension of one's capability, and it should be delightful and fun to use.

Working at Recko:

We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts and business folks on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 35 members strong, with stellar experience across fintech, e-commerce and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle etc. We are growing aggressively across verticals.
About the Role:

What are we looking for:
- 2+ years of development experience in at least one of MySQL, Oracle, PostgreSQL or MSSQL, and with Big Data frameworks/platforms/data stores like Apache Drill, Arrow, Hadoop, HDFS, Spark, MapR etc.
- Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud
- 2+ years of experience with public cloud services such as AWS, Azure or GCP, and with languages like Java/Python etc.
- 2+ years of development experience in Amazon Redshift, Google BigQuery or Azure data warehouse platforms preferred
- Knowledge of statistical analysis tools like R, SAS etc.
- Familiarity with any data visualization software
- A growth mindset and a passion for building things from the ground up, and most importantly, you should be fun to work with

As a data engineer at Recko, you will:
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional/non-functional business requirements
- Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS big data technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
REQUIREMENT:
- Previous experience working on large-scale data engineering
- 4+ years of experience working in data engineering and/or backend technologies; cloud experience (any) is mandatory
- Previous experience architecting and designing backends for large-scale data processing
- Familiarity and experience with the different technologies related to data engineering: different database technologies, Hadoop, Spark, Storm, Hive etc.
- Hands-on, with the ability to contribute to a key portion of the data engineering backend
- Self-driven and motivated to deliver exceptional results
- Familiarity and experience working with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis
- Familiarity and experience working with different DB technologies and how to scale them

RESPONSIBILITY:
- End-to-end responsibility for the data engineering architecture, design, development and implementation
- Build data engineering workflows for large-scale data processing
- Discover opportunities in data acquisition
- Bring industry best practices to the data engineering workflow
- Develop data set processes for data modelling, mining and production
- Take on additional tech responsibilities to drive initiatives to completion
- Recommend ways to improve data reliability, efficiency and quality
- Go out of your way to reduce complexity
- Be humble and outgoing - an engineering cheerleader