About BlackHawk Network: Blackhawk Network is building a digital platform and products that bring people and brands together. We facilitate cross-channel payments via cash-in, cash-out and mobile payments. By leveraging blockchain, smart contracts, serverless technology and real-time payment systems, we are unlocking the next million users through innovation. Our employees are our biggest asset! Come find out how we engage with the biggest brands in the world. We look for people who collaborate, who inspire, and whose passion makes a difference by working as a team while striving for global excellence. You can expect a strong investment in your professional growth and a dedication to crafting a successful, sustainable career for you. Our teams are composed of highly talented and passionate 'A' players who are also invested in mentoring others and bringing out their best qualities. Our vibrant culture and high expectations will kindle your passion and bring out the best in you! As a leader in branded payments, we are building a strong, diverse team and expanding in Asia Pacific: we are hiring in Bengaluru, India! This is an amazing opportunity for problem solvers who want to be part of an innovative and creative engineering team that values your contribution to the company. If this role has your name written all over it, please apply now with a resume so that we can explore further and get connected. If you enjoy building world-class payment applications, are highly passionate about pushing the boundaries of scale and availability in the cloud, leveraging next-horizon technologies, rapidly delivering features to production, making data-driven decisions on product development, and collaborating and innovating with like-minded experts, then this is your ideal job. Blackhawk is seeking passionate backend engineers at all levels to build our next generation of payment systems on public cloud infrastructure.
Our team enjoys working together to contribute to meaningful work seen by millions of merchants worldwide. As a Senior SDET, you will work closely with data engineers to automate developed features and manually test new data ETL jobs, data pipelines and reports. You will own the complete architecture of the automation framework, and will plan and design automation for data ingestion, transformation and reporting/visualization. You will build high-quality automation frameworks covering end-to-end testing of the data platforms, ensure test data setup, and pre-empt post-production issues through high-quality testing in the lower environments. You will get the opportunity to contribute at all levels of the test pyramid, and will also work with customer success and product teams to replicate post-production release issues.

Key Qualifications
· Bachelor's degree in Computer Science, Engineering or a related field
· 5+ years of experience testing data ingestion, visualization and information delivery systems
· Real passion for data quality, reconciliation and uncovering hard-to-find scenarios and bugs
· Proficiency in at least one programming language (preferably Python or Java)
· Expertise in end-to-end testing and data validation for ETL (e.g., DataStage, Matillion) and BI platforms (e.g., MicroStrategy, Power BI)
· Experience with big data technologies such as Hadoop and MapReduce is desirable
· Excellent analytical, problem-solving and communication skills; self-motivated, results-oriented and deadline-driven
· Experience with databases, data visualization and dashboarding tools is desirable
· Experience with Amazon Web Services (AWS) and Redshift is desirable
· Excellent knowledge of the software development lifecycle, testing methodologies, QA terminology, processes, and tools
· Experience with automation frameworks and tools such as TestNG, JUnit and Selenium
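A typical reconciliation check of the kind this role describes, comparing row counts and an order-independent checksum between a source extract and its loaded target, can be sketched in plain Python (the table data and column names here are invented for illustration):

```python
import hashlib

def table_fingerprint(rows, key_columns):
    """Row count plus an order-independent checksum over selected columns."""
    digest = 0
    for row in rows:
        payload = "|".join(str(row[c]) for c in key_columns)
        # XOR of per-row hashes makes the checksum insensitive to row order
        digest ^= int(hashlib.sha256(payload.encode()).hexdigest(), 16)
    return len(rows), digest

def reconcile(source_rows, target_rows, key_columns):
    """Return (passed, message) comparing counts and checksums."""
    src_count, src_sum = table_fingerprint(source_rows, key_columns)
    tgt_count, tgt_sum = table_fingerprint(target_rows, key_columns)
    if src_count != tgt_count:
        return False, f"row count mismatch: {src_count} vs {tgt_count}"
    if src_sum != tgt_sum:
        return False, "checksum mismatch: same counts but different content"
    return True, "source and target reconcile"

# Hypothetical source extract and loaded target (load order differs)
source = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
target = [{"id": 2, "amount": 250}, {"id": 1, "amount": 100}]
print(reconcile(source, target, key_columns=["id", "amount"]))
```

In a real framework this check would run as a test case (e.g., under TestNG or a Python test runner) against the actual source system and warehouse rather than in-memory rows.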
About the Role If you are interested in building large-scale data pipelines that impact how Uber makes decisions about the rider lifecycle and experience, join the Rider Data Platform team. Uber collects petabyte-scale analytics data from its different ride-booking apps. Help us build the software systems and data models that enable data scientists to reason about user behavior and build models for consumption by the different rider-facing program teams.

What You'll Do
Identify unified data models in collaboration with Data Science teams
Streamline processing of the original event sources and consolidate them into source-of-truth event logs
Build and maintain real-time and batch data pipelines that consolidate and clean up usage analytics
Build systems that monitor data losses from the mobile sources
Devise strategies to compensate for data losses by correlating different sources
Solve challenging data problems with cutting-edge designs and algorithms

What You'll Need
4+ years of experience in a competitive engineering environment
Design: knowledge of data structures and an eye for design; you can discuss the trade-offs between design choices on both a theoretical and an applied level
Strong coding/debugging abilities: you have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, and Scala.
Big data: experience with distributed systems such as Hadoop, Hive, Spark and Kafka is preferred
Data pipeline: a strong understanding of SQL and databases; experience building data pipelines is a great plus. You love getting your hands dirty with data, implementing custom ETLs to shape it into information.
A team player: you believe a team can achieve more, that the whole is greater than the sum of its parts, and you rely on others' candid feedback for continuous improvement
Business acumen: you understand requirements beyond the written word
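One way to picture the "consolidate into source-of-truth event logs" and "monitor data losses" responsibilities above is a small deduplicating merge plus a loss check; a minimal, framework-free sketch (the event shape and field names are invented):

```python
def consolidate(sources):
    """Merge event streams from several sources into one deduplicated,
    time-ordered log, keyed by event_id (last write wins)."""
    merged = {}
    for source in sources:
        for event in source:
            merged[event["event_id"]] = event
    return sorted(merged.values(), key=lambda e: e["ts"])

def missing_events(expected_ids, log):
    """Report ids that were expected (e.g., from a client-side counter)
    but never arrived; a crude data-loss monitor."""
    seen = {e["event_id"] for e in log}
    return sorted(expected_ids - seen)

# Two overlapping sources of the same rider events
app_log = [{"event_id": "a", "ts": 1}, {"event_id": "b", "ts": 2}]
gateway_log = [{"event_id": "b", "ts": 2}, {"event_id": "c", "ts": 3}]
log = consolidate([app_log, gateway_log])
print([e["event_id"] for e in log])               # deduplicated, ordered
print(missing_events({"a", "b", "c", "d"}, log))  # "d" never arrived
```

At Uber's scale the same merge-and-audit logic would run inside a distributed engine such as Spark, but the shape of the computation is the same.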
Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience. About the Team The Rider Data Platform team is a relatively new team tasked with shaping the future architecture of Uber's Rider data stack. We are a bunch of engineers passionate about helping Uber grow by focusing our energy on building the next-generation data platform that provides insights into global Rider data in the most optimal manner. This will be instrumental in identifying gaps in the current implementation as well as in formulating the key strategies for the overall Rider experience. Uber At Uber, we ignite opportunity by setting the world in motion. We take on big problems to help drivers, riders, delivery partners, and eaters get moving in more than 600 cities around the world. We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.
REQUIREMENT:
· 4+ years of experience working in data engineering and/or backend technologies; cloud experience (any provider) is mandatory
· Previous experience working in large-scale data engineering
· Previous experience architecting and designing backends for large-scale data processing
· Familiarity and experience with the different technologies related to data engineering: various database technologies, Hadoop, Spark, Storm, Hive, etc.
· Hands-on, with the ability to contribute a key portion of the data engineering backend
· Self-driven and motivated to deliver exceptional results
· Familiarity and experience with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis
· Familiarity and experience with different DB technologies and how to scale them
RESPONSIBILITY:
· End-to-end responsibility for the data engineering architecture, design, development, and implementation
· Build data engineering workflows for large-scale data processing
· Discover opportunities in data acquisition
· Bring industry best practices to the data engineering workflow
· Develop data set processes for data modelling, mining, and production
· Take on additional technical responsibilities to drive initiatives to completion
· Recommend ways to improve data reliability, efficiency and quality
· Go out of your way to reduce complexity
· Humble and outgoing: engineering cheerleaders
About the Role We are looking for a Data Engineer to help us scale the existing data infrastructure and, in parallel, work on building the next-generation data platform for analytics at scale, machine learning infrastructure, and data validation systems. In this role, you will be responsible for communicating effectively with data consumers to fine-tune data platform systems (existing or new), taking ownership of and delivering high-performing systems and data pipelines, and helping the team scale them up to handle ever-growing traffic. This is a growing team, which makes for many opportunities to be involved directly with product management, development, sales, and support teams. Everybody on the team is passionate about their work and we're looking for similarly motivated "get stuff done" kind of people to join us!

Roles & Responsibilities
· Engineer data pipelines (batch and real-time) that aid in the creation of data-driven products for our platform
· Design, develop and maintain a robust and scalable data warehouse and data lake
· Work closely alongside product managers and data scientists to bring the various datasets together and cater to our business intelligence and analytics use cases
· Design and develop solutions using data science techniques ranging from statistics and algorithms to machine learning
· Perform hands-on DevOps work to keep the data platform secure and reliable

Skills Required
· Bachelor's degree in Computer Science, Information Systems, or a related engineering discipline
· 6+ years' experience with ETL, data mining, data modeling, and working with large-scale datasets
· 6+ years' experience with an object-oriented programming language such as Python, Scala, or Java
· Extremely proficient in writing performant SQL against large data volumes
· Experience with MapReduce, Spark, Kafka, Presto, and the surrounding ecosystem
· Experience in building automated analytical systems utilizing large data sets
· Experience designing, scaling and optimizing cloud-based data warehouses (like AWS Redshift) and data lakes
· Familiarity with AWS technologies preferred
Qualification: B.Tech/M.Tech/MCA (IT/Computer Science)
Years of Experience: 6-9
Scala / Spark Developer
· More than 2 years of Spark/Scala experience; a combination of Java and Scala is fine, and we are also open to a Big Data developer with strong core Java concepts
· Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred
· Complete SDLC process and Agile methodology (Scrum)
· Version control / Git
A data engineer with AWS Cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service. The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and a good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB) and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment; familiar with penetration testing and scan tools for remediating security vulnerabilities
12. Demonstrated success in learning new technologies quickly

WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/runbooks for the operational and security aspects of the AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve a balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating training and onboarding programs for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to the Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Review issues and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil ad-hoc data or report requests from different functional groups
Key Result Areas
· Create and maintain optimal data pipelines
· Assemble large, complex data sets that meet functional and non-functional business requirements
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Keep our data separated and secure
· Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
· Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics
· Work with stakeholders, including the executive, product, data and design teams, to assist with data-related technical issues and support their data infrastructure needs
· Work with data and analytics experts to strive for greater functionality in our data systems

Knowledge, Skills and Experience
Core Skills: We are looking for a candidate with 5+ years of experience in a Data Engineer role, with experience using the following software/tools:
· Developing big data applications using Spark, Hive, Sqoop, Kafka, and MapReduce
· Stream-processing systems: Spark Streaming, Storm, etc.
· Object-oriented/functional scripting languages: Python, Scala, etc.
· Designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
· Proficiency in writing advanced SQL, with expertise in SQL performance tuning
· Experience with data science and machine learning tools and technologies is a plus
· Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
· Experience with Azure cloud services is a plus
· Financial services knowledge is a plus
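As a flavor of the dimensional-modelling and SQL skills listed above, here is a minimal star schema in SQLite, with a fact table joined to a dimension for a rollup (the schema and figures are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny star schema: one dimension table, one fact table
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 20.0)])

# Dimensional rollup: revenue by category
cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""")
print(cur.fetchall())  # [('books', 15.0), ('games', 20.0)]
```

The same fact/dimension split is what keeps analytical queries fast and readable in a production warehouse; only the engine (Redshift, Hive, etc.) changes.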
Workdays: Monday to Friday
Shift: Day
Work location: Koramangala, Bangalore
Job Description:
• Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, Oozie, YARN, Spark, Kafka, AWS Cloud, Sqoop, Zookeeper
• BS/MS in computer science or equivalent work experience
• Cloud experience; Google Cloud or AWS is preferable
• Experience with the software design/architecture process, unit testing and test-driven development
• Experience with object-oriented languages (OOD): Java/J2EE, Python
• Expertise with the entire Software Development Life Cycle (SDLC)
• Excellent oral and written communication skills, presentation skills, and the ability to self-prioritize tasks
· Advanced Spark programming skills
· Advanced Python skills
· Data engineering ETL and ELT skills
· Expertise in streaming data
· Experience with the Hadoop ecosystem
· Basic understanding of cloud platforms
· Technical design skills, including weighing alternative approaches
· Hands-on expertise in writing UDFs
· Hands-on expertise in streaming data ingestion
· Able to independently tune Spark scripts
· Advanced debugging skills and large-volume data handling
· Able to independently break down and plan technical tasks
Hiring for a funded fintech startup based out of Bangalore! Our Ideal Candidate We are looking for a Senior DevOps engineer to join the engineering team and help us automate the build, release, packaging, infrastructure provisioning and support processes. The candidate is expected to own the full life cycle of provisioning, configuration management, monitoring, maintenance and support for cloud as well as on-premise deployments.

Requirements
· 5+ years of DevOps experience managing the big data application stack, including HDFS, YARN, Spark, Hive and HBase
· Deep understanding of all the configurations required for installing and maintaining the infrastructure in the long run
· Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, and handling data recovery tasks
· Experience with middle-layer technologies, including web servers (httpd, nginx), application servers (JBoss, Tomcat) and database systems (Postgres, MySQL)
· Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
· Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
· Experience supporting on-premise solutions as well as solutions on the AWS cloud
· Experience working with and supporting Spark-based applications on YARN
· Experience with one or more automation tools such as Ansible, Terraform, etc.
· Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
· Experience defining and automating the build, versioning and release processes for complex enterprise products
· Experience supporting clients remotely and on-site
· Experience working with and supporting Java- and Python-based tech stacks would be a plus

Desired Non-technical Requirements
· Very strong communication skills, both written and verbal
· Strong desire to work with start-ups
· Must be a team player

Job Perks
· Attractive variable compensation package
· Flexible working hours: everything is results-oriented
· Opportunity to work with an award-winning organization in the hottest space in tech: artificial intelligence and advanced machine learning
Data Engineering role at ThoughtWorks ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They've also been writing books, speaking at conferences, and helping push software development forward -- changing companies and even industries along the way. As Consultants, we work with our clients to ensure we're delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for -
Creating complex data processing pipelines as part of diverse, high-energy teams
Designing scalable implementations of the models developed by our Data Scientists
Hands-on programming based on TDD, usually in a pair programming environment
Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have -
2-6 years of overall industry experience
A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
Strong domain modelling and coding experience in Java / Scala / Python
Experience building data pipelines and data-centric applications using distributed storage platforms like HDFS, S3 and NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc. in a production setting
Hands-on experience with at least one of MapR, Cloudera, Hortonworks and/or cloud offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI) and Agile development
Strong communication skills, with the ability to work in a consulting environment, are essential
And here are some of the perks of being part of a unique organization like ThoughtWorks: A real commitment to "changing the face of IT" -- our way of thinking about diversity and inclusion. Over the past ten years, we've implemented a lot of initiatives to make ThoughtWorks a place that reflects the world around us, and to make this a welcoming home to technologists of all stripes. We're not perfect, but we're actively working towards true gender balance for our business and our industry, and you'll see that diversity reflected on our project teams and in our offices. Continuous learning. You'll be constantly exposed to new languages, frameworks and ideas from your peers and as you work on different projects -- challenging you to stay at the top of your game. Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you'll find a community of passionate technologists eager to back your endeavors, whatever they may be. You'll also receive financial support to attend conferences every year. An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you'll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities. If you relish the idea of being part of ThoughtWorks' Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you.
If you share our passion for technology and want to help change the world with software, we want to hear from you!
Job Description We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
Working with big data tools and frameworks to provide requested capabilities
Identifying development needs in order to improve and streamline operations
Developing and managing BI solutions
Implementing ETL processes and data warehousing
Monitoring performance and managing infrastructure

Skills
Proficient understanding of distributed computing principles
Proficiency with Hadoop and Spark
Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
Good knowledge of the data querying tools SQL and Hive
Knowledge of various ETL techniques and frameworks
Experience with Python/Java/Scala (at least one)
Experience with cloud services such as AWS or GCP
Experience with NoSQL databases such as DynamoDB or MongoDB will be an advantage
Excellent written and verbal communication skills
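The stream-processing requirement above (Kafka feeding a Spark Streaming style job) boils down to windowed aggregation over an unbounded event feed. A toy, dependency-free sketch of a tumbling window over a simulated feed (the event shape is invented):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping time
    windows and count occurrences per key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Simulated (timestamp, event_type) feed, as might arrive from Kafka
feed = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(feed, window_seconds=5))
# {0: {'click': 2}, 5: {'view': 1}, 10: {'click': 1}}
```

Real engines add partitioning, fault tolerance and late-data handling on top, but the per-window grouping is the core computation.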
Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig. To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They look to build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with the business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. Demonstrated ability to perform high-quality work, with innovation, both independently and collaboratively.
Who we are? Searce is a Cloud, Automation & Analytics led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM. What we believe? Best practices are overrated: implementing best practices can only make one 'average'. Honesty and Transparency: we believe in naked truth. We do what we tell and tell what we do. Client Partnership: a client-vendor relationship? No. We partner with clients instead, and our sales team is 100% our clients. How we work? It's all about being happier first, and the rest follows. Searce's work culture is defined by HAPPIER. Humble: happy people don't carry ego around. We listen to understand, not to respond. Adaptable: we are comfortable with uncertainty, and we accept changes well, as that's what life's about. Positive: we are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have time or adequate space for them. Passionate: we are as passionate about the great vada-pao vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver. Innovative: innovate or die. We love to challenge the status quo. Experimental: we encourage curiosity and making mistakes. Responsible: driven, self-motivated, self-governing teams. We own it. We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let's talk. We are quite keen to meet you if: You eat, dream, sleep and play with cloud data stores and engineering your processes on cloud architecture. You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people. You like experimenting, taking risks and thinking big. 3 things this position is NOT about: This is NOT just a job; this is a passionate hobby for the right kind. This is NOT a boxed position. You will code, clean, test, build and recruit, and you will feel that this is not really 'work'. This is NOT a position for people who spend more time talking than doing. 3 things this position IS about: Attention to detail matters. Roles, titles and ego do not matter; getting things done matters, and getting things done quicker and better matters the most. Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars? Roles and Responsibilities Drive and define database design and development of real-time, complex products. Strive for excellence in customer experience, technology, methodology, and execution. Define and own end-to-end architecture from the definition phase to the go-live phase. Define reusable components/frameworks, common schemas, standards and tools to be used, and help bootstrap the engineering team. Performance tuning of application and database, and code optimizations.
Define database strategy, database design & development standards and SDLC, database customization & extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies. Architect and develop database schemas, indexing strategies, views, and stored procedures for Cloud applications. Assist in defining the scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development. Contribute to innovation and continuous enhancement of the platform. Define and implement a strategy for data services to be used by Cloud and web-based applications. Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements. Design robust cloud management implementations, including orchestration and catalog capabilities. Architecting and designing distributed data processing solutions using big data technologies is an added advantage. Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers. Desired Skills Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for the DW platform. Hands-on experience in the Big Data space (Hadoop stack such as M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus). Knowledge of other transactional database management systems/open database systems and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus. Good knowledge of data management principles like data architecture, data governance, Very Large Database design (VLDB), distributed database design, data replication, and high availability. Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems. Solid knowledge of at least one industry-leading RDBMS like Oracle, SQL Server, DB2 or MySQL.
Expertise in providing data architecture solutions and recommendations that are technology-neutral. Experience in architecture consulting engagements is a plus. Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining. Education & Experience Bachelors in Engineering or Computer Science (preferably from a premier school), or an advanced degree in Engineering, Mathematics, Computer Science or Information Technology. A highly analytical aptitude and a strong 'desire to deliver' outweigh those fancy degrees, more so if you have been a techie since age 12. 2-5 years of experience in database design & development. 0+ years of AWS, Google Cloud Platform or Hadoop experience. Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.
Intro Our data and risk team is the core pillar of our business; it harnesses alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.

What will you do The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
• Optimize and automate ingestion processes for a variety of data sources, such as clickstream, transactional and many other sources
• Create and maintain optimal data pipeline architecture and ETL processes
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Develop data pipelines and infrastructure to support real-time decisions
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
• Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
What will you need
• 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
• Experience dealing with large-scale data; proficiency in writing and debugging complex SQL
• Experience working with AWS big data tools
• Ability to lead a project and implement best data practices and technology
• Data pipelining: a strong command of building and optimizing data pipelines, architectures and data sets; a strong command of relational SQL and NoSQL databases, including Postgres; data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Big data: strong experience with big data tools and applications (Hadoop, Spark, HDFS, etc.); AWS cloud services: EC2, EMR, RDS, Redshift; stream-processing systems: Storm, Spark Streaming, Flink, etc.; message queuing: RabbitMQ, Kafka, etc.
• Software development and debugging: strong experience in object-oriented programming/functional scripting languages: Python, Java, C++, Scala, etc.; a strong hold on data structures and algorithms

What would be a bonus
• Prior experience working in a fast-growth startup
• Prior experience in payments, fraud, lending or advertising companies dealing with large-scale data
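The workflow-management tools named above (Azkaban, Luigi, Airflow) all schedule pipeline tasks as a DAG. The core idea, running each task only after its dependencies complete, can be sketched without any framework, using the standard library's topological sorter (the task names here are invented):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract feeds a transform, which feeds a load,
# which feeds a report. Each value is the set of upstream dependencies.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(dag, tasks):
    """Execute tasks in an order that respects every dependency edge."""
    order = list(TopologicalSorter(dag).static_order())
    results = [tasks[name]() for name in order]
    return order, results

# Stand-in task bodies; a real pipeline would run SQL, Spark jobs, etc.
tasks = {name: (lambda n=name: f"{n} done") for name in dag}
order, results = run_pipeline(dag, tasks)
print(order)  # ['extract', 'transform', 'load', 'report']
```

Airflow and its peers add scheduling, retries and monitoring around this same dependency-ordered execution.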