Lead Data Engineer

at Top 3 Fintech Startup

Agency job
Bengaluru (Bangalore)
6 - 9 yrs
₹16L - ₹24L / yr
Full time
Skills
SQL
Amazon Web Services (AWS)
Spark
PySpark
Apache Hive

We are looking for an exceptionally talented Lead Data Engineer with experience implementing AWS services to build data pipelines, integrate APIs, and design data warehouses. A candidate with both hands-on and leadership capabilities is ideal for this position.

 

Qualification: At least a bachelor’s degree in Science, Engineering, or Applied Mathematics; a master’s degree is preferred.

 

Requirements:

• 6+ years of total experience as a Data Engineer, including 2+ years of experience managing a team

• A minimum of 3 years of AWS Cloud experience

• Well versed in languages such as Python, PySpark, SQL, and Node.js

• Extensive experience in the Spark ecosystem, with work on both real-time and batch processing

• Experience with AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.

• Experience with modern warehouse and query systems such as Redshift, Presto, and Hive

• Has built data lakes in the past on S3 or with Apache Hudi

• Solid understanding of data warehousing concepts

• Good to have: experience with streaming tools such as Kafka or Kinesis

• Good to have: AWS Developer Associate or Solutions Architect Associate certification
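The data warehousing and SQL skills listed above boil down to working with fact and dimension tables. As a minimal sketch (SQLite stands in for Redshift/Presto/Hive here, and every table and column name is hypothetical), a classic star-schema rollup looks like this:

```python
import sqlite3

# In-memory database standing in for a warehouse; schema and data are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE fact_orders (order_id INTEGER PRIMARY KEY,
                          customer_id INTEGER REFERENCES dim_customer(customer_id),
                          amount REAL);
INSERT INTO dim_customer VALUES (1, 'South'), (2, 'North');
INSERT INTO fact_orders VALUES (10, 1, 120.0), (11, 1, 80.0), (12, 2, 50.0);
""")

# Classic fact-to-dimension join with aggregation, as in any warehouse workload.
rows = conn.execute("""
    SELECT c.region, SUM(f.amount) AS revenue
    FROM fact_orders f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('South', 200.0), ('North', 50.0)]
```

The same query shape runs unchanged on Redshift or Presto; only the connection layer differs.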


Similar jobs

Manager - Analytics

at Leading Grooming Platform

Agency job
via Qrata
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Python
SQL
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
  • Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, knowledge of an ETL tool such as Informatica or Talend
  • At least one data query language: SQL or Python
  • Experience in creating breakthrough visualizations
  • An understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
Job posted by
Blessy Fernandes

MLOps Engineer

at Synapsica Healthcare

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Python
CI/CD
DVCS
Machine Learning (ML)
Kubernetes
Amazon Web Services (AWS)
AWS CloudFormation
Docker
Airflow
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹20L / yr

Introduction

Synapsica is a series-A funded HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, and should not have to rely on the cryptic two-liners given to them as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endia Partners, YCombinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls


Your Roles and Responsibilities

We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key member of the team, contributing to decision-making, implementation, development, and advancement of ML operations for the core AI platform.

 

 

Roles and Responsibilities:

  • Work closely with a cross-functional team to serve business goals and objectives.
  • Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and retraining of models.
  • Design and build application containerisation and orchestration with Docker and Kubernetes on the AWS platform.
  • Build and maintain code, tools, and packages in the cloud.

Requirements:

  • At least 2 years of experience in data engineering
  • At least 3 years of experience in Python, with familiarity with popular ML libraries
  • At least 2 years of experience in model serving and pipelines
  • Working knowledge of containers, Kubernetes, and Docker on AWS
  • Ability to design distributed systems deployments at scale
  • Hands-on experience in coding and scripting
  • Ability to write effective, scalable, and modular code
  • Familiarity with Git workflows, CI/CD, and NoSQL (MongoDB)
  • Familiarity with Airflow, DVC, and MLflow is a plus
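The monitoring-and-retraining loop mentioned in the responsibilities can be illustrated conceptually. This is not Synapsica's actual stack: it is a dependency-free toy drift check that could gate a retraining job in a pipeline, with the threshold and all names made up for the sketch.

```python
from statistics import mean

DRIFT_THRESHOLD = 0.15  # hypothetical tolerance on relative feature-mean shift

def needs_retraining(baseline, live, threshold=DRIFT_THRESHOLD):
    """Flag retraining when the live feature mean drifts beyond the threshold."""
    shift = abs(mean(live) - mean(baseline))
    scale = abs(mean(baseline)) or 1.0  # avoid division by zero
    return shift / scale > threshold

# Hypothetical feature means observed in training vs. production traffic.
baseline = [0.50, 0.52, 0.48, 0.51]
live_ok = [0.49, 0.53, 0.50, 0.52]
live_drifted = [0.80, 0.85, 0.78, 0.82]
print(needs_retraining(baseline, live_ok))       # False
print(needs_retraining(baseline, live_drifted))  # True
```

In a real MLOps setup this check would run as a scheduled Airflow task and trigger the retraining DAG; tools like DVC and MLflow would version the data and the resulting model.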
Job posted by
Human Resources

Head of Engineering

at 60 Decibels

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
CI/CD
SaaS
Ruby
Ruby on Rails (ROR)
Javascript
Python
PostgreSQL
Amazon Web Services (AWS)
Bengaluru (Bangalore)
10 - 15 yrs
₹60L - ₹70L / yr

Head of Engineering

 

60 Decibels (https://60decibels.com/) is an impact measurement company that makes it easy to listen to the people who matter most. We believe that the best way to understand social impact is by talking to the people experiencing that impact. It sounds obvious when you say it, but that is not the typical practice for many impact investors, corporations and foundations working to create social change.

 

We collect social impact data directly from beneficiaries (customers / employees / suppliers) using our network of 1000+ trained researchers in 70+ countries. We do it quickly and without the fuss typically associated with measuring social impact. Our researchers speak directly to customers to understand their lived experience, and our team turns all this data into benchmarked social performance reports, with accompanying insights, to help our clients demonstrate and improve social performance.

 

If you want to help build interesting solutions to help social enterprises that are solving some of the world’s most challenging problems, then read on! We're looking for a Head of Engineering to join our global team full-time, in a hybrid role for our Bengaluru office.

 

About the Position

 

We are seeking a full-time Head of Engineering, someone to lead a team of developers while solving real user problems through smart and efficient application of technical knowledge and tools. You will be working closely with a multidisciplinary team, and will be responsible for working with the team to translate product specs into clean, functional, production-ready code and to further develop features of our platform across the full technical stack.

 

You will have the opportunity to work across many types of projects across the organization. Your primary responsibility will be advancing our integrated data capture and insights platform (Ruby/React/PostgreSQL) and associated tooling (Python), which will involve, in part:

 

  • Working with the Leadership, Product and Operations teams, and leading the Engineering team on requirements gathering, specifications and scoping for feature development & product initiatives
  • Designing, developing, testing and maintaining robust applications and interfaces to a high level of quality, reliability and scalability
  • Anticipating and leading the definition of the systems architecture vision to better support our team’s needs
  • Growing our technical capacity by mentoring other engineers and interviewing candidates
  • Collaborating with team members to determine best practices and requirements for our software stack
  • Participating in code reviews and model good development practices (such as test-writing)
  • Troubleshooting coding problems quickly and efficiently to ensure a productive workplace

 

About you

 

First and foremost, you bring passion and dedication to this work because it matters to you. You are a pragmatic and product-driven engineer who is interested in solving user problems and delivering value while taking into account tradeoffs between Business and Tech. You have a bias towards action: you get your hands dirty and actively tackle problems in a way that leads to the best outcomes and brings teams together. You successfully balance flexibility and rigor, using informed judgement to make decisions. You model critical thinking and introspection, taking strategic risks and growing from mistakes. You are decisive and bold, have a growth mindset, are an excellent communicator, and know the value of being a part of an effective team. More specifically, you bring:

  • 10+ years of experience in a software engineering role, preferably building a SaaS product. You can demonstrate the impact your work has had on the product and/or the team
  • Deep knowledge of the frameworks we use (e.g. Ruby on Rails, React), or the interest and ability to pick up new languages and frameworks quickly
  • Professional experience building production-ready, data-intensive applications and APIs
  • Professional experience developing and deploying applications using cloud service providers (e.g. AWS, GCP, Azure, GitHub CI/CD)
  • Demonstrated knowledge of web applications, cybersecurity, and open-source technologies
  • Demonstrated ability to lead a team
  • Outstanding collaboration and communication skills with both tech and non-tech teams/stakeholders

 

Working with 60 Decibels

 

We are a fun, international and highly-motivated team who believes that team members should have the opportunity to expand their skills and career in a supportive environment. We currently have offices in New York, London, Nairobi and Bengaluru. Please note that permanent work authorization in one of these geographies is required.

 

We offer a competitive salary, the opportunity to work flexibly and in a fun, supportive working environment.

 

As a growing company, we are building towards a more universally accessible workplace for our employees. At this time, we do use some cloud-based technologies that are not compatible with screen readers and other assistive devices. We would be happy to discuss accessibility at 60 Decibels in greater depth during the recruitment process.

 

Want to get to know a little better?

> Sign up to receive The Volume, our monthly collection of things worth reading: https://us20.campaign-archive.com/home/?u=eb8a3471cbcc7f7bb20ae1019&id=4f8f9fc97a

> Visit our website at 60decibels.com (http://www.60decibels.com/).

> Read about our team values: https://drive.google.com/a/60decibels.com/file/d/1XxQkrGpNrwQzuHBzq3KqNVVASoq_IZM9/view?usp=sharing

Job posted by
Jay Batavia

Data Scientist

at Blue Sky Analytics

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
NumPy
SciPy
Data Science
Python
pandas
Git
GitHub
SQL
Amazon S3
Amazon EC2
GIS analysis
GDAL
QGIS
Remote only
1 - 5 yrs
Best in industry

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a data scientist to join our growing team. This position will require you to think and act on the geospatial architecture and data needs (specifically geospatial data) of the company. The position is strategic and will also require you to collaborate closely with data engineers, data scientists, software developers, and even colleagues from other business functions. Come save the planet with us!


Your Role

Manage: It goes without saying that you will be handling large image and location datasets. You will develop dataframes and automated pipelines of data from multiple sources, visualize the data, and use machine learning algorithms to make predictions. You will be working across teams to get the job done.

Analyze: You will curate and analyze vast amounts of geospatial datasets like satellite imagery, elevation data, meteorological datasets, openstreetmaps, demographic data, socio-econometric data and topography to extract useful insights about the events happening on our planet.

Develop: You will be required to develop processes and tools to monitor and analyze data and its accuracy. You will develop innovative algorithms which will be useful in tracking global environmental problems like depleting water levels, illegal tree logging, and even tracking of oil-spills.

Demonstrate: Familiarity with geospatial libraries such as GDAL/Rasterio for reading and writing data, and with QGIS for making visualizations. This also extends to using advanced statistical techniques, applying concepts like regression and properties of distributions, and conducting other statistical tests.

Produce: With all the hard work being put into data creation and management, it has to be used! You will be able to produce maps showing (but not limited to) spatial distribution of various kinds of data, including emission statistics and pollution hotspots. In addition, you will produce reports that contain maps, visualizations and other resources developed over the course of managing these datasets.

Requirements

These are must have skill-sets that we are looking for:

  • Excellent coding skills in Python (including deep familiarity with NumPy, SciPy, pandas).
  • Significant experience with git, GitHub, SQL, and AWS (S3 and EC2).
  • Experience with GIS: familiarity with geospatial libraries such as GDAL and rasterio for reading/writing data, a GIS application such as QGIS for visualisation and querying, and basic machine learning algorithms for making predictions.
  • Demonstrable experience implementing efficient neural network models and deploying them in a production environment.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with their application.
  • Capable of writing clear and lucid reports and demystifying data for the rest of us.
  • Be curious and care about the planet!
  • A minimum of 2 years of demonstrable industry experience working with large and noisy datasets.
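As a toy illustration of the raster math behind these geospatial requirements: a common satellite-imagery task is computing a band index such as NDVI. In practice the bands would be read with GDAL/rasterio and processed with NumPy; the sketch below uses plain Python and invented reflectance values purely to keep it dependency-free.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel: (NIR - Red) / (NIR + Red)."""
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

# Hypothetical reflectance values for a 2x2 raster tile (red and near-infrared bands).
red_band = [[0.10, 0.30], [0.05, 0.25]]
nir_band = [[0.50, 0.35], [0.45, 0.25]]

# Apply the index pixel-by-pixel; with rasterio/NumPy this is one vectorised expression.
ndvi_tile = [[round(ndvi(r, n), 3) for r, n in zip(rr, nn)]
             for rr, nn in zip(red_band, nir_band)]
print(ndvi_tile)  # [[0.667, 0.077], [0.8, 0.0]]
```

Values near 1 indicate dense vegetation; values near 0 indicate bare ground or water, which is exactly the kind of signal used to track problems like illegal tree logging.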

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly retreats: Yes, there's work, but then there's all the non-work fun, aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.
Job posted by
Balahun Khonglanoh

Data Analyst

at Amagi Media Labs

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Python
SQL
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹14L / yr
1. 2 to 4 years of experience
2. Hands-on experience using Python, SQL, and Tableau
3. Data Analyst
About Amagi (www.amagi.com): Amagi is a market leader in cloud-based media technology services for channel creation, distribution and ad monetization. Amagi’s cloud technology and managed services are used by TV networks, content owners, sports rights owners and pay TV / OTT platforms to create 24x7 linear channels for OTT and broadcast and deliver them to end consumers. Amagi’s pioneering and market-leading cloud platform has won numerous accolades and is deployed in over 40 countries by 400+ TV networks. Customers of Amagi include A+E Networks, Comcast, Google, NBC Universal, Roku, Samsung and Warner Media. This is a unique and transformative opportunity to participate in and grow a world-class technology company that changes the tenets of TV. Amagi is a private-equity-backed firm with investments from KKR (Emerald Media Fund), Premji Invest and Mayfield. Amagi has offices in New York, Los Angeles, London, New Delhi and Bangalore.

LinkedIn: https://www.linkedin.com/company/amagicorporation
News: https://www.amagi.com/about/newsroom/amagi-clocks-120-yoy-quarterly-growth-as-channels-on-its-platform-grows-to-400/
Cofounder on YouTube: https://www.youtube.com/watch?v=EZ0nBT3ht0E
 

About Amagi & Growth


Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on Free-Ad-Supported TV and video services platforms. Amagi also offers 24x7 cloud managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization with deployments in over 40 countries. Amagi has offices in New York (Corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.

Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:

·   Annual revenue growth of 136%
·   44% increase in customers
·   50+ Free Ad Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
·   250+ channels added to its cloud platform taking the overall tally to more than 500
·   Approximately 2 billion ad opportunities every month supporting OTT ad-insertion for 1000+ channels
·   60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees + Contractors)
·   5-10x growth in ad impressions among top customers
 
Over the last 4 years, Amagi has grown more than 400%. Amagi now has an aggressive growth plan over the next 3 years - to grow 10X in terms of Revenue. In terms of headcount, Amagi is looking to grow to more than 600 employees over the next 1 year. Amagi is building several key organizational processes to support the high growth journey and has gone digital in a big way.
 
Job posted by
Rajesh C

Cloud Architect

at Leading Payment Solution Company

Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Cloud Computing
Microsoft Windows Azure
Java
Python
C#
PHP
Cloud Migration
Chennai, Mumbai, Pune, Hyderabad, Bengaluru (Bangalore)
10 - 17 yrs
₹8L - ₹25L / yr

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.

Role Overview
  • A senior engineer with a strong background and experience in cloud technologies and architectures.
  • Can design target cloud architectures to transform existing architectures, together with the in-house team.
  • Can actively configure and build cloud architectures hands-on and guide others.

Key Knowledge

  • 3-5+ years of experience with AWS, GCP, or Azure technologies
  • Likely certified on one or more of the major cloud platforms
  • Strong hands-on experience with technologies such as Terraform, Kubernetes (K8s), and Docker, and with orchestration of containers
  • Ability to guide and lead internal agile teams on cloud technology
  • Background in the financial services industry or similar critical operational experience
Job posted by
Naveed Mohd
Tableau
SQL
Problem solving
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹12L / yr
  • Hands-on development/maintenance experience in Tableau: developing, maintaining, and managing advanced reporting, analytics, dashboards and other BI solutions using Tableau
  • Reviewing and improving existing Tableau dashboards and data models/systems, and collaborating with teams to integrate new systems
  • Providing support and expertise to the business community to assist with better utilization of Tableau
  • Understanding business requirements, conducting analysis, and recommending solution options for intelligent dashboards in Tableau
  • Experience with Extract, Transform and Load (ETL): knowledge of how to extract, transform and load data
  • Executing SQL data queries across multiple data sources in support of business intelligence reporting needs, and formatting query results/reports in various ways
  • Participating in QA testing, liaising with other project team members and being responsive to the client's needs, all with an eye for detail in a fast-paced environment
  • Performing and documenting data analysis, data validation, and data mapping/design

 

  • Extensive experience in developing, maintaining and managing Tableau-driven dashboards & analytics, and working knowledge of Tableau administration/architecture.
  • A solid understanding of SQL, relational databases, and normalization
  • Proficiency in the use of query and reporting analysis tools
  • Competency in Excel (macros, pivot tables, etc.)
  • Degree in Mathematics, Computer Science, Information Systems, or a related field.
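The ETL bullet above can be sketched in a few lines. This is a generic extract → transform → load toy (an inline CSV in, cleaned rows loaded into SQLite), with the data, table, and column names all invented for the example; a BI tool like Tableau would then sit on top of the loaded table.

```python
import csv
import io
import sqlite3

# Extract: a CSV source, inlined here instead of a real file for the sketch.
raw = "region,sales\nSouth, 120 \nNorth,80\nSouth,40\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: trim stray whitespace and cast sales to integers.
clean = [(r["region"].strip(), int(r["sales"].strip())) for r in rows]

# Load: write into a reporting table that a dashboard could query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, sales INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
total_by_region = conn.execute(
    "SELECT region, SUM(sales) FROM sales GROUP BY region ORDER BY region").fetchall()
print(total_by_region)  # [('North', 80), ('South', 160)]
```

Real pipelines add incremental loads, validation, and scheduling, but the three stages keep this shape.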
Job posted by
Jerrin Thomas

Senior Big Data Engineer

at Banyan Data Services

Founded 2018  •  Product  •  20-100 employees  •  Bootstrapped
Data Science
Data Scientist
MongoDB
Java
Big Data
Apache Kafka
Python
SQL
Deep Learning
RF
Generalized linear model
k-means clustering
Hadoop
Spring
Apache HBase
Cassandra
DevOps
Docker
Kubernetes
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based data-focused Company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA. 

 

We are looking for a Senior Hadoop Big Data Engineer who has expertise in solving complex data problems across a big data platform. You will be part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates that aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges. 

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus

· Experience with NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or other streaming tools

· Knowledge of Scala would be preferable

· Experience with agile application development

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong skills in data structures and algorithms, with attention to writing code of efficient complexity

· Exposure to graph databases

· Passion for learning new technologies and the ability to do so quickly

· A bachelor's degree in a computer-related field or equivalent professional experience is required

 

Key Responsibilities

 

· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture

· Design and develop big data-focused microservices

· Be involved in big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies in the cloud

· Be willing to learn new technologies and take on research-oriented projects

· Proven interpersonal skills, contributing to team effort by accomplishing related results as needed

Job posted by
Sathish Kumar
Amazon Web Services (AWS)
AWS Lambda
Functional testing
lambda
ELB
Pune
6 - 12 yrs
₹25L - ₹27L / yr
• Extensive system administration experience
• 3+ years of experience in AWS administration
• Hands-on experience with task automation via scripting
• Hands-on experience implementing auto-scaling, ELBs, Lambda functions, and related AWS technologies
• Experience in vulnerability management and security
• Ability to proactively and effectively communicate with and influence stakeholders
• Experience in virtual, cross-functional teamwork
• Strong customer and service management focus and mindset
• Solid, technical, hands-on experience administering public and private cloud systems (compute, storage, networks, security, hardware, software, etc.)
• AWS Associate, Professional, or Specialty certification
 
Job posted by
Harpreet kour

Data Engineer

at Networking & Cybersecurity Solutions

Agency job
via Multi Recruit
Spark
Apache Kafka
Data Engineer
Hadoop
Big Data
TensorFlow
Flink
NumPy
Bengaluru (Bangalore)
4 - 16 yrs
₹40L - ₹60L / yr
  • Developing telemetry software to connect Junos devices to the cloud
  • Fast prototyping and laying the software foundation for product solutions
  • Moving prototype solutions to a production cloud multi-tenant SaaS solution
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics specialists to strive for greater functionality in our data systems.

Qualification and Desired Experiences

  • Master's in Computer Science, Electrical Engineering, Statistics, Applied Math, or an equivalent field with a strong mathematical background
  • 5+ years' experience building data pipelines for data science-driven solutions
  • Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models
  • Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
  • Good team worker with excellent interpersonal skills: written, verbal, and presentation
  • Create and maintain optimal data pipeline architecture
  • Assemble large, sophisticated data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
  • Previous work in a start-up environment
  • We are looking for a candidate with 9+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, and who has experience with the following software/tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented / functional scripting languages: Python, Java, C++, Scala, etc.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Strong project management and interpersonal skills
  • Experience supporting and working with multi-functional teams in a multidimensional environment
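The stream-processing bullets above (Storm, Spark Streaming, Flink, Kafka) all revolve around windowed aggregation over an event stream. As a dependency-free conceptual sketch, not any of those engines' actual APIs, here is a toy tumbling-window count; the window size and event shapes are made up.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per fixed, non-overlapping time window.

    events: iterable of (epoch_seconds, key) tuples.
    Returns {(window_start, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align timestamp to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical telemetry events: (timestamp, event type).
events = [(0, "click"), (30, "click"), (59, "view"), (61, "click"), (125, "view")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (0, 'view'): 1, (60, 'click'): 1, (120, 'view'): 1}
```

A production system adds what this toy omits: out-of-order events, watermarks, and state that survives process restarts, which is precisely what Flink and Spark Streaming provide.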
Job posted by
Ashwini Miniyar