Data Engineer-Backend
at Simpl

Posted by Elish Ismael
Bengaluru (Bangalore)
3 - 10 yrs
₹10L - ₹50L / yr
Full time
Skills
Java
Apache Spark
Big Data
Hadoop
Apache Hive
About Simpl
The thrill of working at a start-up that is beginning to scale massively is something else. Simpl (FinTech Startup of the Year, 2020) was founded in 2015 by Nitya Sharma, an investment banker from Wall Street, and Chaitra Chidanand, a tech executive from the Valley, who teamed up with a very clear mission: to make money simple so that people can live well and do amazing things. Simpl is the payment platform for the mobile-first world. We’re backed by some of the best names in fintech globally (folks who have invested in Visa, Square and Transferwise), and Joe Saunders, ex-Chairman and CEO of Visa, is a board member.

Everyone at Simpl is an internal entrepreneur who is given plenty of bandwidth and resources to create the next breakthrough towards the long-term vision of “making money Simpl”. Our first product is a payment platform that lets people buy instantly, anywhere online, and pay later. In the background, Simpl uses big data for credit underwriting and risk and fraud modelling, all without any paperwork, enabling banks and non-bank financial companies to access a whole new consumer market.

In place of traditional forms of identification and authentication, Simpl integrates deeply into merchant apps via SDKs and APIs. This allows for more sophisticated forms of authentication that take full advantage of smartphone data and processing power.

Skillset:
  • Workflow managers/schedulers like Airflow, Luigi, Oozie (a minimal sketch follows this list)
  • A good handle on Python
  • ETL experience
  • Batch-processing frameworks like Spark, MapReduce/Pig
  • File formats: Parquet, JSON, XML, Thrift, Avro, Protobuf
  • Rule engines (Drools - a business rule management system)
  • Distributed file systems like HDFS, NFS, AWS S3, and equivalent
  • Experience building/configuring dashboards
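
To make the first two items concrete, here is a minimal sketch of a daily Airflow DAG driving a single Python ETL task. It is illustrative only: the DAG id, task name, and the ETL body are invented, not taken from the job description.

    # Hypothetical daily ETL DAG (Airflow 2.x). All names are invented.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        # Placeholder ETL body: extract records, transform, load downstream.
        records = [{"user_id": 1, "amount_inr": 250}]  # stand-in for a real source
        total = sum(r["amount_inr"] for r in records)
        print(f"loaded {len(records)} records, total amount {total}")

    with DAG(
        dag_id="daily_etl_example",            # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",            # one run per day
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)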

Nice to have:
  • Data platform experience, e.g. building data lakes, working with near-realtime applications/frameworks like Storm, Flink, Spark
  • AWS
  • File encoding types: Thrift, Avro, Protobuf, Parquet, JSON, XML
  • Hive, HBase

About Simpl

Founded: 2015
Stage: Raised funding

Simpl has revolutionized online checkout in India by creating a Pay-Later platform, empowering e-commerce merchants to offer their consumers 1-click checkout, a line of credit at POS, and full buyer protection. It aims to empower merchants to own their customers' checkout experience. With Simpl, merchants can provide consumers with an easy, safe, and intuitive user experience that builds a trusted relationship between the two.

Connect with the team
Syed Nazeerulla
Elish Ismael
Arya Muralidharan

Similar jobs

Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SSIS
SQL Server Integration Services (SSIS)
+10 more

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources (a minimal sketch follows this list).
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipeline performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.
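
As a rough sketch of the first responsibility above, the fragment below ingests raw JSON with PySpark, applies a light transform, and writes partitioned Parquet. The paths and column names are invented for illustration; the real sources and schema would come from the role.

    # Hypothetical pipeline step: raw JSON in, partitioned Parquet out.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    raw = spark.read.json("s3a://example-bucket/raw/orders/")  # assumed source
    clean = (
        raw.dropDuplicates(["order_id"])                  # basic data-quality step
           .withColumn("order_date", F.to_date("created_at"))
           .filter(F.col("amount") > 0)
    )
    # Partitioning by date keeps downstream scans cheap.
    clean.write.mode("append").partitionBy("order_date").parquet(
        "s3a://example-bucket/warehouse/orders/"
    )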

Requirements:

  • Bachelor's degree in Computer Science, Information Systems, or a related field
  • Experience with ETL tools such as Matillion, SSIS, or Informatica
  • Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle
  • Experience in writing complex SQL queries
  • Strong programming skills in languages such as Python, Java, or Scala
  • Experience with data modeling, data warehousing, and data integration
  • Strong problem-solving skills and the ability to work independently
  • Excellent communication and collaboration skills
  • Familiarity with big data technologies such as Hadoop, Spark, or Kafka
  • Familiarity with data warehouse/data lake technologies such as Snowflake or Databricks
  • Familiarity with cloud computing platforms such as AWS, Azure, or GCP
  • Familiarity with reporting tools

Teamwork / growth contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status communication and upfront communication of any risks
  • Teach, train, and share knowledge with peers
  • Good communication skills
  • Proven ability to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude

Good to have:

  • Master's degree in Computer Science, Information Systems, or a related field
  • Experience with NoSQL databases such as MongoDB or Cassandra
  • Familiarity with data visualization and business intelligence tools such as Tableau or Power BI
  • Knowledge of machine learning and statistical modeling techniques

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
ETL
Big Data
Data engineering
Apache Kafka
PySpark
+11 more
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without AI component
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces incl. web / mobile / dashboard with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment (a tuning sketch follows this list)
  • Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
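
To illustrate the performance-tuning bullet above, here is a hedged PySpark fragment showing two common techniques: broadcasting a small dimension table to avoid a shuffle, and caching a DataFrame reused by several aggregations. Table and column names are invented.

    # Illustrative Spark tuning; table/column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder.appName("tuning_sketch")
             .enableHiveSupport().getOrCreate())

    events = spark.table("analytics.events")   # assumed large Hive table
    users = spark.table("analytics.users")     # assumed small dimension table

    # Broadcast the small side so the join avoids shuffling the large table.
    joined = events.join(F.broadcast(users), "user_id")

    # Cache a frame that is reused by several downstream aggregations.
    joined.cache()
    joined.groupBy("event_date").count().show()
    joined.groupBy("country").agg(F.sum("amount").alias("revenue")).show()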

You should be

  • Expert in structured and unstructured data in traditional and Big Data environments – Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Excellent knowledge of Python programming in both traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
  • Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years' experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
at Telstra
Posted by Mahesh Balappa
Bengaluru (Bangalore), Hyderabad, Pune
3 - 7 yrs
Best in industry
Spark
Hadoop
NOSQL Databases
Apache Kafka

About Telstra

 

Telstra is Australia’s leading telecommunications and technology company, with operations in more than 20 countries, including in India, where we’re building a new Innovation and Capability Centre (ICC) in Bangalore.

 

We’re growing fast, and for you that means many exciting opportunities to develop your career at Telstra. Join us on this exciting journey, and together, we’ll reimagine the future.

 

Why Telstra?

 

  • We're an iconic Australian company with a rich heritage built over 100 years. Telstra is Australia's leading telecommunications and technology company, and we've been operating internationally for more than 70 years.
  • International presence spanning over 20 countries.
  • We are one of the 20 largest telecommunications providers globally.
  • At Telstra, the work is complex and stimulating, but with that comes a great sense of achievement. We are shaping tomorrow's modes of communication with our innovation-driven teams.

 

Telstra offers an opportunity to make a difference to the lives of millions of people by providing the choice of flexibility in work and a rewarding career that you will be proud of!

 

About the team

Being part of Networks & IT means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy.

With us, you'll be working with world-leading technology and change the way we do IT to ensure business needs drive priorities, accelerating our digitisation programme.

 

Focus of the role

Any new engineer who joins the data chapter will mostly be developing reusable data processing and storage frameworks that can be used across the data platform.

 

About you

To be successful in the role, you'll bring skills and experience in:

 

Essential 

  • Hands-on experience in Spark Core, Spark SQL, SQL/Hive/Impala, Git/SVN/any other VCS, and data warehousing
  • Skilled in the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.)
  • Azure Data Factory/Airflow/Control-M/Luigi
  • PL/SQL
  • Exposure to NoSQL (HBase/Cassandra/GraphDB (Neo4j)/MongoDB)
  • File formats (Parquet/ORC/Avro/Delta/Hudi, etc.)
  • Kafka/Kinesis/Event Hubs

 

Highly Desirable

Experience and knowledgeable on the following:

  • Spark Streaming (a streaming sketch follows this list)
  • Cloud exposure (Azure/AWS/GCP)
  • Azure data offerings - ADF, ADLS Gen2, Azure Databricks, Azure Synapse, Event Hubs, Cosmos DB, etc.
  • Presto/Athena
  • Azure DevOps
  • Jenkins/ Bamboo/Any similar build tools
  • Power BI
  • Prior experience building, or working in a team that builds, reusable frameworks
  • Data modelling
  • Data architecture and design principles (Delta/Kappa/Lambda architecture)
  • Exposure to CI/CD
  • Code quality - static and dynamic code scans
  • Agile SDLC
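
For the Spark Streaming and Kafka items above, a minimal Structured Streaming sketch: consume a Kafka topic, count events by key, and print the running counts. The broker address and topic are assumptions, and the job needs the spark-sql-kafka connector on the classpath.

    # Hypothetical Kafka -> Spark Structured Streaming job.
    # Requires the spark-sql-kafka connector package at submit time.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("stream_sketch").getOrCreate()

    stream = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
        .option("subscribe", "events")                     # assumed topic
        .load()
    )

    # Kafka keys/values arrive as binary; cast before aggregating.
    counts = (
        stream.select(F.col("key").cast("string").alias("key"))
              .groupBy("key")
              .count()
    )

    (counts.writeStream
           .outputMode("complete")   # emit full updated counts each trigger
           .format("console")
           .start()
           .awaitTermination())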

 

If you've got a passion to innovate, want to succeed as part of a great team, and are looking for the next step in your career, we'd welcome you to apply!

___________________________

 

We’re committed to building a diverse and inclusive workforce in all its forms. We encourage applicants from diverse gender, cultural and linguistic backgrounds and applicants who may be living with a disability. We also offer flexibility in all our roles, to ensure everyone can participate.

To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit tel.st/thrive.

at Number Theory
Posted by Nidhi Mishra
Gurugram
5 - 12 yrs
₹10L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Job Description – Big Data Architect

Number Theory is looking for an experienced software/data engineer who will focus on owning and rearchitecting dynamic pricing engineering systems.

Job Responsibilities:
  • Evaluate and recommend the Big Data technology stack best suited for the NT AI at scale platform and other products
  • Lead the team in defining a proper Big Data architecture design
  • Design and implement features on the NT AI at scale platform using Spark and other Hadoop stack components
  • Drive significant technology initiatives end to end and across multiple layers of architecture
  • Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements
  • Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable, scalable amount of data
  • Identify and incorporate non-functional requirements into the solution (performance, scalability, monitoring, etc.)

Requirements:
  • A successful candidate with 8+ years of experience in the implementation of a high-end software product
  • Provides technical leadership in the Big Data space (Spark and the Hadoop stack: Map/Reduce, HDFS, Hive, HBase, Flume, Sqoop, etc., and NoSQL stores like Cassandra, HBase) across engagements and contributes to open-source Big Data technologies
  • Rich hands-on experience with Spark, having worked with Spark at larger scale
  • Visualizes and evangelizes next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies)
  • Passionate about continuous learning, experimenting, applying, and contributing towards cutting-edge open-source technologies and software paradigms
  • Expert-level proficiency in Java and Scala
  • Strong understanding of and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies - one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib)
  • Operating knowledge of cloud computing platforms (AWS, Azure)

Good to have:

  • Operating knowledge of different enterprise Hadoop distributions
  • Good knowledge of design patterns
  • Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks
at Capgemini
Agency job
via Nu-Pie by Jerrin Thomas
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹13L / yr
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
+8 more

  • Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools
  • Experience with data management & data warehouse development
  • Star schemas, data vaults, RDBMS, and ODS
  • Change data capture (a worked sketch follows this list)
  • Slowly changing dimensions
  • Data governance
  • Data quality
  • Partitioning and tuning
  • Data stewardship
  • Survivorship
  • Fuzzy matching
  • Concurrency
  • Vertical and horizontal scaling
  • ELT, ETL
  • Spark, Hadoop, MPP, RDBMS
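
To make "change data capture" and "slowly changing dimensions" concrete, below is a hedged PySpark sketch of a Type 2 SCD update: captured changes close out the current dimension row and append a new current version. The schema and data are invented for illustration.

    # Hypothetical Type 2 slowly-changing-dimension update in PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

    dim = spark.createDataFrame(
        [(1, "Alice", "Pune", "2020-01-01", None, True)],
        "cust_id INT, name STRING, city STRING, "
        "valid_from STRING, valid_to STRING, is_current BOOLEAN",
    )
    changes = spark.createDataFrame(
        [(1, "Alice", "Bengaluru")], "cust_id INT, name STRING, city STRING"
    )

    today = F.current_date().cast("string")
    current = dim.where("is_current")
    history = dim.where("NOT is_current")
    changed_ids = changes.select("cust_id")

    # 1. Close out current rows that have an incoming change.
    closed = (current.join(changed_ids, "cust_id")
                     .withColumn("valid_to", today)
                     .withColumn("is_current", F.lit(False)))
    unchanged = current.join(changed_ids, "cust_id", "left_anti")

    # 2. Append the changed rows as the new current versions.
    new_rows = (changes.withColumn("valid_from", today)
                       .withColumn("valid_to", F.lit(None).cast("string"))
                       .withColumn("is_current", F.lit(True)))

    dim = history.unionByName(unchanged).unionByName(closed).unionByName(new_rows)
    dim.show()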

  • Experience with DevOps architecture, implementation, and operation
  • Hands-on working knowledge of Unix/Linux
  • Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues
  • Complex ETL program design and coding
  • Experience in shell scripting and batch scripting
  • Good communication (oral & written) and interpersonal skills
  • Work closely with business teams to understand their business needs, participate in requirements gathering, create artifacts, and seek business approval
  • Help the business define new requirements; participate in end-user meetings to derive and define business requirements; propose cost-effective solutions for data analytics; and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery
  • Propose good designs and solutions and ensure adherence to best design and standards practices
  • Review and propose industry-best tools and technologies for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks
  • Prepare the plan; design and document the architecture, high-level topology design, and functional design; review these with customer IT managers; and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards, and techniques
  • Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies
  • Work with functional business analysts to ensure that application programs are functioning as defined
  • Capture user feedback/comments on the delivered systems and document them for the client and project manager's review; review all deliverables before final delivery to the client for quality adherence

Technologies (select based on requirement)

  • Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
  • Tools - Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory
  • Utilities for bulk loading and extracting
  • Languages - SQL, PL/SQL, T-SQL, Python, Java, or Scala
  • JDBC/ODBC, JSON
  • Data virtualization - data services development
  • Service delivery - REST, web services
  • Data virtualization delivery - Denodo
  • ELT, ETL
  • Cloud certification: Azure
  • Complex SQL queries
  • Data ingestion, data modeling (domain), consumption (RDBMS)
at DataMetica
Posted by Nikita Aher
Pune
2 - 6 yrs
₹3L - ₹15L / yr
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
+3 more

Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong in SQL development
  • Hands-on with at least one scripting language - preferably shell scripting
  • Development experience in Data warehouse projects

Opportunities:

  • Selected candidates will be provided training opportunities in one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
at Capgemini
Agency job
via Nu-Pie by DAMIAN FRANCIS
Remote, Bangalore
3 - 6 yrs
₹6L - ₹13L / yr
Big Data
Spark
Hadoop
Apache Kafka
Data engineering
+2 more
  • Work with the application development team to implement database schemas, data strategies, and data flows
  • Should have expertise across databases; responsible for gathering and analyzing data requirements and data flows to other systems
  • Perform database modeling and SQL scripting
  • 8+ years of experience in database modeling and SQL scripting
  • Experience in OLTP and OLAP database modeling
  • Experience in data modeling for relational (PostgreSQL) and non-relational databases
  • Strong scripting experience in P-SQL (views, stored procedures), Python, and Spark
  • Develop notebooks and jobs using Python
  • Big Data stack: Spark, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka, Storm
  • Should have a good understanding of Azure Cosmos DB and Azure DW

at Reval Analytics
Posted by Jyoti Nair
Pune
3 - 6 yrs
₹5L - ₹9L / yr
Python
Django
Big Data

Position Name: Software Developer

Required Experience: 3+ Years

Number of positions: 4

Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).

Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting

Compensation - As per industry standards.

Joining - Immediate joining is preferable.

 

Required Skills:

 

  • Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
  • Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib (a charting sketch follows this list)
  • Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, dc.js and data visualization tools like Plotly, ggplot
  • Handling and using large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, Redshift
  • Experience in building APIs and multi-threading for tasks on the Linux platform
  • Exposure to finance and capital markets will be an added advantage
  • Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts
  • Worked on building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications
  • Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
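
A small, hedged illustration of the Pandas/Matplotlib item above; the data and column names are invented for the example.

    # Tiny illustrative analysis: aggregate a DataFrame and plot the result.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame({
        "ticker": ["AAA", "AAA", "BBB", "BBB"],
        "quarter": ["Q1", "Q2", "Q1", "Q2"],
        "revenue": [120.0, 135.5, 80.0, 91.2],
    })

    # Pivot to one column per ticker, quarters on the index.
    pivot = df.pivot(index="quarter", columns="ticker", values="revenue")

    ax = pivot.plot(kind="bar", title="Revenue by quarter")  # simple bar chart
    ax.set_ylabel("Revenue")
    plt.tight_layout()
    plt.show()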

 

Company Description:

Reval Analytical Services is a wholly owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and web-enabled information delivery. The company's unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

 

Website: www.virtuaresearch.com

Agency job
via Wenger and Watson Inc by Bevin Baby
Remote, Bengaluru (Bangalore)
15 - 20 yrs
₹50L - ₹120L / yr
Scala
Big Data
Hadoop
Spark
JVM
+2 more
About the Company, Conviva:
Conviva is the leader in streaming media intelligence, powered by its real-time platform. More than 250 industry leaders and brands – including CBS, CCTV, Cirque Du Soleil, DAZN, Disney+, HBO, Hulu, Sky, Sling TV, TED, Univision, and Warner Media – rely on Conviva to maximize their consumer engagement, deliver the quality experiences viewers expect and drive revenue growth. With a global footprint of more than 500 million unique viewers watching 150 billion streams per year across 3 billion applications streaming on devices, Conviva offers streaming providers unmatched scale for continuous video measurement, intelligence and benchmarking across every stream, every screen, every second. Conviva is privately held and headquartered in Silicon Valley, California, with offices around the world. For more information, please visit us at www.conviva.com.

What you get to do:

  • Be a thought leader. As one of the senior-most technical minds in the India centre, influence our technical evolution journey by pushing the boundaries of possibilities, testing forward-looking ideas, and demonstrating their value.
  • Be a technical leader. Demonstrate pragmatic skills in translating requirements into technical design.
  • Be an influencer. Understand challenges and collaborate across executives and stakeholders in a geographically distributed environment to influence them.
  • Be a technical mentor. Build respect within the team. Mentor senior engineers technically and contribute to the growth of talent in the India centre.
  • Be a customer advocate. Be empathetic to customers and the domain by resolving ambiguity efficiently with the customer in mind.
  • Be a transformation agent. Passionately champion engineering best practices and share them across teams.
  • Be hands-on. Participate regularly in code and design reviews, drive technical prototypes, and actively contribute to resolving difficult production issues.

What you bring to the role:
  • Thrive in a start-up environment and have a platform mindset.
  • Excellent communicator. Demonstrated ability to succinctly communicate and describe complex technical designs and technology choices both to executives and developers.
  • Expert in Scala coding. A JVM-based stack is a bonus.
  • Expert in big data technologies like Druid, Spark, Hadoop, Flink (or Akka) & Kafka.
  • Passionate about one or more engineering best practices that influence design, quality of code, or developer efficiency.
  • Familiar with building distributed applications using web services and RESTful APIs.
  • Familiarity with building SaaS platforms on either in-house data centres or public cloud providers.
at I Base IT
Posted by Sravanthi Alamuri
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
+4 more
Data Architect who will lead a team of 5 members. Required skills: Spark, Scala, Hadoop.