Principal Data Engineer

at Incubyte

Posted by Lifi Lawrance
Remote only
4 - 8 yrs
₹10L - ₹30L / yr
Full time
Skills
Test-Driven Development (TDD)
Spark
PySpark
ADF
SSIS
Data Warehousing

Who are we?

 

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long term commitments with an aim of bringing a product mindset into services.

 

What we are looking for

 

We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.

 

What you’ll be doing

 

You’ll be working on the data architecture, ETL pipelines, and a cloud migration strategy. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.

You will be responsible for all aspects of development – from understanding requirements and writing stories, through analyzing the technical approach and writing test cases where possible, to development, deployment, and fixes. You will own the entire stack and take complete ownership of the solution. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
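Purely as an illustration of the working style described above (and not part of the role description), here is a minimal sketch of test-driven PySpark development, assuming pytest as the test runner; the function dedupe_latest and the sample column names are hypothetical.

    # Hypothetical example of the TDD style described above: a small, pure
    # PySpark transformation plus a pytest test that pins down its behaviour.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    def dedupe_latest(df: DataFrame, key: str, ts: str) -> DataFrame:
        """Keep only the most recent row per key, ordered by the given timestamp column."""
        w = Window.partitionBy(key).orderBy(F.col(ts).desc())
        return (df.withColumn("_rn", F.row_number().over(w))
                  .filter(F.col("_rn") == 1)
                  .drop("_rn"))

    def test_dedupe_latest_keeps_newest_row_per_key():
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        rows = [("c1", "2024-01-01", 10), ("c1", "2024-02-01", 20), ("c2", "2024-01-15", 5)]
        df = spark.createDataFrame(rows, ["customer_id", "updated_at", "amount"])
        result = {r["customer_id"]: r["amount"]
                  for r in dedupe_latest(df, "customer_id", "updated_at").collect()}
        assert result == {"c1": 20, "c2": 5}

Writing the test first pins down the expected behaviour before the transformation exists, which is the essence of the test-driven, small-release style described above.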

 

Skills you need in order to succeed in this role

Most Important: Integrity of character, diligence, and the commitment to do your best

Must-Have: SQL, Databricks, (Scala / PySpark), Azure Data Factory, Test-Driven Development

Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing, Azure Functions, Azure App Insights

 

Self-Learner: You must be extremely hands-on and obsessive about delivering clean code

 

  • Sense of Ownership: Do whatever it takes to meet development timelines
  • Wide range of experience including on-prem and cloud technologies.
  • Can analyze a problem and recommend solutions.
  • Clear communication with the team and clients
  • Experience in creating end-to-end data pipelines
  • Experience in Azure Data Factory (ADF): creating multiple pipelines and activities in Azure for full and incremental data loads into Azure Data Lake Store and Azure SQL DW (a minimal sketch of the incremental-load pattern follows this list)
  • Working experience in Databricks
  • Strong in BI/DW/Data lake Architecture, design, and ETL
  • Strong in Requirement Analysis, Data Analysis, and Data Modeling capabilities
  • Experience in object-oriented programming, data structures, algorithms, and software engineering
  • Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment.
  • Interest in mastering technologies like relational DBMSs, TDD, CI tools such as Azure DevOps, complexity analysis, and performance
  • Working knowledge of server configuration/deployment
  • Experience using source control and bug-tracking systems, writing user stories, and technical documentation

  • Expertise in creating tables, procedures, functions, triggers, indexes, views, joins, and optimization of complex queries
  • Experience with database versioning, backups, and restores
  • Expertise in data security
  • Ability to perform database query performance tuning
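For illustration only: a minimal sketch of the high-water-mark pattern behind the full and incremental loads mentioned above. In ADF this is typically wired up with Lookup and Copy activities; the PySpark version below assumes hypothetical table and column names (analytics.orders, raw.orders, modified_at).

    # Hypothetical high-water-mark (incremental load) pattern, sketched in
    # PySpark on Databricks. Table and column names are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    TARGET = "analytics.orders"   # assumed curated table feeding the warehouse
    SOURCE = "raw.orders"         # assumed landing table in the data lake

    # 1. Find how far the target has already been loaded (the high-water mark).
    last_loaded = (spark.table(TARGET)
                        .agg(F.max("modified_at").alias("wm"))
                        .collect()[0]["wm"])

    # 2. Pull only new or changed rows; a full load is the same pipeline without the filter.
    increment = spark.table(SOURCE)
    if last_loaded is not None:
        increment = increment.filter(F.col("modified_at") > F.lit(last_loaded))

    # 3. Append the increment to the target.
    increment.write.mode("append").saveAsTable(TARGET)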

About Incubyte

Founded: 2020
Size: 20-100
Stage: Bootstrapped
About

Who we are

We are Software Craftspeople. We are proud of the way we work and the code we write. We embrace and are evangelists of eXtreme Programming practices. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus own quality. And most importantly, we never stop learning!


We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. We work with large established institutions to help them create internal applications to automate manual operations and achieve scale.

We design software, and design the team as well as the organizational strategy required to successfully release robust and scalable products. Incubyte strives to find people who are passionate about coding, learning and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments with an aim of bringing a product mindset into services. More on our website: https://incubyte.co

 

Join our team! We’re always looking for like-minded people!

Connect with the team
Rushali Parikh
Arohi Parikh
Karishma Shah
Lifi Lawrance
Gouthami Vallabhaneni
Shilpi Gupta
Pooja Karnani

Similar jobs

Pune, Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+9 more
  • 5+ years of professional experience in developing data pipelines for large-scale, complex datasets from a variety of data sources.
  • Data Engineering expertise with strong experience working with Big data technologies such as Hadoop, Hive, Spark, Scala, Python etc.
  • Experience working with cloud-based data technologies such as Azure Data Lake, Azure Data Factory, Azure Databricks highly desirable.
  • Knowledge and experience working with database systems such as Cassandra, HBase, Cosmos etc.
  • Moderate coding skills. SQL or similar required. C# or other languages strongly preferred.
  • Proven track record of designing and delivering large-scale, high quality systems and software products.
  • Outstanding communication and collaboration skills. You can learn from and teach others.
  • Strong drive for results. You have a proven record of shepherding experiments to create successful shipping products/services.
  • Experience with prediction in adversarial (energy) environments highly desirable.
  • A Bachelor’s or Master’s degree in Computer Science or Engineering with coursework in Statistics, Data Science, Experimentation Design, and Machine Learning highly desirable.
Education:
  • Engineering graduate or higher from Tier I or Tier II colleges
Agency job
via Nu-Pie by Sanjay Biswakarma
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
+14 more
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Bigdata components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Bigdata components (specific to the Data Engineering role; a minimal streaming sketch follows this section)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, Map Reduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Bigdata components (specific to the Platform Engineering role)
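For context only (not an additional requirement): a minimal PySpark Structured Streaming sketch of the Kafka and Spark Streaming skills listed under Data Engineering above; the broker address, topic name, and output paths are placeholders.

    # Hypothetical sketch: windowed counts over a Kafka topic with PySpark
    # Structured Streaming (requires the spark-sql-kafka package on the cluster).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    events = (spark.readStream
                   .format("kafka")
                   .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
                   .option("subscribe", "orders")                      # placeholder topic
                   .load()
                   .select(F.col("value").cast("string").alias("payload"),
                           F.col("timestamp")))

    # Count events per 5-minute window; the watermark bounds late data so state can be dropped.
    counts = (events
              .withWatermark("timestamp", "10 minutes")
              .groupBy(F.window("timestamp", "5 minutes"))
              .count())

    # Write the windowed counts to Parquet with a checkpoint for recovery.
    query = (counts.writeStream
                   .outputMode("append")
                   .format("parquet")
                   .option("path", "/data/out/order_counts")
                   .option("checkpointLocation", "/data/checkpoints/order_counts")
                   .start())
    query.awaitTermination()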
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Posted by Ranjana Guru
Remote only
8 - 15 yrs
₹1L - ₹20L / yr
Data Analytics
Datawarehousing
Data architecture
SAP HANA

Required Skills:

  • Proven work experience as an Enterprise / Data / Analytics Architect - Data Platform in HANA XSA, XS, Data Intelligence and SDI
  • Can work on new and existing architecture decision in HANA XSA, XS, Data Intelligence and SDI
  • Well versed with data architecture principles, software / web application design, API design, UI / UX capabilities, XSA / Cloud foundry architecture
  • In-depth understanding of database structure (HANA in-memory) principles.
  • In-depth understanding of ETL solutions and data integration strategy.
  • Excellent knowledge of Software and Application design, API, XSA, and microservices concepts

 

Roles & Responsibilities:

  • Advise on and ensure compliance with the defined data architecture principles.
  • Identify new technology updates and development tools, including new releases, upgrades, and patches, as required.
  • Analyze technical risks and advise on risk-mitigation strategy.
  • Advise on and ensure compliance with existing and newly developed data and reporting standards, including naming conventions.

 

The time window is ideally AEST (8 am till 5 pm) which means starting at 3:30 am IST. We understand it can be very early for an SME supporting from India. Hence, we can consider the candidates who can support from at least 7 am IST (earlier is possible).

Bengaluru (Bangalore)
1 - 4 yrs
₹7L - ₹12L / yr
SQL Server Integration Services (SSIS)
SQL
ETL
Informatica
Data Warehouse (DWH)
+4 more

About Company:

Working with a multitude of clients populating the FTSE and Fortune 500s, Audit Partnership is a people focused organization with a strong belief in our employees. We hire the best people to provide the best services to our clients.

APL offers profit recovery services to organizations of all sizes across a number of sectors. APL was born out of a desire to offer an alternative to the stagnant service provision on offer in the profit recovery industry.

Every year we recover millions of pounds for our clients and also work closely with them, sharing our audit findings to minimize future losses. Our dedicated and highly experienced audit teams utilize progressive and dynamic financial service solutions and industry-leading technology to achieve maximum success.

We provide dynamic work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies within financial services.

Headquartered in the UK, we have expanded from a small team in 2002 to a market leading organization serving clients across the globe while keeping our clients at the heart of all decisions we make.


Job description:

We are looking for a high-potential, enthusiastic SQL Data Engineer with a strong desire to build a career in data analysis, database design and application solutions. Reporting directly to our UK based Technology team, you will provide support to our global operation in the delivery of data analysis, conversion, and application development to our core audit functions.

Duties will include assisting with data loads, using T-SQL to analyse data, front-end code changes, data housekeeping, data administration, and supporting the Data Services team as a whole. Your contribution will grow in line with your experience and skills, becoming increasingly involved in the core service functions and client delivery. You will be a self-starter with a deep commitment to the highest standards of quality and customer service. We are offering a fantastic career opportunity, working for a leading international financial services organisation, serving the world’s largest organisations.

 

What we are looking for:

  • 1-2 years of previous experience in a similar role
  • Data analysis and conversion skills using Microsoft SQL Server is essential
  • An understanding of relational database design and build
  • Schema design, normalising data, indexing, query performance analysis
  • Ability to analyse complex data to identify patterns and detect anomalies
  • Assisting with ETL design and implementation projects
  • Knowledge or experience in one or more of the key technologies below would be preferable:
    • Microsoft SQL Server (SQL Server Management Studio, stored procedure writing, etc.)
    • T-SQL
    • Programming languages (C#, VB, Python, etc.)
    • Use of Python to manipulate and import data (a minimal sketch follows this list)
    • Experience of ETL/automation advantageous but not essential (SSIS/Prefect/Azure)
  • A self-starter who can drive projects with minimal guidance
  • Meeting stakeholders to agree system requirements
  • Someone who is enthusiastic and eager to learn
  • Very good command of English and excellent communication skills
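Purely as an illustration of the data-load and Python points above, a minimal sketch assuming pandas, SQLAlchemy, and an ODBC connection to SQL Server; the connection string, file, and table names are hypothetical.

    # Hypothetical sketch of a small data load: clean a CSV with pandas and
    # append it into a SQL Server staging table via SQLAlchemy.
    import pandas as pd
    from sqlalchemy import create_engine

    # Assumed connection details; in practice these come from configuration.
    engine = create_engine(
        "mssql+pyodbc://user:password@audit-sql/AuditDB?driver=ODBC+Driver+17+for+SQL+Server"
    )

    df = pd.read_csv("supplier_invoices.csv", parse_dates=["invoice_date"])

    # Basic housekeeping before the load: drop exact duplicates, normalise text.
    df = df.drop_duplicates()
    df["supplier_name"] = df["supplier_name"].str.strip().str.upper()

    # Append into a staging table; analysis then continues in T-SQL.
    df.to_sql("stg_supplier_invoices", engine, if_exists="append", index=False)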

 

Perks & Benefits:

  • A fantastic work life balance
  • Competitive compensation and benefits
  • Exposure to working with Fortune 500 organizations
  • Expert guidance and nurture from global leaders
  • Opportunities for career and personal advancement with our continued global growth strategy
  • Industry leading training programs
  • A working environment that is exciting, fun and engaging

 

Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience in any Cloud Platform
  • Versed in Spark, Scala/Python, SQL
  • Microsoft Azure experience
  • Experience working on a Real-Time Data Processing Pipeline
Bengaluru (Bangalore)
6 - 8 yrs
₹8L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
6-8 years of experience as a data engineer
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
AWS Lambda
SQL
Kafka
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
  • Develop complex queries, pipelines and software programs to solve analytics and data mining problems
  • Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and to deliver predictive and smart data solutions
  • Prototype new applications or data systems
  • Lead data investigations to troubleshoot data issues that arise along the data pipelines
  • Collaborate with different product owners to incorporate data science solutions
  • Maintain and improve the data science platform
Must Have
  • BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
  • Strong fundamentals: data structures, algorithms, databases
  • 5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehousing
  • Fluency with Python
  • Experience developing web services using REST approaches
  • Proficiency with SQL/Unix/Shell
  • Experience in DevOps (CI/CD, Docker, Kubernetes)
  • Self-driven, challenge-loving, detail-oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
  • Industry experience with big data processing technologies such as Spark and Kafka
  • Experience with machine learning algorithms and/or R a plus
  • Experience in Java/Scala a plus
  • Experience with any MPP analytics engines like Vertica
  • Experience with data integration tools like Pentaho/SAP Analytics Cloud
Pune
4 - 12 yrs
₹6L - ₹15L / yr
Data engineering
Data Engineer
ETL
Spark
Apache Kafka
+5 more
We are looking for a smart candidate with:
  • Strong Python Coding skills and OOP skills
  • Should have worked on Big Data product Architecture
  • Should have worked with any one of the SQL-based databases like MySQL or PostgreSQL, and any one of the NoSQL-based databases such as Cassandra, Elasticsearch, etc.
  • Hands on experience on frameworks like Spark RDD, DataFrame, Dataset
  • Experience on development of ETL for data product
  • Candidate should have working knowledge on performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs
  • Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro (a minimal sketch follows this list)
  • Good to have working knowledge with any one of the Analytical Databases like Druid, MongoDB, Apache Hive etc.
  • Experience to handle real-time data feeds (good to have working knowledge on Apache Kafka or similar tool)
Key Skills:
  • Python and Scala (Optional), Spark / PySpark, Parallel programming
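A minimal, illustrative sketch of the file-format and Spark-tuning points above (CSV in, partitioned Parquet out); the paths, column names, and partition count are assumptions rather than project specifics.

    # Hypothetical sketch: read CSV, repartition to control parallelism,
    # write columnar Parquet partitioned by day so downstream queries prune files.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("format-tuning-sketch")
             .config("spark.sql.shuffle.partitions", "64")   # tune to data volume
             .getOrCreate())

    raw = (spark.read
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/landing/events/*.csv"))          # assumed landing path

    (raw.withColumn("event_date", F.to_date("event_ts"))     # assumed timestamp column
        .repartition(64, "event_date")
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("/data/curated/events"))

Partitioning the output by a date column is one common way to keep both write parallelism and read-time file pruning under control; the right partition key and count depend on the actual data volumes.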
Bengaluru (Bangalore)
3 - 10 yrs
₹6L - ₹12L / yr
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
TensorFlow
+3 more
Responsibilities:
  • Define the short-term tactics and long-term technology strategy.
  • Communicate that technical vision to technical and non-technical partners, customers and investors.
  • Lead the development of AI/ML related products as it matures into lean, high performing agile teams.
  • Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources.
  • Work collaboratively with the business, partners, and customers to consistently deliver business value.
  • Own the vision and execution of developing and integrating AI & machine learning into all aspects of the platform.
  • Drive innovation through the use of technology and unique ways of applying it to business problems.
Experience and Qualifications:
  • Masters or Ph.D. in AI, computer science, ML, electrical engineering or related fields (statistics, applied math, computational neuroscience)
  • Relevant experience leading & building teams establishing technical direction
  • A well-developed portfolio of past software development, composed of some mixture of professional work, open source contributions, and personal projects
  • Experience in leading and developing remote and distributed teams
  • Think strategically and apply that through to innovative solutions
  • Experience with cloud infrastructure
  • Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value
  • Experience in agents architecture, deep learning, neural networks, computer vision and NLP
  • Experience with distributed computational frameworks (YARN, Spark, Hadoop)
  • Proficiency in Python, C++. Familiarity with DL frameworks (e.g. neon, TensorFlow, Caffe, etc.)
Personal Attributes:
  • Excellent communication skills
  • Strong fit with the culture
  • Hands-on approach, self-motivated with a strong work ethic
  • Ability to learn quickly (technology, business models, target industries)
  • Creative and inspired
Superpowers we love:
  • Entrepreneurial spirit and a vibrant personality
  • Experience with lean startup build-measure-learn cycle
  • Vision for AI
  • Extensive understanding of why things are done the way they are done in agile development
  • A passion for adding business value
Note: Selected candidate will be offered ESOPs too.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3 - 10 Years
Posted by Amit Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹6L - ₹18L / yr
Spark
MapReduce
Hadoop
ETL
We are looking at a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer.
  • Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
  • Experience in architecting data ingestion, storage, consumption models.
  • Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
  • Knowledge of various ETL tools & techniques