Data Engineer

at Incubyte

Posted by Lifi Lawrance
Remote only
2 - 3 yrs
₹8L - ₹20L / yr
Full time
Skills
Data engineering
Spark
SQL
Windows Azure
MySQL
Python
ETL
ADF
Azure

Who are we?

 

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments with the aim of bringing a product mindset into services.

 

What we are looking for

 

We’re looking to hire software craftspeople: people who are proud of the way they work and the code they write; people who believe in and are evangelists of Extreme Programming principles; high-quality, motivated, and passionate people who make great teams. We strongly believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only with programming languages but also with infrastructure technologies in the cloud.

 

What you’ll be doing

 

First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
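The rhythm described above (write the test first, then just enough code to make it pass, producing the same predictable result every run) can be sketched in a few lines of plain Python. The function name and the deduplication rule here are hypothetical, chosen only to illustrate the loop:

```python
def deduplicate_events(events):
    """Keep the latest record per event id. Deterministic: the same
    input always produces the same output."""
    latest = {}
    for event in events:
        key = event["id"]
        if key not in latest or event["ts"] > latest[key]["ts"]:
            latest[key] = event
    return sorted(latest.values(), key=lambda e: e["id"])


# The test was written first; the implementation above was then
# written to make it pass.
def test_keeps_latest_record_per_id():
    events = [
        {"id": 1, "ts": 10, "value": "old"},
        {"id": 1, "ts": 20, "value": "new"},
        {"id": 2, "ts": 5, "value": "only"},
    ]
    result = deduplicate_events(events)
    assert result == [
        {"id": 1, "ts": 20, "value": "new"},
        {"id": 2, "ts": 5, "value": "only"},
    ]


test_keeps_latest_record_per_id()
```

In practice the same red-green cycle applies whether the transformation runs in plain Python, PySpark, or SQL.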

 

You will work in a product team, building products and rapidly rolling out new features and fixes.

 

You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!

 

Skills you need in order to succeed in this role

Most Important: Integrity of character, diligence and the commitment to do your best

Must Have: SQL, Databricks (Scala / PySpark), Azure Data Factory, Test-Driven Development

Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing

 

Self-Learner: You must be extremely hands-on and obsessive about delivering clean code

 

  • Sense of Ownership: Do whatever it takes to meet development timelines
  • Experience in creating end-to-end data pipelines
  • Experience in Azure Data Factory (ADF): creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
  • Working experience in Databricks
  • Strong in BI/DW/Data Lake architecture, design, and ETL
  • Strong in Requirement Analysis, Data Analysis, and Data Modeling
  • Experience in object-oriented programming, data structures, algorithms, and software engineering
  • Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment
  • Interest in mastering technologies like relational DBMSs, TDD, CI tools such as Azure DevOps, complexity analysis, and performance tuning
  • Working knowledge of server configuration / deployment
  • Experience using source control and bug tracking systems, and writing user stories and technical documentation
  • Expertise in creating tables, procedures, functions, triggers, indexes, views, joins, and optimization of complex queries
  • Experience with database versioning, backups, and restores
  • Expertise in data security
  • Ability to performance-tune database queries
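One of the bullets above mentions full and incremental data loads; a common pattern behind that is a watermark-driven load, sketched here in plain Python with sqlite3 standing in for the source and target stores. All table and column names are illustrative, not taken from any actual ADF pipeline:

```python
import sqlite3

# Watermark-based incremental load: only rows newer than the last
# successfully loaded timestamp are copied. sqlite3 stands in for
# the source database and the target store; all names are invented.

def incremental_load(conn, watermark):
    """Copy source rows with modified_at > watermark into target;
    return the new watermark."""
    rows = conn.execute(
        "SELECT id, payload, modified_at FROM source WHERE modified_at > ?",
        (watermark,),
    ).fetchall()
    conn.executemany(
        "INSERT OR REPLACE INTO target (id, payload, modified_at) VALUES (?, ?, ?)",
        rows,
    )
    return max((r[2] for r in rows), default=watermark)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER, payload TEXT, modified_at INTEGER)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, payload TEXT, modified_at INTEGER)")
conn.executemany("INSERT INTO source VALUES (?, ?, ?)",
                 [(1, "a", 100), (2, "b", 200)])

wm = incremental_load(conn, 0)            # full load: both rows copied
conn.execute("INSERT INTO source VALUES (3, 'c', 300)")
wm = incremental_load(conn, wm)           # incremental: only the new row
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 3
```

In ADF the same idea is usually expressed as a lookup of the stored watermark followed by a filtered copy activity.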

About Incubyte

Founded: 2020
Size: 20-100
Stage: Bootstrapped

Who we are

We are Software Craftspeople. We are proud of the way we work and the code we write. We embrace and are evangelists of eXtreme Programming practices. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus own quality. And most importantly, we never stop learning!


We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. We work with large established institutions to help them create internal applications to automate manual operations and achieve scale.

We design software, and design the team as well as the organizational strategy required to successfully release robust and scalable products. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments with the aim of bringing a product mindset into services. More on our website: https://incubyte.co

 

Join our team! We’re always looking for like-minded people!

Connect with the team

Rushali Parikh, Arohi Parikh, Karishma Shah, Lifi Lawrance, Gouthami Vallabhaneni, Shilpi Gupta, Pooja Karnani

Similar jobs

Broadcast Media Production and Distribution Company
Agency job
via Qrata by Prajakta Kulkarni
Mumbai
1 - 4 yrs
₹10L - ₹12L / yr
Python
Object Oriented Programming (OOPs)
ETL
PowerBI
Tableau

Functional Competencies:

  • Database connections from Python using standard packages
  • Data pulls from multiple sources, and automation of the same
  • Summarising respondent-level data in a format that can answer channel queries
  • Creating final presentations for internal and external stakeholders

Other Competency:

  • Media experience preferred
  • Knowledge of Data science/Data analytics
  • Expert in Python coding
  • Knowledge of Microsoft Excel/PowerPoint
  • Understanding of a visualisation tool like Tableau or Power BI
Client is a Machine Learning company based in New Delhi.
Agency job
via Jobdost by Sathish Kumar
NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹10L - ₹25L / yr
Data Science
R Programming
Python
Machine Learning (ML)
Entity Framework

Job Responsibilities

  • Design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tuning using test results
  • Train and retrain systems when necessary

 

Requirements for the Job

 

  1. Bachelor’s/Master's/PhD in Computer Science, Mathematics, Statistics or an equivalent field from a tier-one college, and a minimum of 2 years of overall experience
  2. Minimum 1 year of experience working as a Data Scientist deploying ML at scale in production
  3. Experience in machine learning techniques (e.g. NLP, Computer Vision, BERT, LSTM) and frameworks (e.g. TensorFlow, PyTorch, Scikit-learn)
  4. Working knowledge of deploying Python systems (using Flask, TensorFlow Serving)
  5. Preferred prior experience: Natural Language Processing (NLP) using LSTM and BERT; chatbots or dialogue systems; machine translation; text comprehension; text summarization
  6. Computer Vision: Deep Neural Networks/CNNs for object detection and image classification, transfer learning pipelines, and object detection/instance segmentation (Mask R-CNN, YOLO, SSD)
Diggibyte Technologies
Agency job
Bengaluru (Bangalore)
2 - 3 yrs
₹10L - ₹15L / yr
Scala
Spark
Python
Microsoft Windows Azure
SQL Azure

Hiring For Data Engineer - Bangalore (Novel Tech Park)

Salary: up to ₹15 LPA

Experience: 3-5 years

  • We are looking for experienced (3-5 years) Data Engineers to join our team in Bangalore.
  • Someone who can help clients build scalable, reliable, and secure data analytics solutions.

 

Technologies you will get to work with:

 

1. Azure Databricks

2. Azure Data Factory

3. Azure DevOps

4. Spark with Python & Scala, and Airflow scheduling

 

What you will do:

 

* Build large-scale batch and real-time data pipelines with data processing frameworks like Spark and Scala on the Azure platform.

* Collaborate with other software engineers, ML engineers, and stakeholders, taking learning and leadership opportunities that arise every single day.

* Use best practices in continuous integration and delivery.

* Share technical knowledge with other members of the Data Engineering team.

* Work in multi-functional agile teams to continuously experiment, iterate, and deliver on new product objectives.

* Work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.

 

Job Functions: Information Technology

Employment Type: Full-time

Who can apply: Seniority level - Mid / Entry level

Snapblocs
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 10 yrs
₹20L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
  • You should hold a B.Tech/M.Tech degree.
  • You should have 5 to 10 years of experience, with a minimum of 3 years working in a data-driven company/platform.
  • Competency in core Java is a must.
  • You should have worked with distributed data processing frameworks like Apache Spark, Apache Flink, or Hadoop.
  • You should be a team player with an open mind, approaching problems so as to solve them in the right manner with the right set of tools and technologies by working with the team.
  • You should have knowledge of frameworks and distributed systems, and be good at algorithms, data structures, and design patterns.
  • You should have an in-depth understanding of big data technologies and NoSQL databases (Kafka, HBase, Spark, Cassandra, MongoDB, etc.).
  • Work experience with the AWS cloud platform, Spring Boot, and API development will be a plus.
  • You should have exceptional problem-solving and analytical abilities, and organisation skills with an eye for detail.
Posted by Sachin Bhatevara
Pune
2 - 7 yrs
₹4L - ₹15L / yr
Python
MySQL
Athena
Data Visualization
Data Analytics
AdElement is an online advertising startup based in Pune. We do AI driven ad personalization for video and display ads. Audiences are targeted algorithmically across biddable sources of ad inventory through real time bidding. We are looking to grow our teams to meet the rapidly expanding market opportunity.

Job Description

  • Use statistical methods to analyze data and generate useful business reports and insights
  • Analyze publisher and demand-side data, provide actionable insights to the operations team to improve monetisation, and implement the strategies
  • Provide support for ad hoc data requests from the Operations teams and Management
  • Use 3rd-party APIs, web scraping, and CSV report processing to build dashboards in Google Data Studio
  • Provide support for analytics process monitoring and troubleshooting
  • Support in creating reports, dashboards, and models
  • Independently determine the appropriate approach for new assignments
Required Skills
  • Inquisitive and having great problem-solving skills
  • Ability to own projects and work independently once given a direction
  • Experience working directly with business users to build reports, dashboards, models and solving business questions with data
  • Tools expertise: relational databases (SQL is a must), along with Python
  • Familiarity with AWS Athena and Redshift is a plus
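As a rough illustration of the "SQL plus Python" combination the list above asks for, here is a minimal aggregation run from Python. sqlite3 stands in for a production relational database, and the table and column names are invented for the example:

```python
import sqlite3

# A small SQL aggregation driven from Python: total revenue per
# publisher over ad-impression-style rows. All names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (publisher TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO impressions VALUES (?, ?)",
    [("pub_a", 1.5), ("pub_a", 2.5), ("pub_b", 3.0)],
)

rows = conn.execute(
    "SELECT publisher, SUM(revenue) FROM impressions "
    "GROUP BY publisher ORDER BY publisher"
).fetchall()
print(rows)  # [('pub_a', 4.0), ('pub_b', 3.0)]
```

The same query shape carries over to Athena or Redshift; only the connection layer changes.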

Experience

  • 2-7 years

Education

  • UG - B.Tech/B.E.; PG - M.Tech/ MSc, Computer Science, Statistics, Maths, Data Science/ Data Analytics
Posted by Rajesh C
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
1. 2 to 4 years of experience
2. Hands-on experience using Python, SQL, Tableau
3. Data Analyst

About Amagi (www.amagi.com): Amagi is a market leader in cloud-based media technology services for channel creation, distribution, and ad monetization. Amagi’s cloud technology and managed services are used by TV networks, content owners, sports rights owners, and pay TV / OTT platforms to create 24x7 linear channels for OTT and broadcast and deliver them to end consumers. Amagi’s pioneering and market-leading cloud platform has won numerous accolades and is deployed in over 40 countries by 400+ TV networks. Customers of Amagi include A+E Networks, Comcast, Google, NBC Universal, Roku, Samsung, and Warner Media. This is a unique and transformative opportunity to participate in and grow a world-class technology company that changes the tenets of TV. Amagi is a private-equity-backed firm with investments from KKR (Emerald Media Fund), Premji Invest, and Mayfield. Amagi has offices in New York, Los Angeles, London, New Delhi, and Bangalore.

LinkedIn page: https://www.linkedin.com/company/amagicorporation

News: https://www.amagi.com/about/newsroom/amagi-clocks-120-yoy-quarterly-growth-as-channels-on-its-platform-grows-to-400/

Cofounder on YouTube: https://www.youtube.com/watch?v=EZ0nBT3ht0E
 

About Amagi & Growth


Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on Free-Ad-Supported TV and video services platforms. Amagi also offers 24x7 cloud managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization with deployments in over 40 countries. Amagi has offices in New York (Corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.

Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:

  • Annual revenue growth of 136%
  • 44% increase in customers
  • 50+ Free Ad-Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
  • 250+ channels added to its cloud platform, taking the overall tally to more than 500
  • Approximately 2 billion ad opportunities every month, supporting OTT ad insertion for 1000+ channels
  • 60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees plus contractors)
  • 5-10x growth in ad impressions among top customers
 
Over the last 4 years, Amagi has grown more than 400%. Amagi now has an aggressive growth plan over the next 3 years - to grow 10X in terms of Revenue. In terms of headcount, Amagi is looking to grow to more than 600 employees over the next 1 year. Amagi is building several key organizational processes to support the high growth journey and has gone digital in a big way.
 
NA
Agency job
via Talent folks by Rijooshri Saikia
Bengaluru (Bangalore)
15 - 25 yrs
₹10L - ₹15L / yr
Java
Python
Big Data

Job Description

  • Design, development and deployment of highly-available and fault-tolerant enterprise business software at scale.

  • Demonstrate tech expertise to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.

  • Execute large-scale projects - Provide technical leadership in architecting and building product solutions.

  • Collaborate across teams to deliver a result, from hardworking team members within your group, through smart technologists across lines of business.

  • Be a role model on acting with good judgment and responsibility, helping teams to commit and move forward.

  • Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences in opinion in a mature and fair way.

  • Raise the bar by improving standard methodologies, producing best-in-class efficient solutions, code, documentation, testing, and monitoring.

Qualifications

  • 15+ years of relevant engineering experience.

  • Proven record of building and productionizing highly reliable products at scale.

  • Experience with Java and Python

  • Experience with Big Data technologies is a plus.

  • Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision

  • Can collaborate well with several other engineering orgs to articulate requirements and system design

Additional Information

Professional Attributes:
• Team player!

• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.

• Excellent written and verbal communication skills, including the ability to write detailed technical documents.

• Passionate about helping teams grow by inspiring and mentoring engineers.



Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹35L / yr
Python
ETL
Big Data
Amazon Web Services (AWS)
pandas

What you’ll do

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python microservices for provisioning and managing cloud infrastructure.
  • Implement algorithms to analyse large data sets.
  • Draft design documents that translate requirements into code.
  • Deal with challenges associated with handling large volumes of data.
  • Assume responsibilities from technical design through technical client support.
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the engineering strategy.
  • Champion best development practices and provide mentorship.

What we’re looking for

  • Experience in Python 3.
  • Python libraries used for data (such as pandas, numpy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

About SteelEye Culture

  • Work from home until you are vaccinated against COVID-19
  • Top-of-the-line health insurance
  • Order discounted meals every day from a dedicated portal
  • Fair and simple salary structure
  • 30+ holidays in a year
  • Fresh fruits every day
  • Centrally located. 5 mins to the nearest metro station (MG Road)
  • Measured on output and not input
Posted by Priyanka Muralidharan
Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹19L - ₹24L / yr
SQL
Python
Tableau
Team Management
Statistical Analysis

Role Summary

We are looking for an analytically inclined, insights-driven Product Analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards to drive insights for product and business teams, from day-to-day decisions to long-term impact assessment, and for measuring the efficacy of different products and teams. The growing nature of the team will require you to be in touch with all of the teams at upGrad. Are you the "go-to" person everyone looks to for data? Then this role is for you.

 

Roles & Responsibilities

  • Lead and own the analysis of highly complex data sources, identifying trends and patterns in data and provide insights/recommendations based on analysis results
  • Build, maintain, own and communicate detailed reports to assist Marketing, Growth/Learning Experience and Other Business/Executive Teams
  • Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
  • Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.
  • Facilitate review sessions with management, business users and other team members
  • Design and create visualizations to present actionable insights related to data sets and business questions at hand
  • Develop intelligent models around channel performance, user profiling, and personalization

Skills Required

  • 4-6 years of hands-on experience with product-related analytics and reporting
  • Experience building dashboards in Tableau or other data visualization tools such as D3
  • Strong data, statistics, and analytical skills with a good grasp of SQL
  • Programming experience in Python is a must
  • Comfortable managing large data sets
  • Good Excel/data management skills
Posted by Ankita Kale
Pune
1 - 5 yrs
₹3L - ₹10L / yr
ETL
Hadoop
Apache Hive
Java
Spark
+2 more
  • Core Java: advanced level competency, should have worked on projects with core Java development.

 

  • Linux shell: advanced-level competency; work experience with Linux shell scripting, and knowledge of and experience with important shell commands

 

  • RDBMS, SQL: advanced-level competency; expertise in SQL query syntax, well versed with aggregations and joins

 

  • Data structures and problem solving: should have the ability to choose and use appropriate data structures

 

  • AWS cloud: good to have experience with the AWS serverless toolset along with AWS infrastructure

 

  • Data Engineering ecosystem : Good to have experience and knowledge of data engineering, ETL, data warehouse (any toolset)

 

  • Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets

 

  • Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries, MapReduce job implementation, and Spark job implementation. Spark implementation in Scala will be a plus point.

 

  • Airflow, Oozie, Sqoop, Zookeeper, Kafka: good to have knowledge of the purpose and workings of these technology toolsets. Working experience will be a plus point here.
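The MapReduce model mentioned above can be illustrated framework-free: map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group. This is word count, the classic teaching example; no Hadoop or Spark API is used, and real frameworks distribute these phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit (word, 1) for every word in a line.
def map_phase(line):
    return [(word, 1) for word in line.split()]

# Shuffle: group emitted values by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: aggregate each group of values.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["spark makes etl easy", "etl with spark"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
print(counts["spark"], counts["etl"])  # 2 2
```

The same pipeline expressed in Hive would be a `GROUP BY` query, and in Spark a `flatMap` followed by `reduceByKey`.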

 
