ETL Developer
Byteware Cloud PVT LTD

Posted by Ibrahim Ahmed
6 - 10 yrs
₹10L - ₹15L / yr
Remote only
Skills
Data Warehouse (DWH)
Informatica
ETL

Job Overview:

As a Lead ETL Developer for a very large client of Paradigm, you are in charge of the design and creation of data warehouse functions such as extraction, transformation, and loading of data, and are expected to have specialized working knowledge of cloud platforms, especially Snowflake. In this role, you’ll be part of Paradigm’s Digital Solutions group, where we are looking for someone with the technical expertise to build and maintain sustainable ETL solutions around data modeling and data profiling to support the needs and expectations identified by the client.

 

Delivery Responsibilities

  • Lead the technical planning, architecture, estimation, development, and testing of ETL solutions
  • Knowledge and experience in most of the following architectural styles: layered architectures, transactional applications, PaaS-based architectures, and SaaS-based applications; experience developing ETL-based cloud PaaS and SaaS solutions
  • Create data models that are aligned with clients’ requirements
  • Design, develop, and support ETL mappings; strong SQL skills with experience in developing ETL specifications
  • Create ELT pipelines, data model updates, and orchestration using dbt, Snowflake Streams/Tasks, and Astronomer, along with testing
  • Focus on ETL aspects including performance, scalability, reliability, monitoring, and other operational concerns of data warehouse solutions
  • Design reusable assets, components, standards, frameworks, and processes to support and facilitate end-to-end ETL solutions
  • Experience gathering requirements and defining the strategy for third-party data ingestion from sources such as SAP HANA and Oracle
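The Streams/Tasks orchestration mentioned above can be sketched as generated Snowflake DDL. This is a minimal illustration only: the table, stream, task, and warehouse names are hypothetical, and a real pipeline would execute these statements through a Snowflake session (or manage the equivalent models via dbt).

```python
# Minimal sketch of a Snowflake stream + task pair for incremental ELT.
# All object names are hypothetical; the generated SQL would be executed
# through a Snowflake connection (e.g. snowflake-connector-python).

def build_incremental_elt_ddl(source: str, target: str, warehouse: str) -> list[str]:
    """Return DDL for a stream on `source` and a task that loads
    captured changes into `target` every 15 minutes."""
    stream = f"{source}_stream"
    return [
        # A stream records inserts/updates/deletes on the source table.
        f"CREATE OR REPLACE STREAM {stream} ON TABLE {source};",
        # A task periodically consumes the stream, but only when it has data.
        f"""CREATE OR REPLACE TASK load_{target}
  WAREHOUSE = {warehouse}
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('{stream.upper()}')
AS
  INSERT INTO {target} SELECT * FROM {stream};""",
        # Tasks are created suspended; resume to start the schedule.
        f"ALTER TASK load_{target} RESUME;",
    ]

if __name__ == "__main__":
    for stmt in build_incremental_elt_ddl("raw_orders", "orders", "etl_wh"):
        print(stmt)
```

Generating the DDL as strings keeps the pipeline definition reviewable and testable before anything touches the warehouse.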

Required Qualifications

  • Expert hands-on experience in the following:
  • Technologies such as Python, Teradata, MySQL, SQL Server, RDBMS, Apache Airflow, AWS S3, AWS Data Lake, Unix scripting, AWS CloudFormation, DevOps, GitHub
  • Demonstrate best practices in Airflow orchestration, such as creating DAGs, with hands-on knowledge of Python libraries including Pandas, NumPy, Boto3, DataFrames, and connectors to different databases and APIs
  • Data modelling, master and operational data stores, data ingestion and distribution patterns, ETL/ELT technologies, relational and non-relational DBs, DB optimization patterns
  • Develop virtual warehouses using Snowflake for the data-sharing needs of both internal and external customers
  • Create Snowflake data-sharing capabilities that will create a marketplace for sharing files, datasets, and other types of data at real-time and batch frequencies
  • At least 8 years of ETL/data development experience
  • Working knowledge of fact/dimensional data models and AWS Cloud
  • Strong experience in creating technical design documents, source-to-target mappings, and test cases/results
  • Understand security requirements and apply RBAC, PBAC, and ABAC policies to the data
  • Build data pipelines in Snowflake leveraging Data Lake (S3/Blob), Stages, Streams, Tasks, Snowpipe, Time Travel, and other critical capabilities within Snowflake
  • Ability to collaborate, influence, and communicate across multiple stakeholders and levels of leadership, speaking at the appropriate level of detail to both business executives and technology teams
  • Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
  • High degree of organization, individual initiative, results and solution orientation, and personal accountability and resiliency
  • Demonstrated learning agility and ability to make decisions quickly and with the highest level of integrity
  • Demonstrable experience of driving meaningful improvements in business value through data management and strategy
  • Must have a positive, collaborative leadership style with a colleague- and customer-first attitude
  • Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts
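Several of the items above (Airflow DAGs, Pandas, Boto3) come together in small, testable task callables. Below is a hedged sketch of one such transform step, assuming hypothetical column names; in a real DAG this function would run inside a PythonOperator, with the raw bytes fetched from S3 via boto3.

```python
import io

import pandas as pd

# Hypothetical transform step for an Airflow PythonOperator callable:
# parse raw CSV bytes (e.g. fetched from S3 with boto3), clean types,
# and aggregate to the grain the warehouse table expects.

def transform_orders(raw_csv: bytes) -> pd.DataFrame:
    """Parse raw order lines and return total revenue per customer."""
    df = pd.read_csv(io.BytesIO(raw_csv))
    # Coerce bad values to NaN instead of failing the whole load.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["customer_id", "amount"])  # drop unusable rows
    return (
        df.groupby("customer_id", as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "total_amount"})
    )

if __name__ == "__main__":
    sample = b"customer_id,amount\nc1,10\nc1,5\nc2,7\nc3,\n"
    print(transform_orders(sample))
```

Keeping the transform a pure function of its input is what makes the DAG easy to unit-test outside Airflow.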

 

Preferred Qualifications:

  • Experience with Azure Cloud, DevOps implementation
  • Ability to work in a collaborative team, mentoring and training junior team members
  • Position requires expert knowledge across multiple platforms, data ingestion patterns, processes, data/domain models, and architectures.
  • Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
  • Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
  • Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
  • Experience with Agile Methodology / Scaled Agile Framework (SAFe).
  • Outstanding oral and written communication skills, including formal presentations for all levels of management, combined with strong collaboration and influencing skills.

 

Preferred Education/Skills:

  • Master’s degree preferred
  • Bachelor’s degree in Computer Science with a minimum of 8 years of relevant experience, or equivalent


Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About Byteware Cloud PVT LTD

Founded: 1998
Size: 100-1000
Stage: Profitable
Company social profiles: LinkedIn
Similar jobs

Leading Sales Platform
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipelines
  • Early alerting and escalation process to quickly identify and remedy quality issues before something ever goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mind-set across the company

Required skills and experience:
  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic and escalations with Slack/email/dashboard integrations, etc.
  • Executive-level communication, prioritization, and team leadership skills
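The automated data-validation idea above can be sketched as a small rule-based checker. The rule names and columns here are illustrative only; in a production pipeline the same checks would typically run over Spark DataFrames rather than plain Python dicts.

```python
# Minimal sketch of a rule-based data-quality checker: each rule maps a
# column to a predicate; rows failing any rule are reported so they can
# be escalated before the data ever goes 'live'.

def validate_rows(rows, rules):
    """Return a list of (row_index, column, rule_name) violations."""
    violations = []
    for i, row in enumerate(rows):
        for column, (rule_name, predicate) in rules.items():
            if not predicate(row.get(column)):
                violations.append((i, column, rule_name))
    return violations

# Hypothetical rules for an orders feed.
RULES = {
    "user_id": ("not_null", lambda v: v is not None),
    "price":   ("non_negative", lambda v: v is not None and v >= 0),
}

if __name__ == "__main__":
    rows = [
        {"user_id": "u1", "price": 10.0},
        {"user_id": None, "price": 5.0},
        {"user_id": "u3", "price": -1.0},
    ]
    print(validate_rows(rows, RULES))
    # -> [(1, 'user_id', 'not_null'), (2, 'price', 'non_negative')]
```

The violation tuples are what an alerting layer (Slack/email/dashboard) would consume for early escalation.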
Gurugram, Bengaluru (Bangalore)
2 - 9 yrs
Best in industry
Python
SQL
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Greetings!!

We are looking for a technically driven "Full-Stack Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. 

Qualifications
• Bachelor's degree in computer science or a related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is very much expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Confirmed ability to clearly communicate complex solutions
• Understanding of Information Security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Concinnity Media Technologies
at Concinnity Media Technologies
Anirban Biswas
Posted by Anirban Biswas
Pune
6 - 10 yrs
₹18L - ₹27L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
  • Develop, train, and optimize machine learning models using Python, ML algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and other relevant technologies.
  • Implement MLOps best practices, including model deployment, monitoring, and versioning.
  • Utilize Vertex AI, MLFlow, KubeFlow, TFX, and other relevant MLOps tools and frameworks to streamline the machine learning lifecycle.
  • Collaborate with cross-functional teams to design and implement CI/CD pipelines for continuous integration and deployment using tools such as GitHub Actions, TeamCity, and similar platforms.
  • Conduct research and stay up-to-date with the latest advancements in machine learning, deep learning, and MLOps technologies.
  • Provide guidance and support to data scientists and software engineers on best practices for machine learning development and deployment.
  • Assist in developing tooling strategies by evaluating various options, vendors, and product roadmaps to enhance the efficiency and effectiveness of our AI and data science initiatives.


LiftOff Software India
at LiftOff Software India
Hameeda Haider
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect, design, and configure the data ingestion pipeline for data received from third-party vendors
  • Data loading should be configured with the ease/flexibility to add new data sources and to refresh previously loaded data
  • Design and implement a consumer graph that provides an efficient means to query the data via email, phone, and address information (using any one of the fields or a combination)
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
  • Design/review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage


Please note that this is a consultant role.

Candidates who are okay with freelancing/part-time can apply.

Cambridge Technology
at Cambridge Technology
Muthyala Shirish Kumar
Posted by Muthyala Shirish Kumar
Hyderabad
2 - 15 yrs
₹10L - ₹40L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.


No Of Positions: 1


Skills required: 

  • The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
  • Strong problem-solving skills with an emphasis on achieving proof-of-concept
  • Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
  • Knowledge of machine learning and deep learning fundamentals
  • Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.)
  • Experience writing and debugging code in an IDE
  • Experience using managed web services (e.g., AWS, GCP, etc.)
  • Strong analytical and communication skills
  • Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
  • Ability to learn new tools from documentation and internet resources.

Roles and responsibilities :

  • You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
  • Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
  • Identify relevant data sources for data wrangling and EDA
  • Identify model architectures to use for client business needs.
  • Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
  • Present findings to teammates and key stakeholders in a clear and repeatable manner.

Experience :

2 - 14 Yrs

NoBroker
at NoBroker
noor aqsa
Posted by noor aqsa
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
Java
Spark
PySpark
Data engineering
Big Data
  • Build, set up, and maintain some of the best data pipelines and MPP frameworks for our datasets
  • Translate complex business requirements into scalable technical solutions meeting data design standards; strong understanding of analytics needs and proactiveness to build generic solutions to improve efficiency
  • Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
  • Collaborate with multiple cross-functional teams and work
Bengaluru (Bangalore), Hyderabad, Chennai
4 - 7 yrs
₹12L - ₹15L / yr
Informatica
Data integration
Salesforce
JIRA
Oracle
Knowledge & Experience:
  • 4-7 years of industry experience in IT or consulting organizations
  • 3+ years of experience defining and delivering Informatica Cloud Data Integration & Application Integration enterprise applications in a lead developer role
  • Must have working knowledge of integrating with Salesforce, Oracle DB, and JIRA Cloud
  • Must have working scripting knowledge (Windows or Node.js)


Soft Skills 

  • Superb interpersonal skills, both written and verbal, in order to effectively develop materials that are appropriate for a variety of audiences across business and technical teams
  • Strong presentation skills; successfully present and defend a point of view to business and IT audiences
  • Excellent analysis skills and ability to rapidly learn and take advantage of new concepts, business models, and technologies
NCR (Delhi | Gurgaon | Noida)
2 - 12 yrs
₹25L - ₹40L / yr
Data governance
DevOps
Data integration
Data engineering
Python
Data Platforms (Data Integration) is responsible for envisioning, building and operating the Bank’s data integration platforms. The successful candidate will work out of Gurgaon as a part of a high performing team who is distributed across our two development centers – Copenhagen and Gurugram. The individual must be driven, passionate about technology and display a level of customer service that is second to none.

Roles & Responsibilities

  • Designing and delivering a best-in-class, highly scalable data governance platform
  • Improving processes and applying best practices
  • Contributing to all scrum ceremonies, assuming the role of ‘scrum master’ on a rotational basis
  • Development, management, and operation of our infrastructure to ensure it is easy to deploy, scalable, secure, and fault-tolerant
  • Flexibility in working hours as per business needs
Digital Aristotle
at Digital Aristotle
Digital Aristotle
Posted by Digital Aristotle
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹15L / yr
Deep Learning
Natural Language Processing (NLP)
Machine Learning (ML)
Python

JD : ML/NLP Tech Lead

- We are looking to hire an ML/NLP Tech Lead who can own products from a technology perspective and manage a team of up to 10 members. You will play a pivotal role in re-engineering our products and in the transformation and scaling of AssessEd.

WHAT ARE WE BUILDING :

- A revolutionary way of providing continuous assessments of a child's skill and learning, pointing the way to the child's potential in the future. This is as opposed to the traditional one-time, dipstick methodology of a test that hurriedly bundles the child into a slot and, in turn, declares the child to be fit for a career in a specific area or a particular set of courses that would perhaps get him somewhere. At the core of our system is a lot of data - both structured and unstructured.

 

- We have books and questions and web resources and student reports that drive all our machine learning algorithms. Our goal is to not only figure out how a child is coping but to also figure out how to help him by presenting relevant information and questions to him in topics that he is struggling to learn.

Required Skill sets :

- Wisdom to know when to hustle and when to be calm and dig deep. A strong can-do mentality; someone who is joining us to build on a vision, not to do a job.

- A deep hunger to learn, understand, and apply your knowledge to create technology.

- Ability and experience tackling hard Natural Language Processing problems, to separate the wheat from the chaff, and knowledge of the mathematical tools to succinctly describe the ideas and implement them in code.

- Very Good understanding of Natural Language Processing and Machine Learning with projects to back the same.

- Strong fundamentals in Linear Algebra, Probability and Random Variables, and Algorithms.

- Strong Systems experience in Distributed Systems Pipeline: Hadoop, Spark, etc.

- Good knowledge of at least one prototyping/scripting language: Python, MATLAB/Octave or R.

- Good understanding of Algorithms and Data Structures.

- Strong programming experience in C++/Java/Lisp/Haskell.

- Good written and verbal communication.

Desired Skill sets :

- Passion for well-engineered products; you are ticked off when something engineered is off and you want to get your hands dirty and fix it.

- 3+ yrs of research experience in Machine Learning, Deep Learning and NLP

- Top-tier peer-reviewed research publication in areas like Algorithms, Computer Vision/Image Processing, Machine Learning, or Optimization (CVPR, ICCV, ICML, NIPS, EMNLP, ACL, SODA, FOCS, etc.)

- Open Source Contribution (include the link to your projects, GitHub etc.)

- Knowledge of functional programming.

- International level participation in ACM ICPC, IOI, TopCoder, etc

 

- International level participation in Physics or Math Olympiad

- Intellectual curiosity about advanced math topics like Theoretical Computer Science, Abstract Algebra, Topology, Differential Geometry, Category Theory, etc.

What can you expect :

- Opportunity to work on interesting and hard research problems, and to see the real application of state-of-the-art research in practice.

- Opportunity to work on important problems with big social impact: Massive, and direct impact of the work you do on the lives of students.

- An intellectually invigorating, phenomenal work environment, with massive ownership and growth opportunities.

- Learn effective engineering habits required to build/deploy large production-ready ML applications.

- Ability to do quick iterations and deployments.

- We would be excited to see you publish papers (though certain restrictions do apply).

Website : http://Digitalaristotle.ai


Work Location: - Bangalore

Saama Technologies
at Saama Technologies
Sandeep Chaudhary
Posted by Sandeep Chaudhary
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
Description:
  • Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce
  • Strong understanding of development languages including Java, Python, Scala, and shell scripting
  • Expertise in Apache Spark 2.x framework principles and usage
  • Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
  • Proven experience in performance tuning of Spark applications, from both an application-code and a configuration perspective
  • Proficient in Kafka and its integration with Spark
  • Proficient in Spark SQL and data warehousing techniques using Hive
  • Very proficient in Unix shell scripting and in operating on Linux
  • Knowledge of cloud-based infrastructure
  • Good experience in tuning Spark applications and performance improvements
  • Strong understanding of data profiling concepts and ability to operationalize analyses into design and development activities
  • Experience with best practices of software development: version control systems, automated builds, etc.
  • Experienced in and able to lead the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test, and implementation
  • Capable of working within a team or as an individual
  • Experience creating technical documentation