Job Overview:
As a Lead ETL Developer for a very large Paradigm client, you are in charge of designing and building data-warehouse functions such as extraction, transformation, and loading of data, and are expected to have specialized working knowledge of cloud platforms, especially Snowflake. In this role, you'll be part of Paradigm's Digital Solutions group, where we are looking for someone with the technical expertise to build and maintain sustainable ETL solutions around data modeling and data profiling, supporting the needs and expectations identified with the client.
Delivery Responsibilities
- Lead the technical planning, architecture, estimation, development, and testing of ETL solutions
- Knowledge and experience with most of the following architectural styles: layered architectures, transactional applications, PaaS-based architectures, and SaaS-based applications; experience developing ETL-based cloud PaaS and SaaS solutions
- Create data models aligned with client requirements
- Design, develop, and support ETL mappings; strong SQL skills with experience developing ETL specifications
- Create ELT pipelines, data model updates, and orchestration using dbt, Streams, Tasks, and Astronomer, including testing
- Focus on ETL aspects including performance, scalability, reliability, monitoring, and other operational concerns of data warehouse solutions
- Design reusable assets, components, standards, frameworks, and processes to support and facilitate end to end ETL solutions
- Experience gathering requirements and defining the strategy for third-party data ingestion from sources such as SAP HANA and Oracle
Required Qualifications
- Expert Hands-on experience in the following:
- Technologies such as Python, Teradata, MySQL, SQL Server, RDBMS, Apache Airflow, AWS S3, AWS Data Lake, Unix scripting, AWS CloudFormation, DevOps, GitHub
- Best practices in Airflow orchestration, such as creating DAGs, and hands-on knowledge of Python libraries including pandas, NumPy, Boto3, DataFrames, connectors to different databases, and APIs
- Data modeling, master and operational data stores, data ingestion and distribution patterns, ETL/ELT technologies, relational and non-relational databases, and DB optimization patterns
- Develop virtual warehouses using Snowflake for data-sharing needs for both internal and external customers.
- Create Snowflake data-sharing capabilities that will create a marketplace for sharing files, datasets, and other types of data in real-time and batch frequencies
- 8+ years of ETL/data development experience
- Working knowledge of Fact / Dimensional data models and AWS Cloud
- Strong experience creating technical design documents, source-to-target mappings, and test cases/results
- Understand the security requirements and apply RBAC, PBAC, and ABAC policies to the data
- Build data pipelines in Snowflake leveraging Data Lake (S3/Blob), Stages, Streams, Tasks, Snowpipe, Time travel, and other critical capabilities within Snowflake
- Ability to collaborate, influence, and communicate across multiple stakeholders and levels of leadership, speaking at the appropriate level of detail to both business executives and technology teams
- Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
- High degree of organization, individual initiative, results and solution oriented, and personal accountability and resiliency
- Demonstrated learning agility, ability to make decisions quickly and with the highest level of integrity
- Demonstrable experience of driving meaningful improvements in business value through data management and strategy
- Must have a positive, collaborative leadership style with a colleague- and customer-first attitude
- Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts
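As a rough, hypothetical illustration of the extract-transform-load pattern these qualifications describe, the sketch below stages rows from a small in-memory CSV source into an in-memory SQLite table; the table name, columns, and sample data are invented for the example:

```python
import csv
import io
import sqlite3

# Extract: parse a CSV-like source (an in-memory string stands in for a real file or stage)
source = io.StringIO("id,amount\n1,10.5\n2,abc\n3,7.25\n")
rows = list(csv.DictReader(source))

# Transform: cast fields to typed values, rejecting malformed records
clean = []
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"])))
    except ValueError:
        continue  # drop rows that fail validation (e.g. non-numeric amount)

# Load: insert the cleaned rows into a target table and verify the result
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_amounts (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO fact_amounts VALUES (?, ?)", clean)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM fact_amounts").fetchone()
print(total)  # → (2, 17.75)
```

In a production pipeline the same three stages would typically be split into separate Airflow tasks (or Snowflake Tasks) with monitoring around each step, rather than run inline like this.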
Preferred Qualifications:
- Experience with Azure Cloud, DevOps implementation
- Ability to work collaboratively as part of a team, mentoring and training junior team members
- Position requires expert knowledge across multiple platforms, data ingestion patterns, processes, data/domain models, and architectures.
- Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
- Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
- Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
- Experience with Agile Methodology / Scaled Agile Framework (SAFe).
- Outstanding oral and written communication skills including formal presentations for all levels of management combined with strong collaboration/influencing.
Preferred Education/Skills:
- Master's degree preferred
- Bachelor's degree in Computer Science with a minimum of 8 years of relevant experience, or equivalent
Similar jobs
Required skills and experience:
- Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
- Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
- Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack/email/dashboard integrations
- Executive-level communication, prioritization, and team leadership skills
We are seeking a technically driven Full-Stack Engineer for one of our premium clients.
Qualifications
• Bachelor's degree in computer science or related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and strong SQL skills
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to clearly communicate complex solutions
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
- Develop, train, and optimize machine learning models using Python, ML algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and other relevant technologies.
- Implement MLOps best practices, including model deployment, monitoring, and versioning.
- Utilize Vertex AI, MLFlow, KubeFlow, TFX, and other relevant MLOps tools and frameworks to streamline the machine learning lifecycle.
- Collaborate with cross-functional teams to design and implement CI/CD pipelines for continuous integration and deployment using tools such as GitHub Actions, TeamCity, and similar platforms.
- Conduct research and stay up-to-date with the latest advancements in machine learning, deep learning, and MLOps technologies.
- Provide guidance and support to data scientists and software engineers on best practices for machine learning development and deployment.
- Assist in developing tooling strategies by evaluating various options, vendors, and product roadmaps to enhance the efficiency and effectiveness of our AI and data science initiatives.
Why LiftOff?
We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.
About the Role
If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.
- Architect, design, and configure the data ingestion pipeline for data received from third-party vendors
- Configure data loading for ease and flexibility in adding new data sources, as well as refreshing previously loaded data
- Design and implement a consumer graph that provides an efficient means to query the data via email, phone, and address information (using any one field or a combination)
- Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
- Design / review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage
Please note that this is a consultant role.
Candidates open to freelancing/part-time work may apply.
From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
No. of positions: 1
Skills required:
- The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
- Strong problem-solving skills with an emphasis on achieving proof-of-concept
- Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
- Knowledge of machine learning and deep learning fundamentals
- Experience implementing ML and deep learning algorithms in Python (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch)
- Experience writing and debugging code in an IDE
- Experience using managed web services (e.g., AWS, GCP, etc.)
- Strong analytical and communication skills
- Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
- Ability to learn new tools from documentation and internet resources.
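As a toy illustration of the regression fundamentals listed above, a simple linear fit y = slope·x + intercept can be computed with the closed-form least-squares solution, no library required; the function name and sample points below are invented for the example:

```python
# Simple linear regression via the closed-form least-squares solution:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x)
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # → 2.0 1.0
```

In practice the same fit would come from scikit-learn's `LinearRegression` or statsmodels' `OLS`, which additionally report the diagnostics (p-values, R²) that the statistical-testing requirement refers to.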
Roles and responsibilities :
- You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
- Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
- Identify relevant data sources for data wrangling and EDA
- Identify model architectures to use for client business needs.
- Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
- Present findings to teammates and key stakeholders in a clear and repeatable manner.
Experience :
2 - 14 Yrs
● Translate complex business requirements into scalable technical solutions meeting data design standards; strong understanding of analytics needs and proactiveness in building generic solutions to improve efficiency
● Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
● Collaborate with multiple cross-functional teams
- 4-7 years of Industry experience in IT or consulting organizations
- 3+ years of experience defining and delivering Informatica Cloud Data Integration & Application Integration enterprise applications in a lead developer role
- Must have working knowledge of integrating with Salesforce, Oracle DB, and JIRA Cloud
- Must have working scripting knowledge (Windows or Node.js)
Soft Skills
- Superb interpersonal skills, both written and verbal, to effectively develop materials appropriate for a variety of audiences across business and technical teams
- Strong presentation skills; able to successfully present and defend a point of view to business and IT audiences
- Excellent analysis skills and ability to rapidly learn and take advantage of new concepts, business models, and technologies
Roles & Responsibilities
- Designing and delivering a best-in-class, highly scalable data governance platform
- Improving processes and applying best practices
- Contribute to all scrum ceremonies, assuming the role of Scrum Master on a rotational basis
- Development, management and operation of our infrastructure to ensure it is easy to deploy, scalable, secure and fault-tolerant
- Flexible on working hours as per business needs
JD : ML/NLP Tech Lead
- We are looking to hire an ML/NLP Tech Lead who can own products from a technology perspective and manage a team of up to 10 members. You will play a pivotal role in the re-engineering, transformation, and scaling of AssessEd.
WHAT ARE WE BUILDING :
- A revolutionary way of providing continuous assessment of a child's skills and learning, pointing the way to the child's future potential. This stands in contrast to the traditional one-time, dipstick methodology of a test that hurriedly bundles the child into a slot, which in turn declares the child fit for a career in a specific area or a particular set of courses that would perhaps get him somewhere. At the core of our system is a lot of data, both structured and unstructured.
- We have books, questions, web resources, and student reports that drive all our machine learning algorithms. Our goal is not only to figure out how a child is coping but also to figure out how to help, by presenting relevant information and questions on the topics the child is struggling to learn.
Required Skill sets :
- The wisdom to know when to hustle and when to be calm and dig deep; a strong can-do mentality from someone joining us to build on a vision, not just to do a job.
- A deep hunger to learn, understand, and apply your knowledge to create technology.
- Ability and experience tackling hard natural language processing problems: separating the wheat from the chaff, with knowledge of the mathematical tools to succinctly describe ideas and implement them in code.
- Very good understanding of Natural Language Processing and Machine Learning, with projects to back it up.
- Strong fundamentals in Linear Algebra, Probability and Random Variables, and Algorithms.
- Strong Systems experience in Distributed Systems Pipeline: Hadoop, Spark, etc.
- Good knowledge of at least one prototyping/scripting language: Python, MATLAB/Octave or R.
- Good understanding of Algorithms and Data Structures.
- Strong programming experience in C++/Java/Lisp/Haskell.
- Good written and verbal communication.
Desired Skill sets :
- A passion for well-engineered products; you are ticked off when something engineered is off, and you want to get your hands dirty and fix it.
- 3+ yrs of research experience in Machine Learning, Deep Learning and NLP
- Top tier peer-reviewed research publication in areas like Algorithms, Computer Vision/Image Processing, Machine Learning or Optimization (CVPR, ICCV, ICML, NIPS, EMNLP, ACL, SODA, FOCS etc)
- Open Source Contribution (include the link to your projects, GitHub etc.)
- Knowledge of functional programming.
- International level participation in ACM ICPC, IOI, TopCoder, etc
- International level participation in Physics or Math Olympiad
- Intellectual curiosity about advanced math topics like Theoretical Computer Science, Abstract Algebra, Topology, Differential Geometry, Category Theory, etc.
What can you expect :
- Opportunity to work on interesting and hard research problems, and to see real application of state-of-the-art research in practice.
- Opportunity to work on important problems with big social impact: Massive, and direct impact of the work you do on the lives of students.
- An intellectually invigorating, phenomenal work environment, with massive ownership and growth opportunities.
- Learn effective engineering habits required to build/deploy large production-ready ML applications.
- Ability to do quick iterations and deployments.
- We would be excited to see you publish papers (though certain restrictions do apply).
Website : http://Digitalaristotle.ai
Work Location: Bangalore