Title: Data Engineer – Snowflake
Location: Mysore (Hybrid model)
Experience: 2-8 yrs
Type: Full Time
Walk-in date: 25th Jan 2023 @Mysore
Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer, working as part of a team to design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building analytical, data-driven systems from the ground up. You will be responsible for building and optimizing our data, as well as building automated processes for production jobs. You will support our software developers, database architects, data analysts, and data scientists on data initiatives.
Key Roles & Responsibilities:
- Use advanced Snowflake, Python, and SQL to extract data from source systems for ingestion into data pipelines.
- Design, develop and deploy scalable and efficient data pipelines.
- Analyze and assemble large, complex datasets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements. For example: automating manual processes, optimizing data delivery, re-designing data platform infrastructure for greater scalability.
- Build required infrastructure for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake leveraging Python or SQL technologies.
- Monitor cloud-based systems and components for availability, performance, reliability, security and efficiency
- Create and configure appropriate cloud resources to meet the needs of the end users.
- As needed, document topology, processes, and solution architecture.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies
Qualification & Experience Requirements:
- Bachelor's degree in computer science, computer engineering, or a related field.
- 2-8 years of experience working with Snowflake.
- 2+ years of experience with AWS services.
- Ability to write stored procedures and functions in Snowflake.
- At least 2 years' experience as a Snowflake developer.
- Strong SQL knowledge.
- Experience with data ingestion into Snowflake using Snowflake procedures.
- ETL experience is a must (any tool).
- Awareness of Snowflake architecture.
- Experience working on migration projects.
- Data warehousing concepts (optional).
- Experience with cloud data storage and compute components, including Lambda functions, EC2 instances, and containers.
- Experience with data pipeline and workflow management tools: Airflow, etc.
- Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources
- Experience working with Linux and UNIX environments.
- Experience with profiling data, with and without data definition documentation
- Familiar with Git
- Familiar with issue tracking systems like JIRA (Project Management Tool) or Trello.
- Experience working in an agile environment.
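The ELT responsibilities above (extract with Python/SQL, load raw data into Snowflake, then transform inside the warehouse) can be sketched in miniature. This is an illustrative sketch only: Python's built-in sqlite3 stands in for Snowflake, and the table and column names are invented for the example.

```python
import sqlite3

def run_elt(rows):
    """Minimal ELT sketch: load raw rows first, then transform with SQL
    inside the warehouse (sqlite3 stands in for Snowflake here)."""
    con = sqlite3.connect(":memory:")
    # Load: land source rows untransformed in a raw table.
    con.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
    con.executemany("INSERT INTO raw_orders VALUES (?, ?)", rows)
    # Transform: aggregate in SQL, the way a Snowflake task or
    # stored procedure would build a reporting table.
    con.execute("""
        CREATE TABLE order_totals AS
        SELECT customer, SUM(amount) AS total
        FROM raw_orders
        GROUP BY customer
    """)
    return dict(con.execute(
        "SELECT customer, total FROM order_totals ORDER BY customer"))

totals = run_elt([("acme", 10.0), ("acme", 5.0), ("globex", 7.5)])
print(totals)  # {'acme': 15.0, 'globex': 7.5}
```

The point of the pattern is that the transformation runs as SQL inside the warehouse, not in application code, which is what distinguishes ELT from classic ETL.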
Desired Skills:
- Experience in Snowflake. Must be willing to be Snowflake certified in the first 3 months of employment.
- Experience with streaming data ingestion: Snowpipe
- Working knowledge of AWS or Azure
- Experience in migrating from on-prem to cloud systems
About DeepIntent:
DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.
What You’ll Do:
- Establish formal data practice for the organisation.
- Build & operate scalable and robust data architectures.
- Create pipelines for the self-service introduction and usage of new data
- Implement DataOps practices
- Design, develop, and operate data pipelines which support data scientists and machine learning engineers.
- Build simple, highly reliable data storage, ingestion, and transformation solutions which are easy to deploy and manage.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
- Experience in designing, developing, and operating configurable data pipelines serving high-volume and high-velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, Agile, and DevOps methodologies.
- Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
- Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.
- Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
- Good communication skills with the ability to collaborate with both technical and non-technical people.
- Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious
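Workflow tools like Airflow, listed above, model a pipeline as a directed acyclic graph of tasks and run each task only after its upstream dependencies complete. A minimal pure-Python sketch of that ordering idea, using the standard library's graphlib; the task names are invented for the example:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of upstream tasks it depends on,
# the same dependency structure an Airflow DAG expresses.
pipeline = {
    "extract": set(),
    "load_raw": {"extract"},
    "transform": {"load_raw"},
    "quality_check": {"transform"},
    "publish": {"quality_check"},
}

# static_order yields a valid execution order: every task appears
# only after all of its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'load_raw', 'transform', 'quality_check', 'publish']
```

A real scheduler adds retries, backfills, and parallelism on top, but the core contract is exactly this topological ordering of tasks.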
- 3+ years of software engineering experience.
- Advanced knowledge of Python, with 2+ years in a production environment.
- Experience with practical applications of deep learning.
- Experience with agile, test-driven development, continuous integration, and automated testing.
- Experience with productionizing machine learning models and integrating them into web services.
- Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
- Excellent verbal and written communication, teamwork, decision-making, and influencing skills.
- Hustle. Thrives in an evolving, fast-paced, ambiguous work environment.
Intuitive Cloud (www.intuitive.cloud) is one of the fastest-growing top-tier cloud solutions and SDx engineering solution and services companies, supporting 80+ global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is a recognized professional and managed services partner with core superpowers in cloud (public/hybrid), security, GRC, DevSecOps, SRE, application modernization/containers/K8s-as-a-service, and cloud application delivery.
Data Engineering:
- 9+ years' experience as a data engineer.
- Must have 4+ years implementing data engineering solutions with Databricks.
- This is a hands-on role building data pipelines using Databricks. Hands-on technical experience with Apache Spark.
- Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive, and/or Spark to write data pipelines and data processing layers.
- Must have worked with relational databases like Snowflake. Good SQL experience writing complex SQL transformations.
- Performance tuning of Spark SQL running on S3/Data Lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations.
- Hands-on architectural experience.
- Nice to have: Databricks administration, including security and infrastructure features of Databricks.
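The "complex SQL transformations" called for above typically layer common table expressions (CTEs) over raw tables before filtering on derived columns. A hedged sketch of that pattern, with Python's sqlite3 standing in for Snowflake or Databricks SQL and an invented schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 30.0)],
)

# A layered transformation: the CTE computes per-region totals,
# and the outer query filters on the derived column.
big_regions = con.execute("""
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
    )
    SELECT region, total FROM region_totals
    WHERE total > 60
    ORDER BY region
""").fetchall()

print(big_regions)  # [('north', 150.0)]
```

In Snowflake or Spark SQL the syntax is essentially identical; the engineering work is in choosing the layering, partitioning, and clustering so the transformation stays fast at scale.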
You will be responsible for designing, building, and maintaining data pipelines that handle real-world data (RWD) at Compile, covering both inbound and outbound data deliveries for datasets including claims, remittances, EHR, SDOH, etc.
You will:
- Work on building and maintaining data pipelines (specifically RWD).
- Build, enhance, and maintain existing pipelines in PySpark and Python, and help build analytical insights and datasets.
- Schedule and maintain pipeline jobs for RWD.
- Develop, test, and implement data solutions based on the design.
- Design and implement quality checks on existing and new data pipelines.
- Ensure adherence to security and compliance that is required for the products.
- Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.
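The quality checks mentioned above can follow the Deequ style of declarative constraints, for example a completeness check per column. A minimal pure-Python sketch; the column names, threshold, and sample claims records are all invented for the example:

```python
def quality_report(rows, required, max_null_rate=0.1):
    """Deequ-style completeness check sketched in pure Python:
    flag any required column whose null rate exceeds the threshold."""
    failures = []
    for col in required:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows) if rows else 1.0
        if rate > max_null_rate:
            failures.append((col, round(rate, 2)))
    return failures

# Invented sample of claims records; one in four values is missing
# per column, so both columns fail a 20% completeness threshold.
claims = [
    {"claim_id": "c1", "amount": 120.0},
    {"claim_id": "c2", "amount": None},
    {"claim_id": None, "amount": 40.0},
    {"claim_id": "c4", "amount": 75.0},
]
print(quality_report(claims, ["claim_id", "amount"], max_null_rate=0.2))
# [('claim_id', 0.25), ('amount', 0.25)]
```

In a real pipeline these checks would run as a gating step after each load, failing the job (or quarantining the delivery) before bad data reaches downstream consumers.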
You have:
- Hands-on experience with ETL process (min of 5 years).
- Excellent communication skills and ability to work with multiple vendors.
- High proficiency with Spark and SQL.
- Proficiency in data modeling, validation, quality checks, and data engineering concepts.
- Experience with big-data processing technologies such as Databricks, dbt, S3, Delta Lake, Deequ, Griffin, Snowflake, and BigQuery.
- Familiarity with version control technologies, and CI/CD systems.
- Understanding of scheduling tools like Airflow/Prefect.
- Minimum of 3 years of experience managing data warehouses.
- Familiarity with healthcare datasets is a plus.
Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.
• Solid technical/data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
• Be data-driven and outcome-focused
• Must have good business judgment with a demonstrated ability to think creatively and strategically
• Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; ability to drive projects with minimal guidance and a focus on high-impact work
• Learns continuously; seeks out knowledge, ideas, and feedback
• Looks for opportunities to build one's own skills, knowledge, and expertise
• Experience with big data and cloud computing, e.g., Spark, Hadoop (MapReduce, Pig, Hive)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
What you will do:
- Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
- Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
- Back-testing investment appraisal models at regular intervals to improve the same
- Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
- Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
- Identifying relevant sub-sector criteria to score and rate investment opportunities internally
Desired Candidate Profile
What you need to have:
- Bachelor’s degree with relevant work experience of at least 3 years with CA/MBA (mandatory)
- Experience in working in lending/investing fintech (mandatory)
- Strong Excel skills (mandatory)
- Previous experience in credit rating or credit scoring or investment analysis (preferred)
- Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
- Proficiency in data analysis (preferred)
- Good verbal and written skills
Job Description: Data Scientist
At Propellor.ai, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science, and Business Logic. Combine these and we show our clients a 360-degree view of their business. In this role, the Data Scientist will be expected to work on Procurement problems along with a team based across the globe.
We are a Remote-First Company.
Read more about us here: https://www.propellor.ai/consulting
What will help you be successful in this role
- Articulate
- High Energy
- Passion to learn
- High sense of ownership
- Ability to work in a fast-paced and deadline-driven environment
- Loves technology
- Highly skilled at Data Interpretation
- Problem solver
- Ability to narrate the story to the business stakeholders
- Generate insights and the ability to turn them into actions and decisions
Skills to work in a challenging, complex project environment
- Need you to be naturally curious and have a passion for understanding consumer behavior
- A high level of motivation, passion, and high sense of ownership
- Excellent communication skills needed to manage an incredibly diverse slate of work, clients, and team personalities
- Flexibility to work on multiple projects and deadline-driven fast-paced environment
- Ability to work in ambiguity and manage the chaos
Key Responsibilities
- Analyze data to unlock insights: Ability to identify relevant insights and actions from data. Use regression, cluster analysis, time series, etc. to explore relationships and trends in response to stakeholder questions and business challenges.
- Bring in experience for AI and ML: Bring in Industry experience and apply the same to build efficient and optimal Machine Learning solutions.
- Exploratory Data Analysis (EDA) and Generate Insights: Analyse internal and external datasets using analytical techniques, tools, and visualization methods. Ensure pre-processing/cleansing of data and evaluate data points across the enterprise landscape and/or external data points that can be leveraged in machine learning models to generate insights.
- DS and ML Model Identification and Training: Identify, test, and train machine learning models that need to be leveraged for business use cases. Evaluate models based on interpretability, performance, and accuracy as required. Experiment and identify features from datasets that will help influence model outputs. Determine which models will need to be deployed and which data points need to be fed into them, and aid in the deployment and maintenance of models.
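Of the techniques named above, regression is the simplest to illustrate. A minimal ordinary-least-squares fit of a linear trend in pure Python; the monthly sales series is invented for the example:

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x on paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y over the variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the line passes through the mean point.
    a = mean_y - b * mean_x
    return a, b

# Invented monthly sales series with a clean upward trend.
months = [1, 2, 3, 4, 5]
sales = [10.0, 12.0, 14.0, 16.0, 18.0]
a, b = ols_fit(months, sales)
print(a, b)  # 8.0 2.0
```

In practice one would reach for scikit-learn or statsmodels, but the recovered slope here (2.0 units/month) is exactly the kind of quantified trend a stakeholder question about growth would call for.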
Technical Skills
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them. We are open to promising candidates who are passionate about their work, fast learners and are team players.
- Strong experience with machine learning and AI including regression, forecasting, time series, cluster analysis, classification, Image recognition, NLP, Text Analytics and Computer Vision.
- Strong experience with advanced analytics tools for Object-oriented/object function scripting using languages such as Python, or similar.
- Strong experience with popular database programming languages including SQL.
- Strong experience in Spark/Pyspark
- Experience in working in Databricks
What are the company benefits you get when you join us?
- Permanent Work from Home Opportunity
- Opportunity to work with Business Decision Makers and an internationally based team
- A work environment that offers limitless learning
- A culture free of bureaucracy and hierarchy
- A culture of being open, direct, and with mutual respect
- A fun, high-caliber team that trusts you and provides the support and mentorship to help you grow
- The opportunity to work on high-impact business problems that are already defining the future of Marketing and improving real lives
To know more about how we work: https://bit.ly/3Oy6WlE
Whom will you work with?
You will closely work with other Senior Data Scientists and Data Engineers.
Immediate to 15-day Joiners will be preferred.
- Extract and present valuable information from data
- Understand business requirements and generate insights
- Build mathematical models, validate and work with them
- Explain complex topics tailored to the audience
- Validate and follow up on results
- Work with large and complex data sets
- Establish priorities with clear goals and responsibilities to achieve a high level of performance.
- Work in an agile and iterative manner on solving problems
- Proactively evaluate different options and solve problems in innovative ways. Develop new solutions or combine existing methods to create new approaches.
- Good understanding of Digital & analytics
- Strong communication skills, orally and in writing
Job Overview:
As a Data Scientist, you will work in collaboration with our business and engineering people, on creating value from data. Often the work requires solving complex problems by turning vast amounts of data into business insights through advanced analytics, modeling, and machine learning. You have a strong foundation in analytics, mathematical modeling, computer science, and math - coupled with a strong business sense. You proactively fetch information from various sources and analyze it for better understanding of how the business performs. Furthermore, you model and build AI tools that automate certain processes within the company. The solutions produced will be implemented to impact business results.
Primary Responsibilities:
- Develop an understanding of business obstacles, create solutions based on advanced analytics and draw implications for model development
- Combine, explore, and draw insights from data. Often large and complex data assets from different parts of the business.
- Design and build explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques
- Prototype and pilot new solutions and be a part of the aim of ‘productizing’ those valuable solutions that can have an impact at a global scale
- Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes
- Identify and interpret trends and patterns in complex data sets to enable the business to make data-driven decisions
Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their digital transformation journey through a gamut of solutions that enhance business value.
With 1,000+ associates globally, Indium operates through offices in the US, UK, and India.
Visit www.indiumsoftware.com to know more.
Job Title: Analytics Data Engineer
What will you do:
The Data Engineer must be an expert in SQL development, providing support to the Data and Analytics team in database design, data flow, and analysis activities. The Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing, and defines and builds the data pipelines that enable faster, better, data-informed decision-making within the business.
We ask:
Extensive experience with SQL and a strong ability to process and analyse complex data
The ability to design, build, and maintain the business's ETL pipeline and data warehouse, with demonstrated expertise in data modelling and query performance tuning on SQL Server
Proficiency with analytics, especially funnel analysis, and experience with analytical tools like Mixpanel, Amplitude, ThoughtSpot, Google Analytics, and similar tools
Ability to work with the tools and frameworks required for building efficient and scalable data pipelines
Excellent communication, with the ability to articulate ideas, influence others, and continuously drive towards better solutions
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL, and Presto
- Relate metrics to the product
- Programmatic thinking
- Edge cases
- Good communication
- Product functionality understanding
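Funnel analysis, called out above alongside Mixpanel and Amplitude, counts how many users survive each ordered step of a journey. A minimal pure-Python sketch; the event names and user IDs are invented for the example:

```python
def funnel_counts(events_by_user, steps):
    """Count how many users reach each step of an ordered funnel.
    A user reaches step i only if steps 0..i all occur, in order,
    somewhere in that user's event stream."""
    counts = [0] * len(steps)
    for events in events_by_user.values():
        idx = 0
        for e in events:
            if idx < len(steps) and e == steps[idx]:
                idx += 1
        # The user completed the first `idx` steps of the funnel.
        for i in range(idx):
            counts[i] += 1
    return dict(zip(steps, counts))

# Invented clickstream: three users, a classic signup funnel.
users = {
    "u1": ["visit", "signup", "purchase"],
    "u2": ["visit", "signup"],
    "u3": ["visit"],
}
print(funnel_counts(users, ["visit", "signup", "purchase"]))
# {'visit': 3, 'signup': 2, 'purchase': 1}
```

Tools like Mixpanel compute the same step-by-step survivorship (plus time windows and conversion rates); the per-step counts are what reveal where users drop off.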
Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous, hands-on role where you can make an impact; you will be joining at an exciting time of growth!
Flexible work hours and an attractive pay package and perks.
An inclusive work environment that lets you work in the way that works best for you!