Job Description
Position: Sr Data Engineer – Databricks & AWS
Experience: 4 - 5 Years
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.
Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai was awarded the ‘Innovation Partner Award’ by Qlik in 2017.
Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.
Role Overview:
· A Data Engineer understands client requirements and develops and delivers data engineering solutions within the agreed scope.
· The role requires strong skills in developing solutions across the data architecture: Databricks Delta Lake, streaming, AWS services, ETL development, and data modeling.
Job Responsibilities
• Design of data solutions on Databricks including delta lake, data warehouse, data marts and other data solutions to support the analytics needs of the organization.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
• Design, develop, and manage data pipelines (collection, storage, access), data engineering (data quality, ETL, data modelling), and data understanding (documentation, exploration).
• Interact with stakeholders to understand the data landscape, conduct discovery exercises, and develop and demonstrate proofs of concept.
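The responsibilities above follow a classic extract–transform–load pattern. The sketch below illustrates that pattern in plain Python, with sqlite3 standing in for the warehouse; the function names and the `customers` table are hypothetical, not Databricks or AWS APIs.

```python
import sqlite3

def extract(rows):
    # Collection step: in practice this would read from S3 or a stream.
    return list(rows)

def transform(rows):
    # Data-quality step: drop records with missing keys, normalise casing.
    return [(r[0], r[1].strip().lower()) for r in rows if r[0] is not None]

def load(conn, rows):
    # Storage step: idempotent load into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?)", rows)
    conn.commit()

raw = [(1, "  Alice "), (None, "ghost"), (2, "BOB")]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
result = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
# result drops the record with a missing key and normalises the names
```

In a Delta Lake pipeline the same three stages exist, but the load step would typically be a merge into a Delta table rather than a SQL upsert.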
Technical Skills
• More than 2 years of experience developing data lakes and data marts on the Databricks platform.
• Proven skill sets in AWS Data Lake services such as - AWS Glue, S3, Lambda, SNS, IAM, and skills in Spark, Python, and SQL.
• Experience in Pentaho
• Good understanding of developing a data warehouse, data marts etc.
• Good understanding of system architectures and design patterns, with the ability to design and develop applications using these principles.
Personality Traits
• Good collaboration and communication skills
• Excellent problem-solving skills to be able to structure the right analytical solutions.
• Strong sense of teamwork, ownership, and accountability
• Analytical and conceptual thinking
• Ability to work in a fast-paced environment with tight schedules.
• Good presentation skills with the ability to convey complex ideas to peers and management.
Education:
BE / ME / MS/MCA.
About Exponentia.ai
Data is the new oil and AI is the most powerful value accelerator today – this is one of the most important beliefs we live by. We are an award-winning AI-tech MNC transforming businesses through AI, ML, and data analytics solutions.
We help organizations solve complex business challenges by combining industry experience, data and AI-first engineering practices, and our proprietary technology solutions to achieve business outcomes at scale.
Our proprietary solutions include OneTAP for Voice Analytics, Intelligent Nudges, Customer Intelligence Platform, and Engagely.ai, and we specialise in Data (ML, BI, Cloud, DWH & Data Lakes) and AI Solutions (NLP, Conversational Analytics).
Exponentia.ai was founded in the year 2014 and is headquartered in Mumbai, India. We have expanded our services to the UK, Singapore and US. We’ve been honored with the Innovation Award by Qlik & Excellence in Business Process Automation by Automation Anywhere.
Visit our website- www.exponentia.ai to learn more about our products and services.
Similar jobs
Looking for freelance?
We are seeking a freelance Data Engineer with 7+ years of experience
Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems
What we are looking for
We are seeking an experienced Senior Data Engineer to architect, design, and develop highly scalable data integration and data engineering processes.
- The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
- Strong in scripting languages like Python, Scala
- 5+ years of hands-on experience with one or more data integration/ETL tools.
- Experience building on-prem data warehousing solutions.
- Experience with designing and developing ETLs, Data Marts, Star Schema
- Designing a data warehouse solution using Synapse or Azure SQL DB
- Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
- Understanding of integration run times available in Azure.
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
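The star-schema bullet above can be made concrete with a toy example. Here sqlite3 stands in for the warehouse, and the `dim_product`/`fact_sales` tables are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension table: descriptive attributes.
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    -- Fact table: measures keyed to the dimension by a foreign key.
    CREATE TABLE fact_sales (product_id INTEGER REFERENCES dim_product(product_id),
                             amount REAL);
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "books"), (2, "games")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 20.0)])

# Typical data-mart query: aggregate facts by a dimension attribute.
totals = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
```

The same fact/dimension split applies whether the mart lives in Synapse, Azure SQL DB, or a lakehouse; only the load tooling changes.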
Data Analyst
Job Description
Summary
Are you passionate about large and complex data problems, eager to make an impact, and keen to work on ground-breaking big data technologies? Then we are looking for you.
At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, technically savvy, energetic team members who like to think outside the box.
Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.
Key Qualifications
- Experience in building highly cost optimised data analytics solutions
- Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
- Hands-on experience building high-quality ETL applications, data pipelines, and analytics solutions that ensure data privacy and regulatory compliance.
- Experience in working with AWS or GCP
- Experience with relational and NoSQL databases
- Experience with full-stack web development (preferably Python)
- Expertise with data visualisation systems such as Tableau and QuickSight
- Proficiency in writing advanced SQL queries, with expertise in performance tuning on large data volumes
- Familiarity with ML/AI technologies is a plus
- Demonstrate strong understanding of development processes and agile methodologies
- Strong analytical and communication skills; self-driven, highly motivated, and able to learn quickly
Description
Data Analytics is at the core of our work, and you will have the opportunity to:
- Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
- Lead quick prototypes by integrating data from multiple sources
- Do advanced Business Analytics through ad-hoc SQL queries
- Work on Sales/Finance reporting solutions using Tableau, HTML5, and React applications
We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.
Our work relies on a deep understanding of the company's needs and an ability to work through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. A key expectation of this role is data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at large scale, this might just be the job for you.
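A minimal sketch of the kind of ad-hoc KPI analysis described here, using invented sales figures (pure Python, no warehouse assumed):

```python
from collections import defaultdict

# Hypothetical raw sales rows: (month, amount).
sales = [("2024-01", 100.0), ("2024-01", 50.0),
         ("2024-02", 180.0), ("2024-03", 90.0)]

# Aggregate revenue per month.
monthly = defaultdict(float)
for month, amount in sales:
    monthly[month] += amount

# Month-over-month change, a typical ad-hoc KPI.
months = sorted(monthly)
growth = {m2: round(monthly[m2] - monthly[m1], 2)
          for m1, m2 in zip(months, months[1:])}
```

In practice the aggregation would be an ad-hoc SQL query against Athena or Redshift; the shape of the analysis is the same.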
Education & Experience
A bachelor’s or master’s degree in Computer Science with 5 to 7 years of experience; previous experience in data engineering is a plus.
Senior Data Scientist-Job Description
The Senior Data Scientist is a creative problem solver who utilizes statistical/mathematical principles and modelling skills to uncover new insights that significantly and meaningfully impact business decisions and actions. They apply their data science expertise to identifying, defining, and executing state-of-the-art techniques for academic opportunities and business objectives in collaboration with other Analytics team members. The Senior Data Scientist will execute analyses and outputs spanning test design and measurement, predictive analytics, multivariate analysis, data/text mining, pattern recognition, artificial intelligence, and machine learning.
Key Responsibilities:
- Perform the full range of data science activities, including test design and measurement, predictive/advanced analytics, data mining, and analytic dashboards.
- Extract, manipulate, analyse, and interpret data from various corporate data sources, developing advanced analytic solutions, deriving key observations, findings, and insights, and formulating actionable recommendations.
- Generate clearly understood and intuitive data science / advanced analytics outputs.
- Provide thought leadership and recommendations on business process improvement, analytic solutions to complex problems.
- Participate in best practice sharing and communication platform for advancement of the data science discipline.
- Coach and collaborate with other data scientists and data analysts.
- Present impact, insights, outcomes & recommendations to key business partners and stakeholders.
- Comply with established Service Level Agreements to ensure timely, high quality deliverables with value-add recommendations, clearly articulated key findings and observations.
Qualification:
- Bachelor's Degree (B.A./B.S.) or Master’s Degree (M.A./M.S.) in Computer Science, Statistics, Mathematics, Machine Learning, Physics, or similar degree
- 5+ years of experience in data science in a digitally advanced industry focusing on strategic initiatives, marketing and/or operations.
- Advanced knowledge of best-in-class analytic software tools and languages: Python, SQL, R, SAS, Tableau, Excel, PowerPoint.
- Expertise in statistical methods, statistical analysis, data visualization, and data mining techniques.
- Experience in test design, Design of Experiments, A/B testing, and measurement science.
- Strong influencing skills to drive a robust testing agenda and data-driven decision-making for process improvements.
- Strong Critical thinking skills to track down complex data and engineering issues, evaluate different algorithmic approaches, and analyse data to solve problems.
- Experience in partnering with IT, marketing operations & business operations to deploy predictive analytic solutions.
- Ability to translate/communicate complex analytical/statistical/mathematical concepts with non-technical audience.
- Strong written and verbal communications skills, as well as presentation skills.
Should have experience in Big Data and Hadoop.
Currently providing WFH.
Notice period: immediate joiner or 30 days.
Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
- Ensuring ease of data availability, with relevant dimensions, using Business Intelligence tools.
- Providing strong reporting and analytical information support to the management team.
- Transforming raw data into essential metrics based on the needs of relevant stakeholders.
- Performing data analysis for generating reports on a periodic basis.
- Converting essential data into easy to reference visuals using Data Visualization tools (PowerBI, Metabase).
- Providing recommendations to update current MIS to improve reporting efficiency and consistency.
- Bringing fresh ideas to the table and keenly observing trends in the analytics and financial services industry.
What you need to have:
- B.Tech/B.E. or MBA/PGDM, with work experience of 3+ years.
- Experience in Reporting, Data Management (SQL, MongoDB), Visualization (PowerBI, Metabase, Data studio)
- Work experience in financial services (Indian banks'/NBFCs' in-house analytics units, or fintech/analytics start-ups) would be a plus.
- Skilled at writing & optimizing large complicated SQL queries & MongoDB scripts.
- Strong knowledge of Banking/ Financial Services domain
- Experience with some of the modern relational databases
- Ability to work on multiple projects of a different nature; self-driven.
- Liaise with cross-functional teams to resolve data issues and build strong reports
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in Requirements gathering, Design & Development
• Working with cross-functional teams to meet strategic goals.
• Experience in high volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good-to-have: AWS Developer certification, Scala coding experience, Postman API, and experience with Apache Airflow or similar schedulers
• Nice-to-have experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
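One recurring task behind the metadata-management and data-quality bullets above is validating incoming records against a catalogued schema. A minimal sketch, with a hypothetical claims schema (this is not an AWS Glue API):

```python
# Hypothetical catalog entry: expected column names and types for a claims feed.
CLAIMS_SCHEMA = {"member_id": int, "claim_amount": float, "status": str}

def validate(record, schema):
    """Return a list of schema violations for one record."""
    errors = []
    for column, expected in schema.items():
        if column not in record:
            errors.append(f"missing column: {column}")
        elif not isinstance(record[column], expected):
            errors.append(f"bad type for {column}: {type(record[column]).__name__}")
    return errors

good = {"member_id": 7, "claim_amount": 120.5, "status": "paid"}
bad = {"member_id": "7", "status": "paid"}
# validate(good, ...) is empty; validate(bad, ...) flags the type and the gap
```

In a real pipeline the schema would come from the Glue Data Catalog and the check would run per-partition before loading, but the pass/fail logic is the same.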
Job Description
We are looking for a data scientist who will help us discover the information hidden in vast amounts of data and make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, performing statistical analysis, and building high-quality prediction systems integrated with our products.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending company’s data with third party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
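The anomaly-detection responsibility can be illustrated with the simplest possible approach, a z-score filter over a univariate series; the threshold and readings below are illustrative only:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    """Flag points more than `threshold` population standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

readings = [10.0, 10.2, 9.9, 10.1, 25.0, 10.0]
anomalies = zscore_anomalies(readings)  # the 25.0 spike stands out
```

Production systems would track a rolling window and monitor the detector's own precision/recall over time, but the core idea of scoring deviation from an expected distribution carries over.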
Skills and Qualifications
- Excellent understanding of machine learning techniques and algorithms, such as Linear regression, SVM, Decision Forests, LSTM, CNN etc.
- Experience with Deep Learning preferred.
- Experience with common data science toolkits, such as R, NumPy, MATLAB, etc.; excellence in at least one of these is highly desirable
- Great communication skills
- Proficiency in using query languages such as SQL, Hive, Pig
- Good applied statistics skills, such as statistical testing, regression, etc.
- Good scripting and programming skills
- Data-oriented personality
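As a small worked example of the applied-statistics skills listed above, ordinary least squares for a single predictor can be computed directly from its closed form:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, from the closed-form estimators:
    b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# The sample points lie exactly on y = 1 + 2x, so the fit recovers (1, 2).
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

Library implementations (scikit-learn, statsmodels) add diagnostics and regularisation, but this is the estimator underneath.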