Big Data Engineer
at Altimetrik
Skills: Python, Scala
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – candidates should be hands-on with PySpark code and able to perform transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
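The Lambda requirement above can be sketched as a minimal handler; the event shape follows the standard S3 put notification, and the bucket and key names are purely illustrative:

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal AWS Lambda handler sketch: extract the bucket and key from
    an S3 put event and return them. In a real pipeline the object would
    be handed downstream (e.g. to Glue/Athena); names here are illustrative."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "key": key}),
    }

# Local smoke test with a sample S3 event (no AWS calls are made)
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-zone"},
                "object": {"key": "sales/2024/part-0.csv"}}}
    ]
}
result = handler(sample_event, None)
```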
Similar jobs
Job Title: SQL Query Writer - Analytics Automation
Location: Thane (West), Mumbai
Experience: 4-5 years
Responsibilities:
- Develop and optimize SQL queries for efficient data retrieval and analysis.
- Automate analytics processes using tools such as SQL, Python, ACL, Alteryx, Analyzer, Excel macros, and Access queries.
- Collaborate with cross-functional teams to understand analytical requirements and provide effective solutions.
- Ensure data accuracy, integrity, and security in automated processes.
- Troubleshoot and resolve issues related to analytics automation.
Qualifications:
- Minimum 3 years of experience in SQL query writing and analytics automation.
- Proficiency in SQL, Python, ACL, Alteryx, Analyzer, Excel Macros, and Access Query.
- Strong analytical skills and attention to detail.
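The query-writing and optimization work described above can be sketched with SQLite; the `txns` table and its columns are hypothetical:

```python
import sqlite3

# In-memory database with a hypothetical transactions table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns (region, amount) VALUES (?, ?)",
    [("west", 120.0), ("east", 75.5), ("west", 310.25), ("north", 42.0)],
)
# An index on the filter column lets the planner avoid a full table scan
conn.execute("CREATE INDEX idx_txns_region ON txns (region)")

# Parameterized query: safe against injection and reusable by the planner
total_west = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM txns WHERE region = ?", ("west",)
).fetchone()
```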
Skills: Data Science, Machine Learning, Python
- Partners with business stakeholders to translate business objectives into clearly defined analytical projects.
- Identify opportunities for text analytics and NLP to enhance the core product platform, select the machine learning techniques best suited to the specific business problem, and build the models that solve it.
- Own the end-to-end process, from recognizing the problem to implementing the solution.
- Define the variables and their inter-relationships and extract the data from our data repositories, leveraging infrastructure including Cloud computing solutions and relational database environments.
- Build predictive models that are accurate and robust and that help our customers to utilize the core platform to the maximum extent.
Skills and Qualifications
- 12 to 15 yrs of experience.
- An advanced degree in predictive analytics, machine learning, or artificial intelligence, or a degree in programming with significant text analytics/NLP experience. Candidates should have a strong background in machine learning (both unsupervised and supervised techniques), and in particular an excellent understanding of techniques and algorithms such as k-NN, Naive Bayes, SVMs, decision forests, logistic regression, MLPs, and RNNs.
- Experience with text mining, parsing, and classification using state-of-the-art techniques.
- Experience with information retrieval, Natural Language Processing, Natural Language Understanding, and neural language modeling.
- Ability to evaluate the quality of ML models and to define the right performance metrics for models in accordance with the requirements of the core platform.
- Experience in the Python data science ecosystem: pandas, NumPy, SciPy, scikit-learn, NLTK, Gensim, etc.
- Excellent verbal and written communication skills, including the ability to communicate technical results and recommendations to both technical and non-technical audiences.
- Ability to perform high-level work both independently and collaboratively as a project member or leader on multiple projects.
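One of the algorithms named above, Naive Bayes, can be sketched from scratch for a toy text-classification task; the whitespace tokenizer and sample documents are illustrative only:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model with Laplace smoothing.
    `docs` is a list of (text, label) pairs; tokenization is naive
    whitespace splitting, purely for illustration."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        # log prior + sum of log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            score += math.log((word_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb([
    ("refund not received please help", "complaint"),
    ("charged twice on my card", "complaint"),
    ("love the new dashboard feature", "praise"),
    ("great support experience", "praise"),
])
```

A production system would use a proper tokenizer and a library implementation (e.g. scikit-learn's `MultinomialNB`), but the smoothing and log-space scoring above are the core of the technique.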
Skills: Python
Professional experience in Python – mandatory
Basic knowledge of any BI tool (Microsoft Power BI, Tableau, etc.) and experience in R will be an added advantage
Proficient in Excel
Good verbal and written communication skills
Key Responsibilities:
Analyze data trends and provide intelligent business insights; monitor operational and business metrics
Take complete ownership of the business excellence dashboard and prepare reports for senior management stating trends, patterns, and predictions using relevant data
Review, validate, and analyse data points and implement new data analysis methodologies
Perform data profiling to identify and understand anomalies
Perform analysis to assess the quality and meaning of data
Develop policies and procedures for the collection and analysis of data
Analyse existing processes with the help of data and propose process changes and/or lead process re-engineering initiatives
Use BI tools (Microsoft Power BI/Tableau) to develop and manage BI solutions
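The data-profiling responsibility above can be sketched with the standard library; the z-score threshold of 2 and the sample records are illustrative choices, not a prescribed methodology:

```python
import statistics

def profile(rows):
    """Profile a list of dict records: per-field null counts, plus simple
    z-score outlier flags for numeric fields (threshold of 2 is an
    illustrative choice)."""
    report = {}
    for f in rows[0].keys():
        values = [r[f] for r in rows]
        entry = {"nulls": sum(v is None for v in values)}
        numeric = [v for v in values if isinstance(v, (int, float))]
        if len(numeric) >= 2:
            mean = statistics.mean(numeric)
            stdev = statistics.stdev(numeric)
            entry["outliers"] = [
                v for v in numeric if stdev and abs(v - mean) / stdev > 2
            ]
        report[f] = entry
    return report

# Hypothetical records: nine ordinary amounts, one anomalous spend,
# and one missing user id
rows = [{"user": f"u{i}", "amount": a}
        for i, a in enumerate([10.0, 11.0, 12.0] * 3)]
rows.append({"user": None, "amount": 500.0})
report = profile(rows)
```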
Job Overview
We are looking for a Data Engineer to join our data team to solve critical data-driven
business problems. The hire will be responsible for expanding and optimizing the existing
end-to-end architecture including the data pipeline architecture. The Data Engineer will
collaborate with software developers, database architects, data analysts, data scientists and platform team on data initiatives and will ensure optimal data delivery architecture is
consistent throughout ongoing projects. The right candidate should have hands-on
experience developing a hybrid set of data pipelines depending on the business requirements.
Responsibilities
- Develop, construct, test, and maintain existing and new data-driven architectures.
- Align architecture with business requirements and provide solutions that best solve the business problems.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Data acquisition from multiple sources across the organization.
- Use programming languages and tools efficiently to collate the data.
- Identify ways to improve data reliability, efficiency and quality
- Use data to discover tasks that can be automated.
- Deliver updates to stakeholders based on analytics.
- Set up practices on data reporting and continuous monitoring
Required Technical Skills
- Graduate in Computer Science or a similar quantitative area
- 1+ years of relevant work experience as a Data Engineer or in a similar role.
- Advanced SQL knowledge and data-modelling experience; experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
- Must have strong big-data core knowledge and experience programming with Spark (Python/Scala).
- Experience with an orchestration tool such as Airflow or similar.
- Experience with Azure Data Factory is good to have.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Good understanding of Git workflow and test-case-driven development; experience using CI/CD is good to have.
- Some understanding of Delta tables is good to have.

It would be an advantage if the candidate also has experience with the following software/tools:
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases
- Experience with cloud data services
- Experience with object-oriented/object function scripting languages: Python, Scala, etc.
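The extract/transform/load work described above can be sketched end-to-end with the standard library, using SQLite as a stand-in for the warehouse; table and column names are hypothetical:

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV rows (here from a string; in practice a
    file, API, or blob store)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize types and drop rows that fail a basic check."""
    clean = []
    for r in rows:
        try:
            clean.append({"sku": r["sku"].strip().upper(), "qty": int(r["qty"])})
        except (KeyError, ValueError):
            continue  # quarantine / bad-row handling would go here
    return clean

def load(rows, conn):
    """Load: upsert into the target table (SQLite stands in for the warehouse)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS inventory (sku TEXT PRIMARY KEY, qty INTEGER)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO inventory VALUES (?, ?)",
        [(r["sku"], r["qty"]) for r in rows],
    )

# One malformed row ("oops") is dropped; the duplicate sku is upserted
raw = "sku,qty\n ab-1 ,5\nxy-2,oops\nab-1,7\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
```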
Skills: Data Science, Python, Machine Learning
- Working closely with business stakeholders to define, strategize, and execute crucial business problem statements which lie at the core of improving current and future data-backed product offerings.
- Building and refining underwriting models for extending credit to sellers and API Partners in collaboration with the lending team
- Conceiving, planning, and prioritizing data projects and managing timelines
- Building analytical systems and predictive models as a part of the agile ecosystem
- Testing the performance of data-driven products and participating in sprint-wise feature releases
- Managing a team of data scientists and data engineers to develop, train and test predictive models
- Managing collaboration with internal and external stakeholders
- Building data-centric culture from within, partnering with every team, learning deeply about business, working with highly experienced, sharp and insanely ambitious colleagues
What you need to have:
- B.Tech/M.Tech/MS/PhD in Data Science, Computer Science, Statistics, or Mathematics & Computation from IIT, BITS Pilani, or ISI, with a demonstrated skill set in leading an analytics and data science team
- 8+ years working in the data science and analytics domain, with 3+ years of experience leading a data science team: understanding which projects to prioritize and how the team strategy aligns with the organization's mission
- Deep understanding of credit risk landscape; should have built or maintained underwriting models for unsecured lending products
- Should have handled a leadership team in a tech startup preferably a fintech/ lending/ credit risk startup.
- We value entrepreneurial spirit: if you have had the experience of starting your own venture, that is an added advantage.
- Strategic thinker with agility and endurance
- Aware of the latest industry trends in Data Science and Analytics with respect to Fintech, Digital Transformations and Credit-lending domain
- Excellent communication skills are key to managing multiple stakeholders such as the leadership team, product teams, and existing and new investors.
- Cloud computing, Python, SQL, ML algorithms, analytics, and a problem-solving mindset
- Knowledge and demonstrated skill-sets in AWS
Skills: Scala, Python
- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Java, Python, JavaScript programming languages
- Great understanding of designing for performance, scalability, and reliability of data-intensive applications
- Hadoop MapReduce, Spark, Pig
- Understanding of database fundamentals and advanced SQL knowledge
- In-depth understanding of object oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
Skills: Python
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script with strong grasp of CS fundamentals, excellent problem solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills
● Good understanding of RDBMS
● Experience in building data pipelines and processing large datasets.
● Knowledge of web scraping and data mining is a plus.
● Working knowledge of open-source data stores such as MySQL, Solr, Elasticsearch, and Cassandra would be a plus.
● Expert in Python programming
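The web-scraping skill mentioned above can be sketched with the standard-library HTML parser; the page markup here is inline and illustrative, and real scraping would fetch pages over HTTP and respect robots.txt:

```python
from html.parser import HTMLParser

class TitleLinkScraper(HTMLParser):
    """Minimal scraper sketch: collect the href and link text of each
    anchor tag. Production jobs would typically use requests/Scrapy and
    handle pagination, throttling, and errors."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        # Pair the pending href with the first non-empty text inside the tag
        if self._href is not None and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

page = ('<ul><li><a href="/jobs/1">Data Engineer</a></li>'
        '<li><a href="/jobs/2">Analyst</a></li></ul>')
scraper = TitleLinkScraper()
scraper.feed(page)
```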
Role and responsibilities
● Inclined towards working in a start-up environment.
● Comfort with frequent, incremental code testing and deployment; data management skills
● Design and build robust and scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting, and analytics.
● Troubleshoot and debug data completeness and quality issues, and scale overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).
Primary responsibilities:
- Architect, Design and Build high performance Search systems for personalization, optimization, and targeting
- Designing systems with Solr, Akka, Cassandra, Kafka
- Algorithmic development with a primary focus on machine learning
- Working with rapid and innovative development methodologies such as Kanban, continuous integration, and daily deployments
- Participate in design and code reviews and recommend improvements
- Unit testing with JUnit, Performance testing and tuning
- Coordination with internal and external teams
- Mentoring junior engineers
- Participate in Product roadmap and Prioritization discussions and decisions
- Evangelize the solution with Professional services and Customer Success teams
Skills: Python, R, Data Analytics
What you will be doing:
As a part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:
- Dive into the data and identify patterns
- Development of end-to-end Credit models and credit policy for our existing credit products
- Leverage alternate data to develop best-in-class underwriting models
- Working on Big Data to develop risk analytical solutions
- Development of Fraud models and fraud rule engine
- Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
- Working on cutting-edge techniques e.g. machine learning and deep learning models
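The fraud rule engine mentioned above can be sketched as a set of named predicates evaluated over a transaction record; the rule names and thresholds are illustrative, not actual policy:

```python
def make_rule_engine(rules):
    """Minimal fraud rule-engine sketch: each rule is a (name, predicate)
    pair; a transaction is scored by which rules fire. The 2-rule
    escalation threshold is an illustrative choice."""
    def evaluate(txn):
        fired = [name for name, pred in rules if pred(txn)]
        return {"flags": fired, "suspicious": len(fired) >= 2}
    return evaluate

# Hypothetical rules over a transaction dict
rules = [
    ("high_amount", lambda t: t["amount"] > 50000),
    ("new_device", lambda t: t["device_age_days"] < 1),
    ("velocity", lambda t: t["txns_last_hour"] > 10),
]
score = make_rule_engine(rules)

ok = score({"amount": 1200, "device_age_days": 90, "txns_last_hour": 2})
bad = score({"amount": 80000, "device_age_days": 0, "txns_last_hour": 15})
```

In practice such hand-written rules usually run alongside a trained fraud model, with the rule engine catching known patterns and the model scoring the rest.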
Example of projects done in past:
- Lazypay credit risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python
- Fraud model development, deployment and rules for EMEA region
Basic Requirements:
- 1-3 years of work experience as a Data scientist (in Credit domain)
- 2016 or 2017 batch from a premium college (e.g B.Tech. from IITs, NITs, Economics from DSE/ISI etc)
- Strong problem-solving skills and the ability to understand and execute complex analyses
- Experience in at least one of R, Python, or SAS, and in SQL
- Experience in the credit industry (fintech/bank)
- Familiarity with the best practices of Data Science
Add-on Skills :
- Experience in working with big data
- Solid coding practices
- Passion for building new tools/algorithms
- Experience in developing Machine Learning models
Skills: Data Science, Machine Learning, Python