Responsibilities
- Research, develop and maintain machine learning and statistical models for business requirements
- Work across the spectrum of statistical modelling, including supervised, unsupervised and deep learning techniques, to apply the right level of solution to the right problem
- Coordinate with different functional teams to monitor outcomes and refine and improve the machine learning models
- Implement models to uncover patterns and predictions, creating business value and innovation
- Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
- Develop NLP concepts and algorithms to classify and summarize structured and unstructured text data
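As an illustration of the text-classification work listed above, here is a minimal bag-of-words Naive Bayes sketch in plain Python. It stands in for production NLP tooling (for example BERT-based models) purely for exposition; the spam/ham labels in the usage example are invented.

```python
from collections import Counter, defaultdict
import math

class NaiveBayesTextClassifier:
    """Tiny bag-of-words Naive Bayes classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.label_counts[label] += 1
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

A real system would swap in a pretrained transformer, but the train/predict shape is the same.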
Qualifications
3+ years of experience solving complex business problems using machine
learning.
Fluency in Python and hands-on experience with NLP techniques such as BERT is a must
Strong analytical and critical thinking skills
Experience in building production quality models using state-of-the-art technologies
Familiarity with databases is desirable
Ability to collaborate on projects and work independently when required.
Previous experience in Fintech/payments domain is a bonus
You should have a Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics or another quantitative field from a top-tier institute
Job Title: Credit Risk Analyst
Company: FatakPay FinTech
Location: Mumbai, India
Salary Range: INR 8 - 15 Lakhs per annum
Job Description:
FatakPay, a leading player in the fintech sector, is seeking a dynamic and skilled Credit Risk Analyst to join our team in Mumbai. This position is tailored for professionals who are passionate about leveraging technology to enhance financial services. If you have a strong background in engineering and a keen eye for risk management, we invite you to be a part of our innovative journey.
Key Responsibilities:
- Conduct thorough risk assessments by analyzing borrowers' financial data, including financial statements, credit scores, and income details.
- Develop and refine predictive models using advanced statistical methods to forecast loan defaults and assess creditworthiness.
- Collaborate in the formulation and execution of credit policies and risk management strategies, ensuring compliance with regulatory standards.
- Monitor and analyze the performance of loan portfolios, identifying trends, risks, and opportunities for improvement.
- Stay updated with financial regulations and standards, ensuring all risk assessment processes are in compliance.
- Prepare comprehensive reports on credit risk analyses and present findings to senior management.
- Work closely with underwriting, finance, and sales teams to provide critical input influencing lending decisions.
- Analyze market trends and economic conditions, adjusting risk assessment models and strategies accordingly.
- Utilize cutting-edge financial technologies for more efficient and accurate data analysis.
- Engage in continual learning to stay abreast of new tools, techniques, and best practices in credit risk management.
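The predictive-modelling responsibility above can be sketched as a logistic scoring function over borrower features. The feature names, weights and threshold below are hypothetical placeholders for exposition, not FatakPay's actual model; in practice the weights come from fitting a logistic regression on historical loan outcomes.

```python
import math

# Hypothetical feature weights -- illustrative only, not a fitted model.
WEIGHTS = {"debt_to_income": 2.5, "late_payments": 0.8, "credit_score_norm": -3.0}
BIAS = -1.0

def default_probability(borrower):
    """Map a borrower's feature dict to a probability of default via the
    logistic function: p = 1 / (1 + exp(-(bias + w . x)))."""
    z = BIAS + sum(WEIGHTS[k] * borrower[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(borrower, threshold=0.5):
    """Turn the score into a lending decision at a chosen cutoff."""
    return "decline" if default_probability(borrower) >= threshold else "approve"
```

The threshold would normally be tuned against portfolio risk appetite rather than fixed at 0.5.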
Qualifications:
- Minimum qualification: B.Tech or Engineering degree from a reputed institution.
- 2-4 years of experience in credit risk analysis, preferably in a fintech environment.
- Proficiency in data analysis, statistical modeling, and machine learning techniques.
- Strong analytical and problem-solving skills.
- Excellent communication skills, with the ability to present complex data insights clearly.
- A proactive approach to work in a fast-paced, technology-driven environment.
- Up-to-date knowledge of financial regulations and compliance standards.
We look forward to discovering how your expertise and innovative ideas can contribute to the growth and success of FatakPay. Join us in redefining the future of fintech!
We are looking for a Natural Language Processing (NLP) expert with strong computer science fundamentals and experience working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
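Evaluating NLP model performance, as the responsibilities above mention, typically starts with precision, recall and F1. A from-scratch sketch of the per-class computation:

```python
def f1_report(y_true, y_pred, positive):
    """Precision, recall and F1 for one class, computed from raw counts.
    tp: predicted positive and is positive; fp: predicted positive but is not;
    fn: is positive but predicted otherwise."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

In practice a library such as scikit-learn provides this, but knowing the raw definitions helps when debugging class-imbalanced NLP models.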
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
- Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- BERT
Preferred Requirements
- Experience in Computer Vision is preferred
- Fix issues with plugins for our Python-based ETL pipelines
- Help with automation of standard workflows
- Deliver Python microservices for provisioning and managing cloud infrastructure
- Take responsibility for refactoring code as needed
- Effectively manage the challenges of handling large volumes of data while working to tight deadlines
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment
- Thrive in an environment that uses AWS and Elasticsearch extensively
- Keep abreast of technology and contribute to the engineering strategy
- Champion best development practices and provide mentorship to others
- First and foremost, you are a Python developer, experienced with the Python data stack
- You love and care about data
- Your code is an artistic manifesto reflecting how elegant you are in what you do
- You feel sparks of joy when a new abstraction or pattern arises from your code
- You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
- You are a continuous learner
- You have a natural willingness to automate tasks
- You think critically and have an eye for detail
- Excellent ability and experience working to tight deadlines
- Sharp analytical and problem-solving skills
- Strong sense of ownership and accountability for your work and delivery
- Excellent written and oral communication skills
- Mature collaboration and mentoring abilities
- We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have taken or would like to take, your personal projects, and any contributions to open-source communities)
- Experience delivering complex software, ideally in a FinTech setting
- Experience with CI/CD tools such as Jenkins or CircleCI
- Experience with code versioning (Git / Mercurial / Subversion)
- Deliver plugins for our Python-based ETL pipelines
- Implement algorithms to analyse large data sets
- Draft design documents that translate requirements into code
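The ETL plugin work described above can be sketched as a small plugin registry: transforms register under a name and are applied in sequence. The step names (`strip_nulls`, `uppercase_country`) and row fields are invented for illustration.

```python
# Minimal plugin registry for an ETL pipeline -- a sketch, not our actual framework.
PLUGINS = {}

def plugin(name):
    """Decorator that registers a transform function under a name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("strip_nulls")
def strip_nulls(rows):
    # Drop any row with a missing value.
    return [r for r in rows if all(v is not None for v in r.values())]

@plugin("uppercase_country")
def uppercase_country(rows):
    # Normalise an assumed "country" field to upper case.
    return [{**r, "country": r["country"].upper()} for r in rows]

def run_pipeline(rows, steps):
    """Apply the named plugins in order."""
    for step in steps:
        rows = PLUGINS[step](rows)
    return rows
```

New transforms plug in without touching the pipeline runner, which is the property that makes this shape common in ETL codebases.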
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional
business requirements.
● Building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identifying trends, doing follow-up analysis, preparing visualizations.
● Creating daily, weekly and monthly reports of product KPIs.
● Create informative, actionable and repeatable reporting that highlights
relevant business trends and opportunities for improvement.
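The reporting responsibilities above largely come down to aggregating raw events into per-period KPIs. A minimal sketch, where the event field names (`timestamp`, `amount`) are assumptions for the example:

```python
from collections import defaultdict

def daily_kpis(events):
    """Aggregate raw events into a per-day report of session count and revenue.
    Assumes ISO-8601 timestamps, so the first 10 characters are the date."""
    report = defaultdict(lambda: {"sessions": 0, "revenue": 0.0})
    for e in events:
        day = e["timestamp"][:10]          # e.g. "2024-05-01"
        report[day]["sessions"] += 1
        report[day]["revenue"] += e.get("amount", 0.0)
    return dict(sorted(report.items()))
```

Weekly or monthly rollups follow the same pattern with a different bucketing key.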
Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science.
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred) and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Experience with Google Cloud Data Analytics Products such as BigQuery, Dataflow, Dataproc etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment, and use of
command-line tools including knowledge of shell/Python scripting for
automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
Job Location: Chennai
Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Conde Nast’s industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage) / ELT and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.
Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to at least one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
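Workflow orchestration tools such as Airflow execute tasks in dependency order. The underlying idea can be sketched as a topological sort over a task graph; this is a toy model of the concept, not Airflow's implementation.

```python
from collections import deque

def topological_order(tasks):
    """tasks maps task name -> list of dependencies.
    Returns a valid run order (Kahn's algorithm) or raises on a cycle."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    # Tasks with no unmet dependencies are ready to run (sorted for determinism).
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task graph")
    return order
```

Real orchestrators add scheduling, retries and parallelism on top of exactly this ordering step.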
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, generating a staggering amount of user data in the process. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team dedicated entirely to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuels process innovation, furthers content enrichment, and increases market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Role & responsibilities:
- Developing ETL pipelines for data replication
- Analyze, query and manipulate data according to defined business rules and procedures
- Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
- Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
- Resolve internal and external data exceptions in timely and accurate manner
- Improve multi-environment data flow quality, security, and performance
Skills & qualifications:
- Must have experience with:
- virtualization, containers, and orchestration (Docker, Kubernetes)
- creating log ingestion pipelines (Apache Beam) both batch and streaming processing (Pub/Sub, Kafka)
- workflow orchestration tools (Argo, Airflow)
- supporting machine learning models in production
- Have a desire to continually keep up with advancements in data engineering practices
- Strong Python programming and exploratory data analysis skills
- Ability to work independently and with team members from different backgrounds
- At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
- 3+ years of work in software/data engineering.
- Superior interpersonal, independent judgment, complex problem-solving skills
- Global orientation, experience working across countries, regions and time zones
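Batch and streaming processing, as required above, often reduces to windowed aggregation. A toy tumbling-window event count in plain Python; frameworks like Apache Beam or Kafka Streams handle the same shape at scale, with watermarks and out-of-order data that this sketch ignores.

```python
def windowed_counts(stream, window_size):
    """Tumbling-window counts over a stream of (timestamp, key) pairs.
    Each event lands in the window starting at ts - (ts % window_size)."""
    counts = {}
    for ts, key in stream:
        window = ts - (ts % window_size)   # start of the tumbling window
        counts.setdefault(window, {}).setdefault(key, 0)
        counts[window][key] += 1
    return counts
```

The same function works for a batch (a list) or a true stream (a generator), which is one reason batch and streaming code can share logic.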
Job Description
We are looking for a highly capable machine learning engineer to optimize our deep learning systems. You will evaluate existing deep learning (DL) processes, tune hyperparameters, perform statistical analysis (logging and evaluating model performance) to resolve data set problems, and enhance the accuracy of our AI software's predictive automation capabilities.
You will work with technologies like AWS SageMaker, TensorFlow.js, and TensorFlow/Keras/TensorBoard to create the deep learning backends that power our application.
To succeed as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a deep learning role. A first-class machine learning engineer is someone whose expertise translates into the enhanced performance of predictive automation software. To do this job successfully, you need exceptional skills in DL and programming.
Responsibilities
- Consult with managers to determine and refine machine learning objectives.
- Design deep learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transform data science prototypes and apply appropriate ML algorithms and tools.
- Carry out data engineering subtasks such as defining data requirements and collecting, labeling, inspecting, cleaning, augmenting, and moving data.
- Carry out modeling subtasks such as training deep learning models, defining evaluation metrics, searching hyperparameters, and reading research papers.
- Carry out deployment subtasks such as converting prototyped code into production code, working in depth with AWS services to set up the cloud environment for training, improving response times, and saving bandwidth.
- Ensure that algorithms generate robust and accurate results.
- Run tests, perform analysis, and interpret test results.
- Document machine learning processes.
- Keep abreast of developments in machine learning.
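The hyperparameter-search subtask above can be sketched as an exhaustive grid search. Here `train_eval` is a placeholder for any train-and-validate routine that returns a validation score; services like SageMaker offer managed versions of the same loop.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive hyperparameter search.
    grid maps parameter name -> list of candidate values;
    train_eval(params) returns a validation score (higher is better)."""
    names = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = train_eval(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Grid search is the simplest baseline; random or Bayesian search usually scales better as the number of hyperparameters grows.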
Requirements
- Proven experience as a machine learning engineer or in a similar role.
- In-depth knowledge of AWS SageMaker and related services (such as S3).
- Extensive knowledge of ML frameworks, libraries, algorithms, data structures, data modeling, software architecture, and math and statistics.
- Ability to write robust code in Python and JavaScript (TensorFlow.js).
- Experience with Git and GitHub.
- Superb analytical and problem-solving abilities.
- Excellent troubleshooting skills.
- Good project management skills.
- Great communication and collaboration skills.
- Excellent time management and organizational abilities.
- Bachelor's degree in computer science, data science, mathematics, or a related field; a Master’s degree is a plus.
About Us
Zupee is India’s fastest-growing innovator in real money gaming, with a focus on predominantly skill-based games. Started by two IIT-Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.
Know more about our recent funding: https://bit.ly/3AHmSL3
Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.
Location: We are location agnostic & our teams work from anywhere. Physically we are based out of Gurugram
Core Responsibilities:
- Responsible for designing, developing, testing, and deploying ML models that can leverage multiple signal sources, to build a personalized user experience
- Work closely with the business stakeholders to identify the potential Data science applications
- Contribute by doing opportunity analysis, building project proposals, designing and implementation of ML projects, in the areas of Ranking, Embeddings, Recommendation engines, etc
- Collaborate with software engineering teams to design experiments, model implementations, and new feature creation
- Clearly communicate technical details, strategies, and outcomes to a business audience
What are we looking for
- 4+ years of hands-on experience in data science using techniques including but not limited to regression, classification, NLP etc.
- Previous experience in model deployment, model monitoring, optimization and model interpretability.
- Expertise with Random forests, Gradient boosting, KNN, Regression and unsupervised learning algorithms
- Experience in using Neural Networks like ANN, RNN, Reinforcement Learning or Deep Learning etc.
- Solid understanding of Python and common machine learning frameworks such as XGBoost, scikit-learn, PyTorch/Tensorflow
- Outstanding problem-solving skills, with demonstrated ability to think creatively and strategically
- Technology-driven mindset, up-to-date with digital and technology literature, trends.
- Must have knowledge of Experimentation and Basic Statistics
- Must have experience in Predictive Analytics
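Recommendation engines of the kind mentioned above often start from vector similarity: score every item against a user's interest vector and return the top matches. A toy cosine-similarity ranker; the game names and feature vectors below are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_vector, item_vectors, k=2):
    """Rank items by cosine similarity to the user's interest vector."""
    ranked = sorted(item_vectors,
                    key=lambda item: cosine(user_vector, item_vectors[item]),
                    reverse=True)
    return ranked[:k]
```

Production systems replace these hand-made vectors with learned embeddings, but the ranking step keeps the same shape.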
Non-Desired Skills:
- Experience in Descriptive Analytics: please do not apply
- Experience in NLP Text Analysis, Image, or Speech Analysis: please do not apply
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be
responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing
data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline
builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data
scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout
ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple
teams, systems and products.
Responsibilities for Data Engineer
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes,
optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data
from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer
acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with
data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and
optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
Experience building and optimizing big data ETL pipelines, architectures and data sets.
Advanced working SQL knowledge and experience working with relational databases, query
authoring (SQL) as well as working familiarity with a variety of databases.
Experience performing root cause analysis on internal and external data and processes to
answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency and
workload management.
A successful history of manipulating, processing and extracting value from large disconnected
datasets.
Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data
stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role, who has
attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
Experience with big data tools: Spark, Kafka, HBase, Hive etc.
Experience with relational SQL and NoSQL databases
Experience with AWS cloud services: EC2, EMR, RDS, Redshift
Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL
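Several of the skills above (Spark, ETL, object-oriented scripting) revolve around key-based aggregation. The shape of Spark's reduceByKey can be sketched in plain Python; this is an illustration of the concept, not how Spark distributes the work.

```python
from collections import defaultdict
from functools import reduce

def reduce_by_key(pairs, fn):
    """Group (key, value) pairs by key and fold each group with fn,
    mirroring the shape of Spark's reduceByKey on a single machine."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: reduce(fn, vs) for k, vs in groups.items()}
```

In Spark the grouping happens across partitions with a shuffle, but the per-key fold is the same idea.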