Basic Qualifications:
∙Bachelor's in Computer Science/Mathematics plus research (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from Tier-1 tech institutes.
∙3+ years of relevant experience building large-scale machine learning or deep learning models and/or systems.
∙1 year or more of experience specifically with deep learning (CNNs, RNNs, LSTMs, RBMs, etc.).
∙Strong working knowledge of deep learning, machine learning, and statistics.
∙Deep domain understanding of Personalization, Search, and Visual.
∙Strong math skills with statistical modeling / machine learning.
∙Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow (a minimal sketch follows this list).
∙Experience using Python and statistical/machine learning libraries.
∙Ability to think creatively and solve problems.
∙Data presentation skills.
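For illustration only, here is a minimal TensorFlow/Keras sketch of the kind of hands-on model building this list describes; the tiny CNN and the MNIST-style input shape are placeholder assumptions, not Couture.ai's actual models:

    import tensorflow as tf

    # A tiny CNN classifier; the architecture and input shape are placeholders.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5)  # assuming prepared image arrays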
Preferred:
∙MS/PhD (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from IISc or other top global universities.
∙Or publications in highly regarded journals (if available, please share links to your published work).
∙Or a history of scaling ML/deep learning algorithms at massively large scale.
About Couture.ai
Couture.ai was founded by global innovators and entrepreneurs with past experience of creating, and self-funding, global startup success stories. The core team consists of some of the best minds in India in machine learning and deep learning research.
Publicis Sapient
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on the design, development, and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (see the sketch after this list)
• Build functionality for data analytics, search and aggregation
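For illustration, a minimal PySpark sketch of batch plus real-time ingestion as described above; the bucket paths, Kafka broker, and topic are hypothetical placeholders, and the streaming read assumes the spark-sql-kafka connector is on the classpath:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    # Batch mode: one-shot read of a directory of CSV files.
    batch_df = spark.read.option("header", True).csv("s3a://example-bucket/raw/")

    # Real-time mode: continuous read from a Kafka topic.
    stream_df = (spark.readStream
                 .format("kafka")
                 .option("kafka.bootstrap.servers", "broker:9092")
                 .option("subscribe", "events")
                 .load())

    # Land the stream as Parquet, with checkpointing for fault tolerance.
    query = (stream_df.writeStream
             .format("parquet")
             .option("path", "s3a://example-bucket/ingested/")
             .option("checkpointLocation", "s3a://example-bucket/checkpoints/")
             .start())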
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies, with working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4.Performance tuning and optimization of data pipelines
5. CI/CD: infrastructure provisioning on cloud, automated build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Company Name: Curl Tech
Location: Bangalore
Website: www.curl.tech
Company Profile: Curl Tech is a deep-tech firm based in Bengaluru, India. Curl develops products & solutions leveraging emerging technologies such as Machine Learning, Blockchain (DLT) & IoT, in domains such as Commodity Trading, Banking & Financial Services, Healthcare, Logistics & Retail.
Curl was founded by technology enthusiasts with rich industry experience. Products and solutions developed at Curl have gone on to considerable success and have in turn become separate companies focused on that product / solution.
If you are looking for a job that will challenge you, and you want to work with an organization that disrupts entire value chains, Curl is the right one for you!
Designation: Data Scientist or Junior Data Scientist (according to experience)
Job Description:
Strong in machine learning and deep learning, and good with programming and maths.
Details: The candidate will work on many image analytics / numerical data analytics projects. The work involves data collection, building machine learning models, deployment, client interaction, and publishing academic papers.
Responsibilities:
- The candidate will work on many image analytics / numerical data projects.
- The candidate will build various machine learning models depending on the requirements.
- The candidate will be responsible for deployment of the machine learning models.
- The candidate will be the face of the company in front of clients and will have regular client interactions to understand client requirements.
What we are looking for in candidates:
- Basic understanding of statistics, time series, machine learning, and deep learning, including their fundamentals and mathematical underpinnings.
- Proven code proficiency in Python, C/C++, or any other AI language of choice.
- Strong algorithmic thinking, creative problem solving, and the ability to take ownership and do independent research.
- Understanding how things work internally in ML and DL models is a must.
- Understanding of the fundamentals of computer vision and image processing techniques would be a plus.
- Expertise in OpenCV and ML / neural network technologies and frameworks such as PyTorch or TensorFlow would be a plus (a minimal sketch follows this list).
- Educational background in any quantitative field (Computer Science / Mathematics / Computational Sciences and related disciplines) will be given preference.
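For illustration, a minimal PyTorch sketch of the kind of framework familiarity described above; the tiny network, random input, and labels are placeholders, not a Curl Tech model:

    import torch
    import torch.nn as nn

    # A tiny feed-forward model; sizes, inputs, and labels are placeholders.
    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )
    x = torch.randn(4, 16)          # batch of 4 random feature vectors
    logits = model(x)               # forward pass
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
    loss.backward()                 # gradients for one training step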
Education: BE / BTech / B.Sc. (Physics or Mathematics) / Master's in Mathematics, Physics, or related branches.
Job Description
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficiency in AWS S3 and data lakes
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices
- Scripting languages: Python & PySpark (see the sketch after this list)
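For illustration, a minimal PySpark sketch of how these requirements typically fit together: file-based ingestion of raw CSV into partitioned Parquet on S3. The bucket names, columns, and partition key are hypothetical, and this is a generic sketch rather than this employer's actual pipeline:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("glue-style-ingestion").getOrCreate()

    # Read raw CSV files landed in S3 (file-based ingestion).
    df = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

    # Write partitioned Parquet into the data lake's curated zone.
    (df.write
       .mode("append")
       .partitionBy("order_date")
       .parquet("s3://example-lake-bucket/curated/orders/"))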
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Ingest data from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Process and transform data using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations (see the sketch after this list)
- Develop infrastructure to collect, transform, combine, and publish/distribute customer data
- Define process improvement opportunities to optimize data collection, insights, and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
- Identify and interpret trends and patterns in complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
- Be a key participant in regular Scrum ceremonies with the agile teams
- Be proficient at developing queries, writing reports, and presenting findings
- Mentor junior members and bring in industry best practices
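For illustration, a minimal sketch of an automated data quality check of the kind described in the list above, assuming a PySpark DataFrame; the column names and expectations are hypothetical:

    from pyspark.sql import functions as F

    def run_quality_checks(df):
        """Fail fast if ingested data violates basic expectations."""
        total = df.count()
        null_ids = df.filter(F.col("customer_id").isNull()).count()
        dupes = total - df.dropDuplicates(["customer_id"]).count()
        assert total > 0, "no rows ingested"
        assert null_ids == 0, f"{null_ids} rows missing customer_id"
        assert dupes == 0, f"{dupes} duplicate customer_id rows"
        return {"rows": total, "null_ids": null_ids, "duplicates": dupes}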
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science, or a related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g., SAS, SQL, R, Python)
  - Database technologies (e.g., PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization (e.g., Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
- Good written and oral communication skills and the ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies, and techniques
Familiarity and experience with the following are a plus:
- AWS certification
- Spark Streaming
- Kafka Streams / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Freight Commerce Solutions Pvt Ltd.
• Create and maintain data pipelines (see the sketch after this list)
• Build and deploy ETL infrastructure for optimal data delivery
• Work with various teams, including product, design, and the executive team, to troubleshoot data-related issues
• Create tools for data analysts and scientists to help them build and optimise the product
• Implement systems and processes for data access controls and guarantees
• Distill knowledge from experts in the field outside the org and optimise internal data systems
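For illustration, a minimal Airflow sketch of a maintained data pipeline of the kind described above (Airflow appears later in this posting's tool list); the DAG id, schedule, and task bodies are hypothetical placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw data from the source systems")

    def transform():
        print("clean and join the extracted data")

    def load():
        print("publish curated tables for downstream users")

    # A minimal daily extract -> transform -> load pipeline.
    with DAG(dag_id="example_etl",
             start_date=datetime(2024, 1, 1),
             schedule_interval="@daily",
             catchup=False) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task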
Preferred qualifications/skills:
• 5+ years of experience
• Strong analytical skills
• Degree in Computer Science, Statistics, Informatics, or Information Systems
• Strong project management and organisational skills
• Experience supporting and working with cross-functional teams in a dynamic environment
• SQL guru with hands-on experience on various databases
• NoSQL databases like Cassandra and MongoDB
• Experience with Snowflake and Redshift
• Experience with tools like Airflow and Hevo
• Experience with Hadoop, Spark, Kafka, and Flink
• Programming experience in Python, Java, and Scala
Hiring - Python Developer Freelance Consultant (WFH-Remote)
Greetings from Deltacubes Technology!
Skillset Required:
Python
PySpark
AWS
Scala
Experience:
5+ years
Thanks
Bavithra
• Solid technical / data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results (see the sketch after this list)
• Be data-driven and outcome-focused
• Must have good business judgment, with a demonstrated ability to think creatively and strategically
• Must be an intuitive, organized, analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; able to drive projects with minimal guidance and to focus on high-impact work
• Learns continuously; seeks out knowledge, ideas, and feedback
• Looks for opportunities to build own skills, knowledge, and expertise
• Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive)
• Experience in the risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
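For illustration, a minimal sketch of extracting and shaping a large dataset with SQL and Python as described in the first bullet above; the SQLite database, table, and columns are hypothetical placeholders:

    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("example.db")  # hypothetical local database
    df = pd.read_sql_query(
        "SELECT customer_id, segment, revenue FROM transactions", conn)

    # Aggregate revenue by segment to support a business decision.
    summary = df.groupby("segment")["revenue"].agg(["count", "sum", "mean"])
    print(summary)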