• Excellent understanding of machine learning techniques and algorithms, such as SVM, decision forests, k-NN, Naive Bayes, etc.
• Experience in selecting features, building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, GGplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value add.
• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct & deliver experiments and proof of concepts to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations
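The "building and optimizing classifiers" workflow described above can be sketched minimally with scikit-learn; the dataset, feature count, and parameter grid below are illustrative assumptions, not part of the role.

```python
# A minimal sketch: feature selection plus an SVM, tuned by cross-validated
# grid search on a synthetic dataset (all sizes and parameters illustrative).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature selection feeds the classifier inside one pipeline, so the grid
# search tunes the whole chain without leaking test data.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("svm", SVC()),
])
search = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)
print(round(search.score(X_test, y_test), 2))
```

A pipeline like this keeps feature selection inside the cross-validation loop, which is the usual safeguard against overfitting the selection step.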
About Societe Generale Global Solution Centre
About the role:
We are looking for an engineer to apply Deep Learning to implement and improve perception algorithms for autonomous vehicles. The position covers the full life cycle of Deep Learning development, including data collection, feature engineering, model training, and testing. You will have the opportunity to implement state-of-the-art Deep Learning algorithms and apply them to real end-to-end production, working with the team and team lead on challenging Deep Learning projects to deliver product quality improvements.
Responsibilities:
- Build novel architectures for classifying, detecting, and tracking objects.
- Develop efficient Deep Learning architectures that can run in real-time on NVIDIA devices.
- Optimize the stack for deployment on embedded devices.
- Work on Data pipeline – Data Acquisition, pre-processing, and analysis.
Skillsets:
- Languages: C++, Python.
- Frameworks: CUDA, TensorRT, PyTorch, TensorFlow, ONNX.
- Good understanding of Linux and Version Control (Git, GitHub, GitLab).
- Experienced with OpenCV and Deep Learning to solve image-domain problems.
- Strong understanding of ROS.
- Skilled with software design, development, and bug-fixing.
- Coordinate with team members for the development and maintenance of the package.
- Strong mathematical skills and understanding of probabilistic techniques.
- Experience handling large data sets efficiently.
- Experience with deploying Deep Learning models for real-time applications on Nvidia platforms like Drive AGX Pegasus, Jetson AGX Xavier, etc.
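A core step in the object-detection pipelines this role works on is non-maximum suppression (NMS), which prunes overlapping detections; the sketch below is a plain-NumPy illustration, with box format and threshold as assumed conventions.

```python
# A minimal NumPy sketch of non-maximum suppression: keep the highest-scoring
# box and drop any candidate that overlaps it too heavily, then repeat.
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes kept; boxes is (N, 4) as [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]  # highest-confidence boxes first
    keep = []
    while order.size > 0:
        i = int(order[0])
        keep.append(i)
        # Intersection of the top box with each remaining candidate.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                 * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # discard heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]
```

Production stacks usually call an optimized NMS (e.g. inside TensorRT or torchvision) rather than a Python loop; this version only shows the logic.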
Add On Skills:
- Frameworks: PyTorch Lightning
- Experience with autonomous robots
- OpenCV projects, Deep Learning projects
- Experience with 3D data and representations (point clouds, meshes, etc.)
- Experience with a wide variety of Deep Learning models (e.g., LSTM, RNN, CNN, GAN, etc.)
Big Data Engineer
at Propellor.ai
Big Data Engineer/Data Engineer
What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries
These facts may look scary but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation
At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of projects
• By using dependable, scalable, future-proof technology solutions such as Big Data and Cloud Platforms
• By engaging Data Scientists and Data Engineers to provide end-to-end solutions, leading to industrialisation of Data Science model development and deployment
What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily
Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture.
• No hierarchy: anyone can reach the decision maker without hesitation, so that our actions have fruitful and aligned outcomes.
• Each of us is the CEO of their domain. So the criterion behind every choice is that our employees and clients can succeed together!
To read more about us click here:
https://bit.ly/3idXzs0
About the role
We are building an exceptional team of Data Engineers: passionate developers who want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.
Role Description
• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.
Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE
• 3+ years of experience designing technological solutions to complex data problems, and developing & testing modular, reusable, efficient and scalable code to implement those solutions
Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop Ecosystem & Spark components)
• Proficiency in any cloud computing platform (AWS/Azure/GCP); GCP experience (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine) is preferred
• Linux environment, SQL and shell scripting
Desirable
• Statistical or machine learning DSL like R
• Distributed and low latency (streaming) application architecture
• Row store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc
• Familiarity with API design
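The Python-and-SQL pairing listed under must-have skills can be sketched minimally with the standard library's sqlite3 module standing in for a real warehouse; the table and column names below are illustrative.

```python
# A small sketch of a Python + SQL reporting step: load a few rows, then
# aggregate per data source, as an integration workflow might (all names
# are made up for illustration).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, source TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", "web", 10.0), ("u1", "app", 5.0), ("u2", "web", 7.5)],
)
# Count and total per source — a typical reporting aggregation.
rows = conn.execute(
    "SELECT source, COUNT(*), SUM(amount) FROM events "
    "GROUP BY source ORDER BY source"
).fetchall()
print(rows)  # → [('app', 1, 5.0), ('web', 2, 17.5)]
```

In the role itself the same pattern would run against BigQuery, Spark SQL, or another engine; sqlite3 just keeps the example self-contained.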
Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams
Preferred Immediate Joiners
Who Are We?
Vahak (https://www.vahak.in) is India’s largest & most trusted online transport marketplace & directory for road transport businesses and individual commercial vehicle (Trucks, Trailers, Containers, Hyva, LCVs) owners, for online truck and load booking, transport business branding and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and the best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with 7 Lakh+ transporters and lorry owners across 10,000+ locations for daily transport requirements.
Vahak has raised a capital of $5+ Million in a Pre-Series A round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.
Manager Data Science:
We at Vahak are looking for an enthusiastic and passionate Manager of Data Science to join our young & diverse team. You will play a key role in the data science group, working with different teams and identifying use cases that could be solved by applying data science techniques.
Our goal as a group is to drive powerful big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.
Responsibilities:
- Mine and Analyze end to end business data and generate actionable insights. Work will involve analyzing Customer transaction data, Marketing Campaign performance analysis, identifying process bottlenecks, business performance analysis etc.
- Identify data driven opportunities to drive optimization and improvement of product development, marketing techniques and business strategies.
- Collaborate with Product and growth teams to test and learn at unprecedented pace and help the team achieve substantial upside in key metrics
- Actively participate in the OKR process and help team democratize the key KPIs and metrics that drive various objectives
- Comfortable with digital marketing campaign concepts, use of marketing campaign platforms such as Google Adwords and Facebook Ads
- Responsible for design of algorithms that require different advanced analytics techniques and heuristics to work together
- Create dashboards and visualizations from scratch and present data in a logical manner to all stakeholders
- Collaborates with internal teams to create actionable items based off analysis; works with the datasets to conduct complex quantitative analysis and helps drive the innovation for our customers
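The marketing campaign performance analysis mentioned in the responsibilities can be sketched with pandas on made-up data; the column names and the cost-per-conversion metric below are illustrative assumptions.

```python
# An illustrative campaign-performance rollup: aggregate spend and
# conversions per campaign, then derive cost per conversion.
import pandas as pd

campaigns = pd.DataFrame({
    "campaign": ["A", "A", "B", "B"],
    "spend": [100.0, 150.0, 80.0, 120.0],
    "conversions": [10, 20, 4, 6],
})
summary = campaigns.groupby("campaign")[["spend", "conversions"]].sum()
summary["cost_per_conversion"] = summary["spend"] / summary["conversions"]
print(summary)
```

The same groupby-then-derive pattern extends to transaction data or funnel metrics; in practice the input would come from the campaign platforms' exports rather than an inline DataFrame.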
Requirements:
- Bachelor’s or Master’s degree in Engineering, Science, Maths, Economics or other quantitative fields. An MBA is a plus but not required
- 5+ years of proven experience working in Data Science field preferably in ecommerce/web based or consumer technology companies
- Thorough understanding of implementation and analysis of product and marketing metrics at scale
- Strong problem solving skills with an emphasis on product development.
- Fluency in statistical computing languages such as Python and R, and in SQL, as well as a deep understanding of statistical analysis, experiment design and common pitfalls of data analysis
- Should have worked with a relational database like Oracle or MySQL; experience with Big Data systems like BigQuery or Redshift is a definite plus
- Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)
Data Scientist
Experience in pricing models will be a definite plus.
- Adept at machine learning techniques and algorithms
- Feature selection, dimensionality reduction, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Doing ad-hoc analysis and presenting results
- Proficiency in query languages such as N1QL, SQL
- Experience with data visualization tools, such as D3.js, GGplot, Plotly, PyPlot, etc.
- Creating automated anomaly detection systems and constant tracking of their performance
- Strong in Python is a must
- Strong in data analysis and mining is a must
- Deep Learning, Neural Networks, CNN, Image Processing (must)
- Building analytic systems: data collection, cleansing and integration
- Experience with NoSQL databases, such as Couchbase, MongoDB, Cassandra, HBase
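The automated anomaly detection mentioned in the list above can be sketched with a robust z-score over a metric series; the MAD-based score and the 3.5 threshold are one common illustrative choice, not a prescribed method.

```python
# A minimal anomaly-detection sketch: flag points whose robust (median/MAD)
# z-score exceeds a threshold. Assumes the series has nonzero spread.
import numpy as np

def detect_anomalies(values, z_threshold=3.5):
    """Return indices of values flagged as anomalous."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    z = 0.6745 * (values - med) / mad  # robust z-score (Iglewicz & Hoaglin)
    return np.flatnonzero(np.abs(z) > z_threshold)

data = [10, 11, 9, 10, 10, 11, 9, 10, 100]  # last point is an outlier
print(detect_anomalies(data))  # flags index 8
```

The median/MAD formulation is preferred over mean/std here because a single large outlier inflates the standard deviation enough to hide itself; a production system would also track the detector's own precision and recall over time, as the bullet suggests.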
- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Java, Python, JavaScript programming languages
- Great understanding of designing for performance, scalability, and reliability of data intensive application
- Hadoop MapReduce, Spark, Pig
- Understanding of database fundamentals and advanced SQL knowledge
- In-depth understanding of object oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
This is NOT remote work position and the selected candidate is expected to commute to the office location in Navi Mumbai.
Job Description
- Implement and deliver a product in the healthcare domain using machine learning and AI capabilities
- Ability to work on challenging tasks
- Ability to research ways of doing things efficiently
- Keen approach to problem solving
- Strong experience in statistics, Python, SQL, NLP and predictive modeling
Basic Qualifications
- Bachelor's degree with a total of 9+ years of experience in software development, including 2-3 years of experience in Data Science
- Experienced in working with large and multiple datasets, data warehouses and ability to pull data using relevant programs and coding
- Well versed with necessary data preprocessing and feature engineering skills
- Background in healthcare IT space will be preferred
Responsibilities:
- The Machine & Deep Machine Learning Software Engineer (Expertise in Computer Vision) will be an early member of a growing team with responsibilities for designing and developing highly scalable machine learning solutions that impact many areas of our business.
- The individual in this role will help in the design and development of Neural Network (especially Convolution Neural Networks) & ML solutions based on our reference architecture which is underpinned by big data & cloud technology, micro-service architecture and high performing compute infrastructure.
- Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, and production implementation.
Required Skills:
- An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems.
- Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries
- Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts
- Experience in designing full stack ML solutions in a distributed computing environment
- Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, AWS GPU
- Excellent communication skills with multiple levels of the organization
- Image CNN, Image processing, MRCNN, FRCNN experience is a must.
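The convolution operation at the heart of the CNNs this role centers on can be shown in a few lines of plain NumPy; the filter and image below are illustrative, and real networks learn many such filters per layer.

```python
# A pure-NumPy sketch of 2-D convolution (cross-correlation, valid padding,
# stride 1): slide the kernel over the image and sum elementwise products.
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[1.0, -1.0]])     # horizontal-gradient filter
image = np.array([[0.0, 0.0, 1.0, 1.0]] * 3)  # dark-to-bright edge
print(conv2d(image, edge_kernel))  # nonzero only at the edge column
```

Frameworks like TensorFlow and Keras implement this as a vectorized, GPU-accelerated primitive; the loop form only illustrates what a convolutional layer computes before its learned filters and nonlinearity are applied.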
Data Governance Engineer
at European Bank headquartered at Copenhagen, Denmark.
Roles & Responsibilities
- Designing and delivering a best-in-class, highly scalable data governance platform
- Improving processes and applying best practices
- Contribute to all scrum ceremonies, assuming the role of ‘scrum master’ on a rotational basis
- Development, management and operation of our infrastructure to ensure it is easy to deploy, scalable, secure and fault-tolerant
- Flexible on working hours as per business needs