Senior Artificial Intelligence / Machine Learning Developer
This person MUST have:
- B.E. in Computer Science or equivalent
- 5 years of experience with the Django framework
- Experience with building APIs (REST or GraphQL)
- Strong troubleshooting and debugging skills
- React.js knowledge would be an added bonus
- Understanding of how to use a database such as Postgres (preferred choice), SQLite, MongoDB, or MySQL.
- Sound knowledge of object-oriented design and analysis.
- A strong passion for writing simple, clean and efficient code.
- Proficient understanding of code versioning tools such as Git.
- Strong communication skills.
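As a rough illustration of the database fluency the list above asks for, here is a minimal sketch using Python's built-in sqlite3 module (the table name and sample rows are invented for illustration; a production system would point at Postgres instead):

```python
import sqlite3

# In-memory database for demonstration; swap in Postgres for real workloads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users (name, email) VALUES (?, ?)",
    [("Asha", "asha@example.com"), ("Ravi", "ravi@example.com")],
)
conn.commit()

# Parameterized queries (the ? placeholders) guard against SQL injection.
rows = conn.execute("SELECT name FROM users ORDER BY name").fetchall()
print([r[0] for r in rows])  # ['Asha', 'Ravi']
```

The same pattern (connect, parameterized execute, fetch) carries over almost unchanged to Postgres via a driver such as psycopg2.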
Experience:
- Minimum 5 years of experience
- Startup experience is a must.
Location:
- Remote developer
Timings:
- 40 hours a week, with 4 hours a day overlapping with the client's time zone. Clients are typically in the California (PST) time zone.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking in people with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
About: A firm that works with US clients. Permanent WFH.
Similar jobs
- Minimum 2.5 years of experience as a Python Developer.
- Minimum 2.5 years of experience in any framework such as Django/Flask/FastAPI
- Minimum 2.5 years of experience in SQL/Postgres
- Minimum 2.5 years of experience in Git/GitLab/Bitbucket
- Minimum 2+ years of experience in deployment (CI/CD with Jenkins)
- Minimum 2.5 years of experience in any cloud such as AWS/GCP/Azure
- At least 4 to 7 years of relevant experience as Big Data Engineer
- Hands-on experience in Scala or Python
- Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, and Impala.
- Strong programming experience in building applications/platforms using Scala or Python.
- Experience implementing Spark RDD transformations and actions for business analysis.
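As a rough sketch of what the last bullet refers to: in Spark, transformations (e.g. `filter`, `map`) build a lazy lineage, and only an action (e.g. `reduce`, `collect`) triggers computation. The pure-Python analogue below mimics that shape with lazy generators (the sales data is invented; PySpark itself is referenced only conceptually, not imported):

```python
from functools import reduce

orders = [("north", 120.0), ("south", 80.0), ("north", 45.5), ("east", 200.0)]

# "Transformations": lazily filter and map; nothing is computed yet.
north = (o for o in orders if o[0] == "north")   # analogous to rdd.filter(...)
amounts = (amount for _, amount in north)        # analogous to rdd.map(...)

# "Action": reduce forces evaluation, like rdd.reduce(...)
total = reduce(lambda a, b: a + b, amounts)
print(total)  # 165.5
```

In real Spark, the same laziness lets the engine fuse and distribute the transformation pipeline before the action runs.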
We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether services, products, or solutions.
What you will do:
- Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
- Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
- Back-testing investment appraisal models at regular intervals to improve the same
- Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
- Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
- Identifying relevant sub-sector criteria to score and rate investment opportunities internally
Desired Candidate Profile
What you need to have:
- Bachelor’s degree plus CA/MBA, with at least 3 years of relevant work experience (mandatory)
- Experience in working in lending/investing fintech (mandatory)
- Strong Excel skills (mandatory)
- Previous experience in credit rating or credit scoring or investment analysis (preferred)
- Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
- Proficiency in data analysis (preferred)
- Good verbal and written communication skills
Big Data Engineer/Data Engineer
What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on the tech team for writing queries
These facts may look scary, but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of a data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation
At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• by clearly defining the scope of projects
• by using dependable, scalable, future-proof technology solutions such as Big Data and cloud platforms
• by engaging with Data Scientists and Data Engineers to provide end-to-end solutions, leading to industrialisation of Data Science model development and deployment
What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily
Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture
• No hierarchy: anyone can reach the decision-maker without hesitation, so our actions have fruitful and aligned outcomes.
• Each one is the CEO of their domain. So, every choice we make is aimed at letting our employees and clients succeed together!
To read more about us click here:
https://bit.ly/3idXzs0
About the role
We are building an exceptional team of Data Engineers: passionate developers who want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various technology and business teams to deliver our Data Engineering offerings to our clients across the globe.
Role Description
• The role would involve big data pre-processing & reporting workflows, including collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying the learned models for ongoing scoring and prediction.
Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE
• 3+ years of experience designing technological solutions to complex data problems, and developing & testing modular, reusable, efficient, and scalable code to implement those solutions.
Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop Ecosystem & Spark components)
• Proficiency in a cloud computing platform (AWS/Azure/GCP); GCP experience (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine) is preferred
• Linux environment, SQL, and shell scripting
Desirable
• Statistical or machine learning DSL like R
• Distributed and low latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design
Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams
Preferred Immediate Joiners
Outplay is building the future of sales engagement: a solution that helps sales teams personalize at scale while consistently staying on message and on task, through true multi-channel outreach including email, phone, SMS, chat, and social media. Outplay is the only tool your sales team will ever need to crush their goals. Funded by Sequoia and headquartered in the US: Sequoia not only led a $2 million seed round in Outplay early this year, but also followed with a $7.3 million Series A recently. The team is spread remotely all over the globe.
Perks of being an Outplayer :
• Fully remote job - You can be on the mountains or at the beach, and still work with us. Outplay is a 100% remote company.
• Flexible work hours - We believe mental health is way more important than a 9-5 job.
• Health Insurance - We are a family, and we take care of each other - we provide medical insurance coverage to all employees and their family members. We also provide an additional benefit of doctor consultation along with the insurance plan.
• Annual company retreat - we work hard, and we party harder.
• Best tools - we buy you the best tools of the trade
• Celebrations - No, we never forget your birthday or anniversary (be it work or wedding) and we never leave an opportunity to celebrate milestones and wins.
• Safe space to innovate and experiment
• Steady career growth and job security
About the Role:
We are looking for a Senior Data Scientist to help research, develop and advance the charter of AI at Outplay and push the threshold of conversational intelligence.
Job description :
• Lead AI initiatives that dissect data to create new feature prototypes and minimum viable products
• Conduct product research in natural language processing, conversation intelligence, and virtual assistant technologies
• Use independent judgment to enhance product by using existing data and building AI/ML models
• Collaborate with teams, provide technical guidance to colleagues and come up with new ideas for rapid prototyping. Convert prototypes into scalable and efficient products.
• Work closely with multiple teams on projects using textual and voice data to build conversational intelligence
• Prototype and demonstrate AI augmented capabilities in the product for customers
• Conduct experiments to assess the precision and recall of language processing modules and study the effect of such experiments on different application areas of sales
• Assist business development teams in the expansion and enhancement of a feature pipeline to support short and long-range growth plans
• Identify new business opportunities and prioritize pursuits of AI for different areas of conversational intelligence
• Build reusable and scalable solutions for use across a varied customer base
• Participate in long range strategic planning activities designed to meet the company’s objectives and revenue goals
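One of the responsibilities above is assessing the precision and recall of language-processing modules. As a minimal sketch of those two metrics on a binary task (the gold labels and predictions below are invented for illustration):

```python
# Hypothetical gold labels vs. model predictions for a binary task (1 = positive)
gold = [1, 0, 1, 1, 0, 1, 0, 0]
pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)  # true positives
fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)  # false positives
fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of everything flagged, how much was correct
recall = tp / (tp + fn)     # of everything relevant, how much was found
print(precision, recall)  # 0.75 0.75
```

In practice a library routine (e.g. scikit-learn's `precision_score`/`recall_score`) would replace the hand-rolled counts, but the definitions are exactly these ratios.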
Required Skills :
• Bachelors or Masters in a quantitative field such as Computer Science, Statistics, Mathematics, Operations Research or related field with focus on applied Machine Learning, AI, NLP and data-driven statistical analysis & modelling.
• 4+ years of experience applying AI/ML/NLP/Deep Learning/ data-driven statistical analysis & modelling solutions to multiple domains. Experience in the Sales and Marketing domain is a plus.
• Experience in building Natural Language Processing (NLP), Conversational Intelligence, and Virtual Assistants based features.
• Excellent grasp of programming languages such as Python. Experience in Go (Golang) would be a plus.
• Proficient in analysis using Python packages such as pandas, Plotly, NumPy, SciPy, etc.
• Strong and proven programming skills in machine learning and deep learning, with experience in frameworks such as TensorFlow/Keras, PyTorch, Transformers, Spark, etc.
• Excellent communication skills to explain complex solutions to stakeholders across multiple disciplines.
• Experience in SQL, RDBMS, Data Management and Cloud Computing (AWS and/or Azure) is a plus.
• Extensive experience of training and deploying different Machine Learning models
• Experience in monitoring deployed models to proactively capture data drifts, low performing models, etc.
• Exposure to Deep Learning, Neural Networks or related fields
• Passion for solving AI/ML problems for both textual and voice data.
• Fast learner, with great written and verbal communication skills, able to work independently as well as in a team environment
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with, and theoretical understanding of, algorithms for classification, regression, and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional computer vision algorithms.
- Experience with one of the deep learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib, and pandas.
- Ability to transform research articles into working solutions that solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
- Extract and present valuable information from data
- Understand business requirements and generate insights
- Build mathematical models, validate and work with them
- Explain complex topics tailored to the audience
- Validate and follow up on results
- Work with large and complex data sets
- Establish priorities with clear goals and responsibilities to achieve a high level of performance.
- Work in an agile and iterative manner on solving problems
- Proactively evaluate different options and solve problems in innovative ways. Develop new solutions or combine existing methods to create new approaches.
- Good understanding of digital & analytics
- Strong communication skills, orally and in writing
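The requirements above mention algorithms for classification, regression, and clustering. As a toy sketch of the last of these, here is a tiny 1-D k-means written in pure Python (the data points, function name, and parameters are all invented for illustration; real work would use a library such as scikit-learn):

```python
import random

def kmeans_1d(points, k=2, iters=10, seed=0):
    """Tiny 1-D k-means: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for x in points:
            i = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[i].append(x)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5]))  # converges to roughly [1.0, 9.0]
```

The two well-separated groups are recovered regardless of which points are sampled as initial centroids, which is the behaviour k-means relies on for clean clusters.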
Job Overview:
As a Data Scientist, you will work in collaboration with our business and engineering people, on creating value from data. Often the work requires solving complex problems by turning vast amounts of data into business insights through advanced analytics, modeling, and machine learning. You have a strong foundation in analytics, mathematical modeling, computer science, and math - coupled with a strong business sense. You proactively fetch information from various sources and analyze it for better understanding of how the business performs. Furthermore, you model and build AI tools that automate certain processes within the company. The solutions produced will be implemented to impact business results.
Primary Responsibilities:
- Develop an understanding of business obstacles, create solutions based on advanced analytics and draw implications for model development
- Combine, explore, and draw insights from data. Often large and complex data assets from different parts of the business.
- Design and build explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques
- Prototype and pilot new solutions and be a part of the aim of ‘productizing’ those valuable solutions that can have an impact at a global scale
- Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes
- Identify and interpret trends and patterns in complex data sets to enable the business to make data-driven decisions
Responsibilities:
- The Machine Learning & Deep Learning Software Engineer (expertise in Computer Vision) will be an early member of a growing team, with responsibilities for designing and developing highly scalable machine learning solutions that impact many areas of our business.
- The individual in this role will help design and develop neural network (especially Convolutional Neural Network) & ML solutions based on our reference architecture, which is underpinned by big data & cloud technology, a micro-service architecture, and high-performing compute infrastructure.
- Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, development, and production implementation.
Required Skills:
- An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems.
- Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries
- Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts
- Experience in designing full stack ML solutions in a distributed computing environment
- Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, AWS GPU instances
- Excellent communication skills with multiple levels of the organization
- Experience with image CNNs, image processing, Mask R-CNN, and Faster R-CNN is a must.