Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and to achieving extraordinary precision in analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you’ll be core to the development of our robotic harvesting system’s visual intelligence. You’ll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you’ll directly shape our success, growth, and culture. You’ll hold a significant role and will have the opportunity to grow it as Datasee.AI grows.
What you’ll do
- Work with the core R&D team that drives our computer vision and image processing development.
- Build deep learning models for object detection on large-scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation.
- Coordinate and communicate across the computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate workflows across our fast-paced data delivery systems.
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning
- Extensive use of Python
- Experience with Python libraries such as OpenCV, TensorFlow, and NumPy
- Familiarity with a deep learning library such as Keras or PyTorch
- Hands-on experience with different CNN architectures such as FCN, R-CNN, Fast R-CNN, and YOLO
- Experience with hyperparameter tuning, data augmentation, data wrangling, model optimization, and model deployment
- B.E./M.E./M.Sc. in Computer Science/Engineering or a relevant degree
- Experience with Docker, AWS services, and production-level modeling
- Basic knowledge of GIS fundamentals would be an added advantage
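To give a flavour of the object-detection work this role involves, here is a minimal, dependency-free sketch of the intersection-over-union (IoU) metric routinely used when evaluating detectors such as YOLO or Fast R-CNN. This is purely illustrative and not Datasee.AI code:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: the tighter of the two boxes on each edge.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

In practice a detector's predicted boxes are matched to ground truth at an IoU threshold (0.5 is a common choice) when computing precision and recall.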
Preferred Requirements
- Experience with Qt, desktop application development, and desktop automation
- Knowledge of satellite image processing, Geographic Information Systems, GDAL, QGIS, and ArcGIS
About Datasee.AI:
Datasee.AI, Inc. is an AI-driven image analytics company offering asset management solutions for industries in the renewable energy, infrastructure, utilities, and agriculture sectors. With core expertise in image processing, computer vision, and machine learning, Datasee.AI’s solution provides value across the enterprise for all stakeholders through a data-driven approach.
With sales and operations based out of the US, Europe, and India, Datasee.AI is a team of 32 people located across different geographies, with varied domain expertise and interests.
We are a focused and happy bunch of people who take tasks head-on and build scalable platforms and products.
About DATASEE.AI, INC.
Datasee.AI was started to serve one purpose – make the power of big data and artificial intelligence easily accessible for every business at every stage of their operational cycle. With our image analytics platform, businesses can digitize their assets with a simple pane for all teams, avoid operational and organizational silos, mitigate risks, and increase profitability. While predominantly used by players in the renewable energy and farming sectors, our team is working to expand our capabilities across multiple industries. At Datasee.AI, we use cutting-edge technology to minimize human errors.
KEY POINTERS
1. The average age of our employees is 24!
2. One of the very few multi-disciplinary engineering companies in India with global operations (six countries & counting)
3. Multi-disciplinary engineering is our forte - engineers from Mechanical, Environmental, Geo-Informatics, Computer Science, Energy Engineering, Aerospace Engineering, Product Design, Electrical Engineering, and even Irrigation Management
4. Focused on building products for clean energy management & monitoring
Join us to learn and grow together!
Similar jobs
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of machine learning, NLP, and statistics
- Exposure to or understanding of RPA/OCR/cognitive capture tools like Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and a collaborative attitude
- Ability to work with multiple internal teams and stakeholders, such as the Analytics, RPA, Technology, and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 years in the BPO/KPO/ITES/BFSI/Retail/Travel/Utilities/Service industry
Good to have
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.NET, C# (C-Sharp), SQL Server, and web services
Qualification -
- Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B-Tech, MCA, or MBA
Chatbot Developer
We are a conversational AI product development company located in the USA and Bangalore.
We are looking for a Senior Chatbot/JavaScript Developer to join the Avaamo PSG (delivery) team.
Responsibilities:
- Work as an independent team member analyzing requirements, designing, coding, and implementing Conversational AI products.
- As a product expert, work closely with IT managers and business groups to gather requirements and translate them into the required technical solution.
- Drive solution implementation using the Conversational design approach.
- Develop, deploy and maintain customized extensions to the Avaamo platform-specific to customer requirements.
- Conduct training and technical guidance sessions for partner and customer development teams.
- Evaluate reported defects and correct prioritized defects.
- Travel onsite to customer locations for close support.
- Document how to's and implement best practices for Avaamo product solutions.
Requirements:
- Strong programming experience in JavaScript and HTML/CSS.
- Experience creating and consuming REST APIs and SOAP services.
- Strong knowledge and awareness of web technologies and current web trends.
- Working knowledge of security in web applications and services.
- Experience using the Node.js framework, with a good understanding of its underlying architecture.
- Experience deploying web applications on Linux servers in a production environment.
- Excellent communication skills.
Good to have:
- Full-stack experience; UI and UX design experience or insights
- Working knowledge of AI, ML, and NLP.
- Experience with enterprise systems integration, e.g. MS Dynamics CRM, Salesforce, ServiceNow, MS Active Directory, etc.
- Experience building single sign-on into web/mobile applications.
- Ability to learn the latest technologies and lead small engineering teams.
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write an optimized codebase at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- At least 3+ years of experience in a Data Scientist role
- Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning; deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
- Experience training machine learning (ML) algorithms for forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Ability to write C++ and Python code, using TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience training time series forecasting methods like Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward NNs and nonlinear autoregressive models
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
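As a flavour of the classical forecasting methods listed above, here is a minimal, dependency-free sketch of a rolled-forward Moving Average (MA) forecast. Real pipelines would use statsmodels or a comparable library; this is illustrative only:

```python
def moving_average_forecast(series, window, horizon):
    """Naive MA forecast: each future step is the mean of the last `window` values,
    and each forecast is appended to the history before predicting the next step."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        mean = sum(history[-window:]) / window
        forecasts.append(mean)
        history.append(mean)  # roll the forecast forward
    return forecasts
```

For example, `moving_average_forecast([1, 2, 3, 4], window=2, horizon=2)` yields `[3.5, 3.75]`: the first step averages 3 and 4, and the second averages 4 and the freshly forecast 3.5.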
Principal Accountabilities :
1. Good communication skills and the ability to convert business requirements into functional requirements
2. Develop data-driven insights and machine learning models to identify and extract facts from sales, supply chain, and operational data
3. Sound knowledge of and experience in statistical and data mining techniques: regression, random forests, boosting trees, time series forecasting, etc.
4. Experience in SOTA deep learning techniques to solve NLP problems
5. End-to-end data collection, model development and testing, and integration into production environments
6. Build and prototype analysis pipelines iteratively to provide insights at scale
7. Experience querying different data sources
8. Partner with developers and business teams on business-oriented decisions
9. We are looking for someone who dares to move forward even when the path is unclear and who is creative in overcoming challenges in the data
Required Skills (Technical):-
1. Advanced knowledge of statistical techniques, NLP, machine learning algorithms, and deep learning frameworks like TensorFlow, Theano, Keras, and PyTorch
2. Proficiency with modern statistical modeling (regression, boosting trees, random forests, etc.), machine learning (text mining, neural networks, NLP, etc.), and optimization (linear, nonlinear, stochastic, etc.) methodologies
3. Build complex predictive models using ML and DL techniques with production-quality code, and jointly own complex data science workflows with the Data Engineering team
4. Familiarity with modern data analytics architecture and data engineering technologies (SQL and NoSQL databases)
5. Knowledge of REST APIs and web services
6. Experience with Python, R, and sh/bash
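By way of illustration of the statistical-modeling side of the role, here is a dependency-free ordinary-least-squares fit of a simple linear regression. In practice this would be done with scikit-learn, R, or statsmodels; the sketch just shows the underlying computation:

```python
def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a*x + b on paired observations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x); intercept centers the line.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b
```

Fitting the points (0, 1), (1, 3), (2, 5), (3, 7) recovers slope 2 and intercept 1, the line they lie on exactly.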
Required Skills (Non-Technical):-
1. Fluent in English communication (spoken and written)
2. Should be a team player
3. Should have a learning aptitude
4. Detail-oriented, analytical and inquisitive
5. Ability to work independently and with others
6. Extremely organized with strong time-management skills
7. Problem Solving & Critical Thinking
Required Experience Level :- Senior level, 4+ years
Work Location : Pune preferred, Remote option available
Work Timing : 2:30 PM to 11:30 PM IST
Job Summary
Condé Nast is seeking an experienced and highly motivated Software Engineer - ML who will support productionizing projects in a Databricks environment for the data science team. We expect the person to be a software/data engineer experienced in building robust ML systems and deploying ML pipelines in production, able to study and transform data science prototypes into engineering products, and knowledgeable about machine learning models.
**This role is NOT for building Machine Learning models**
Primary Responsibilities
● Operationalize ML models into production environment(s) by building data pipelines
● Design and develop machine learning systems
● Keep abreast of developments in the field
● Come up with engineering ideas to resolve problems faced with respect to the ML pipeline
● Design and code highly scalable machine learning frameworks that process large volumes of data
● Engineer a near-real-time system that can process massive amounts of data efficiently
● Collaborate with other Machine Learning Engineers and Data Scientists in architecting and engineering the solution
● Participate in the entire development lifecycle, from concept to release
● Participate in all phases of quality assurance and defect resolution
Desired Skills & Qualifications
● 5+ years of software development experience with highly scalable systems involving machine learning and big data
● Understanding of data structures, data modeling, and software architecture
● Strong software development skills with proficiency in Python/PySpark
● Experience with big data technologies such as Spark and Hadoop
● Familiarity with machine learning frameworks and libraries would be a good-to-have skill
● Excellent communication skills
● Ability to work in a team
● Outstanding analytical and problem-solving skills
● Applicants should have an undergraduate/postgraduate degree in Computer Science or a related discipline
About Condé Nast
CONDÉ NAST GLOBAL
Condé Nast is a global media house with over a century of distinguished publishing history. With a
portfolio of iconic brands like Vogue, GQ, Vanity Fair, The New Yorker and Bon Appétit, we at Condé Nast
aim to tell powerful, compelling stories of communities, culture and the contemporary world. Our
operations are headquartered in New York and London, with colleagues and collaborators in 32 markets
across the world, including France, Germany, India, China, Japan, Spain, Italy, Russia, Mexico, and Latin
America.
Condé Nast has been raising the industry standards and setting records for excellence in the publishing
space. Today, our brands reach over 1 billion people in print, online, video, and social media.
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms -
in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in
understanding this data and formed a whole new Data team entirely dedicated to data processing,
engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation,
further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language, and to facilitate an environment where insights shared in real time can improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé
Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online
revenue. We are broadly divided into four groups, Data Intelligence, Data Engineering, Data Science, and
Operations (including Product and Marketing Ops, Client Services) along with Data Strategy and
monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of
self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a
media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
About IDfy
IDfy has been ranked amongst the World's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used any major payment wallet, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.
About the team
- The machine learning team is a closely knit team responsible for building models and services that support key workflows for IDfy.
- Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
- The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
- We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform and not a services company.
About the role
In this role you will:
- Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models, maintaining them in production, and more.
- Work on performance tuning of models
- From time to time work on support and debugging of these production systems
- Research the latest technology in our areas of interest and apply it to build new products and enhance the existing platform.
- Build workflows for training and production systems.
- Contribute to documentation
While the emphasis will be on researching, building, and deploying models into production, you will be expected to contribute to all the aspects mentioned above.
About you
You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 5+ years of experience in production machine learning.
Must Haves
- You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
- Deep expertise in computer vision or NLP, with experience putting it into production at scale.
- You have experienced first-hand, and understand, that modelling is only a small part of building and delivering AI solutions, and you know what it takes to keep a high-performance system up and running.
- Managing a large scale production ML system for at least a couple of years
- Optimization and tuning of models for deployment at scale
- Monitoring and debugging of production ML systems
- An enthusiasm and drive to learn, assimilate, and disseminate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
- Past experience of mentoring junior colleagues
- Knowledge of and experience in ML Ops and tooling for efficient machine learning processes
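Production-monitoring responsibilities like those above often start with something as simple as a latency wrapper around model inference. A hypothetical sketch (the names and threshold are illustrative, not IDfy's actual stack):

```python
import time
from functools import wraps

def monitor_latency(log, threshold_ms=100.0):
    """Decorator that records each call's latency and flags calls slower than the threshold."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            # Append a structured record; a real system would ship this to a metrics backend.
            log.append({"fn": fn.__name__, "ms": elapsed_ms,
                        "slow": elapsed_ms > threshold_ms})
            return result
        return wrapper
    return decorate
```

Wrapping a `predict` function this way makes p99 latency and slow-call rates observable without touching the model code itself.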
Good to Have
- Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
- We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
- Experience using other platforms, frameworks, and tools.
Other things to keep in mind
- Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.
Life At IDfy
People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.
We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.
Responsibilities:
- The Machine & Deep Machine Learning Software Engineer (Expertise in Computer Vision) will be an early member of a growing team with responsibilities for designing and developing highly scalable machine learning solutions that impact many areas of our business.
- The individual in this role will help design and develop neural network (especially convolutional neural network) and ML solutions based on our reference architecture, which is underpinned by big data and cloud technology, micro-service architecture, and high-performance compute infrastructure.
- Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, development, and production implementation.
Required Skills:
- An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems.
- Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries
- Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts
- Experience in designing full stack ML solutions in a distributed computing environment
- Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU instances
- Excellent communication skills with multiple levels of the organization
- Experience with image CNNs, image processing, Mask R-CNN, and Faster R-CNN is a must.
Required skills
- Around 6-8.5 years of experience, with around 4+ years in the AI/machine learning space
- Extensive experience in designing large-scale machine learning solutions for ML use cases, handling large-scale deployments, and establishing continuous automated improvement/retraining frameworks
- Strong experience in Python and Java is required
- Hands-on experience with scikit-learn, pandas, and NLTK
- Experience handling time series data and associated techniques like Prophet and LSTM
- Experience with regression, clustering, and classification algorithms
- Extensive experience building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feed-forward) is required
- Experience with AutoML tools like TPOT or others
- Must have strong hands-on experience with deep learning frameworks like Keras, TensorFlow, or PyTorch
- Knowledge of Capsule Networks, reinforcement learning, or SageMaker is a desirable skill
- An understanding of the financial domain is a desirable skill
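As a flavour of the time series techniques listed above, here is a minimal, dependency-free simple-exponential-smoothing sketch. Production forecasting would reach for Prophet, statsmodels, or an LSTM; this is illustrative only:

```python
def simple_exponential_smoothing(series, alpha):
    """Return the smoothed level after each observation.

    The level is an exponentially weighted average of past values:
    level_t = alpha * x_t + (1 - alpha) * level_{t-1}.
    The final level serves as the one-step-ahead forecast.
    """
    level = series[0]
    levels = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        levels.append(level)
    return levels
```

With `alpha=0.5` the series `[10, 20, 30]` smooths to `[10, 15.0, 22.5]`; larger `alpha` values react faster to recent observations, smaller values damp noise more.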
Responsibilities
- Design and implement solutions for ML use cases
- Productionize systems and maintain them
- Lead and implement the data acquisition process for ML work
- Learn new methods and models quickly and apply them to solve use cases