- Partner with business stakeholders to translate business objectives into clearly defined analytical projects.
- Identify opportunities for text analytics and NLP to enhance the core product platform, select the machine learning techniques best suited to the specific business problem, and build the models that solve it.
- Own the end-to-end process, from recognizing the problem to implementing the solution.
- Define the variables and their inter-relationships and extract the data from our data repositories, leveraging infrastructure including cloud computing solutions and relational database environments.
- Build predictive models that are accurate and robust and that help our customers utilize the core platform to the fullest extent.
Skills and Qualifications
- 12 to 15 years of experience.
- An advanced degree in predictive analytics, machine learning, or artificial intelligence, or a degree in programming with significant text analytics/NLP experience. The candidate should have a strong background in machine learning (unsupervised and supervised techniques), and in particular an excellent understanding of machine learning techniques and algorithms such as k-NN, Naive Bayes, SVMs, decision forests, logistic regression, MLPs, RNNs, etc.
- Experience with text mining, parsing, and classification using state-of-the-art techniques.
- Experience with information retrieval, Natural Language Processing, Natural Language Understanding, and Neural Language Modeling.
- Ability to evaluate the quality of ML models and to define the right performance metrics in accordance with the requirements of the core platform (a minimal evaluation sketch using scikit-learn follows this list).
- Experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, Gensim, etc.
- Excellent verbal and written communication skills, particularly the ability to communicate technical results and recommendations to both technical and non-technical audiences.
- Ability to perform high-level work both independently and collaboratively as a project member or leader on multiple projects.
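As a hedged illustration of the model-evaluation point above, the sketch below trains a toy text classifier with scikit-learn and reports standard metrics (precision, recall, F1, ROC AUC). The texts, labels and model choice are illustrative placeholders, not part of the role description.

    # Minimal sketch: training and evaluating a toy text classifier with
    # scikit-learn. Texts, labels and the metric choice are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report, roc_auc_score
    from sklearn.model_selection import train_test_split

    texts = [
        "great product, works well", "love the new dashboard",
        "fast and reliable platform", "support resolved my issue quickly",
        "terrible support experience", "app crashes constantly",
        "billing is confusing and slow", "search results are irrelevant",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = positive, 0 = negative (toy data)

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, stratify=labels, random_state=0
    )

    vectorizer = TfidfVectorizer()
    clf = LogisticRegression()
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    X_test_vec = vectorizer.transform(X_test)
    y_pred = clf.predict(X_test_vec)
    y_prob = clf.predict_proba(X_test_vec)[:, 1]

    print(classification_report(y_test, y_pred))      # precision, recall, F1 per class
    print("ROC AUC:", roc_auc_score(y_test, y_prob))  # threshold-independent metric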
Similar jobs
AI/ML Engineer
at Matellio India Private Limited
Principal Accountabilities:
1. Strong communication skills and the ability to convert business requirements into functional requirements.
2. Develop data-driven insights and machine learning models to identify and extract facts from sales, supply chain and operational data.
3. Sound knowledge of and experience in statistical and data mining techniques: Regression, Random Forests, Boosting Trees, Time Series Forecasting, etc.
4. Experience applying state-of-the-art (SOTA) deep learning techniques to NLP problems.
5. End-to-end data collection, model development and testing, and integration into production environments.
6. Build and prototype analysis pipelines iteratively to provide insights at scale.
7. Experience in querying different data sources.
8. Partner with developers and business teams on business-oriented decisions.
9. We are looking for someone who dares to move forward even when the path is not clear and who is creative in overcoming challenges in the data.
Who We Are
Orbo is a research-oriented company with expertise in computer vision and artificial intelligence. At its core, Orbo is a comprehensive platform built on an AI-based visual enhancement stack, so companies can find a product suited to their needs where deep-learning-powered technology automatically improves their imagery.
ORBO's solutions help the BFSI, beauty and personal care (digital transformation), and e-commerce (image retouching) industries in multiple ways.
WHY US
- Join a top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This is a fast-paced role, and the person will get the opportunity to develop an industrial-grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation and deployment (a minimal training sketch follows this list)
- Tune the models to achieve high accuracy and minimal latency
- Deploy the developed computer vision models on edge devices, optimized to meet customer requirements
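As a hedged illustration of the end-to-end pipeline responsibility above, the sketch below runs a minimal PyTorch training loop on placeholder tensors and exports the model for deployment. The data, architecture and hyperparameters are assumptions for illustration only.

    # Minimal sketch: a compact train/validate/export loop in PyTorch.
    # The random tensors, tiny linear model and hyperparameters are
    # placeholders standing in for a real ingestion and preparation step.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    images = torch.randn(64, 3, 32, 32)   # stand-in for preprocessed images
    labels = torch.randint(0, 2, (64,))   # stand-in for class labels
    loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

    # Export for deployment, e.g. to an ONNX runtime on an edge device.
    torch.onnx.export(model, torch.randn(1, 3, 32, 32), "model.onnx")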
Requirements:
- Bachelor’s degree
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNX Runtime and TensorRT (see the inference sketch after this list)
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks such as TensorFlow and PyTorch.
- Proficient understanding of code versioning tools, such as Git
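As a hedged illustration of the deployment requirement above, the sketch below loads an exported model with ONNX Runtime and runs a single inference. The model file name and input shape are placeholders.

    # Minimal sketch: running an exported ONNX model with ONNX Runtime.
    # "model.onnx" and the input shape are placeholders.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name

    frame = np.random.rand(1, 3, 32, 32).astype(np.float32)  # stand-in for a camera frame
    outputs = session.run(None, {input_name: frame})
    print("predicted class:", int(np.argmax(outputs[0])))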
Our perfect candidate is someone who:
- is proactive and an independent problem solver
- is a constant learner. We are a fast-growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business models of the e-commerce and BFSI industries. As the team is small, you will easily see the direct impact of what you build on our customers (trust us, it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
Job Description
- Design, develop and deploy highly available and fault-tolerant enterprise business software at scale.
- Demonstrate technical expertise to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects and provide technical leadership in architecting and building product solutions.
- Collaborate across teams to deliver results, from hardworking team members within your group to smart technologists across lines of business.
- Be a role model in acting with good judgment and responsibility, helping teams commit and move forward.
- Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences in opinion in a mature and fair way.
- Raise the bar by improving standard methodologies, producing best-in-class, efficient solutions, code, documentation, testing, and monitoring.
Qualifications
- 15+ years of relevant engineering experience.
- Proven record of building and productionizing highly reliable products at scale.
- Experience with Java and Python.
- Experience with Big Data technologies is a plus.
- Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision.
- Ability to collaborate well with several other engineering orgs to articulate requirements and system design.
Additional Information
Professional Attributes:
• Team player!
• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.
• Excellent written and verbal communication skills, including the ability to write detailed technical documents.
• Passionate about helping teams grow by inspiring and mentoring engineers.
Lead Data Engineer
at Discite Analytics Private Limited
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
Skills required (experience with most of the following):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience with and a firm understanding of relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow (a minimal DAG sketch follows this list).
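As a hedged illustration of item 6, the sketch below defines a minimal Apache Airflow DAG (assuming Airflow 2.x) with two dependent tasks. The task logic, schedule and identifiers are placeholders.

    # Minimal sketch of an Airflow DAG (assuming Airflow 2.x); the task
    # logic, schedule and identifiers are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")

    def load():
        print("write transformed data to the warehouse")

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task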
Senior Computer Vision Developer
This position is not for freshers. We are looking for candidates with at least 4 years of AI/ML/CV experience in the industry.
Roles & Responsibilities
- Proven experience with deploying and tuning open source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required).
- Deep understanding of Linux, from kernel mechanisms through user space management.
- Experience with CI/CD (Continuous Integration and Deployment) systems such as Jenkins.
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure.
Skills
- Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2 (see the rendering sketch after this list).
- Solid understanding of data collection tools such as Flume, Filebeat, Metricbeat, and JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing.
- Strong understanding of, and hands-on experience with:
- Apache Spark framework, specifically Spark Core and Spark Streaming
- Orchestration platforms: Mesos and Kubernetes
- Data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
- Core presentation technologies: Kibana and Grafana
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
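As a hedged illustration of the Jinja2 templating skill above, the sketch below renders a small deployment configuration template from variables, in the same spirit as Ansible's template module. The template text and variable names are placeholders.

    # Minimal sketch: rendering a deployment config from a Jinja2 template,
    # in the same spirit as Ansible templates. The template text and
    # variables are placeholders.
    from jinja2 import Template

    template = Template(
        "bootstrap.servers={{ kafka_servers | join(',') }}\n"
        "log.dirs={{ data_dir }}/kafka-logs\n"
    )
    print(template.render(
        kafka_servers=["10.0.0.1:9092", "10.0.0.2:9092"],
        data_dir="/var/lib",
    ))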
Certification
- Red Hat Certified Architect certificate or equivalent required.
- CCNA certificate required.
- 3-5 years of experience running open source big data platforms.
Senior Artificial Intelligence/Machine Learning Engineer
at Artivatic.ai
Location: Chennai (Guindy Industrial Estate)
Duration: Full-time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (a minimal ETL sketch follows this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
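As a hedged illustration of the extraction-transformation-loading responsibility above, the sketch below pulls rows from a relational database with pandas, aggregates them, and pushes the result to S3 with boto3. The connection string, query, bucket and key are placeholders.

    # Minimal sketch of one extract-transform-load step with pandas and boto3.
    # The connection string, query, bucket and key are placeholders.
    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@host:5432/sales")
    orders = pd.read_sql("SELECT order_id, amount, created_at FROM orders", engine)

    # Transform: aggregate revenue by day.
    orders["created_at"] = pd.to_datetime(orders["created_at"])
    daily = orders.groupby(orders["created_at"].dt.date)["amount"].sum().reset_index()

    # Load: write Parquet locally, then push to S3 for downstream tooling.
    daily.to_parquet("daily_revenue.parquet")
    boto3.client("s3").upload_file("daily_revenue.parquet", "analytics-bucket", "daily_revenue.parquet")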
Qualifications for Data Engineer
- Experience building and optimizing big data ETL pipelines, architectures and data sets.
- Advanced working SQL knowledge and experience working with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Spark, Kafka, HBase, Hive, etc.
- Experience with relational SQL and NoSQL databases.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal streaming sketch follows this list).
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL
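As a hedged illustration of the stream-processing item above, the sketch below consumes a Kafka topic with Spark Structured Streaming and writes the raw payloads to the console. The broker address and topic are placeholders, and running it requires the spark-sql-kafka connector package on the classpath.

    # Minimal sketch: consuming a Kafka topic with Spark Structured Streaming.
    # The broker address and topic are placeholders; the spark-sql-kafka
    # connector package must be available on the classpath.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    query = (
        events.selectExpr("CAST(value AS STRING) AS payload")
        .writeStream.format("console")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()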
Alternative Data Programmer for Equity Fund
The programmer should be proficient in Python and able to work completely independently. They should also have the skills to work with databases and a strong ability to fetch data from various sources, organise it, and identify useful information through efficient code.
Familiarity with Python
Some examples of work:
Machine Learning Engineer
at CloudMoyo
Job Description:
Roles & Responsibilities:
· You will be involved in every part of the project lifecycle, right from identifying the business problem and proposing a solution, to data collection, cleaning, and preprocessing, to training and optimizing ML/DL models and deploying them to production.
· You will often be required to design and execute proof-of-concept projects that can demonstrate business value and build confidence with CloudMoyo’s clients.
· You will be involved in designing and delivering data visualizations that utilize the ML models to generate insights and intuitively deliver business value to CXOs.
Desired Skill Set:
· Candidates should have strong Python coding skills and be comfortable working with various ML/DL frameworks and libraries.
· Hands-on skills and industry experience in one or more of the following areas are necessary (a brief illustrative sketch follows at the end of this posting):
1) Deep Learning (CNNs/RNNs, Reinforcement Learning, VAEs/GANs)
2) Machine Learning (Regression, Random Forests, SVMs, K-means, ensemble methods)
3) Natural Language Processing
4) Graph Databases (Neo4j, Apache Giraph)
5) Azure Bot Service
6) Azure ML Studio / Azure Cognitive Services
7) Log Analytics with NLP/ML/DL
· Previous experience with data visualization, C#, or the Azure cloud platform and services will be a plus.
· Candidates should have excellent communication skills and be highly technical, with the ability to discuss ideas at any level from executive to developer.
· Creative problem-solving, unconventional approaches, and a hacker mindset are highly desired.
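As a small, hedged illustration of the classical machine learning techniques listed in item 2 above, the sketch below fits a Random Forest on a scikit-learn sample dataset and reports cross-validated accuracy; the dataset and hyperparameters are placeholders for illustration only.

    # Minimal sketch: a Random Forest classifier on a scikit-learn sample
    # dataset with cross-validated accuracy; data and hyperparameters are
    # placeholders for illustration only.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)
    print("mean CV accuracy:", scores.mean())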