Job Description
Responsibilities:
- Collaborate with stakeholders to understand business objectives and requirements for AI/ML projects.
- Conduct research and stay up-to-date with the latest AI/ML algorithms, techniques, and frameworks.
- Design and develop machine learning models, algorithms, and data pipelines.
- Collect, preprocess, and clean large datasets to ensure data quality and reliability.
- Train, evaluate, and optimize machine learning models using appropriate evaluation metrics.
- Implement and deploy AI/ML models into production environments.
- Monitor model performance and propose enhancements or updates as needed.
- Collaborate with software engineers to integrate AI/ML capabilities into existing software systems.
- Perform data analysis and visualization to derive actionable insights.
- Stay informed about emerging trends and advancements in the field of AI/ML and apply them to improve existing solutions.
Strong experience with Apache PySpark is a must.
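The responsibilities above call for evaluating models with appropriate metrics. As an illustration (not part of the posting), here is a minimal, dependency-free sketch of the standard binary-classification metrics:

```python
# Minimal binary-classification metrics, computed from scratch.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(metrics(y_true, y_pred))  # all four metrics come out to 0.75 here
```

In practice these come from scikit-learn (`sklearn.metrics`), but the definitions above are what the library computes.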
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience of 3-5 years as an AI/ML Engineer or a similar role.
- Strong knowledge of machine learning algorithms, deep learning frameworks, and data science concepts.
- Proficiency in programming languages such as Python, Java, or C++.
- Experience with popular AI/ML libraries and frameworks, such as TensorFlow, Keras, PyTorch, or scikit-learn.
- Familiarity with cloud platforms, such as AWS, Azure, or GCP, and their AI/ML services.
- Solid understanding of data preprocessing, feature engineering, and model evaluation techniques.
- Experience in deploying and scaling machine learning models in production environments.
- Strong problem-solving skills and ability to work on multiple projects simultaneously.
- Excellent communication and teamwork skills.
Preferred Skills:
- Experience with natural language processing (NLP) techniques and tools.
- Familiarity with big data technologies, such as Hadoop, Spark, or Hive.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- Understanding of DevOps practices for AI/ML model deployment.
- Apache Spark / PySpark
About Kanerika Software
Kanerika has been delivering results since we opened in 2015. Our goal is to provide both a superior customer experience and tremendous value for our customers. With over 75 years of collective experience in software product development, we are passionate about exceeding our clients' expectations. Our passion is developing path-breaking software products. We love challenges. We are a small team that always ensures the job is done right. It is all about: Partnership, Experience, and Creativity.
Similar jobs
Data Scientist – Delivery & New Frontiers Manager
Job Description:
We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations for this role are above those of a typical data scientist: beyond technical expertise, problem solving in complex settings will be key to success.
Responsibilities:
- Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities.
- Map complex business problems to data science problems and design data science solutions using the GCP or Azure Databricks platforms.
- Collect, clean, and preprocess large datasets from various internal and external sources.
- Streamline the data science process, working with Data Engineering and Technology teams.
- Manage multiple analytics projects within a function to deliver end-to-end data science solutions, create insights, and identify patterns.
- Develop and maintain data pipelines and infrastructure to support the data science projects.
- Communicate findings and recommendations to stakeholders through data visualizations and presentations.
- Stay up to date with the latest data science trends and technologies, specifically on GCP.
Education / Certifications:
Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics, Mathematics.
Job specific requirements:
- Brings 5+ years of deep data science experience.
- Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure, or AWS.
- Experience with programming languages such as Python, R, Spark
- Experience with data visualization tools such as Tableau, Power BI, and D3.js
- Strong understanding of data structures, algorithms, and software design principles
- Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage.
- Experience in setting up version control for code, data, and machine learning models using GitHub.
- Self-driven, able to work with cross-functional teams in a fast-paced environment, and adaptable to changing business needs.
- Strong analytical and problem-solving skills
- Excellent verbal and written communication skills
- Working knowledge of application architecture, data security, and compliance.
Company Name: Curl Tech
Location: Bangalore
Website: www.curl.tech
Company Profile: Curl Tech is a deep-tech firm based out of Bengaluru, India. Curl develops products and solutions leveraging emerging technologies such as Machine Learning, Blockchain (DLT), and IoT, in domains such as Commodity Trading, Banking & Financial Services, Healthcare, Logistics, and Retail.
Curl was founded by technology enthusiasts with rich industry experience. Products and solutions developed at Curl have gone on to considerable success and have in turn become separate companies focused on that product or solution.
If you are looking for a job that will challenge you, and you want to work with an organization that disrupts entire value chains, Curl is the right place for you!
Designation: Data Scientist or Junior Data Scientist (according to experience)
Job Description:
Strong in machine learning and deep learning, with solid programming and mathematics skills.
Details: The candidate will work on many image analytics and numerical data analytics projects. The work involves data collection, building machine learning models, deployment, client interaction, and publishing academic papers.
Responsibilities:
- The candidate will work on many image analytics and numerical data projects.
- The candidate will build various machine learning models depending on the requirements.
- The candidate will be responsible for deployment of the machine learning models.
- The candidate will be the face of the company in front of clients and will have regular client interactions to understand their requirements.
What we are looking for in candidates:
- Basic understanding of Statistics, Time Series, Machine Learning, and Deep Learning, along with their fundamentals and mathematical underpinnings.
- Proven code proficiency in Python, C/C++, or any other AI language of choice.
- Strong algorithmic thinking, creative problem solving, and the ability to take ownership and do independent research.
- Understanding how things work internally in ML and DL models is a must.
- Understanding of the fundamentals of Computer Vision and Image Processing techniques would be a plus.
- Expertise in OpenCV and ML/neural network technologies and frameworks such as PyTorch or TensorFlow would be a plus.
- An educational background in any quantitative field (Computer Science / Mathematics / Computational Sciences and related disciplines) will be given preference.
Education: BE / BTech / B.Sc. (Physics or Mathematics) / Master's in Mathematics, Physics, or related branches.
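The time-series fundamentals the posting above asks about include simple smoothing. A minimal trailing moving-average sketch (illustrative only, with made-up data):

```python
def moving_average(xs, window):
    # Trailing moving average over a fixed window; assumes window <= len(xs).
    out = []
    running = sum(xs[:window])
    out.append(running / window)
    for i in range(window, len(xs)):
        # Slide the window: add the new point, drop the oldest one.
        running += xs[i] - xs[i - window]
        out.append(running / window)
    return out

print(moving_average([1, 2, 3, 4, 5], 3))  # [2.0, 3.0, 4.0]
```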
Good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation.
• Requirement break-up, ownership, and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro, and messaging systems like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, and JPA
• Hands-on experience with JDK 1.8 and a strong skill set covering Collections and Multithreading, with experience working on distributed applications
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• An understanding of Hadoop architecture and the Scala programming language is good to have.
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight.
- Experience in developing Lambda functions with AWS Lambda.
- Expertise with Spark/PySpark: the candidate should be hands-on with PySpark code and able to perform transformations with Spark.
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus.
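The AWS Lambda item above can be illustrated with a minimal handler. The event shape, field names, and response format here are hypothetical, chosen for illustration rather than taken from the posting:

```python
import json

def lambda_handler(event, context):
    # Hypothetical example: sum the 'amount' field of incoming records,
    # e.g. from an API Gateway payload, and return a JSON response.
    records = event.get("records", [])
    total = sum(r.get("amount", 0) for r in records)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

resp = lambda_handler({"records": [{"amount": 3}, {"amount": 4}]}, None)
print(resp)  # {'statusCode': 200, 'body': '{"total": 7}'}
```

In AWS this function would be registered as the Lambda's handler; the runtime supplies `event` and `context`.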
Who Are We
A research-oriented company with expertise in computer vision and artificial intelligence, Orbo is, at its core, a comprehensive platform built on an AI-based visual enhancement stack. Companies can find a suitable product for their needs, where deep-learning-powered technology automatically improves their imagery.
ORBO's solutions are helping with digital transformation in BFSI and beauty and personal care, and with image retouching in e-commerce, in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation, and deployment
- Tune models to achieve high accuracy and minimum latency
- Deploy developed computer vision models on edge devices, after optimization, to meet customer requirements
Requirements:
- Bachelor’s degree
- Deep and broad understanding of computer vision and deep learning algorithms.
- 4+ years of industrial experience in computer vision and/or deep learning
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNX Runtime, and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks like TensorFlow and PyTorch.
- Proficient understanding of code versioning tools, such as Git
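Object detection work of the kind listed above is typically evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A small illustrative sketch (the boxes are made up, not from the posting):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

Detection benchmarks usually count a prediction as correct when IoU exceeds a threshold such as 0.5.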
Our perfect candidate is someone who:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
Responsibilities:
* 3+ years of Data Engineering experience - design, develop, deliver, and maintain data infrastructure.
* SQL Specialist – strong knowledge and seasoned experience with SQL queries.
* Languages: Python
* Good communicator; shows initiative; works well with stakeholders.
* Experience working closely with Data Analysts, providing the data they need and guiding them on issues.
* Solid ETL experience with Hadoop/Hive/PySpark/Presto/SparkSQL.
* Solid communication and articulation skills.
* Able to handle stakeholders independently, with minimal intervention from the reporting manager.
* Develop strategies to solve problems in logical yet creative ways.
* Create custom reports and presentations accompanied by strong data visualization and storytelling.
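The SQL and reporting items above might look like the following in miniature; table names and data are invented for illustration, using in-memory SQLite in place of a production warehouse:

```python
import sqlite3

# In-memory stand-in for warehouse tables (schemas are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'north'), (2, 'south');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5);
""")

# A typical reporting query: order counts and revenue per region.
rows = conn.execute("""
    SELECT c.region, COUNT(o.id) AS n_orders, SUM(o.amount) AS revenue
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.region ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('north', 2, 25.0), ('south', 1, 7.5)]
```

On Hive/Presto/SparkSQL the query itself would be essentially unchanged; only the connection layer differs.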
We would be excited if you have:
* Excellent communication and interpersonal skills
* Ability to meet deadlines and manage project delivery
* Excellent report-writing and presentation skills
* Critical thinking and problem-solving capabilities
Location : Andheri East
Notice Period: Immediate-15 days
Responsibilities:
1. Study and transform data science prototypes.
2. Design NLP applications.
3. Select appropriate annotated datasets for Supervised Learning methods.
4. Find and implement the right algorithms and tools for NLP tasks.
5. Develop NLP systems according to requirements.
6. Train the developed model and run evaluation experiments.
7. Perform statistical analysis of results and refine models.
8. Extend ML libraries and frameworks to apply in NLP tasks.
9. Use effective text representations to transform natural language into useful features.
10. Develop APIs to deliver deciphered results, optimized for time and memory.
Requirements:
1. Proven experience as an NLP Engineer of at least one year.
2. Understanding of NLP techniques for text representation, semantic extraction, data structures, and modeling.
3. Ability to effectively design software architecture.
4. Deep understanding of text representation techniques (such as n-grams, bag of words, and sentiment analysis), statistics, and classification algorithms.
5. More than a year of hands-on experience with Python.
6. Ability to write robust and testable code.
7. Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
8. Strong communication skills.
9. An analytical mind with problem-solving abilities.
10. Bachelor's degree in Computer Science, Mathematics, or Computational Linguistics.
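The text-representation techniques named in item 4 (n-grams, bag of words) can be sketched without any NLP library; the documents and vocabulary below are illustrative only:

```python
from collections import Counter

def ngrams(tokens, n):
    # Contiguous n-token windows over a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_words(docs):
    # Build a shared sorted vocabulary, then one count vector per document.
    vocab = sorted({tok for doc in docs for tok in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

docs = ["the cat sat", "the cat saw the dog"]
vocab, vectors = bag_of_words(docs)
print(vocab)    # ['cat', 'dog', 'sat', 'saw', 'the']
print(vectors)  # [[1, 0, 1, 0, 1], [1, 1, 0, 1, 2]]
print(ngrams("the cat sat".split(), 2))  # [('the', 'cat'), ('cat', 'sat')]
```

Libraries like scikit-learn (`CountVectorizer`) implement the same idea with tokenization options and sparse storage.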