50+ PyTorch Jobs in India
Apply to 50+ PyTorch Jobs on CutShort.io. Find your next job, effortlessly. Browse PyTorch Jobs and apply today!
Salary: INR 15 to INR 30 lakhs per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Bangalore or Pune
Experience: 2-5 years
About AbleCredit:
AbleCredit is on a mission to solve the Credit Gap of emerging economies. In India alone, the Credit Gap is over USD 5 trillion. This is the single largest contributor to poverty, a poor Gini index, and lack of opportunity. Our vision is to deploy AI reliably and safely to solve some of humanity's greatest problems.
Job Description:
This role is ideal for someone with a strong foundation in deep learning and hands-on experience with AI technologies.
- You will be tasked with solving complex, real-world problems using advanced machine learning models in a privacy-sensitive domain, where your contributions will have a direct impact on business-critical processes.
- As a Machine Learning Engineer at AbleCredit, you will collaborate closely with the founding team, who bring decades of industry expertise to the table.
- You’ll work on deploying cutting-edge Generative AI solutions at scale, ensuring they align with strict privacy requirements and optimize business outcomes.
This is an opportunity for experienced engineers to bring creative AI solutions to one of the most challenging and evolving sectors, while making a significant difference to the company’s growth and success.
Requirements:
- Experience: 2-4 years of hands-on experience in applying machine learning and deep learning techniques to solve complex business problems.
- Technical Skills: Proficiency in standard ML tools and languages, including:
- Python: Strong coding ability for building, training, and deploying machine learning models.
- PyTorch (or MLX or JAX): Solid experience in one or more deep learning frameworks for developing and fine-tuning models (a minimal illustrative sketch follows this list).
- Shell scripting: Familiarity with Unix/Linux shell scripting for automation and system-level tasks.
- Mathematical Foundation: Good understanding of the mathematical principles behind machine learning and deep learning (linear algebra, calculus, probability, optimization).
- Problem Solving: A passion for solving tough, ambiguous problems using AI, especially in data-sensitive, large-scale environments.
- Privacy & Security: Awareness and understanding of working in privacy-sensitive domains, adhering to best practices in data security and compliance.
- Collaboration: Ability to work closely with cross-functional teams, including engineers, product managers, and business stakeholders, and communicate technical ideas effectively.
- Work Experience: This position is for experienced candidates only.
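For illustration only: a minimal PyTorch sketch of the kind of model-building and fine-tuning workflow this role expects. The model, data, and hyperparameters below are placeholders, not part of the job description.

import torch
import torch.nn as nn

# Toy binary classifier on random features; a stand-in for a real business dataset.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)          # placeholder features
y = torch.randint(0, 2, (256,))   # placeholder labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss
    loss.backward()               # backpropagation
    optimizer.step()              # weight update

torch.save(model.state_dict(), "model.pt")  # weights ready for a deployment pipeline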
Additional Information:
- Location: Pune or Bangalore
- Work Environment: Collaborative and entrepreneurial, with close interactions with the founders.
- Growth Opportunities: Exposure to large-scale AI systems, GenAI, and working in a data-driven privacy-sensitive domain.
- Compensation: Competitive salary and ESOPs, based on experience and performance
- Industry Impact: You’ll be at the forefront of applying Generative AI to solve high-impact problems in the finance/credit space, helping shape the future of AI in the business world.
Artificial Intelligence Researcher (Computer Vision)
Responsibilities
• Work on various SOTA Computer Vision models and dataset augmentation & dataset generation techniques that help improve model accuracy & precision.
• Work on the development & improvement of end-to-end pipeline use cases running at scale.
• Programming skills with multi-threaded GPU/CUDA computing and API solutions.
• Proficient with training of detection, classification & segmentation models with TensorFlow, PyTorch, MXNet, etc.
Required Skills
• Strong development skills required in Python and C++.
• Ability to architect a solution based on given requirements and convert the business requirements into a technical computer vision problem statement.
• Ability to work in a fast-paced environment and coordinate across different parts of different projects.
• Bringing in technical expertise around the implementation of best coding standards and practices across the team.
• Extensive experience working on edge devices like Jetson Nano, Raspberry Pi, and other GPU-powered low-compute devices.
• Experience with Docker, Nvidia Docker, and Nvidia NGC containers for Computer Vision Deep Learning.
• Experience with Scalable Cloud Deployment Architecture for Video Analytics (involving Kubernetes and/or Kafka).
• Good experience with at least one cloud platform such as AWS, Azure, or Google Cloud.
• Experience working with model optimisation for Nvidia hardware (TensorRT conversion of both TensorFlow and PyTorch models).
• Proficient understanding of code versioning tools, such as Git.
• Proficient in Data Structures & Algorithms.
• Well versed in software design paradigms and good development practices.
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Job Description
A job where you increase the depth of your expertise in computer vision.
A job where you learn and implement the SOTA papers.
A job where you write vectorized code that runs in seconds, not in minutes.
A job where models learn to see and understand the world around them.
A job where models run real-time because you optimize every byte.
A job where you keep the career promises that you made to yourself.
A job where you keep the learning promises that you made to yourself.
If this scares you, don't read.
If this excites you, we might love you.
Role
We are looking for a passionate Machine Learning Engineer to join our team.
The ideal candidate will be an enthusiastic developer with 2-5 years of experience in the field of Computer Vision and Artificial Intelligence.
If building things and writing code excite you, this is the startup where you belong.
Key Technologies
Must be an expert in Python and NumPy.
Experience with TensorFlow/Keras/PyTorch is required.
An insatiable hunger for writing beautiful code.
Knowledge of Python design patterns.
Some experience with C++ is preferred.
Experience working with version control (Git).
Excellent communication skills and the ability to work independently.
Strong problem-solving and coding skills override everything else written above.
About AIMonk
Run by IIT Kanpur alumni, AIMonk is a deep tech startup.
We build beautiful and scalable software using computer vision and deep learning.
We pride ourselves on solving problems in this space that nobody else can.
First & foremost, this role is not for you if you don’t enjoy solving really deeeep-tech problems with a high surface area, which means that no person would fit in for solving the complete problem (we know that) & there’ll be a lot of things to learn on the way, read further if you’re only interested in something of this sort.
We are building a Social Network for Fashion & E-commerce powered by AI with the vision of redefining how people buy things on the internet! We believe we are one of the very few companies that are here to solve a real consumer problem & not just cash on to the AI hype. We’ve raised $1M in pre-seed funding from top VCs & supported by the best entrepreneurs of India.
As a founding AI-ML Engineer, you will build & train foundation models from scratch, fine-tune existing models as per the use-case, scraping large sums of data, design pipelines for data ingestion, processing & scalability of systems. Not just this, you’ll also be working on recommendation systems, particularly algorithm design & planning the execution from day one.
Now, What we’re looking for & our expectations:
Note: We don’t really care about your qualifications as long as we can see that you have sufficient knowledge required for the role & a strong sense of ownership in whatever you do.
- Design and deploy advanced machine learning models in Computer Vision including object detection & similarity matching
- Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready with MLOps
- Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices
- In terms of technical skills, you should have a high proficiency in Python, machine learning frameworks like TensorFlow & PyTorch
- Experience with cloud platforms and knowledge of deploying models in production environments
- Have a decent understanding of Reinforcement Learning and some understanding of Agentic AI and LLMOps
- First-hand experience with scalable backend engineering will give you first consideration over other candidates
A few things about us
- Building the next 100-person, $100 billion AI-first company
- Speed of execution is really important to us
- Delivering exceptional experiences that exceed user expectations
- We embrace a Culture of continuous learning and innovation
- Users are at the heart of everything we do
- We believe in open communication, authenticity, and transparency
- Solving problems through First Principles
Benefits of joining us
- Top of the market Compensation
- Any hardware you need to do your best work
- Open to hybrid work (although we prefer in-person over remote to maximise learning)
- Flexible work timings
- Learning & Development budget
- Flexible allowances with hassle-free reimbursements
- Quarterly team outings covered by us
- First-hand experience of shipping world-class products
- Having loads of fun with a GenZ team at a Hacker House
Requirements:
- Must have 5+ years of experience
- Strong proficiency in data engineering concepts and practices.
- Extensive experience in applying data science and machine learning techniques.
- Working knowledge of extracting information from unstructured data sources, particularly PDFs.
- Hands-on experience with Large Language Models (LLMs) such as GPT and Gemini.
- Knowledge of prompt engineering and fine-tuning techniques.
- Practical experience in building Retrieval-Augmented Generation (RAG) systems.
- Experience in processing large volumes of unstructured data, preferably in the insurance, legal, or healthcare sectors.
- Proven experience in extracting valuable insights from diverse data sets.
- Familiarity with Vector Databases, Azure Cloud, LangChain, LlamaIndex, and OCR tools and techniques.
- Working knowledge of PDF-to-text extraction tools such as PDFMiner, PyMuPDF, or PDFPlumber, as well as OCR tools (a brief extraction sketch follows this list).
- Skilled in machine learning, deep learning, computer vision, natural language processing (NLP), and generative AI models.
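As a small, hypothetical illustration of the PDF-to-text extraction step mentioned above, here is a minimal sketch using PyMuPDF, one of the tools named in the requirements; the file path is a placeholder.

import fitz  # PyMuPDF

def pdf_to_text(path: str) -> str:
    # Extract plain text from every page of a PDF.
    with fitz.open(path) as doc:
        return "\n".join(page.get_text() for page in doc)

text = pdf_to_text("sample_policy.pdf")  # hypothetical input file
print(text[:500])                        # preview before downstream RAG/LLM processing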
Good to have:
Working knowledge of Python and libraries such as NumPy, pandas, scikit-learn, and TensorFlow/PyTorch.
Preferred Qualifications:
BE/MS/PhD in Computer Science, Data Science, Machine Learning, or related fields.
Note: We are looking for passionate candidates who can join immediately.
Founded by IIT Delhi Alumni, Convin is a conversation intelligence platform that helps organisations improve sales/collections and elevate customer experience while automating quality management and coaching for reps, backed by super-deep business insights for leaders.
At Convin, we are leveraging AI/ML to achieve these larger business goals while focusing on bringing efficiency and reducing cost. We are already helping leaders across health-tech, ed-tech, fintech, e-commerce, and consumer services, including Treebo, SOTC, Thomas Cook, Aakash, MediBuddy, and PlanetSpark.
If you love AI, understand SaaS, love selling, and are looking to join a ship bound to fly, then Convin is the place for you!
We are seeking a talented and motivated Core Machine Learning Engineer with a passion for the audio domain. As a member of our dynamic team, you will play a crucial role in developing state-of-the-art solutions in speech-to-text, speaker separation, diarization, and related areas.
Responsibilities
- Collaborate with cross-functional teams to design, develop, and implement machine learning models and algorithms in the audio domain.
- Contribute to the research, prototyping, and deployment of speech-to-text, speaker separation, and diarization solutions.
- Explore and experiment with various techniques to improve the accuracy and efficiency of audio processing models.
- Work closely with senior engineers to optimize and integrate machine learning components into our products.
- Participate in code reviews, provide constructive feedback, and adhere to coding standards and best practices.
- Communicate effectively with team members, sharing insights and progress updates.
- Stay updated with the latest developments in machine learning, AI, NLP, and signal processing, and apply relevant advancements to our projects.
- Collaborate on the development of end-to-end systems that involve speech and language technologies.
- Assist in building and training large-scale language models like ChatGPT, LLaMA, Falcon, etc., leveraging their capabilities as required.
Requirements
- Bachelor's or Master's degree in Computer Science or a related field from a reputed institution.
- 5+ years of hands-on experience in Machine Learning, Artificial Intelligence, Natural Language Processing, or signal processing.
- Strong programming skills in languages such as Python, and familiarity with relevant libraries and frameworks (e.g., TensorFlow, PyTorch).
- Knowledge of speech-to-text, text-to-speech, speaker separation, and diarization techniques is a plus.
- Solid understanding of machine learning fundamentals and algorithms.
- Excellent problem-solving skills and the ability to learn quickly.
- Strong communication skills to collaborate effectively within a team environment.
- Enthusiasm for staying updated with the latest trends and technologies in the field.
- Familiarity with large language models like ChatGPT, LLaMA, Falcon, etc., is advantageous.
Who are we looking for?
We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models.
Qualification and experience
- B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, and related fields.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, ML libraries like PyTorch, scikit-learn, and pandas, as well as SQL and Git, is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and take advantage of an industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.
Responsibilities
- Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
- Optimize and debug existing codes to make them scalable and improve performance
- Design, development, and delivery of tested code and machine learning models into production environments
- Work effectively in teams, managing and leading teams
- Provide effective, constructive feedback to the delivery leader
- Manage client expectations and work with an agile mindset with machine learning and AI technology
- Design and prototype data-driven solutions
Eligibility
- Highly experienced in designing, building, and shipping scalable, production-quality machine learning algorithms in Python
- Working knowledge and experience in NLP core components (NER, Entity Disambiguation, etc.)
- In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
- Expertise in writing scalable APIs for machine learning models
- Experience with maintaining code logs, task schedulers, and security
- Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
- Experience with at least one of the following: Keras, TensorFlow, Caffe, or PyTorch
The Role:
As an ML Engineer at TIFIN, you will be responsible for driving research and innovation in a result-oriented direction. Your role will involve staying updated on the latest research trends and exploring advancements in Natural Language Understanding (NLU) and their applications in conversational AI. You will also play a mentoring role and contribute to improving NLU capabilities using transfer learning from historical conversational data.
Requirements:
- Experience with training and fine-tuning machine learning models on large text datasets.
- Strong computer science fundamentals and at least 3 years of software development experience.
- Track record of thinking big and finding simple solutions while dealing with ambiguity.
- Proven experience as a Natural Language Processing Engineer or in similar roles.
- Good understanding of NLP tricks and techniques for semantic extraction, data structure, and modeling.
- Familiarity with text representation techniques, algorithms, and statistics.
- Knowledge of programming languages such as R, Python, and Java.
- Proficiency in Machine Learning frameworks like TensorFlow, PyTorch, etc.
- Ability to design software architectures and solve complex problems.
- Strong analytical and problem-solving skills.
- Experience in projects related to information retrieval, machine comprehension, entity recognition, text classification, semantic frame parsing, or machine translation is a plus.
- Publications, patents, or conference talks in relevant fields are a bonus.
You will be part of the core engineering team that is working on developing AI/ML models, Algorithms, and Frameworks in the areas of Video Analytics, Business Intelligence, IoT Predictive Analytics.
For more information visit www.gyrus.ai
Candidates must have the following qualifications:
- Engineering or Master's degree in CS, EC, EE, or related domains
- Proficiency in OpenCV
- Proficiency in Python programming
- Exposure to at least one AI platform such as TensorFlow, Caffe, or PyTorch
- Must have trained and deployed at least one fairly large AI model
- Exposure to AI models for Audio/Image/Video Analytics
- Exposure to one of the Cloud Computing platforms (AWS/GCP)
- Strong mathematical background with special emphasis on Linear Algebra and Statistics
We are seeking a highly skilled and innovative Generative AI Engineer with 3+ years of experience to join our dynamic team. As a Generative AI Engineer, you will play a key role in developing cutting-edge algorithms and models to create generative solutions that push the boundaries of artificial intelligence.
Key Responsibilities
- Design, develop, and implement cutting-edge computer vision algorithms, encompassing areas such as image processing, object detection, and segmentation.
- Leverage Generative Adversarial Network (GAN) models to synthesize novel data and enhance the quality of existing datasets.
- Engage in collaborative efforts with cross-functional teams to seamlessly integrate AI-driven solutions into our product portfolio.
- Spearhead initiatives to refine and optimize model architectures, ensuring top-tier performance and efficiency.
- Stay at the forefront of AI and computer vision advancements, ensuring our solutions are consistently state-of-the-art.
- Provide mentorship to budding data scientists, promoting a cohesive, inclusive, and knowledge-sharing team culture.
Qualifications:
- A minimum of 3 years of specialized experience in computer vision, with a focus on deep learning. Proficiency in neural architectures such as CNNs, RNNs, Encoder-Decoders, and generative models including GANs and GPT is essential.
- Demonstrated expertise in crafting, fine-tuning, and deploying AI models for real-world applications.
- Mastery in programming languages, notably Python, and familiarity with deep learning frameworks such as TensorFlow and PyTorch. Experience with image processing libraries like OpenCV and Pillow is a plus.
- Exceptional analytical and problem-solving prowess, complemented by meticulous attention to detail.
- Stellar communication and presentation abilities, with a knack for distilling intricate technical data into clear, actionable business insights.
Preferred Skills:
- Hands-on experience with cloud platforms, especially AWS, and adeptness with container orchestration tools like Docker and Kubernetes.
- Proficiency in version control utilities, particularly Git and GitHub.
- Prior contributions to the AI research community, such as publications or significant community involvement, will be highly regarded.
- Acquaintance with agile development practices.
- A compelling portfolio showcasing a range of projects in computer vision and image segmentation is desirable
Are you passionate about pushing the boundaries of Artificial Intelligence and its applications in the software development lifecycle? Are you excited about building AI models that can revolutionize how developers ship, refactor, and onboard to legacy or existing applications faster? If so, Zevo.ai has the perfect opportunity for you!
As an AI Researcher/Engineer at Zevo.ai, you will play a crucial role in developing cutting-edge AI models using CodeBERT and codexGLUE to achieve our goal of providing an AI solution that supports developers throughout the sprint cycle. You will be at the forefront of research and development, harnessing the power of Natural Language Processing (NLP) and Machine Learning (ML) to revolutionize the way software development is approached.
Responsibilities:
- AI Model Development: Design, implement, and refine AI models utilizing CodeBERT and codexGLUE to comprehend codebases, facilitate code understanding, automate code refactoring, and enhance the developer onboarding process.
- Research and Innovation: Stay up-to-date with the latest advancements in NLP and ML research, identifying novel techniques and methodologies that can be applied to Zevo.ai's AI solution. Conduct experiments, perform data analysis, and propose innovative approaches to enhance model performance.
- Data Collection and Preparation: Collaborate with data engineers to identify, collect, and preprocess relevant datasets necessary for training and evaluating AI models. Ensure data quality, correctness, and proper documentation.
- Model Evaluation and Optimization: Develop robust evaluation metrics to measure the performance of AI models accurately. Continuously optimize and fine-tune models to achieve state-of-the-art results.
- Code Integration and Deployment: Work closely with software developers to integrate AI models seamlessly into Zevo.ai's platform. Ensure smooth deployment and monitor the performance of the deployed models.
- Collaboration and Teamwork: Collaborate effectively with cross-functional teams, including data scientists, software engineers, and product managers, to align AI research efforts with overall company objectives.
- Documentation: Maintain detailed and clear documentation of research findings, methodologies, and model implementations to facilitate knowledge sharing and future developments.
- Ethics and Compliance: Ensure compliance with ethical guidelines and legal requirements related to AI model development, data privacy, and security.
Requirements
- Educational Background: Bachelor's/Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A strong academic record with a focus on NLP and ML is highly desirable.
- Technical Expertise: Proficiency in NLP, Deep Learning, and experience with AI model development using frameworks like PyTorch or TensorFlow. Familiarity with CodeBERT and codexGLUE is a significant advantage.
- Programming Skills: Strong programming skills in Python and experience working with large-scale software projects.
- Research Experience: Proven track record of conducting research in NLP, ML, or related fields, demonstrated through publications, conference papers, or open-source contributions.
- Problem-Solving Abilities: Ability to identify and tackle complex problems related to AI model development and software engineering.
- Team Player: Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team-oriented environment.
- Passion for AI: Demonstrated enthusiasm for AI and its potential to transform software development practices.
If you are eager to be at the forefront of AI research, driving innovation and impacting the software development industry, join Zevo.ai's talented team of experts as an AI Researcher/Engineer. Together, we'll shape the future of the sprint cycle and revolutionize how developers approach code understanding, refactoring, and onboarding!
We are seeking a talented and motivated AI Verification Engineer to join our team. The ideal candidate will be responsible for the validation of our AI and Machine Learning systems, ensuring that they meet all necessary quality assurance requirements and work reliably and optimally in real-world scenarios. The role requires strong analytical skills, a good understanding of AI and ML technologies, and a dedication to achieving excellence in the production of state-of-the-art systems.
Key Responsibilities:
- Develop and execute validation strategies and test plans for AI and ML systems, both during development and in production environments.
- Work closely with AI/ML engineers and data scientists in understanding system requirements and capabilities and coming up with key metrics for system efficacy.
- Evaluate the system performance under various operating conditions, data variety, and scenarios.
- Perform functional, stress, system, and other testing types to ensure our systems' reliability and robustness.
- Create automated test procedures and systems for regular verification and validation processes, and detect anomalies in usage.
- Report and track defects, providing detailed information to facilitate problem resolution.
- Lead the continuous review and improvement of validation and testing methodologies, procedures, and tools.
- Provide detailed reports and documentation on system performance, issues, and validation results.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience in the testing and validation of AI/ML systems or equivalent complex systems.
- Good knowledge and understanding of AI and ML concepts, tools, and frameworks.
- Proficient in scripting and programming languages such as Python and shell scripting.
- Experience with AI/ML platforms and libraries such as TensorFlow, PyTorch, Keras, or Scikit-Learn.
- Excellent problem-solving abilities and attention to detail.
- Strong communication skills, with the ability to document and explain complex technical concepts clearly.
- Ability to work in a fast-paced, collaborative environment.
Preferred Skills & Qualifications:
- A good understanding of various large language models, image models, and their comparative strengths and weaknesses.
- Knowledge of CI/CD pipelines and experience with tools such as Jenkins, Git, Docker.
- Experience with cloud platforms like AWS, Google Cloud, or Azure.
- Understanding of Data Analysis and Visualization tools and techniques.
Roles & Responsibilities:
- Adopt novel and breakthrough Deep Learning/Machine Learning technology to fully solve real-world problems for different industries.
- Develop prototypes of machine learning models based on existing research papers.
- Utilize published/existing models to meet business requirements. Tweak existing implementations to improve efficiencies and adapt for use-case variations.
- Optimize machine learning model training and inference time.
- Work closely with development and QA teams in transitioning prototypes to commercial products.
- Independently work end-to-end from data collection, preparation/annotation to validation of outcomes.
- Define and develop ML infrastructure to improve efficiency of ML development workflows.
Must Have:
- Experience in productizing and deployment of ML solutions.
- AI/ML expertise areas: Computer Vision with Deep Learning. Experience with object detection, classification, recognition; document layout and understanding tasks, OCR/ICR
- Thorough understanding of the full ML pipeline, starting from data collection to model building to inference.
- Experience with Python, OpenCV and at least a few framework/libraries (TensorFlow / Keras / PyTorch / spaCy / fastText / Scikit-learn etc.)
- Years of relevant experience: 5+
- Experience or knowledge in MLOps.
Good to have: NLP (text classification, entity extraction, content summarization), AWS, Docker.
About UpSolve
We build and deliver complex AI solutions that help drive business decisions faster and more accurately. We are a typical AI company and have a range of solutions developed for video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions keeping future requirements in mind
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Good to have: cloud exposure
Accrete.ai
Responsibilities:
- Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and monitoring of model quality.
- Design and implement cloud solutions, build MLOps on the cloud (preferably AWS)
- Work with workflow orchestration tools like Kubeflow, Airflow, Argo, or similar tools
- Data science models testing, validation, and test automation.
- Communicate with a team of data scientists, data engineers, and architects, and document the processes.
Eligibility:
- Rich hands-on experience in writing object-oriented code using Python
- Min 3 years of MLOps experience (Including model versioning, model and data lineage, monitoring, model hosting and deployment, scalability, orchestration, continuous learning, and Automated pipelines)
- Understanding of Data Structures, Data Systems, and software architecture
- Experience in using MLOps frameworks like Kubeflow, MLFlow, and Airflow Pipelines for building, deploying, and managing multi-step ML workflows based on Docker containers and Kubernetes.
- Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)
Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, SageMaker, and other ML model experiment management tools, including training, inference, and evaluation.
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
- Experience with GPU computing for data and model-training parallelism.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.
Job Description:
Machine Learning / AI Engineer (with 3+ years of experience)
We are seeking a highly skilled and passionate Machine Learning / AI Engineer to join our newly established data science practice area. In this role, you will primarily focus on working with Large Language Models (LLMs) and contribute to building generative AI applications. This position offers an exciting opportunity to shape the future of AI technology while charting an interesting career path within our organization.
Responsibilities:
1. Develop and implement machine learning models: Utilize your expertise in machine learning and artificial intelligence to design, develop, and deploy cutting-edge models, with a particular emphasis on Large Language Models (LLMs). Apply your knowledge to solve complex problems and optimize performance.
2. Building generative AI applications: Collaborate with cross-functional teams to conceptualize, design, and build innovative generative AI applications. Work on projects that push the boundaries of AI technology and deliver impactful solutions to real-world problems.
3. Data preprocessing and analysis: Collect, clean, and preprocess large volumes of data for training and evaluation purposes. Conduct exploratory data analysis to gain insights and identify patterns that can enhance the performance of AI models.
4. Model training and evaluation: Develop robust training pipelines for machine learning models, incorporating best practices in model selection, feature engineering, and hyperparameter tuning. Evaluate model performance using appropriate metrics and iterate on the models to improve accuracy and efficiency.
5. Research and stay up to date: Keep abreast of the latest advancements in machine learning, natural language processing, and generative AI. Stay informed about industry trends, emerging techniques, and open-source libraries, and apply relevant findings to enhance the team's capabilities.
6. Collaborate and communicate effectively: Work closely with a multidisciplinary team of data scientists, software engineers, and domain experts to drive AI initiatives. Clearly communicate complex technical concepts and findings to both technical and non-technical stakeholders.
7. Experimentation and prototyping: Explore novel ideas, experiment with new algorithms, and prototype innovative solutions. Foster a culture of innovation and contribute to the continuous improvement of AI methodologies and practices within the organization.
Requirements:
1. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Relevant certifications in machine learning, deep learning, or AI are a plus.
2. Experience: A minimum of 3+ years of professional experience as a Machine Learning / AI Engineer, with a proven track record of developing and deploying machine learning models in real-world applications.
3. Strong programming skills: Proficiency in Python and experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, pandas). Experience with cloud platforms (e.g., AWS, Azure, GCP) for model deployment is preferred.
4. Deep-learning expertise: Strong understanding of deep learning architectures (e.g., convolutional neural networks, recurrent neural networks, transformers) and familiarity with Large Language Models (LLMs) such as GPT-3, GPT-4, or equivalent.
5. Natural Language Processing (NLP) knowledge: Familiarity with NLP techniques, including tokenization, word embeddings, named entity recognition, sentiment analysis, text classification, and language generation.
6. Data manipulation and preprocessing skills: Proficiency in data manipulation using SQL and experience with data preprocessing techniques (e.g., cleaning, normalization, feature engineering). Familiarity with big data tools (e.g., Spark) is a plus.
7. Problem-solving and analytical thinking: Strong analytical and problem-solving abilities, with a keen eye for detail. Demonstrated experience in translating complex business requirements into practical machine learning solutions.
8. Communication and collaboration: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to diverse stakeholders
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion, data transformation, building, testing, and deploying machine learning models, as well as monitoring and maintaining the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with end-to-end MLOps systems like ClearML, Kubeflow, MLflow, etc. (a minimal tracking sketch follows this list)
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
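For illustration only, a minimal MLflow experiment-tracking sketch of the kind such an MLOps stack typically wraps around training jobs; the experiment name, parameter, metric, and artifact below are placeholders.

import mlflow

mlflow.set_experiment("demo-experiment")      # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 1e-3)   # hyperparameter used for the run
    mlflow.log_metric("val_accuracy", 0.91)   # placeholder evaluation metric
    mlflow.log_artifact("model.pt")           # attach a previously saved model file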
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
Key Responsibilities include but are not limited to:
Help identify and drive speed, performance, scalability, and reliability-related optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment, creating, maintaining, monitoring, and automating the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments.
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technology (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more of the Cloud Services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science or equivalent related field experience.
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements
Excellent written & oral communication skills; to ask pertinent questions, and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systemsSoft skills
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to multi-task and balance multiple priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. A BS or equivalent experience in programming on enterprise or department servers or systems is required.
We at Thena are looking for a Machine Learning Engineer with 2-4 years of industry experience to join our team. The ideal candidate will be passionate about developing and deploying ML models that drive business value and have a strong background in ML Ops.
Responsibilities:
- Develop, fine-tune, and deploy ML models for B2B customer communication and collaboration use cases.
- Collaborate with cross-functional teams to define requirements, design models, and deploy them in production.
- Optimize model performance and accuracy through experimentation, iteration, and testing.
- Build and maintain ML infrastructure and tools to support model development and deployment.
- Stay up-to-date with the latest research and best practices in ML, and share knowledge with the team.
Qualifications:
- 2-4 years of industry experience in machine learning engineering, with a focus on natural language processing (NLP) and text classification models.
- Experience with ML Ops, including deploying and managing ML models in production environments.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Experience with Embeddings and building on top of LLMs.
- Strong problem-solving and analytical skills, with the ability to develop creative solutions to complex problems.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are "Growth-as-a-Service". Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics, all of which impact a brand's bottom line and drive profitable growth.
Roles & Responsibilities:
- Work on the implementation of real-time and batch data pipelines for disparate data sources.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
Desired Experience/Skill:
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures with pros/cons.
- Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka and Redis is preferred.
- Experience in the design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and Matplotlib.
- Experience in TensorFlow, PyTorch, and/or R modelling.
- Ability to understand a business problem and translate and structure it into a data science problem.
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP, Mumbai, IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join the team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch, or TensorFlow
Basic Qualifications
- B.Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience in tools and packages like TensorFlow, MLflow, and Airflow
- Experience in object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
About the Role:
As a Speech Engineer, you will work on the development of on-device multilingual speech recognition systems.
- Apart from ASR you will be working on solving speech focused research problems like speech enhancement, voice analysis and synthesis etc.
- You will be responsible for building complete pipeline for speech recognition from data preparation to deployment on edge devices.
- Reading, implementing and improving baselines reported in leading research papers will be another key area of your daily life at Saarthi.
Requirements:
- 2-3 years of hands-on experience in speech recognition-based projects
- Proven experience as a Speech Engineer or in a similar role
- Should have experience with deployment on edge devices
- Candidates should have hands-on experience with open-source tools such as Kaldi or PyTorch-Kaldi, and any of the end-to-end ASR tools such as ESPnet, EESEN, or DeepSpeech PyTorch
- Prior proven experience in training and deploying deep learning models at scale
- Strong programming experience in Python, C/C++, etc.
- Working experience with PyTorch and TensorFlow
- Experience contributing to research communities including publications at conferences and/or journals
- Strong communication skills
- Strong analytical and problem-solving skills
Location: Ahmedabad / Pune
Team: Technology
Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.
We have worked on / are working on AI and algorithm-heavy projects with applications ranging across finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceuticals, leisure sports, and computer gaming. All of this is based on the core concepts of data science, computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems, and the Internet of Things.
PRIMARY RESPONSIBILITIES:
● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics/modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework/pipeline.
● Analyzing data collected from various sources, and evaluating and validating the analysis with statistical methods; presenting the results in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adapting them for our purposes.
● Feature engineering to add new features that improve model performance.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing, with at least 3 years of professional experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries such as TensorFlow, JAX, Keras, scikit-learn, and PyTorch.
● Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/Ph.D., preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics preferred.
Who Are We
Orbo is a research-oriented company with computer vision and artificial intelligence at its core, offering a comprehensive platform of AI-based visual enhancement tools. Companies can find a product suited to their needs, where deep-learning-powered technology automatically improves their imagery.
ORBO's solutions are helping the BFSI, beauty and personal care, and e-commerce image retouching industries with digital transformation in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Setup end-end Deep Learning pipeline for data ingestion, preparation, model training, validation and deployment
- Tune the models to achieve high accuracy rates and minimum latency
- Deploying developed computer vision models on edge devices after optimization to meet customer requirements
Requirements:
- Bachelor’s degree
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- 4+ years of industrial experience in computer vision and/or deep learning
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNXruntime and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks like TensorFlow and PyTorch.
- Proficient understanding of code versioning tools, such as Git
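As context for the deployment items above (ONNX Runtime, OpenVINO, TensorRT), here is a minimal, hypothetical sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime on CPU; the model choice and file name are illustrative, not the product's actual stack.

```python
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Hypothetical sketch: export a classifier to ONNX, then run it with ONNX Runtime.
model = torchvision.models.mobilenet_v2(weights=None).eval()   # untrained, for illustration
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"], opset_version=13,
)

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"input": dummy.numpy().astype(np.float32)})[0]
print("predicted class:", int(np.argmax(logits, axis=1)[0]))
```

The same ONNX file is the usual starting point for further optimization with OpenVINO or TensorRT before deployment to edge hardware.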
Our perfect candidate is someone who:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
- B.E. Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression and clustering.
- Hands-on experience in computer vision and deep learning projects to solve real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation etc.
- Strong understanding of modern and traditional Computer Vision algorithms.
- Experience in one of the deep learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLOv4/v5), U-Net, Mask R-CNN, EfficientDet, BERT etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps - Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
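To ground the CNN-architecture items above, here is a minimal, hypothetical transfer-learning sketch in PyTorch: reuse a pretrained ResNet-18 backbone and swap its classification head. The number of classes and the dummy batch are placeholders.

```python
import torch
import torch.nn as nn
import torchvision

# Hypothetical transfer-learning setup; downloads pretrained ImageNet weights.
NUM_CLASSES = 5  # illustrative
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # replace the classification head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a stand-in batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```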
Location : Gurgaon
About the company:
The company is changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. The company has built the world’s leading image editing AI software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent founders and internet company operators, who believe that there is a more intelligent and efficient way of doing digital production than how the world operates currently.
Job Description :
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will lead a team of CV researchers to build models that automatically transform millions of e-commerce, automobile, food and real-estate raw images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of businesses every day to transform their images and videos.
Job Requirements:
- Min 3+ years of work experience in Computer Vision, with 5-10 years of work experience overall
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques, TensorFlow/PyTorch
- Prior expertise in building image processing applications using GANs, CNNs, and diffusion models
- Expertise with image processing Python libraries like OpenCV, etc.
- Good hands-on experience with Python and the Flask or Django frameworks
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large-scale AI/CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
Job Description – Sr. Python Developer
Job Brief
The job requires Python experience as well as expertise with AI/ML. This developer is expected to have strong technical skills and to work closely with the other team members in developing and managing key projects, with the ability to work on a small team with minimal supervision, and to troubleshoot, test and maintain the core product software and databases to ensure strong optimization and functionality.
Job Requirement
- 4+ years of relevant Python experience
- Good communication skills and email etiquette
- Quick learner and a team player
- Experience in working on python framework
- Experience in Developing With Python & MySQL on LAMP/LEMP Stack
- Experience in Developing an MVC Application with Python
- Experience with Threading, Multithreading and pipelines
- Experience in creating RESTful APIs with Python, returning JSON and XML (an illustrative sketch follows this list)
- Experience in designing relational databases using MySQL and writing raw SQL queries
- Experience with GitHub version control
- Ability to write custom Python code
- Excellent working knowledge of AI/ML-based applications
- Experience with OpenCV/TensorFlow/SimpleCV/PyTorch
- Experience working in agile software development methodology
- Understanding of end-to-end ML project lifecycle
- Understanding of cross platform OS systems like Windows, Linux or UNIX with hands-on working experience
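As a hedged sketch of the RESTful-API item above, here is a minimal Flask endpoint that accepts JSON and returns a JSON response; the route, payload shape, and dummy scoring logic are illustrative only (a real service would call a trained model).

```python
from flask import Flask, jsonify, request

# Hypothetical minimal REST endpoint; route and payload are placeholders.
app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)        # e.g. {"features": [1.0, 2.0, 3.0]}
    features = payload.get("features", [])
    # Stand-in for real model inference: return the mean of the inputs as a "score".
    score = sum(features) / len(features) if features else 0.0
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```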
Responsibilities
- Participate in the entire development lifecycle, from planning through implementation, documentation, testing, and deployment, all the way to monitoring.
- Produce high quality, maintainable code with great test coverage
- Integration of user-facing elements developed by front-end developers
- Build efficient, testable, and reusable Python/AI/ML modules
- Solve complex performance problems and architectural challenges
- Help with designing and architecting the product
- Design and develop the web application modules or APIs
- Troubleshoot and debug applications.
Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and advanced, and to achieving extraordinary precision in analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you’ll be core to the development of our robotic harvesting system’s visual intelligence. You’ll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you’ll directly build our success, growth, and culture. You’ll hold a significant role, with the opportunity to grow it as Datasee.AI grows.
What you’ll do
- You will be working with the core R&D team which drives the computer vision and image processing development.
- Build deep learning models for our data and perform object detection on large-scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation
- Coordinate and communicate within computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate the workflow process between the fast-paced data delivery systems.
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive use of Python
- Experience with Python libraries such as OpenCV, TensorFlow and NumPy
- Familiarity with deep learning libraries such as Keras and PyTorch
- Worked on different CNN architectures such as FCN, R-CNN, Fast R-CNN and YOLO
- Experienced in hyperparameter tuning, data augmentation, data wrangling, model optimization and model deployment
- B.E./M.E/M.Sc. Computer Science/Engineering or relevant degree
- Dockerization, AWS modules and Production level modelling
- Basic knowledge of the Fundamentals of GIS would be added advantage
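For the data augmentation and model-training items above, here is a small, illustrative torchvision augmentation pipeline; the parameter values are placeholders that would be tuned per dataset.

```python
from PIL import Image
from torchvision import transforms

# Hypothetical training-time augmentation pipeline; values are illustrative.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.new("RGB", (256, 256))      # stand-in for a real training image
augmented = train_transform(image)        # tensor of shape (3, 224, 224)
print(augmented.shape)
```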
Preferred Requirements
- Experience with Qt, Desktop application development, Desktop Automation
- Knowledge of satellite image processing, Geographic Information Systems (GIS), GDAL, QGIS and ArcGIS
About Datasee.AI:
Datasee.AI, Inc. is an AI-driven image analytics company offering asset management solutions for industries in the sectors of Renewable Energy, Infrastructure, Utilities & Agriculture. With core expertise in image processing, computer vision & machine learning, Datasee.AI’s solution provides value across the enterprise for all stakeholders through a data-driven approach.
With Sales & Operations based out of US, Europe & India, Datasee.AI is a team of 32 people located across different geographies and with varied domain expertise and interests.
A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.
at Synapsica Technologies Pvt Ltd
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, YCombinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
Synapsica is looking for a Principal AI Researcher to lead and drive AI based research and development efforts. Ideal candidate should have extensive experience in Computer Vision and AI Research, either through studies or industrial R&D projects and should be excited to work on advanced exploratory research and development projects in computer vision and machine learning to create the next generation of advanced radiology solutions.
The role involves computer vision tasks including the development, customization and training of Convolutional Neural Networks (CNNs); the application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The role spans from real-world data handling to the most advanced methods such as transfer learning, generative models, and reinforcement learning, with a focus on understanding quickly and experimenting even faster. The suitable candidate will collaborate closely with the medical research team, software developers and AI research scientists. The candidate must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
Primary Responsibilities
- Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
- Formulate and design AI capabilities of our stack with special focus on computer vision.
- Strategize end-to-end model training flow including data annotation, model experiments, model optimizations, model deployment and relevant automations
- Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
- Organize regular reviews and discussions.
- Keep the team up-to-date with latest industrial and research updates.
- Publish research and clinical validation papers
Requirements
- 6+ years of relevant experience in solving complex real-world problems at scale using computer vision-based deep learning.
- Prior experience in leading and managing a team.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe (or similar deep learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection etc.
- Ability to write a custom Convolutional Neural Network architecture in PyTorch (or similar); an illustrative sketch follows this list.
- Background in publishing research papers and/or patents
- Computer Vision and AI Research background in medical domain will be a plus
- Experience of GPU/DSP/other Multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet the deadline
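A minimal, hypothetical sketch of what "a custom CNN architecture in PyTorch" can look like; the layer sizes and number of classes are illustrative only, not Synapsica's actual models.

```python
import torch
import torch.nn as nn

# Hypothetical small CNN for single-channel images; sizes are placeholders.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN()
print(model(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 3])
```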
- Build, train and test multiple CNN models.
- Optimizing model training & inference by utilizing multiple GPUs and CPU cores.
- Keen interest in Life Sciences, image processing, genomics and multi-omics analysis.
- Interested in reading and implementing research papers in the relevant field.
- Strong experience with Deep Learning frameworks: TensorFlow, Keras, PyTorch.
- Strong programming skills in Python and experience with the scikit-learn/NumPy libraries.
- Experience in training object detection models like YOLOv3/Mask R-CNN and semantic segmentation models like DeepLab, U-Net etc.
- Good understanding of image processing and computer vision algorithms like watershed, histogram matching etc.
- Experience with cell segmentation and membrane segmentation using CNNs (optional).
- Individual contributor.
- Experience with image processing.
- Experience required: 2-10 years.
- CTC: 15-40 LPA.
- Good Python programming and algorithmic skills.
- Experience with deep learning model training using any known framework.
- Working knowledge of genomics data in R&D.
- Understanding of one or more omics data types (transcriptomics, metabolomics, proteomics, genomics, epigenomics etc.).
- Prior work experience as a data scientist, bioinformatician or computational biologist will be a big plus.
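As a hedged illustration of the semantic segmentation models named above (DeepLab/U-Net), this sketch runs torchvision's off-the-shelf DeepLabV3 on a dummy image; real use would fine-tune weights appropriate to the task (e.g. cell or membrane segmentation).

```python
import torch
import torchvision

# Hypothetical sketch: off-the-shelf DeepLabV3 inference on a dummy image.
weights = torchvision.models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT
model = torchvision.models.segmentation.deeplabv3_resnet50(weights=weights).eval()

image = torch.rand(1, 3, 256, 256)           # stand-in for a preprocessed image batch
with torch.inference_mode():
    out = model(image)["out"]                # (1, num_classes, H, W) class scores
mask = out.argmax(dim=1)                     # per-pixel class index
print(mask.shape, mask.unique())
```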
Be a part of the growth story of a rapidly growing organization in AI. We are seeking a passionate Machine Learning (ML) Engineer, with a strong background in developing and deploying state-of-the-art models on Cloud. You will participate in the complete cycle of building machine learning models from conceptualization of ideas, data preparation, feature selection, training, evaluation, and productionization.
On a typical day, you might build data pipelines, develop a new machine learning algorithm, train a new model, or deploy the trained model on the cloud. You will have a high degree of autonomy, ownership, and influence over your work, the evolution of the machine learning organization, and the direction of the company.
Required Qualifications
- Bachelor's degree in computer science/electrical engineering or equivalent practical experience
- 7+ years of industry experience in Data Science and ML/AI projects, including experience productionizing machine learning in an industry setting
- Strong grasp of statistical machine learning, linear algebra, deep learning, and computer vision
- 3+ years experience with one or more general-purpose programming languages including but not limited to: R, Python.
- Experience with PyTorch or TensorFlow or other ML Frameworks.
- Experience in using Cloud services such as AWS, GCP, Azure. Understand the principles of developing cloud-native application development
In this role you will:
- Design and implement ML components, systems and tools to automate and enable our various AI industry solutions
- Apply research methodologies to identify the machine learning models to solve a business problem and deploy the model at scale.
- Own the ML pipeline from data collection, through the prototype development to production.
- Develop high-performance, scalable, and maintainable inference services that communicate with the rest of our tech stack
Responsibilities
At Dolat, code is our business, so naturally, the Core Engineering and Systems team is at the center of what we do. Our community of developers has designed, and continues to enhance, one of the fastest trading platforms using the latest tools and technologies. As a Software Developer, you’ll draw upon your computer science, mathematical, and analytical abilities to develop complex and nimble code used to grow our business and increase the efficiency of the global financial markets.
Your responsibilities may include any of the following, which will require you to exercise discretion and independent judgment:
• Augmenting, improving, redesigning, and/or re-implementing Dolat's low-latency/high-throughput production trading environment, which collects data from and disseminates orders to exchanges around the world
• Optimizing this platform by using network and systems programming, as well as other advanced techniques
• Developing systems that provide easy access to historical market data and trading simulations
• Building risk-management and performance-tracking tools
• Shaping the future of Dolat through regular interviewing and infrequent campus recruiting trips
• Implementing domain-optimized data structures
• Learn and internalize the theories behind the current trading system
• Participate in the design, architecture and implementation of automated trading systems
• Take ownership of system from design through implementation
JOB SKILLS & QUALIFICATIONS
WHAT YOU'LL DO
- Design model serving solutions and develop machine learning-based applications, services, and APIs to productionise machine learning models.
- Set and maintain engineering standards that enable the team to grow and go far.
- Partner with the Data Scientists (those who actually build, train and evaluate ML models) to provide an end-to-end solution for machine learning-based projects.
- Foster the technological evolution of services and improve their end-to-end quality attributes.
- Be committed to Continuous Integration and Continuous Deployment.
Preferred Skills
- Familiarity with the engineering aspects of some of the popular machine learning practices, libraries, and platforms (e.g. MLflow, Kubeflow, Mleap, Michelangelo, Feast, HopsWorks, MetaFlow, Zipline, Databricks, Spark, MLlib, PyTorch, TensorFlow, and Scikit-learn, among others).
- Comfortable dealing with trade-offs between project delivery and quality, especially those involving latency, throughput, and transactions.
- Experience with Continuous Integration & Continuous Deployment processes and platforms, software design patterns and APIs.
- A person who enjoys staying on top of all the best practices and tools of modern software engineering, while being an advocate of code quality and continuous improvement.
- Someone interested in large-scale systems and passionate about solving complex problems while being open and comfortable with changes in the tech stack the teams use.
Sizzle is an exciting new startup in the world of gaming. At Sizzle, we’re building AI to automatically create highlights of gaming streamers and esports tournaments.
For this role, we're looking for someone that loves to play and watch games, and is eager to roll up their sleeves and build up a new gaming platform. Specifically, we’re looking for a technical program manager - someone that can drive timelines, manage dependencies and get things done. You will work closely with the founders and the engineering team to iterate and launch new products and features. You will constantly report on status and maintain a dashboard across product, engineering, and user behavior.
You will:
- Be responsible for speedy and timely shipping of all products and features
- Work closely with front end engineers, product managers, and UI/UX teams to understand the product requirements in detail, and map them out to delivery timeframes
- Work closely with backend engineers to understand and map deployment timeframes and integration into pipelines
- Manage the timeline and delivery of numerous A/B tests on the website design, layout, color scheme, button placement, images/videos, and other objects to optimize time on site and conversion
- Keep track of all dependencies between projects and engineers
- Track all projects and tasks across all engineers and address any delays. Ensure tight coordination with management.
You should have the following qualities:
- Strong track record of successful delivery of complex projects and product launches
- 2+ years of software development; 2+ years of program management
- Excellent verbal and communication skills
- Deep understanding of AI model development and deployment
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Technical program management, ML algorithms, Tensorflow, AWS, Python
Work Experience: 3 years to 10 years
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
- Developing audio algorithms to detect key moments within popular online games, such as:
  - Streamer speaking, shouting, etc.
  - Gunfire, explosions, and other in-game audio events
  - Speech-to-text and sentiment analysis of the streamer’s narration
- Leveraging baseline technologies such as TensorFlow and others -- and building models on top of them
- Building neural network architectures for audio analysis as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
You should have the following qualities:
- Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
- Experience using Python, TensorFlow and other AI tools
- Demonstrated understanding of various algorithms for audio analysis, such as CNNs and LSTMs for natural language processing, and others
- Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
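To ground the audio-analysis responsibilities above, here is a small, hypothetical torchaudio sketch that converts a clip into a log-mel spectrogram, a common input representation for CNN/LSTM audio-event models; the file path is a placeholder.

```python
import torchaudio

# Hypothetical preprocessing step for audio-event detection.
waveform, sr = torchaudio.load("stream_clip.wav")      # placeholder path
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=64)(waveform)
log_mel = torchaudio.transforms.AmplitudeToDB()(mel)
print(log_mel.shape)                                    # (channels, n_mels, frames)
```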
Skills:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world who watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with computer vision and AI technologies around image and video analysis.
You will be responsible for:
- Developing computer vision algorithms to detect key moments within popular online games
- Leveraging baseline technologies such as TensorFlow, OpenCV, and others -- and building models on top of them
- Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
You should have the following qualities:
- Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
- Experience using Python, TensorFlow, OpenCV and other computer vision tools
- Understanding of common computer vision object detection models in use today, e.g. Inception, R-CNN, YOLO, MobileNet SSD, etc. (an illustrative sketch follows this list)
- Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTM for motion and inter-frame analysis, and others
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
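As context for the off-the-shelf detection models named above, here is a minimal sketch that runs torchvision's pretrained Faster R-CNN on a dummy frame; the confidence threshold is illustrative and this is not Sizzle's actual pipeline.

```python
import torch
import torchvision

# Hypothetical sketch: pretrained Faster R-CNN inference on one dummy frame.
weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)            # stand-in for a decoded video frame in [0, 1]
with torch.inference_mode():
    detections = model([frame])[0]         # dict with "boxes", "labels", "scores"

keep = detections["scores"] > 0.5          # illustrative confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```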
Skills:
Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.
Dilaton
About the Company:
This opportunity is for an AI Drone Technology startup funded by the Indian Army. It is working to develop cutting-edge products to help the Indian Army gain an edge in New Age Enemy Warfare.
They are working on using drones to neutralize terrorists hidden in deep forests. Get a chance to contribute to securing our borders against the enemy.
Responsibilities:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures/architectures
- Proficiency in at least one major machine learning framework such as Tensorflow, Pytorch
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Highly skilled in Python scripting language
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
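For the dataset-building responsibility above, here is a minimal, hypothetical PyTorch Dataset over (image_path, label) pairs; the class name, fields, and file paths are illustrative only.

```python
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

# Hypothetical dataset over (image_path, label) pairs for a vision task.
class DroneFrames(Dataset):
    def __init__(self, samples, transform=None):
        self.samples = samples          # list of (path, label) tuples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        if self.transform:
            image = self.transform(image)
        return image, torch.tensor(label)

# Usage (paths are placeholders):
# loader = DataLoader(DroneFrames([("frame_0001.jpg", 0)]), batch_size=16, shuffle=True)
```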
Educational Qualification:
MS in Engineering, Applied Mathematics, Data Science, Computer Science or an equivalent field with 3 years of industry experience, or a Ph.D. degree, or equivalent industry experience.
- 4+ years of experience. Solid understanding of Python, Java and general software development skills (source code management, debugging, testing, deployment etc.).
- Experience in working with Solr and ElasticSearch. Experience with NLP technologies and the handling of unstructured text. Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, POS tagging etc. (an illustrative sketch follows this list).
- Prior experience in the implementation of traditional ML solutions - classification, regression or clustering problems. Expertise in text analytics - Sentiment Analysis, Entity Extraction, Language Modelling - and associated sequence learning models (RNN, LSTM, GRU).
- Comfortable working with deep-learning libraries (e.g. PyTorch)
- Candidates can even be freshers with 1 or 2 years of experience. IIIT, BITS Pilani and top 5 local colleges and universities are preferred.
- A Master's candidate in machine learning.
- Can source candidates from Mu Sigma and Manthan.
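As a hedged illustration of the text pre-processing and normalisation techniques listed above, here is a short NLTK sketch covering tokenisation, stemming, lemmatisation, and POS tagging; the sample sentence and the exact set of NLTK data packages are illustrative.

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Download the NLTK data these steps rely on (package names may vary by NLTK version).
for pkg in ("punkt", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)

text = "The cats were chasing mice across the servers."
tokens = nltk.word_tokenize(text)                         # tokenisation
stems = [PorterStemmer().stem(t) for t in tokens]         # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatisation
pos_tags = nltk.pos_tag(tokens)                           # POS tagging
print(tokens, stems, lemmas, pos_tags, sep="\n")
```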
Develop state-of-the-art algorithms in the fields of Computer Vision, Machine Learning and Deep Learning.
Provide software specifications and production code on time to meet project milestones.
Qualifications
- BE or Master's with 3+ years of experience
- Must have prior knowledge and experience in image processing and video processing
- Should have knowledge of object detection and recognition
- Must have experience in feature extraction, segmentation and classification of images
- Face detection, alignment, recognition, tracking & attribute recognition
- Excellent understanding and project/job experience in machine learning, particularly in areas of Deep Learning - CNN, RNN, TensorFlow, Keras etc.
- Real-world expertise in deep learning applied to Computer Vision problems
- Strong foundation in Mathematics
- Strong development skills in Python
- Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
- Quick learner of new technologies
- Ability to work independently as well as part of a team
- Knowledge of working closely with version control (Git)
● Working on an awesome AI product for the eCommerce domain.
● Build the next-generation information extraction, computer vision product powered by state-of-the-art AI and Deep Learning techniques.
● Work with an international top-notch engineering team with full commitment to Machine Learning development.
Desired Candidate Profile
● Passionate about search & AI technologies. Open to collaborating with colleagues & external contributors.
● Good understanding of the mainstream deep learning models from multiple domains: computer vision, NLP, reinforcement learning, model optimization, etc.
● Hands-on experience with deep learning frameworks, e.g. TensorFlow, PyTorch, MXNet, BERT. Able to implement the latest DL models using existing APIs and open-source libraries in a short time.
● Hands-on experience with Cloud-Native techniques. Good understanding of web services and modern software technologies.
● Maintained/contributed to machine learning projects; familiar with the agile software development process, CI/CD workflow, ticket management, code review, version control, etc.
● Skilled in the following programming languages: Python 3.
● Good English skills, especially for writing and reading documentation.
Job Type: Full-time
CTC Offering : 3.6L PA to 6L PA
Job Location: Remote for 6-9 months due to the pandemic, then Mumbai, Maharashtra
Required experience:
- Minimum 1.5 to 2 years of experience in Web & Backend Development using Python, with experience in some form of Machine Learning (ML) algorithms
Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical calculations using libraries such as NumPy and Pandas is a must, as is experience with some form of Machine Learning. We require candidates who have working experience using the Django framework.
Key Skills required (Items in Bold are mandatory keywords) :
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Experience with some form of Machine Learning (ML)
4. Experience in using libraries such as Numpy and Pandas
5. Some form of experience with NLP and Deep Learning using any of Pytorch, Tensorflow, Keras, Scikit-learn or similar
6. Hands on experience with RDBMS such as Postgres or MySQL
7. Experience building REST APIs using DRF or Flask
8. Comfort with Git repositories, branching and deployment using Git
9. Working experience with Docker
10. Basic working knowledge of ReactJs
11. Experience in deploying Django applications to AWS,Digital Ocean or Heroku
KRAs includes:
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data.
8. Creating backend modules in Python by building or reutilizing existing modules in a manner that provides optimal deliveries on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
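For KRA 11 above (batch task processing with Celery), here is a minimal, hypothetical setup; the broker URL, task name, and task body are placeholders.

```python
from celery import Celery

# Hypothetical Celery app; the Redis broker URL is a placeholder.
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def analyse_batch(record_ids):
    # In a real project this would load the records and run the analysis pipeline.
    return {"processed": len(record_ids)}

# Enqueue from application code (requires a running worker: `celery -A tasks worker`):
# analyse_batch.delay([1, 2, 3])
```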
Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center around applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- At least 3+ years of experience in a Data Scientist role
- Bachelor's/Master’s degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning. Deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, time series forecasting
- Training Machine Learning (ML) algorithms in areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code, along with TensorFlow and PyTorch, to build and enhance the platform that is used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience with training time series forecasting methods like Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as Neural Network (NN) models such as feed-forward NNs and Nonlinear Autoregressive networks (an illustrative sketch follows this list)
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
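As context for the classical forecasting methods mentioned above, here is a small statsmodels sketch that fits an ARIMA model to a toy monthly series and produces a short forecast; the series and the (p, d, q) order are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly series: a random walk around 100.
rng = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(100 + np.cumsum(np.random.default_rng(0).normal(size=48)), index=rng)

# Fit ARIMA(1, 1, 1) and forecast the next 6 months.
model = ARIMA(y, order=(1, 1, 1)).fit()
print(model.forecast(steps=6))
```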
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.
What You’ll Do will include (But not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipate and build solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale.
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning models deployment, monitoring, retraining.
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based and classical ML techniques to the latest deep learning systems.
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
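To illustrate the ML-performance metrics mentioned above (precision, recall, F1), here is a tiny scikit-learn sketch on toy labels; the arrays are placeholders for real model predictions.

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical ground-truth labels and model predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```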
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and Graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala or Go-Lang programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com
#LI-Remote