43+ TensorFlow Jobs in Bangalore (Bengaluru)
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems) in real-world applications; a brief illustrative sketch follows this list.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
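The "two algorithms in real-world applications" item above is open-ended. As a minimal, hypothetical illustration of the kind of work implied (not a prescribed implementation), one classification and one clustering algorithm in scikit-learn might look like this:

```python
# Minimal sketch only: one classification and one clustering algorithm with scikit-learn
# on a bundled toy dataset; a real project would use domain data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)  # classification
print("test accuracy:", clf.score(X_test, y_test))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)           # clustering
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
```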
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage, but not mandatory.
The Data Scientist is responsible for discovering the information hidden in vast amounts of data and helping us make smarter decisions to deliver better products. Your primary focus will be applying Machine Learning and Generative AI techniques for data mining and statistical analysis, text analytics using NLP/LLMs, and building high-quality prediction systems integrated with our products. The ideal candidate should have a prior background in Generative AI, NLP (Natural Language Processing), and Computer Vision techniques, as well as experience working with current state-of-the-art Large Language Models (LLMs) and Computer Vision algorithms.
Job Responsibilities:
» Building models using best-in-class AI/ML technology.
» Leveraging your expertise in Generative AI, Computer Vision, Python, Machine Learning, and Data Science to develop cutting-edge solutions for our products.
» Integrating NLP techniques and utilizing LLMs in our products.
» Training/fine-tuning models with new or modified training datasets.
» Selecting features, building and optimizing classifiers using machine learning techniques.
» Conducting data analysis, curation, preprocessing, modelling, and post-processing to drive data-driven decision-making.
» Enhancing data collection procedures to include information that is relevant for building analytic systems.
» Applying a working understanding of cloud platforms (AWS).
» Collaborating with cross-functional teams to design and implement advanced AI models and algorithms.
» Involving in R&D activities to explore the latest advancements in AI technologies, frameworks, and tools.
» Documenting project requirements, methodologies, and outcomes for stakeholders.
Technical skills
Mandatory
» Minimum of 5 years of experience as Machine Learning Researcher or Data Scientist.
» Master's degree or Ph.D. (preferable) in Computer Science, Data Science, or a related field.
» Should have knowledge and experience in working with Deep Learning projects using CNN, Transformers, Encoder and decoder architectures.
» Working experience with LLMs (Large Language Models) and their applications (e.g., tuning embedding models, data curation, prompt engineering, LoRA, etc.); a brief illustrative sketch follows this list.
» Familiarity with LLM Agents and related frameworks.
» Good programming skills in Python and experience with relevant libraries and frameworks (e.g., PyTorch, and TensorFlow).
» Good applied statistics skills, such as distributions, statistical testing, regression, etc.
» Excellent understanding of machine learning and computer vision based techniques and algorithms.
» Strong problem-solving abilities and a proactive attitude towards learning and adopting new technologies.
» Ability to work independently, manage multiple projects simultaneously, and collaborate effectively with diverse stakeholders.
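For context on the LoRA/fine-tuning item above, a minimal sketch of attaching LoRA adapters to a base model with Hugging Face transformers and peft might look like the following; the base model and target modules are illustrative assumptions, not this team's actual setup.

```python
# Illustrative only: wrap a small base model with LoRA adapters for parameter-efficient fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "gpt2"  # placeholder base model; a real project would pick an appropriate LLM
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module name for GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# Training would then proceed with a standard Trainer / training loop on curated data.
```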
Nice to have
» Exposure to financial research domain
» Experience with JIRA, Confluence
» Understanding of scrum and Agile methodologies
» Basic understanding of NoSQL databases, such as MongoDB, Cassandra
» Experience with data visualization tools, such as Grafana, ggplot, etc.
Job Description: AI/ML Engineer
Location: Bangalore (On-site)
Experience: 2+ years of relevant experience
About the Role:
We are seeking a skilled and passionate AI/ML Engineer to join our team in Bangalore. The ideal candidate will have over two years of experience in developing, deploying, and maintaining AI and machine learning models. As an AI/ML Engineer, you will work closely with our data science team to build innovative solutions and deploy them in a production environment.
Key Responsibilities:
- Develop, implement, and optimize machine learning models.
- Perform data manipulation, exploration, and analysis to derive actionable insights.
- Use advanced computer vision techniques, including YOLO and other state-of-the-art methods, for image processing and analysis.
- Collaborate with software developers and data scientists to integrate AI/ML solutions into the company's applications and products.
- Design, test, and deploy scalable machine learning solutions using TensorFlow, OpenCV, and other related technologies.
- Ensure the efficient storage and retrieval of data using SQL and data manipulation libraries such as pandas and NumPy.
- Contribute to the development of backend services using Flask or Django for deploying AI models.
- Manage code using Git and containerize applications using Docker when necessary.
- Stay updated with the latest advancements in AI/ML and integrate them into existing projects.
Required Skills:
- Proficiency in Python and its associated libraries (NumPy, pandas).
- Hands-on experience with TensorFlow for building and training machine learning models.
- Strong knowledge of linear algebra and data augmentation techniques.
- Experience with computer vision libraries like OpenCV and frameworks like YOLO (a brief illustrative sketch follows this list).
- Proficiency in SQL for database management and data extraction.
- Experience with Flask for backend development.
- Familiarity with version control using Git.
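As a rough illustration of the YOLO/OpenCV skills listed above (using the third-party ultralytics package and a public pretrained checkpoint as assumptions, not this team's stack):

```python
# Illustrative sketch: run a pretrained YOLO model on a single image and print detections.
import cv2
from ultralytics import YOLO  # assumes the ultralytics package is installed

model = YOLO("yolov8n.pt")        # small public pretrained checkpoint (assumption)
image = cv2.imread("sample.jpg")  # illustrative path
results = model(image)

for box in results[0].boxes:
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    print(model.names[cls_id], round(conf, 2), box.xyxy[0].tolist())
```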
Optional Skills:
- Experience with PyTorch, Scikit-learn, and Docker.
- Familiarity with Django for web development.
- Knowledge of GPU programming using CuPy and CUDA.
- Understanding of parallel processing techniques.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Demonstrated experience in AI/ML, with a portfolio of past projects.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork skills.
Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative and dynamic work environment.
- Competitive salary and benefits.
- Professional growth and development opportunities.
If you're excited about using AI/ML to solve real-world problems and have a strong technical background, we'd love to hear from you!
Apply now to join our growing team and make a significant impact!
Founded by IIT Delhi Alumni, Convin is a conversation intelligence platform that helps organisations improve sales/collections and elevate customer experience while automating the quality & coaching for reps, and backing it up with super deep business insights for leaders.
At Convin, we are leveraging AI/ML to achieve these larger business goals while focusing on bringing efficiency and reducing cost. We are already helping the leaders across Health-tech, Ed-tech, Fintech, E-commerce, and consumer services like Treebo, SOTC, Thomas Cook, Aakash, MediBuddy, PlanetSpark.
If you love AI, understand SaaS, love selling, and are looking to join a ship bound to fly, then Convin is the place for you!
We are seeking a talented and motivated Core Machine Learning Engineer with a passion for the audio domain. As a member of our dynamic team, you will play a crucial role in developing state-of-the-art solutions in speech-to-text, speaker separation, diarization, and related areas.
Responsibilities
- Collaborate with cross-functional teams to design, develop, and implement machine learning models and algorithms in the audio domain.
- Contribute to the research, prototyping, and deployment of speech-to-text, speaker separation, and diarization solutions.
- Explore and experiment with various techniques to improve the accuracy and efficiency of audio processing models.
- Work closely with senior engineers to optimize and integrate machine learning components into our products.
- Participate in code reviews, provide constructive feedback, and adhere to coding standards and best practices.
- Communicate effectively with team members, sharing insights and progress updates.
- Stay updated with the latest developments in machine learning, AI, NLP, and signal processing, and apply relevant advancements to our projects.
- Collaborate on the development of end-to-end systems that involve speech and language technologies.
- Assist in building and training large-scale language models like ChatGPT, LLaMA, Falcon, etc., leveraging their capabilities as required.
Requirements
- Bachelor's or Master's degree in Computer Science or a related field from a reputed institution.
- 5+ years of hands-on experience in Machine Learning, Artificial Intelligence, Natural Language Processing, or signal processing.
- Strong programming skills in languages such as Python, and familiarity with relevant libraries and frameworks (e.g., TensorFlow, PyTorch).
- Knowledge of speech-to-text, text-to-speech, speaker separation, and diarization techniques is a plus.
- Solid understanding of machine learning fundamentals and algorithms.
- Excellent problem-solving skills and the ability to learn quickly.
- Strong communication skills to collaborate effectively within a team environment.
- Enthusiasm for staying updated with the latest trends and technologies in the field.
- Familiarity with large language models like ChatGPT, LLaMA, Falcon, etc., is advantageous.
Are you passionate about pushing the boundaries of Artificial Intelligence and its applications in the software development lifecycle? Are you excited about building AI models that can revolutionize how developers ship, refactor, and onboard to legacy or existing applications faster? If so, Zevo.ai has the perfect opportunity for you!
As an AI Researcher/Engineer at Zevo.ai, you will play a crucial role in developing cutting-edge AI models using CodeBERT and codexGLUE to achieve our goal of providing an AI solution that supports developers throughout the sprint cycle. You will be at the forefront of research and development, harnessing the power of Natural Language Processing (NLP) and Machine Learning (ML) to revolutionize the way software development is approached.
Responsibilities:
- AI Model Development: Design, implement, and refine AI models utilizing CodeBERT and codexGLUE to comprehend codebases, facilitate code understanding, automate code refactoring, and enhance the developer onboarding process (a brief illustrative sketch follows this list).
- Research and Innovation: Stay up-to-date with the latest advancements in NLP and ML research, identifying novel techniques and methodologies that can be applied to Zevo.ai's AI solution. Conduct experiments, perform data analysis, and propose innovative approaches to enhance model performance.
- Data Collection and Preparation: Collaborate with data engineers to identify, collect, and preprocess relevant datasets necessary for training and evaluating AI models. Ensure data quality, correctness, and proper documentation.
- Model Evaluation and Optimization: Develop robust evaluation metrics to measure the performance of AI models accurately. Continuously optimize and fine-tune models to achieve state-of-the-art results.
- Code Integration and Deployment: Work closely with software developers to integrate AI models seamlessly into Zevo.ai's platform. Ensure smooth deployment and monitor the performance of the deployed models.
- Collaboration and Teamwork: Collaborate effectively with cross-functional teams, including data scientists, software engineers, and product managers, to align AI research efforts with overall company objectives.
- Documentation: Maintain detailed and clear documentation of research findings, methodologies, and model implementations to facilitate knowledge sharing and future developments.
- Ethics and Compliance: Ensure compliance with ethical guidelines and legal requirements related to AI model development, data privacy, and security.
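As a hedged, minimal sketch of working with CodeBERT (via the public microsoft/codebert-base checkpoint on Hugging Face; the mean-pooling choice is an illustrative assumption, not Zevo.ai's approach):

```python
# Illustrative only: embed a code snippet with CodeBERT for downstream code-understanding tasks.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

snippet = "def add(a, b):\n    return a + b"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
embedding = hidden.mean(dim=1)                   # simple mean-pooled code embedding
print(embedding.shape)                           # torch.Size([1, 768])
```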
Requirements
- Educational Background: Bachelor's/Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A strong academic record with a focus on NLP and ML is highly desirable.
- Technical Expertise: Proficiency in NLP, Deep Learning, and experience with AI model development using frameworks like PyTorch or TensorFlow. Familiarity with CodeBERT and codexGLUE is a significant advantage.
- Programming Skills: Strong programming skills in Python and experience working with large-scale software projects.
- Research Experience: Proven track record of conducting research in NLP, ML, or related fields, demonstrated through publications, conference papers, or open-source contributions.
- Problem-Solving Abilities: Ability to identify and tackle complex problems related to AI model development and software engineering.
- Team Player: Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team-oriented environment.
- Passion for AI: Demonstrated enthusiasm for AI and its potential to transform software development practices.
If you are eager to be at the forefront of AI research, driving innovation and impacting the software development industry, join Zevo.ai's talented team of experts as an AI Researcher/Engineer. Together, we'll shape the future of the sprint cycle and revolutionize how developers approach code understanding, refactoring, and onboarding!
Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, SageMaker, and other ML model experiment management tools, including training, inference, and evaluation (a brief illustrative sketch follows this list).
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
- Experience with GPU computing for data and model parallelism in training.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.
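To make the experiment-tracking item above concrete, a minimal MLflow sketch (toy model and metric; not this team's actual pipeline) could look like:

```python
# Illustrative only: log a toy model and metric to MLflow; the run can later be served.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# The logged model could then be served locally, e.g.:
#   mlflow models serve -m runs:/<run_id>/model -p 5001
```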
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Creating pipelines for data ingestion, data transformation, building, testing, and deploying machine learning models, as well as monitoring and maintaining the performance of these models in production.
- Managing the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLflow, etc.
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP (a brief illustrative sketch follows this list)
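For the explainability item above, a minimal SHAP sketch (toy model and dataset, purely illustrative) might be:

```python
# Illustrative only: compute SHAP values for a toy tree model to explain individual predictions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])  # per-feature contributions for 10 rows
print(type(shap_values))
```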
We at Thena are looking for a Machine Learning Engineer with 2-4 years of industry experience to join our team. The ideal candidate will be passionate about developing and deploying ML models that drive business value and have a strong background in ML Ops.
Responsibilities:
- Develop, fine-tune, and deploy ML models for B2B customer communication and collaboration use cases.
- Collaborate with cross-functional teams to define requirements, design models, and deploy them in production.
- Optimize model performance and accuracy through experimentation, iteration, and testing.
- Build and maintain ML infrastructure and tools to support model development and deployment.
- Stay up-to-date with the latest research and best practices in ML, and share knowledge with the team.
Qualifications:
- 2-4 years of industry experience in machine learning engineering, with a focus on natural language processing (NLP) and text classification models.
- Experience with ML Ops, including deploying and managing ML models in production environments.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Experience with embeddings and building on top of LLMs (a brief illustrative sketch follows this list).
- Strong problem-solving and analytical skills, with the ability to develop creative solutions to complex problems.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
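As a minimal sketch of the "embeddings on top of LLMs" item above (the sentence-transformers model name and the toy labels are illustrative assumptions):

```python
# Illustrative only: embed short B2B messages and fit a tiny classifier on top of the embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # public embedding model (assumption)
texts = ["please reset my password", "our invoice is overdue", "how do I add a teammate?"]
labels = [0, 1, 0]                                  # e.g. 0 = support, 1 = billing (toy labels)

X = encoder.encode(texts)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(encoder.encode(["when is the payment due?"])))
```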
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, matplotlib.
- Experience in TensorFlow, R modelling, and/or PyTorch
- Ability to understand a business problem and translate and structure it into a data science problem.
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
YOptima is a well capitalized digital startup pioneering full funnel marketing via programmatic media. YOptima is trusted by leading marketers and agencies in India and is expanding its footprint globally.
We are expanding our tech team and looking for a prolific Staff Engineer to lead our tech team as a leader (without necessarily being a people manager). Our tech is hosted on Google cloud and the stack includes React, Node.js, AirFlow, Python, Cloud SQL, BigQuery, TensorFlow.
If you have hands-on experience and passion for building and running scalable cloud-based platforms that change the lives of the customers globally and drive industry leadership, please read on.
- You have 6+ years of quality experience in building scalable digital products/platforms, with experience in full stack development, big data analytics and DevOps.
- You are great at identifying risks and opportunities, and have the depth that comes with willingness and capability to be hands-on. Do you still code? Do you love to code? Do you love to roll up your sleeves and debug things?
- Do you enjoy going deep into that part of the 'full stack' that you are not an expert of?
Responsibilities:
- You will help build a platform that supports large scale data, with multi-tenancy and near real-time analytics.
- You will lead and mentor a team of data engineers and full stack engineers to build the next generation data-driven marketing platform and solutions.
- You will lead exploring and building new tech and solutions that solve business problems of today and tomorrow.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science or equivalent discipline.
- Excellent computer systems fundamentals, DS/Algorithms and problem solving skills.
- Experience in conceiving, designing, architecting, developing and operating full stack, data-driven platforms using Big data and cloud tech in GCP/AWS environments.
What you get: Opportunity to build a global company. Amazing learning experience. Transparent work culture. Meaningful equity in the business.
At YOptima, we value people who are driven by a higher sense of responsibility, bias for action, transparency, persistence with adaptability, curiosity and humility. We believe that successful people have more failures than average people have attempts. And that success needs the creative mindset to deal with ambiguities when you start, the courage to handle rejections and failure and rise up, and the persistence and humility to iterate and course correct.
- We look for people who are initiative driven, and not interruption driven. The ones who challenge the status quo with humility and candor.
- We believe startup managers and leaders are great individual contributors too, and that there is no place for context free leadership.
- We believe that the curiosity and persistence to learn new skills and nuances, and to apply the smartness in different contexts matter more than just academic knowledge.
Location:
- Brookefield, Bangalore
- Jui Nagar, Navi Mumbai
About Us:
Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business.
We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.
At Novo, we’re here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help to save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.
We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?
About the Role:
- Build and manage predictive models focused on credit risk, fraud, conversions, churn, consumer behaviour, etc.
- Provides best practices, direction for data analytics and business decision making across multiple projects and functional areas
- Implements performance optimizations and best practices for scalable data models, pipelines and modelling
- Resolve blockers and help the team stay productive
- Take part in building the team and iterating on hiring processes
Requirements for the Role:
- 4+ years of experience in data science roles focussed on managing data processes, modelling and dashboarding
- Strong experience in Python, SQL, and an in-depth understanding of modelling techniques
- Experience working with pandas, scikit-learn, and visualization libraries like Plotly, Bokeh, etc.
- Prior experience with credit risk modelling will be preferred
- Deep Knowledge of Python to write scripts to manipulate data and generate automated reports
How We Define Success:
- Expand access to data driven decision making across the organization
- Solve problems in risk, marketing, growth, customer behaviour through analytics models that increase efficacy
Nice To Have, but Not Required:
- Experience in dashboarding libraries like Python Dash and exposure to CI/CD
- Exposure to big data tools like Spark, and some core tech knowledge around API’s, data streaming etc.
Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer and do not discriminate on the basis of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information, or any other protected characteristic.
Contact Center software that leverages AI to improve customer experience.
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions to machine learning-based security/privacy control
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead OR leading an engineering team
• 4+ years of professional software development experience in Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow, PyTorch
• Experience with cloud provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join the team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience, but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch, or TensorFlow
Basic Qualifications
- B.Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience in tools and packages like TensorFlow, MLflow, Airflow
- Experience in object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
Job Description – Sr. Python Developer
Job Brief
The job requires Python experience as well as expertise with AI/ML. This developer is expected to have strong technical skills and to work closely with the other team members in developing and managing key projects. The role requires the ability to work on a small team with minimal supervision, and to troubleshoot, test, and maintain the core product software and databases to ensure strong optimization and functionality.
Job Requirement
- 4 plus Years of Python relevant experience
- Good communication skills and email etiquette
- Quick learner and should be a team player
- Experience working with Python frameworks
- Experience in Developing With Python & MySQL on LAMP/LEMP Stack
- Experience in Developing an MVC Application with Python
- Experience with Threading, Multithreading and pipelines
- Experience in creating RESTful APIs with Python returning JSON and XML (a brief illustrative sketch follows this list)
- Experience in Designing Relational Database using MySQL And Writing Raw SQL Queries
- Experience with GitHub Version Control
- Ability to write custom Python code
- Excellent working knowledge of AI/ML-based applications
- Experience in OpenCV/TensorFlow/SimpleCV/PyTorch
- Experience working in agile software development methodology
- Understanding of end-to-end ML project lifecycle
- Understanding of cross platform OS systems like Windows, Linux or UNIX with hands-on working experience
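As a minimal, hypothetical sketch of the RESTful-API item above (the endpoint and payload are illustrative, not the product's actual API):

```python
# Illustrative only: a tiny JSON REST endpoint in Flask that would wrap an AI/ML model call.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    # A real handler would run the ML model here; a fixed score stands in for that call.
    return jsonify({"input": payload, "score": 0.5})

if __name__ == "__main__":
    app.run(debug=True)
```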
Responsibilities
- Participate in the entire development lifecycle, from planning through implementation, documentation, testing, and deployment, all the way to monitoring.
- Produce high quality, maintainable code with great test coverage
- Integration of user-facing elements developed by front-end developers
- Build efficient, testable, and reusable Python/AI/ML modules
- Solve complex performance problems and architectural challenges
- Help with designing and architecting the product
- Design and develop the web application modules or APIs
- Troubleshoot and debug applications.
at Digitectura Technologies Private Limited
We require someone skilled in Python / C / C++ to work on new products and also support existing AI-based products.
Should be open to learning new frameworks
- Build, train and test multiple CNN models.
- Optimizing model training & inference by utilizing multiple GPUs and CPU cores.
- Keen interest in Life Sciences, image processing, genomics, multi-omics analysis
- Interested in reading and implementing research papers of the relevant field.
- Strong experience with Deep Learning frameworks: TensorFlow, Keras, PyTorch.
- Strong programming skills in Python and experience with Scikit-learn/NumPy libraries.
- Experience training object detection models like YOLOv3/Mask R-CNN and semantic segmentation models like DeepLab, UNet, etc.
- Good understanding of image processing and computer vision algorithms like watershed, histogram matching, etc.
- Experience with cell segmentation and membrane segmentation using CNNs (optional)
- Individual contributor
- Experience with image processing
- Experience required: 2-10 years
- CTC: 15-40 LPA
- Good Python programming and algorithmic skills.
- Experience with deep learning model training using any known framework.
- Working knowledge of genomics data in R&D
- Understanding of one or more omics data types (transcriptomics, metabolomics, proteomics, genomics, epigenomics, etc.)
- Prior work experience as a data scientist, bioinformatician or computational biologist will be a big plus
- Writing efficient, reusable, testable, and scalable code
- Understanding, analyzing, and implementing – Business needs, feature modification requests, conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of application
- Working with Python libraries like Pandas, NumPy, etc.
- Creating predictive models for AI and ML-based features
- Keeping abreast with the latest technology and trends
- Fine-tune and develop AI/ML-based algorithms based on results
Technical Skills-
Good proficiency in,
- Python frameworks like Django, etc.
- Web frameworks and RESTful APIs
- Core Python fundamentals and programming
- Code packaging, release, and deployment
- Database knowledge
- Loops, conditional and control statements
- Object-relational mapping
- Code versioning tools like Git, Bitbucket
Fundamental understanding of,
- Front-end technologies like JS, CSS3 and HTML5
- AI, ML, Deep Learning, Version Control, Neural networking
- Data visualization, statistics, data analytics
- Design principles that are executable for a scalable app
- Creating predictive models
- Libraries like TensorFlow, Scikit-learn, etc.
- Multi-process architecture
- Basic knowledge about Object Relational Mapper libraries
- Ability to integrate databases and various data sources into a unified system
Software Architect
Symbl is hiring a Software Architect who is passionate about leading cross-functional R&D teams. This role will serve as the Architect across the global organization driving product architecture, reducing information silos across the org to improve decision making, and coordinating with other engineering teams to ensure seamless integration with other Symbl services.
Symbl is seeking a leader with a demonstrated track record of leading cross-functional dev teams. You are fit for the role if:
- You have a track record of designing and building large-scale, cloud-based, highly available software platforms.
- You have 8+ years of experience in software development with 2+ years in an architect role.
- You have experience working on customer-facing machine learning implementations (predictions, recommendations, anomaly detection)
- You are an API first developer who understands the power of platforms.
- You are passionate about enabling other developers through your leadership and driving teams to efficient decisions.
- You have the ability to balance long-term objectives with urgent short-term needs
- You can successfully lead teams through very challenging engineering problems.
- You have domain expertise in one or more of: data pipelines and workflows, telephony systems, real-time audio and video streaming, machine learning.
- You have a bachelor's degree in a computer science-related field (this is a minimum requirement).
- You’ll bring your deep experience with distributed systems and platform engineering principles to the table.
- You are passionate about operational excellence and know-how to build software that delivers it.
- You are able to think at scale, and to define and meet stringent availability and performance SLAs while ensuring that quality and resiliency challenges across our diverse product and tech stacks are addressed. Node.js is mandatory; Java, Python, JavaScript, and ReactJS are also used, intersecting with our ML platform and open-source DBs.
- You understand end-user use cases and are driven to design optimal software that meets business needs.
Your day would look like:
- Work with your team, providing engineering leadership and ensuring your resources are solving the most critical engineering problems while ensuring your products are scalable, performant, and highly available.
- Focus on delivering the highest quality of services, and support your team as they push production code that impacts hundreds of Symbl customers.
- Spend time with engineering managers and developers to create and deliver critical new products and/or features that empower them to introduce change with quality and speed.
- Make sure to connect with your team, both local and remote, to ensure they are delivering on engineering and operational excellence.
Job Location: Anywhere – currently WFH due to COVID
Compensation, Perks, and Differentiators:
- Healthcare
- Unlimited PTO
- Paid sick days
- Paid holidays
- Flexi working
- Continuing education
- Equity and performance-based pay options
- Rewards & Recognition
- As our company evolves, so do our benefits. We’re actively innovating how we support our employees.
Sizzle is an exciting new startup in the world of gaming. At Sizzle, we’re building AI to automatically create highlights of gaming streamers and esports tournaments.
For this role, we're looking for someone that loves to play and watch games, and is eager to roll up their sleeves and build up a new gaming platform. Specifically, we’re looking for a technical program manager - someone that can drive timelines, manage dependencies and get things done. You will work closely with the founders and the engineering team to iterate and launch new products and features. You will constantly report on status and maintain a dashboard across product, engineering, and user behavior.
You will:
- Be responsible for speedy and timely shipping of all products and features
- Work closely with front end engineers, product managers, and UI/UX teams to understand the product requirements in detail, and map them out to delivery timeframes
- Work closely with backend engineers to understand and map deployment timeframes and integration into pipelines
- Manage the timeline and delivery of numerous A/B tests on the website design, layout, color scheme, button placement, images/videos, and other objects to optimize time on site and conversion
- Keep track of all dependencies between projects and engineers
- Track all projects and tasks across all engineers and address any delays. Ensure tight coordination with management.
You should have the following qualities:
- Strong track record of successful delivery of complex projects and product launches
- 2+ years of software development; 2+ years of program management
- Excellent verbal and communication skills
- Deep understanding of AI model development and deployment
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Technical program management, ML algorithms, TensorFlow, AWS, Python
Work Experience: 3 years to 10 years
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
- Developing audio algorithms to detect key moments within popular online games, such as:
  - Streamer speaking, shouting, etc.
  - Gunfire, explosions, and other in-game audio events
  - Speech-to-text and sentiment analysis of the streamer’s narration
- Leveraging baseline technologies such as TensorFlow and others -- and building models on top of them (a brief illustrative sketch follows this list)
- Building neural network architectures for audio analysis as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
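As a hedged sketch of the kind of audio-event model implied above (the log-mel input shape, class set, and architecture are illustrative assumptions, not Sizzle's actual models):

```python
# Illustrative only: a small Keras CNN over log-mel spectrogram patches for audio-event detection.
import tensorflow as tf

def build_audio_cnn(n_mels=64, n_frames=128, n_classes=4):
    # Classes might be, e.g., speech, shouting, gunfire/explosion, other (toy label set).
    inputs = tf.keras.Input(shape=(n_mels, n_frames, 1))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_audio_cnn()
model.summary()
```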
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTM for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at http://www.sizzle.gg.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with computer vision and AI technologies around image and video analysis.
You will be responsible for:
- Developing computer vision algorithms to detect key moments within popular online games
- Leveraging baseline technologies such as TensorFlow, OpenCV, and others -- and building models on top of them (a brief illustrative sketch follows this list)
- Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
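As a rough sketch of the transfer-learning approach named above (the MobileNetV2 backbone, binary "highlight" head, and file path are illustrative assumptions, not Sizzle's production setup):

```python
# Illustrative only: score a single video frame with a MobileNetV2 backbone and a binary "highlight" head.
import cv2
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(1, activation="sigmoid")])  # untrained head

cap = cv2.VideoCapture("stream.mp4")  # illustrative path
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB).astype(np.float32)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(rgb)
    print("highlight probability:", float(model.predict(x[None, ...])[0, 0]))
cap.release()
```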
You should have the following qualities:
- Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
- Experience using Python, TensorFlow, OpenCV and other computer vision tools
- Understanding of common computer vision object detection models in use today, e.g., Inception, R-CNN, YOLO, MobileNet SSD, etc.
- Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTM for motion and inter-frame analysis, and others
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.
Job Description
JD - Python Developer
Responsibilities
- Design and implement software features based on requirements
- Architect new features for products or tools
- Articulate and document designs as needed
- Prepare and present technical training
- Provide estimates and status for development tasks
- Work effectively in a highly collaborative and iterative development process
- Work effectively with the Product, QA, and DevOps team.
- Troubleshoot issues and correct defects when required
- Build unit and integration tests that assure correct behavior and increase the maintainability of the code base
- Apply dev-ops and automation as needed
- Commit to continuous learning and enhancement of skills and product knowledge
Required Qualifications
- Minimum of 5 years of relevant experience in development and design
- Proficiency in Python and extensive knowledge of the associated libraries; extensive experience with Python data science libraries: TensorFlow, NumPy, SciPy, Pandas, etc.
- Strong skills in producing visuals with algorithm results
- Strong SQL and working knowledge of Microsoft SQL Server and other data storage technologies
- Strong web development skills; advanced knowledge of ORM and data access patterns
- Experienced working using Scrum and Agile methodologies
- Excellent debugging and troubleshooting skills
- Deep knowledge of DevOps practices and cloud services
- Strong collaboration and verbal and written communication skills
- Self-starter, detail-oriented, organized, and thorough
- Strong interpersonal skills and a team-oriented mindset
- Fast learner and creative capacity for developing innovative solutions to complex problems
Skills
PYTHON, SQL, TensorFlow, NumPy, SciPy, Pandas
Develop state of the art algorithms in the fields of Computer Vision, Machine Learning and Deep Learning.
Provide software specifications and production code on time to meet project milestones.
Qualifications
BE or Master with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation and classification of the image
Face detection, alignment, recognition, tracking & attribute recognition
Excellent understanding and project/job experience in Machine Learning, particularly in areas of Deep Learning – CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to Computer Vision problems
Strong foundation in Mathematics
Strong development skills in Python
Must have worked on vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Knowledge of working closely with Version Control (Git)
Looking to hire a Machine Learning Engineer
Job Description :
Sr. Machine Learning Engineer will support our various business vertical teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Accountabilities :
- Collaborate with product management and engineering departments to understand company needs and devise possible solutions
- Keep up-to-date with latest technology trends
- Communicate results and ideas to key decision makers
- Implement new statistical or other mathematical methodologies as needed for specific models or analysis
- Optimize joint development efforts through appropriate database use and project design
Skills & Requirements :
Technical Skills :
- Demonstrated skill in the use of one or more analytic software tools or languages (e.g., R, Python, Pyomo, Julia/JuMP, MATLAB, SAS, SQL)
- Demonstrated skill at data cleansing, data quality assessment, and using analytics for data assessment
- End-to-end system design: data analysis, feature engineering, technique selection & implementation, debugging, and maintenance in production.
- Profound understanding of skills like outlier handling, data imputation, bias, variance, cross validation etc.
- Demonstrated skill in modeling techniques, including but not limited to predictive modeling, supervised learning, unsupervised learning, machine learning, statistical modeling, natural language processing, and recommendation engines
- Demonstrated skill in analytic prototyping, analytic scaling, and solutions integration
- Developing hypotheses and setting up your own problem frameworks to test for the best solutions
- Knowledge of data visualization tools - ggplot, Dash, d3.js and Matplotlib (or any other data visualization tool like Tableau, QlikView)
- Generating insights for a business context
Desirable :
- Experience with cloud technologies for building, deploying and delivering data science applications is desired (preferably in Microsoft Azure)
- Experience in TensorFlow, Keras, Theano, Text Mining is desirable but not mandatory
- Experience working in Agile and DevOps processes.
Core Skills :
- Bachelor's or master's degree in information technology, computer science, business administration or a related discipline.
- Certified in Agile Product Owner / Scrum Master and/or other Agile techniques
Leadership Skills :
- Strong stakeholder management and influencing skills. Able to articulate a vision and build support for that vision in the wider team and organization.
- Ability to self-start and direct efforts based on high-level business objectives
- Strong collaboration and leadership skills with the ability to coach and develop teams to meet new challenges.
- Strong interpersonal, communication, facilitation and presentation skills.
- Work through complex interfaces across organizational and geographic boundaries
- Excellent analytical, planning and problem solving skills
Job Experience Requirements :
- Utilize an advanced knowledge level of the Data Science Toolbox to participate in the entire Data Science project life cycle and execute end-to-end Data Science projects
- Work end-to-end on Data Science developments contributing to all aspects of the project life cycle
- Keep customers as the focus of analysis, insight, and recommendation.
- Help define business objectives/customer needs by capturing the right requirements from the right customers.
- Can take defined problems and identify resolution paths and opportunities to solve them; which you validate by defining hypotheses and driving experiments
- Can identify unstructured problems and articulate opportunities to form new analytics project ideas
- Use and understand the key performance indicators (KPIs) and diagnostics to measure performance against business goals
- Compile, integrate, and analyze data from multiple sources to identify trends, expose new opportunities, and answer ongoing business questions
- Execute hypothesis-driven analysis to address business questions, issues, and opportunities
- Build, validate, and manage advanced models (e.g., explanatory, predictive) using statistical and/or other analytical methods
- Are familiar working within Agile Project Management methodologies / structures
- Analyze results using statistical methods and work with senior team members to make recommendations to improve customer experience and business results
- Have the ability to conceptualize, formulate, prototype, and implement algorithms to capture customer behavior and solve business problems
- Analyze results using statistical methods to make recommendations to improve customer experience and business results
3D AI company that helps large enterprise customers. (AV1)
- Develop and optimize machine learning models to run efficiently on mobile web and best exploit modern parallel environments.
- Develop and deploy real-time AR computer vision algorithms for the web in areas related to 3D reconstruction, WebGL rendering, object detection and tracking, and camera calibration.
- Work with research and engineering teams to productionize machine learning services for Augmented Reality experiences.
What we look for:
- Minimum 2+ years of experience working with the following languages (any 2): C++, Javascript, Rust.
- Strong experience working with at least one deep-learning library (e.g., PyTorch, Jax, TensorFlow, Caffe2). This experience should include formulation, training, and optimization of new algorithms.
Also Preferred:
- Prior experience with WebGL, WebAssembly.
- Strong grasp of CPU and GPU performance challenges and optimization techniques.
- Experience in shipping computer vision or image processing products to customers.
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures. This candidate is expected to work on existing codebases or write an optimized codebase at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment with the ability to learn quickly and work independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- At least 3+ years of experience in a Data Scientist role
- Bachelor's/Master’s degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline. A Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning. Deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, time series forecasting
- Training Machine Learning (ML) algorithms in areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code, along with TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience with training time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as Neural Network (NN) models such as Feed-forward NN and Nonlinear Autoregressive networks (a minimal ARIMA baseline is sketched after this list)
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
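To make the classical side of this concrete, here is a minimal ARIMA baseline sketch in Python. It assumes statsmodels and pandas are available; the file name `sales.csv`, its column names, and the (1, 1, 1) order are illustrative placeholders, not anything prescribed by the role.

```python
# Minimal sketch of a classical forecasting baseline (ARIMA); the input file
# and column names are hypothetical, used only for illustration.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load a univariate daily time series indexed by date (placeholder file/columns).
series = (
    pd.read_csv("sales.csv", parse_dates=["date"], index_col="date")["units_sold"]
    .asfreq("D")
    .interpolate()
)

# Hold out the last 30 days for evaluation.
train, test = series[:-30], series[-30:]

# ARIMA(p, d, q): (1, 1, 1) is an arbitrary starting point; real projects
# would select orders via AIC or cross-validation.
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test))

# Compare against the hold-out window with a simple accuracy metric (MAPE).
mape = ((forecast - test).abs() / test.abs()).mean() * 100
print(f"MAPE over hold-out: {mape:.2f}%")
```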
● Research and develop advanced statistical and machine learning models for analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternate models and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing, and engineering teams in Business Units to elicit & understand their requirements & challenges and develop potential solutions.
● Stay current with the latest research and technology ideas; share knowledge by clearly articulating results and ideas to key decision-makers.
● File patents for innovative solutions that add to the company's IP portfolio.
Requirements
● 4 to 6 years of strong experience in data mining, machine learning and statistical analysis.
● BS/MS/Ph.D. in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only IITs / IISc / BITS / top NITs or top US universities should apply).
● Experience in productizing models to code in a fast-paced start-up environment.
● Fluency in analytical tools such as Matlab, R, Weka etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis.
● Strong communication and collaboration skills.
We’re creating the infrastructure to enable crypto's safe
Responsibilities
- Build out and manage a young data science vertical within the organization
- Provide technical leadership in the areas of machine learning, analytics, and data science
- Work with the team and create a roadmap to meet the company’s requirements by solving data-mining, analytics, and ML problems, identifying business problems that could be solved using Data Science and scoping them out end to end.
- Solve business problems by applying advanced Machine Learning algorithms and complex statistical models on large volumes of data.
- Develop heuristics, algorithms, and models to deanonymize entities on public blockchains
- Data Mining - Extend the organization’s proprietary dataset by introducing new data collection methods and by identifying new data sources.
- Keep track of the latest trends in cryptocurrency usage on the open web and dark web, and develop counter-measures to defeat concealment techniques used by criminal actors.
- Develop in-house algorithms to generate risk scores for blockchain transactions (an illustrative scoring sketch follows this list).
- Work with data engineers to implement the results of your work.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Build, scale and deploy holistic data science products after successful prototyping.
- Clearly articulate and present recommendations to business partners, and influence future plans based on insights.
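For illustration only, a toy rule-based risk score might look like the sketch below; the features, weights, and thresholds are hypothetical assumptions and do not describe any production scoring algorithm.

```python
# Purely illustrative rule-based risk score for a blockchain transaction;
# all feature names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Tx:
    amount_usd: float
    counterparty_on_sanctions_list: bool
    mixer_interaction: bool
    address_age_days: int

def risk_score(tx: Tx) -> float:
    """Return a score in [0, 1]; higher means riskier."""
    score = 0.0
    if tx.counterparty_on_sanctions_list:
        score += 0.5   # strongest signal in this toy weighting
    if tx.mixer_interaction:
        score += 0.3
    if tx.amount_usd > 100_000:
        score += 0.1
    if tx.address_age_days < 7:
        score += 0.1
    return min(score, 1.0)

print(risk_score(Tx(250_000, False, True, 3)))  # 0.5
```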
Preferred Experience
- 8+ years of relevant experience as a Data Scientist or Analyst. A few years of work experience solving NLP problems or other ML problems is a plus
- Must have previously managed a team of at least 5 data scientists or analysts, or demonstrate prior experience in scaling a data science function from the ground up
- Good understanding of Python, bash scripting, and basic cloud platform skills (on GCP or AWS)
- Excellent communication and analytical skills
What you’ll get
- Work closely with the Founders in helping grow the organization to the next level alongside some of the best and brightest talent around you
- An excellent culture; we encourage collaboration, growth, and learning amongst the team
- Competitive salary and equity
- An autonomous and flexible role where you will be trusted with key tasks.
- An opportunity to have a real impact and be part of a company with purpose.
at Synapsica Technologies Pvt Ltd
Sr AI Scientist, Bengaluru
Job Description
Introduction
Synapsica is a growth stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective, while being affordable. Every patient has the right to know exactly what is happening in their body, rather than relying on the cryptic two-liners given to them as a diagnosis. Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering etc.) and traditional Image Processing (OpenCV etc.). The role is research focused and would involve going through and implementing existing research papers, deep dive of problem analysis, generating new ideas, automating and optimizing key processes.
Requirements:
- 4+ years of relevant experience in solving complex real-world problems at scale via computer vision based deep learning.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, Tensorflow, PyTorch, Keras, Caffe (or similar Deep Learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection, etc.
- Ability to write a custom Convolutional Neural Network architecture in PyTorch (or similar); a minimal sketch of such a network follows this list
- Experience of GPU/DSP/other Multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
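As a rough illustration of the kind of custom network the role expects, here is a minimal CNN classifier in PyTorch; the layer sizes, single-channel 224x224 input, and two-class output are assumptions made purely for the sketch.

```python
# Minimal sketch of a custom CNN classifier in PyTorch; architecture, input
# shape, and class count are illustrative assumptions, not a prescribed design.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN()
dummy = torch.randn(4, 1, 224, 224)   # batch of 4 single-channel images
print(model(dummy).shape)             # torch.Size([4, 2])
```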
Your responsibilities:
- Build, improve and extend NLP capabilities
- Research and evaluate different approaches to NLP problems
- Must be able to write code that is well designed and produces deliverable results
- Write code that scales and can be deployed to production
- Fundamentals of statistical methods is a must
- Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks - RNN, LSTM (a quick spaCy sketch of the basics appears after this list)
- A solid foundation in Python, data structures, algorithms, and general software development skills.
- Ability to apply machine learning to problems that deal with language
- Engineering ability to build robustly scalable pipelines
- Ability to work in a multi-disciplinary team with a strong product focus
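A quick sketch of those basics (POS tags, lemmas, and named entities) with spaCy; it assumes spaCy and the small English model `en_core_web_sm` are installed, and the sample sentence is arbitrary.

```python
# POS tagging, lemmatization, and named entity recognition with spaCy.
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Bangalore next year.")

for token in doc:
    print(token.text, token.pos_, token.lemma_)   # POS tags and lemmas

for ent in doc.ents:
    print(ent.text, ent.label_)                   # named entities, e.g. Apple / ORG
```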
We are looking for an engineer with ML/DL background.
Ideal candidate should have the following skillset
1) Python
2) Tensorflow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras all useful
5) Experience Data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) Ideal candidate should have developed their own DL architectures apart from using open-source architectures.
8) Ideal candidate would have extensive experience with computer vision applications
Candidates would be responsible for building Deep Learning models to solve specific problems. The workflow would look as follows (a transfer-learning sketch of steps 3-5 appears after the list):
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
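A hedged sketch of steps 3-5 using transfer learning in TensorFlow/Keras; the pretrained MobileNetV2 base, image size, class count, and learning rate are illustrative choices, not requirements of the role.

```python
# Steps 3-5 in miniature: build a model on a pretrained base (transfer
# learning) and expose a couple of tuning knobs. All choices are illustrative.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze pretrained weights for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),                      # tunable regularization
    tf.keras.layers.Dense(5, activation="softmax"),    # 5 target classes (assumed)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),          # learning rate is a tuning knob
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets not shown here
```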
Candidate should have experience working on Deep Learning with an engineering degree from a top tier institute (preferably IIT/BITS or equivalent)
WE ARE GRAPHENE
Graphene is an award-winning AI company, developing customized insights and data solutions for corporate clients. With a focus on healthcare, consumer goods and financial services, our proprietary AI platform is disrupting market research with an approach that allows us to get into the mind of customers to a degree unprecedented in traditional market research.
Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore Government & universities in creating cutting edge technology. We are gaining traction with many Fortune 500 companies globally.
Graphene has a 6-year track record of delivering financially sustainable growth and is one of the few start-ups which are self-funded, yet profitable and debt free.
We already have a strong bench strength of leaders in place. Now, we are looking to groom more talents for our expansion into the US. Join us and take both our growths to the next level!
WHAT WILL THE ENGINEER-ML DO?
- Primary Purpose: As part of a highly productive and creative AI (NLP) analytics team, optimize algorithms/models for performance and scalability, engineer & implement machine learning algorithms into services and pipelines to be consumed at web-scale
- Daily Grind: Interface with data scientists, project managers, and the engineering team to achieve sprint goals on the product roadmap, and ensure healthy models, endpoints, and CI/CD.
- Career Progression: Senior ML Engineer, ML Architect
YOU CAN EXPECT TO
- Work in a product-development team capable of independently authoring software products.
- Guide junior programmers, set up the architecture, and follow modular development approaches.
- Design and develop code which is well documented.
- Optimize the application for maximum speed and scalability.
- Adhere to Information Security and DevOps best practices.
- Research and develop new approaches to problems.
- Design and implement schemas and databases with respect to the AI application
- Cross-pollinate with other teams.
HARD AND SOFT SKILLS
Must Have
- Problem-solving abilities
- Extremely strong programming background in data structures and algorithms
- Advanced Machine Learning: TensorFlow, Keras
- Python, spaCy, NLTK, Word2Vec, graph databases, knowledge graphs, BERT (and derived models), hyperparameter tuning (a small tuning sketch appears after this list)
- Experience with OOP and design patterns
- Exposure to RDBMS/NoSQL
- Test Driven Development Methodology
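As a small, hedged example of hyperparameter tuning, the sketch below grid-searches a toy TF-IDF + logistic regression text classifier with scikit-learn; the data, grid, and model choice are assumptions for illustration, not the team's actual stack.

```python
# Hyperparameter tuning sketch: grid search over a tiny text-classification
# pipeline. The toy data and parameter grid are purely illustrative.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

texts = ["great product", "terrible support", "loved it", "awful experience"]
labels = [1, 0, 1, 0]   # toy labels for illustration only

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = GridSearchCV(
    pipe,
    param_grid={"tfidf__ngram_range": [(1, 1), (1, 2)], "clf__C": [0.1, 1.0, 10.0]},
    cv=2,   # tiny fold count only because the toy dataset is tiny
)
grid.fit(texts, labels)
print(grid.best_params_, grid.best_score_)
```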
Good to Have
- Working in cloud-native environments (preferably Azure)
- Microservices
- Enterprise Design Patterns
- Microservices Architecture
- Distributed Systems
- Developing telemetry software to connect Junos devices to the cloud
- Fast prototyping and laying the SW foundation for product solutions
- Moving prototype solutions to a production cloud multitenant SaaS solution
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics specialists to strive for greater functionality in our data systems.
Qualification and Desired Experiences
- Master in Computer Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with strong mathematical background
- 5+ years experiences building data pipelines for data science-driven solutions
- Strong hands-on coding skills (preferably in Python) processing large-scale data set and developing machine learning model
- Familiar with one or more machine learning or statistical modeling tools such as Numpy, ScikitLearn, MLlib, Tensorflow
- Good team player with excellent interpersonal, written, verbal, and presentation skills
- Create and maintain optimal data pipeline architecture.
- Assemble large, sophisticated data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search (a minimal PySpark aggregation is sketched after this list)
- Previous work in a start-up environment
- 3+ years experiences building data pipelines for data science-driven solutions
- Master in Computer Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with strong mathematical background
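As a small illustration of the Spark skill set mentioned above, here is a minimal PySpark aggregation; the S3 path, column names, and job logic are placeholders, assuming pyspark is installed and configured for S3 access.

```python
# Minimal PySpark sketch: read event data and count events per user per day.
# The path and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Read raw event data (placeholder path; local files work the same way).
events = spark.read.json("s3a://example-bucket/events/*.json")

# Aggregate: number of events per user per day.
daily = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("user_id", "day")
    .count()
)
daily.show()
```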
- We are looking for a candidate with 9+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a small Airflow DAG sketch appears after this list)
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Strong hands-on coding skills (preferably in Python) processing large-scale data set and developing machine learning model
- Familiar with one or more machine learning or statistical modeling tools such as Numpy, ScikitLearn, MLlib, Tensorflow
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and interpersonal skills.
- Experience supporting and working with multi-functional teams in a multidimensional environment.
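As a hedged illustration of the workflow-management tooling listed above, here is a minimal Airflow DAG for a daily ETL run; it assumes Airflow 2.4 or later, and the DAG id and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# Assumes Airflow 2.4+; task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw data from source systems")

def transform(**_):
    print("clean and aggregate the extracted data")

def load(**_):
    print("write results to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # run the tasks in sequence
```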
About antuit.ai
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
The Role:
Antuit.ai is interested in hiring a Principal Data Scientist. This person will facilitate standing up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.
Responsibilities:
Responsibilities include, but are not limited to, the following:
- Manage and provide technical expertise to the delivery team. This includes recommendation of solution alternatives, identification of risks, and managing business expectations.
- Design and build reliable and scalable automated processes for large-scale machine learning.
- Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
- Collaborate with Business, Technology and Product teams to stand-up MLOps process.
- Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture design, and workflow tools.
- Deep dive into complex algorithmic and product issues in production
- Own metrics and reporting for delivery team.
- Set a clear vision for the team members and work cohesively to attain it.
- Mentor and coach team members
Qualifications and Skills:
Requirements
- Engineering degree in any stream
- Has at least 7 years of prior experience in building ML-driven products/solutions
- Excellent programming skills in at least one of C++, Python, or Java.
- Hands-on experience with open-source libraries and frameworks: TensorFlow, PyTorch, MLflow, Kubeflow, etc. (a small MLflow tracking sketch appears after this list).
- Developed and productized large-scale models/algorithms in prior experience
- Can drive fast prototypes/proofs of concept when evaluating various technologies, frameworks, and performance benchmarks.
- Familiar with software development practices/pipelines (DevOps- Kubernetes, docker containers, CI/CD tools).
- Good verbal, written and presentation skills.
- Ability to learn new skills and technologies.
- 3+ years working with retail or CPG preferred.
- Experience in forecasting and optimization problems, particularly in the CPG / Retail industry preferred.
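As a small, hedged illustration of one of the frameworks named above, the sketch below logs parameters and a metric with MLflow; the experiment name, parameters, and metric value are placeholders.

```python
# Minimal MLflow experiment-tracking sketch; everything logged here is a
# placeholder standing in for a real training run.
import mlflow

mlflow.set_experiment("demo-forecasting")   # hypothetical experiment name

with mlflow.start_run():
    params = {"model": "gradient_boosting", "learning_rate": 0.05}
    mlflow.log_params(params)

    # ... a model would be trained here; we log a placeholder metric instead ...
    mlflow.log_metric("validation_mape", 12.3)
```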
Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them for the protection of organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).
EEOC
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Basic Qualifications:
- Bachelor's in Computer Science/Mathematics + Research (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory or core mathematical areas) from Tier-1 tech institutes.
- 3+ years of relevant experience in building large scale machine learning or deep learning models and/or systems.
- 1 year or more of experience specifically with deep learning (CNN, RNN, LSTM, RBM etc.).
- Strong working knowledge of deep learning, machine learning, and statistics.
- Deep domain understanding of Personalization, Search and Visual.
- Strong math skills with statistical modeling / machine learning.
- Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow.
- Experience in using Python and statistical/machine learning libraries.
- Ability to think creatively and solve problems.
- Data presentation skills.
Preferred:
- MS/Ph.D. (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory or core mathematical areas) from IISc and other top global universities.
- Or, publications in highly accredited journals (if available, please share links to your published work).
- Or, a history of scaling ML/Deep Learning algorithms at massively large scale.
Responsibilities:
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Integration of user-facing elements developed by front-end developers with server side logic
- Implementation of security and data protection
- Integration of data storage solutions (may include databases, key-value stores, blob stores, etc.)
- Expert in Python, with knowledge of at least one Python web framework (such as Django, Flask, etc depending on your technology stack)
- Familiarity with some ORM (Object Relational Mapper) libraries
- Able to integrate multiple data sources and databases into one system
- Understanding of the threading limitations of Python, and multi-process architecture
- Good understanding of server-side templating languages (such as Jinja 2, Mako, etc depending on your technology stack)
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
- Understanding of accessibility and security compliance (depending on the specific project)
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Understanding of fundamental design principles behind a scalable application
- Familiarity with event-driven programming in Python
- Understanding of the differences between multiple delivery platforms, such as mobile vs desktop, and optimizing output to match the specific platform
- Able to create database schemas that represent and support business processes
- Strong unit test and debugging skills
- Basic knowledge of machine learning algorithms and libraries like Keras, TensorFlow, and scikit-learn (a minimal Flask + scikit-learn serving sketch follows this list).
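As a minimal sketch tying the web-framework and ML-library requirements together, the example below serves predictions from a toy scikit-learn model behind a Flask endpoint; the route, payload shape, and model are illustrative assumptions.

```python
# Minimal Flask + scikit-learn serving sketch; the model, route, and JSON
# payload shape are illustrative placeholders.
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression
import numpy as np

app = Flask(__name__)

# Train a throwaway model at startup purely for demonstration.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    value = float(request.json["value"])          # expects {"value": <number>}
    pred = int(model.predict([[value]])[0])
    return jsonify({"prediction": pred})

if __name__ == "__main__":
    app.run(debug=True)
```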
at Artivatic
at Lincode Labs India Pvt Ltd
This position is not for freshers. We are looking for candidates with at least 4 years of AI/ML/CV experience in the industry.