50+ Remote Machine Learning (ML) Jobs in India
Client based in Bangalore.
Data Scientist - Healthcare AI
Location: Bangalore, India
Experience: 4+ years
Skills Required: Radiology, visual/image data, text, classical ML models, multi-modal LLMs
Responsibilities:
· LLM Development and Fine-tuning: Fine-tune and adapt large language models (e.g., GPT, Llama2, Mistral) for specific healthcare applications, such as text classification, named entity recognition, and question answering (an illustrative fine-tuning sketch follows this listing).
· Data Engineering: Collaborate with data engineers to build robust data pipelines for large-scale text datasets used in LLM training and fine-tuning.
· Model Evaluation and Optimization: Develop rigorous experimentation frameworks to assess model performance, identify areas for improvement, and inform model selection.
· Production Deployment: Work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
· Predictive Model Design: Leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models).
· Cross-functional Collaboration: Partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions.
· Knowledge Sharing: Mentor junior team members and stay up-to-date with the latest advancements in machine learning and LLMs.
Qualifications:
· Doctoral or master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
· 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models.
· 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
· Experience working with cloud-based platforms (AWS, GCP, Azure).
Preferred Qualifications:
o Experience working in the healthcare domain, particularly oncology.
o Publications in relevant scientific journals or conferences.
o Degree from a prestigious university or research institution.
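For context on the fine-tuning responsibility above, the following is a minimal, illustrative sketch of adapting a small transformer for text classification with Hugging Face Transformers (one of the libraries named in the qualifications). The checkpoint, public dataset, and label count are placeholder assumptions for demonstration, not details from the listing.

```python
# Illustrative sketch only: fine-tune a small transformer for text classification.
# The checkpoint ("distilbert-base-uncased") and dataset ("imdb") are placeholder
# assumptions; a healthcare task would substitute its own labeled corpus.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")  # stand-in labeled text corpus
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="clf-out", num_train_epochs=1,
                         per_device_train_batch_size=8)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
print(trainer.evaluate())  # loss (and any configured metrics) on the eval split
```

In practice the base model, tokenisation, and evaluation metrics would be chosen for the specific clinical task, and named entity recognition or question answering would use the corresponding AutoModelFor* classes instead.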
We are seeking a motivated AI Engineer who is passionate about exploring the future of artificial intelligence and eager to transition into the IT field. This role is ideal for individuals with a career gap who have undergone professional training in AI and are looking to apply their skills in a dynamic environment.
Job Responsibilities
- Develop AI Models: Design and implement machine learning algorithms and deep learning models to extract insights from large datasets.
- Collaborate with Teams: Work closely with cross-functional teams to identify business needs and integrate AI solutions that enhance operational efficiency.
- Research and Experimentation: Conduct experiments to improve AI system performance and stay updated with the latest advancements in AI technologies.
- Documentation: Maintain comprehensive documentation for AI models, algorithms, and processes.
Qualifications
- Education: A bachelor’s degree in Computer Science, Engineering, or a related field is preferred.
- Training: Completion of professional training programs in AI, machine learning, or data science.
- Programming Skills: Proficiency in programming languages such as Python or R, with experience in data processing techniques.
- Analytical Skills: Strong problem-solving abilities and a keen analytical mindset.
Desired Attributes
- A genuine interest in the evolving landscape of AI technologies.
- Willingness to learn and adapt in a fast-paced environment.
- Excellent communication skills for effective collaboration within teams.
This position offers a unique opportunity for those looking to pivot their careers into IT while contributing to innovative AI projects. If you are ready to embrace this challenge, we encourage you to apply!
Thirumoolar software is seeking talented AI researchers to join our cutting-edge team and help drive innovation in artificial intelligence. As an AI researcher, you will be at the forefront of developing intelligent systems that can solve complex problems and uncover valuable insights from data.
Responsibilities:
Research and Development: Conduct research in AI areas relevant to the company's goals, such as machine learning, natural language processing, computer vision, or recommendation systems. Explore new algorithms and methodologies to solve complex problems.
Algorithm Design and Implementation: Design and implement AI algorithms and models, considering factors such as performance, scalability, and computational efficiency. Use programming languages like Python, Java, or C++ to develop prototype solutions.
Data Analysis: Analyze large datasets to extract meaningful insights and patterns. Preprocess data and engineer features to prepare it for training AI models. Apply statistical methods and machine learning techniques to derive actionable insights.
Experimentation and Evaluation: Design experiments to evaluate the performance of AI algorithms and models. Conduct thorough evaluations and analyze results to identify strengths, weaknesses, and areas for improvement. Iterate on algorithms based on empirical findings.
Collaboration and Communication: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to integrate AI solutions into our products and services. Communicate research findings, technical concepts, and project updates effectively to stakeholders.
Preferred Location: Chennai
Job Description: Product Manager for GenAI Applications on Data Products
About the Company: We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications.
Role Overview: As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and ability to translate them into effective AI applications will be crucial.
Key Responsibilities:
- Lead the end-to-end product lifecycle from ideation to launch for GenAI applications.
- Collaborate with engineering and data science teams to design, develop, and deploy AI solutions.
- Conduct market research and gather user feedback to identify opportunities for new product features and improvements.
- Develop detailed product requirements, roadmaps, and user stories to guide development efforts.
- Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements.
- Drive the product vision and strategy, aligning it with company goals and market demands.
- Monitor and analyze product performance, leveraging data to make informed decisions and optimizations.
- Coordinate with marketing and sales teams to create go-to-market strategies and support product launches.
- Foster a culture of innovation and continuous improvement within the product development team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.
- 3-5 years of experience in product management, specifically in building AI applications.
- Proven track record of developing and launching AI-driven products from scratch.
- Experience working with data application layers and understanding data architecture.
- Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions.
- Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong analytical and problem-solving skills, with a data-driven approach to decision making.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Passion for AI and a deep understanding of the latest trends and technologies in the field.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and products.
- Collaborative and innovative work environment.
- Professional development opportunities and career growth.
If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.
Client based in Bangalore.
Data Scientist with LLM and Healthcare Expertise
Keywords: Data Scientist, LLM, Radiology, Healthcare, Machine Learning, Deep Learning, AI, Python, TensorFlow, PyTorch, Scikit-learn, Data Analysis, Medical Imaging, Clinical Data, HIPAA, FDA.
Responsibilities:
· Develop and deploy advanced machine learning models, particularly focusing on Large Language Models (LLMs) and their application in the healthcare domain.
· Leverage your expertise in radiology, visual images, and text data to extract meaningful insights and drive data-driven decision-making.
· Collaborate with cross-functional teams to identify and address complex healthcare challenges.
· Conduct research and explore new techniques to enhance the performance and efficiency of our AI models.
· Stay up-to-date with the latest advancements in machine learning and healthcare technology.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
· 6+ years of hands-on experience in data science and machine learning.
· Strong proficiency in Python and popular data science libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
· Deep understanding of LLM architectures, training methodologies, and applications.
· Expertise in working with radiology images, visual data, and text data.
· Experience in the healthcare domain, particularly in areas such as medical imaging, clinical data analysis, or patient outcomes.
· Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
· PhD in Computer Science, Data Science, or a related field.
· Experience with cloud platforms (e.g., AWS, GCP, Azure).
· Knowledge of healthcare standards and regulations (e.g., HIPAA, FDA).
· Publications in relevant academic journals or conferences.
Product-based company located in Bangalore.
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience in other transformer-based NLP models such as BERT, etc. will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models); a minimal survival-model sketch follows this listing
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
Qualifications Required
• Doctoral or master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
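As a companion to the survival-modelling bullet above, here is a minimal, illustrative sketch of a Cox proportional hazards model using the lifelines library. The library choice, covariates, and synthetic data are assumptions made purely for demonstration; they are not clinical data, and lifelines is not named in the listing.

```python
# Illustrative sketch only: a Cox proportional hazards survival model with lifelines.
# The covariates and synthetic data below are placeholder assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "biomarker": rng.normal(1.0, 0.3, n),
    "duration_months": rng.exponential(24, n),  # observed follow-up time
    "event": rng.integers(0, 2, n),             # 1 = event observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration_months", event_col="event")
cph.print_summary()                     # hazard ratios and confidence intervals
print(cph.predict_median(df.head()))    # median survival estimates for a few rows
```

A production oncology model would add proper feature selection, checks of the proportional-hazards assumption, and clinically meaningful validation.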
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of Generics, OOPs concepts & Design Patterns
- Solid engineering and coding skills. Ability to write high-performance production quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models
- Experience processing time series data with techniques such as decomposition, clustering, and outlier detection & treatment (a minimal decomposition-and-forecast sketch follows the Good To Have list below)
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
- Experience working with modern data architectures, including data lakes and data warehouses, leveraging one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, and dbt
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
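To illustrate the time-series points in the must-have list, here is a minimal sketch of classical decomposition followed by a simple Holt-Winters forecast using statsmodels. The synthetic monthly series and the choice of statsmodels are demonstration assumptions; the role itself spans statistical, ML, DL, and foundation models.

```python
# Illustrative sketch only: decompose a seasonal series and produce a simple
# statistical forecast. The synthetic monthly series is a placeholder assumption.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(1)
y = pd.Series(100 + 0.5 * np.arange(48)                      # upward trend
              + 10 * np.sin(2 * np.pi * np.arange(48) / 12)  # yearly seasonality
              + rng.normal(0, 2, 48), index=idx)

# Split the series into trend / seasonal / residual components.
components = seasonal_decompose(y, model="additive", period=12)
print(components.seasonal.head(12))

# Fit Holt-Winters exponential smoothing and forecast the next 6 months.
model = ExponentialSmoothing(y, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))
```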
MLSecured (https://www.mlsecured.com/), an AI GRC (Governance, Risk, and Compliance) company, is hiring a Backend Software Engineer! 🚀
Are you a passionate Backend Software Engineer with experience in Machine Learning and Open Source projects? Do you have a strong foundation in Python and Object-Oriented Programming (OOP) concepts? Join us at MLSecured.com and be part of our mission to solve AI Security & Compliance challenges! 🔐🤖
What We’re Looking For:
👨‍💻 1-2 years of professional experience in Backend Development and contributions to Open Source projects
🐍 Proficiency in Python and OOP concepts
🤝 Experience with Machine Learning (NLP, GenAI)
🤝 Experience with CI/CD and cloud infra is a plus
💡 A passion for solving complex AI Security & Compliance problems
Why Join Us?
At MLSecured.com, you'll work with a talented team dedicated to pioneering AI security solutions. Be a part of our journey to make AI systems secure and compliant for everyone. 🌟
Perks of Joining a Fast-Paced Startup:
🚀 Rapid career growth and learning opportunities
🌍 Work on cutting-edge AI technologies and innovative projects
🤝 Collaborative and dynamic work environment
🎉 Flexible working hours and full remote work options
📈 Significant impact on the company's direction and success
About Springworks
At Springworks, we're on a mission to revolutionize the world of People Operations. With our innovative tools and products, we've already empowered 500,000+ employees across 15,000+ organizations and 60+ countries in just a few short years.
But what sets us apart? Let us introduce you to our exciting product stack:
- SpringVerify: Our B2B background verification platform
- EngageWith: Spark vibrant cultures! Our recognition platform adds magic to work.
- Trivia: Fun remote team-building! Real-time games for strong bonds.
- SpringRole: Future-proof profiles! Blockchain-backed skill showcase.
- Albus: AI-powered workplace search and knowledge bot for companies
Join us at Springworks and be part of the remote work revolution. Get ready to work, play, and thrive in an environment that's anything but ordinary!
Role Overview
This role is for our Albus team. As an SDE 2 at Springworks, you will be responsible for designing, developing, and maintaining robust, scalable, and efficient web applications. You will work closely with cross-functional teams, turning innovative ideas into tangible, user-friendly products. The ideal candidate has a strong foundation in both front-end and back-end technologies, with a focus on Python, Node.js, and ReactJS. Experience in Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) will be a significant advantage.
Responsibilities:
- Collaborate with product management and design teams to understand user requirements and translate them into technical specifications.
- Develop and maintain server-side logic using Node.js and Python.
- Design and implement user interfaces using React.js with focus on user experience.
- Build reusable and efficient code for future use.
- Implement security and data protection measures.
- Collaborate with other team members and stakeholders to ensure seamless integration of front-end and back-end components.
- Troubleshoot and debug complex issues, identifying root causes and implementing effective solutions.
- Stay up-to-date with the latest industry trends, technologies, and best practices to drive innovation within the team.
- Participate in architectural discussions and contribute to technical decision-making processes.
Goals (not limited to):
1 month into the job:
- Become familiar with the company's products, codebase, development tools, and coding standards. Aim to understand the existing architecture and code structure.
- Ensure that your development environment is fully set up and configured, and you are comfortable with the team's workflow and tools.
- Start contributing to the development process by taking on smaller tasks or bug fixes. Ensure that your code is well-documented and follows the team's coding conventions.
- Begin collaborating effectively with team members, attending daily stand-up meetings, and actively participating in discussions and code reviews.
- Understand the company's culture, values, and long-term vision to align your work with the company's goals.
3 months into the job:
- Be able to independently design, develop, and deliver small to medium-sized features or improvements to the product.
- Demonstrate consistent improvement in writing clean, efficient, and maintainable code. Receive positive feedback on code reviews.
- Continue to actively participate in team meetings, offer suggestions for process improvements, and collaborate effectively with colleagues.
- Start assisting junior team members or interns by sharing knowledge and providing mentorship.
- Seek feedback from colleagues and managers to identify areas for improvement and implement necessary changes.
6 months into the job:
- Take ownership of significant features or projects, from conception to deployment, demonstrating leadership in technical decision-making.
- Identify areas of the codebase that can benefit from refactoring or performance optimizations and work on these improvements.
- Propose and implement process improvements that enhance the team's efficiency and productivity.
- Continue to expand your technical skill set, potentially by exploring new technologies or frameworks that align with the company's needs.
- Strengthen your collaboration with other departments, such as product management or design, to ensure alignment between development and business objectives.
Requirements
- Minimum 4 years of experience working with Python along with machine learning frameworks and NLP technologies.
- Strong understanding of microservices and messaging systems like SQS.
- Experience in designing and maintaining NoSQL databases (MongoDB).
- Familiarity with RESTful API design and implementation.
- Knowledge of version control systems (e.g., Git).
- Ability to work collaboratively in a team environment.
- Excellent problem-solving and communication skills, and a passion for learning. Essentially having a builder mindset is a plus.
- Proven ability to work on multiple projects simultaneously.
Nice to Have:
- Experience with containerization (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of agile development methodologies.
- Contributions to open-source projects or a strong GitHub profile.
- Previous experience working in a startup or fast-paced environment.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
About Company / Benefits
- Work from anywhere effortlessly with our remote setup perk: Rs. 50,000 for furniture and headphones, plus an annual addition of Rs. 5,000.
- We care about your well-being! Our health scheme covers not only your physical health but also your mental and social well-being. We've got you covered from head to toe!
- Say hello to endless possibilities with our learning and growth opportunities. We're here to fuel your curiosity and help you reach new heights.
- Take a breather and enjoy 30 annual paid leave days. It's time to relax, recharge, and make the most of your time off.
- Let's celebrate! We love company outings and celebrations that bring the team together for unforgettable moments and good vibes.
- We'll reimburse your workation trips, turning your travel dreams into reality.
- We've got your lifestyle covered. Treat yourself with our lifestyle allowance, which can be used for food, OTT, health/fitness, and more. Plus, we'll reimburse your internet expenses so you can stay connected wherever you go!
Join our remote team and experience the freedom and flexibility of asynchronous communication. Apply now!
Know more about Springworks:
- Life at Springworks: https://www.springworks.in/blog/category/life-at-springworks/
- Glassdoor Reviews: https://www.glassdoor.co.in/Overview/Working-at-Springworks-EI_IE1013270.11,22.htm
- More about Asynchronous Communication: https://www.springworks.in/blog/asynchronous-communication-remote-work/
at TensorIoT Software Services Private Limited, India
About TensorIoT
- AWS Advanced Consulting Partner (for ML and GenAI solutions)
- Pioneers in IoT and Generative AI products.
- Committed to diversity and inclusion in our teams.
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal-opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
Job Description
At the TensorIoT India team, we look forward to bringing on board senior Machine Learning Engineers / Data Scientists. In this section, we briefly describe the work role and the minimum and preferred requirements to qualify for the first round of the selection process.
What are the kinds of tasks Data Scientists do at TensorIoT?
As a Data Scientist, the kinds of tasks revolve around the data that we have and the business objectives of the client. The tasks generally involve: studying, understanding, and analyzing datasets; feature engineering; proposing solutions and evaluating them scientifically; communicating with the client; implementing ETL pipelines with database/data lake tools; and conducting and presenting scientific research/experiments within the team and to the client.
Minimum Requirements:
- Master's + 6 years of work experience in Machine Learning Engineering, OR B.Tech (Computer Science or related) + 8 years of work experience in Machine Learning Engineering, along with 3 years of cloud experience.
- Experience working with Generative AI (LLM), Prompt Engineering, Fine Tuning of LLMs.
- Hands-on experience in MLOps (model deployment, maintenance)
- Hands-on experience with Docker.
- Clear concepts of the following:
  - Supervised Learning, Unsupervised Learning, Reinforcement Learning
  - Statistical Modelling, Deep Learning
  - Interpretable Machine Learning
- Well-rounded exposure to Computer Vision, Natural Language Processing, and Time-Series Analysis.
- Scientific & Analytical mindset, proactive learning, adaptability to changes.
- Strong interpersonal and language skills in English, to communicate within the team and with the clients.
Preferred Qualifications:
- PhD in the domain of Data Science / Machine Learning
- M.Sc | M.Tech in the domain of Computer Science / Machine Learning
- Some experience in creating cloud-native technologies, and microservices design.
- Published scientific papers in the relevant domain of work.
CV Tips:
Your CV is an integral part of your application process. We would appreciate it if the CV prioritizes the following:
- Focus:
- More focus on technical skills relevant to the job description.
- Less or no focus on your roles and responsibilities as a manager, team lead, etc.
- Less or no focus on the design aspect of the document.
- Regarding the projects you completed in your previous companies,
- Mention the problem statement very briefly.
- Your role and responsibilities in that project.
- Technologies & tools used in the project.
- Always good to mention (if relevant):
- Scientific papers published, Master Thesis, Bachelor Thesis.
- Github link, relevant blog articles.
- Link to LinkedIn profile.
- Mention skills that are relevant to the job description and that you could demonstrate during the interview / tasks in the selection process.
We appreciate your interest in the company and look forward to your application.
ML Engineer
HackerPulse is a new and growing company. We help software engineers showcase their skills using AI powered profiles. As a Machine Learning Engineer, you will have the opportunity to contribute to the development and implementation of advanced Machine Learning (ML) and Natural Language Processing (NLP) solutions. You will play a crucial role in taking the innovative work done by our research team and turning it into practical solutions for production deployment. By applying to this job you agree to receive communication from us.
*Make sure to fill out the link below*
To speed up the hiring process, kindly complete the following link: https://airtable.com/appcWHN5MIs3DJEj9/shriREagoEMhlfw84
Responsibilities:
- Contribute to the development of software and solutions, emphasizing ML/NLP as a key component, to productize research goals and deployable services.
- Collaborate closely with the frontend team and research team to integrate machine learning models into deployable services.
- Utilize and develop state-of-the-art algorithms and models for NLP/ML, ensuring they align with the product and research objectives.
- Perform thorough analysis to improve existing models, ensuring their efficiency and effectiveness in real-world applications.
- Engage in data engineering tasks to clean, validate, and preprocess data for uniformity and accuracy, supporting the development of robust ML models.
- Stay abreast of new developments in research and engineering in NLP and related fields, incorporating relevant advancements into the product development process.
- Actively participate in agile development methodologies within dynamic research and engineering teams, adapting to evolving project requirements.
- Collaborate effectively within cross-functional teams, fostering open communication and cooperation between research, development, and frontend teams.
- Actively contribute to building an open, transparent, and collaborative engineering culture within the organization.
- Demonstrate strong software engineering skills to ensure the reliability, scalability, and maintainability of deployable ML services.
- Take ownership of the end-to-end deployment process, including the deployment of ML models to production environments.
- Work on continuous improvement of deployment processes and contribute to building a seamless pipeline for deploying and monitoring ML models in real-world applications.
Qualifications:
- Degree in Computer Science or related discipline or equivalent practical experience, with a strong emphasis on machine learning and natural language processing.
- Proven experience and in-depth knowledge of ML techniques, with a focus on implementing deep-learning approaches for NLP tasks in the context of productizing research goals.
- Ability to apply engineering best practices to make architectural and design decisions aligned with functionalities, user experience, performance, reliability, and scalability in the development of deployable ML services.
- Substantial experience in software development using Python, Java, and/or C or C++, with a particular emphasis on integrating machine learning models into production-ready software solutions.
- Demonstrated problem-solving skills, showcasing the ability to address complex situations effectively, especially in the context of improving models, data engineering, and deployment processes.
- Strong interpersonal and communication skills, essential for effective collaboration within cross-functional teams consisting of research, development, and frontend teams.
- Proven time management skills to handle dynamic and agile development situations, ensuring timely delivery of solutions in a fast-paced environment.
- Self-motivated contributor who frequently takes initiative to enhance the codebase and share best practices, contributing to the development of an open, transparent, and collaborative engineering culture.
at Blue Hex Software Private Limited
In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.
Role and Responsibilities:
- Develop and maintain Python-based software applications.
- Design and work with databases using SQL.
- Use Django, Streamlit, and front-end frameworks like Node.js and Svelte for web development.
- Create interactive data visualizations with charting libraries.
- Collaborate on scalable architecture and experimental tech. - Work with AI/ML frameworks and data analytics.
- Utilize Git, DevOps basics, and JIRA for project management. Skills and Qualifications:
- Strong Python programming
skills.
- Proficiency in OOP and SQL.
- Experience with Django, Streamlit, Node.js, and Svelte.
- Familiarity with charting libraries.
- Knowledge of AI/ML frameworks.
- Basic Git and DevOps understanding.
- Effective communication and teamwork.
Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.
Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS
Job Title: AI/ML Engineer
Experience: 5+ Years
Location: Remote
Responsibilities
Algorithm Development:
• Design and develop cutting-edge algorithms for solving complex business problems.
• Collaborate with cross-functional teams to understand business requirements and translate them into effective AI/ML solutions.
Content Customization:
• Implement content customization strategies to enhance user experience and engagement.
• Work closely with stakeholders to tailor AI/ML models for specific business needs.
Data Collection and Preprocessing:
• Develop robust data collection strategies to ensure the availability of high-quality datasets.
• Implement preprocessing techniques to clean and prepare data for model training.
Model Training and Evaluation:
• Utilize machine learning frameworks such as TensorFlow, Pytorch, and Scikit Learn for model training.
• Conduct rigorous evaluation of models to ensure accuracy and reliability.
Monitoring and Optimization:
• Implement monitoring systems to track the performance of deployed models.
• Continuously optimize models for better efficiency and accuracy.
Data Analytics and Reporting:
• Leverage statistical modelling techniques to derive meaningful insights from data.
• Generate reports and dashboards to communicate findings to stakeholders.
Documentation:
• Create comprehensive documentation for algorithms, models, and implementation details.
• Provide documentation for training, deployment, and maintenance procedures.
Skills:
- 6+ years of experience in AI/ML engineering.
- Proficiency in Python and machine learning frameworks (TensorFlow, Pytorch, Scikit Learn).
- Experience in Natural Language Processing (NLP) and Computer Vision.
- Expertise in Generative AI techniques.
- Familiarity with cloud platforms such as Azure and AWS.
- Strong background in statistical modelling and content customization.
- Excellent problem-solving skills and ability to work independently.
- Strong communication and collaboration skills
Experience: 1- 5 Years
Job Location: WFH
No. of Position: Multiple
Qualifications: Ph.D. (must have)
Work Timings: 1:30 PM IST to 10:30 PM IST
Functional Area: Data Science
NextGen Invent is currently searching for a Data Scientist. This role will report directly to the VP, Data Science, in the Data Science Practice. The person will work on data science use-cases for the enterprise and must have deep expertise in supervised and unsupervised machine learning, modeling, and algorithms, with a strong focus on delivering use-cases and solutions at speed and scale to solve business problems.
Job Responsibilities:
- Leverage AI/ML modeling and algorithms to deliver on use cases
- Build modeling solutions at speed and scale to solve business problems
- Develop data science solutions that can be tested and deployed in Agile delivery model
- Implement and scale-up high-availability models and algorithms for various business and corporate functions
- Investigate and create experimental prototypes that work on specific domains and verticals
- Analyze large, complex data sets to reveal underlying patterns, and trends
- Support and enhance existing models to ensure better performance
- Set up and conduct large-scale experiments to test hypotheses and delivery of models
Skills, Knowledge, Experience:
- Must have Ph.D. in an analytical or technical field (e.g. applied mathematics, computer science)
- Strong knowledge of statistical and machine learning methods
- Hands on experience on building models at speed and scale
- Ability to work in a collaborative, transparent style with cross-functional stakeholders across the organization to lead and deliver results
- Strong skills in oral and written communication
- Ability to lead a high-functioning team and develop and train people
- Must have programming experience in SQL, Python and R
- Experience conceiving, implementing and continually improving machine learning projects
- Strong familiarity with higher level trends in artificial intelligence and open-source platforms
- Experience working with AWS, Azure, or similar cloud platform
- Familiarity with visualization techniques and software
- Healthcare experience is a plus
- Experience with Kafka, chatbots, and blockchain is a plus.
Responsibilities
> Selecting features, building and optimizing classifiers using machine learning techniques
> Data mining using state-of-the-art methods
> Extending the company's data with third-party sources of information when needed
> Enhancing data collection procedures to include information that is relevant for building analytic systems
> Processing, cleansing, and verifying the integrity of data used for analysis
> Doing ad-hoc analysis and presenting results in a clear manner
> Creating automated anomaly detection systems and constantly tracking their performance (a minimal detection sketch follows the Key Skills list below)
Key Skills
> Hands-on experience with analysis tools like R and advanced Python
> Must-have knowledge of statistical techniques and machine learning algorithms
> Artificial Intelligence
> Understanding of text analysis and Natural Language Processing (NLP)
> Knowledge of Google Cloud Platform
> Advanced Excel and PowerPoint skills
> Advanced communication (written and oral) and strong interpersonal skills
> Ability to work cross-culturally
> Good to have: Deep Learning
> VBA and visualization tools like Tableau, Power BI, Qlik Sense, and QlikView will be an added advantage
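As a concrete reference for the anomaly-detection responsibility above, here is a minimal, illustrative baseline using scikit-learn's IsolationForest. The synthetic metrics and the library choice are demonstration assumptions, not requirements from the listing.

```python
# Illustrative sketch only: an unsupervised anomaly-detection baseline with
# IsolationForest. The synthetic "metrics" data is a placeholder assumption.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))     # typical behaviour
outliers = rng.uniform(low=-6, high=6, size=(10, 3))       # injected anomalies
X = pd.DataFrame(np.vstack([normal, outliers]), columns=["m1", "m2", "m3"])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
X["anomaly"] = detector.predict(X[["m1", "m2", "m3"]])     # -1 = anomaly, 1 = normal
print(X[X["anomaly"] == -1].head())
```

In a real system the detector would be retrained on a schedule and its precision tracked over time, in line with the "constantly tracking" point above.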
at Mobile Programming LLC
Job Description: Salesforce Developer
Company: Mobile Programming LLC
Location: Remote
Shift Time: 8:30 PM to 5:30 AM
Experience: 5 to 7 Years
Skills: AI/ML tools, Python, R, or Java, Salesforce
Budget: 16 LPA
Mobile Programming LLC is seeking a skilled Salesforce Developer to join our team. As a Salesforce Developer, you will be responsible for designing, developing, and implementing Salesforce solutions that meet our clients' business needs. This is a remote position, and we are specifically looking for immediate joiners who can work during the shift time of 8:30 PM to 5:30 AM.
Responsibilities:
- Design, develop, and implement customized Salesforce solutions using Apex, Visualforce, and Lightning components.
- Collaborate with cross-functional teams, including business analysts and project managers, to gather requirements and translate them into technical designs.
- Customize and configure Salesforce using declarative tools like Process Builder, Workflows, and Flow.
- Integrate Salesforce with external systems using REST and SOAP APIs.
- Perform data migrations and data cleansing activities in Salesforce.
- Create and maintain technical documentation, including design specifications and test cases.
- Conduct unit testing and support system and user acceptance testing.
- Troubleshoot and resolve issues related to Salesforce configuration, customization, and integration.
Requirements:
- 5 to 7 years of experience as a Salesforce Developer.
- Proficiency in AI/ML tools and languages such as Python, R, or Java is highly desired.
- Strong knowledge of Salesforce development technologies, including Apex, Visualforce, and Lightning components.
- Experience with Salesforce configuration using declarative tools.
- Familiarity with Salesforce integration using REST and SOAP APIs.
- Solid understanding of object-oriented programming concepts.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work independently and deliver high-quality results within tight deadlines.
- Salesforce certifications (e.g., Salesforce Platform Developer I and II) are a plus.
The budget for this position is 16 LPA (Lakh Per Annum).
If you are a talented Salesforce Developer with a passion for innovation and seeking a challenging opportunity, we would love to hear from you. Please note that only immediate joiners will be considered for this position.
Fintech leader, building a product on data science
Data Scientist
We are looking for an experienced Data Scientist to join our engineering team and help us enhance our mobile application with data. In this role, we're looking for people who are passionate about developing ML/AI in various domains to solve enterprise problems. We are keen on hiring someone who loves working in a fast-paced start-up environment and is looking to solve some challenging engineering problems.
As one of the earliest members in engineering, you will have the flexibility to design the models and architecture from the ground up. As with any early-stage start-up, we expect you to be comfortable wearing various hats and to be a proactive contributor in building something truly remarkable.
Responsibilities
- Research, develop, and maintain machine learning and statistical models for business requirements
- Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem
- Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models
- Implement models to uncover patterns and predictions, creating business value and innovation
- Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
- Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data
Qualifications
- 3+ years of experience solving complex business problems using machine learning
- Fluency in Python is a must, along with experience in NLP and models such as BERT
- Strong analytical and critical thinking skills
- Experience in building production-quality models using state-of-the-art technologies
- Familiarity with databases like MySQL, Oracle, SQL Server, NoSQL, etc. is desirable
- Ability to collaborate on projects and work independently when required
- Previous experience in the Fintech/payments domain is a bonus
- You should have a Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or another quantitative field from a top-tier institute
world’s first real-time opportunity engine. We constantly cr
● Statistics - Always makes data-driven decisions using tools from statistics, such as: populations and sampling, normal distribution and the central limit theorem, mean, median, mode, variance, standard deviation, covariance, correlation, p-value, expected value, conditional probability and Bayes's theorem
● Machine Learning
○ Solid grasp of attention mechanism, transformers, convolutions, optimisers, loss functions, LSTMs, forget gates, activation functions.
○ Can implement all of these from scratch in PyTorch, TensorFlow or NumPy (see the NumPy attention sketch after these lists).
○ Comfortable defining own model architectures, custom layers and loss functions.
● Modelling
○ Comfortable with using all the major ML frameworks (PyTorch, TensorFlow, sklearn, etc.) and NLP models (not essential). Able to pick the right library and framework for the job.
○ Capable of turning research and papers into operational execution and functionality delivery.
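Here is a minimal, illustrative NumPy implementation of scaled dot-product attention, the kind of from-scratch exercise the checklist above asks for. Shapes and inputs are arbitrary assumptions for demonstration.

```python
# Illustrative sketch only: scaled dot-product attention from scratch in NumPy.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # query-key similarities
    weights = softmax(scores, axis=-1)        # attention distribution per query
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 16))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)                  # (4, 16) (4, 6)
```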
We request you to complete the assignment as early as possible so that we can start the interview process.
https://docs.google.com/forms/d/e/1FAIpQLSczf-t9l-CHqI8BpiGNndxTtzZz7fhzKsvxyjD-w9Fe_QWcMw/viewform?vc=0&c=0&w=1&flr=0
We are looking for a full-stack developer with 2-4 years of experience in Python and Vue.js, along with ML/AI skills. As a developer, you'll gain valuable management experience, delegating responsibilities to the rest of your team and reviewing the work of junior developers.
Requirement:
- Bachelor’s Degree in Computer Science (good to have).
- 2+ years’ experience in web and software development
- Ability to work independently and multi-task effectively
- Demonstrated understanding of projects from the perspective of both client and business (preferably from the cybersecurity domain).
Technical Skills required:
- Node.js
- Vue.js
- ML/AI
- Python
- Agile Development
Roles & Responsibilities:
- Develops software solutions by studying information needs, conferring with users, studying systems flow, data usage, and work processes; investigating problem areas; and following the software development lifecycle.
- Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Prepares and installs solutions by determining and designing system specifications, standards, and programming.
- Improves operations by conducting systems analysis and recommending changes in policies and procedures.
- Meeting deadlines.
- Accomplishes engineering and organization mission by completing related results as needed.
- Troubleshoot, debug and upgrade existing software
Job Type: Full-time
Salary: OPEN
Benefits:
- Flexible schedule
- Paid sick time
This role is filling fast.
Role & responsibilities:
- Developing ETL pipelines for data replication
- Analyze, query and manipulate data according to defined business rules and procedures
- Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
- Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
- Resolve internal and external data exceptions in a timely and accurate manner
- Improve multi-environment data flow quality, security, and performance
Skills & qualifications:
- Must have experience with:
  - virtualization, containers, and orchestration (Docker, Kubernetes)
  - creating log ingestion pipelines (Apache Beam) for both batch and streaming processing (Pub/Sub, Kafka)
  - workflow orchestration tools (Argo, Airflow); a minimal DAG sketch follows this skills section
  - supporting machine learning models in production
- Have a desire to continually keep up with advancements in data engineering practices
- Strong Python programming and exploratory data analysis skills
- Ability to work independently and with team members from different backgrounds
- At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
- 3+ years of work in software/data engineering.
- Superior interpersonal skills, independent judgment, and complex problem-solving skills
- Global orientation, experience working across countries, regions and time zones
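To illustrate the workflow-orchestration requirement above, here is a minimal, hypothetical Airflow DAG (Airflow 2.4+ assumed) wiring an extract task into a load task. The DAG id, schedule, and task bodies are placeholder assumptions, not part of the listing.

```python
# Illustrative sketch only: a minimal Airflow DAG with two dependent tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling records from the source system")      # placeholder task body

def load():
    print("writing validated records to the warehouse")  # placeholder task body

with DAG(
    dag_id="example_replication_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task                # run extract before load
```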
NLP ENGINEER at KARZA TECHNOLOGIES
● Work with the engineering team to strategize and execute the development of data products
● Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
● Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
● Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
● Analyze data for trends and patterns, and interpret data with a clear objective in mind
● Implement analytical models into production by collaborating with software developers and machine learning engineers
● Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems
What you need to work with us:
● Good understanding of data structures, algorithms, and the first principles of mathematics.
● Proficient in Python and using packages like NLTK, Numpy, Pandas
● Should have worked on deep learning frameworks (like Tensorflow, Keras, PyTorch, etc)
● Hands-on experience in Natural Language Processing, Sequence, and RNN Based models
● Mathematical intuition of ML and DL algorithms
● Should be able to perform thorough model evaluation by creating hypotheses on the basis of statistical analyses
● Should be comfortable in going through open-source code and reading research papers.
● Should be curious or thoughtful enough to answer the “WHYs” pertaining to the most cherished observations, thumb rules, and ideas across the data science community.
Qualification and Experience Required:
● 1 - 4 years of relevant experience
● Bachelor's / Master's degree in Computer Science / Computer Engineering / Information Technology
- A Data and MLOps Engineering lead who has a good understanding of modern data engineering frameworks, with a focus on Microsoft Azure and Azure Machine Learning, its development lifecycle, and DevOps.
- Aims to solve the problems encountered when turning data transformations and data science code into production Machine Learning systems. Some of these challenges include:
- ML orchestration - how can I automate my ML workflows across multiple environments
- Scalability - how can I take advantage of the huge computational power available in the cloud?
- Serving - how can I make my ML models available to make predictions reliably when needed?
- Monitoring - how can I effectively monitor my ML system in production to ensure reliability? Not just system metrics, but also get insight into how my models are performing over time
- Reuse - how can I promote reuse of the artefacts built, and establish templates and patterns?
The MLOps team works closely with the ML Engineering and DevOps teams. Rather than focusing only on individual use cases, the team specialises in building the platforms and tools that help drive adoption of MLOps across the organisation, and in developing best practices and ways of working to build a state-of-the-art MLOps capability.
A good understanding of AI/Machine Learning and software engineering best practices such as Cloud Engineering, Infrastructure-as-Code, and CI/CD.
Have excellent communication and consulting skills, while delivering innovative AI solutions on Azure.
Responsibilities will include:
- Building state-of-the-art MLOps platforms and tooling to help adoption of MLOps across organization
- Designing cloud ML architectures and provide a roadmap for flexible patterns
- Optimizing solutions for performance and scalability
- Leading and driving the evolving best practices for MLOps
- Helping to showcase expertise and leadership in this field
Tech stack
These are some of the tools and technologies that we use day to day. Key to success will be attitude and aptitude, with a vision to build the next big thing in the AI/ML field.
- Python - including poetry for dependency management, pytest for automated testing and fastapi for building APIs (a minimal model-serving sketch follows this section)
- Microsoft Azure Platform - primarily focused on Databricks, Azure ML
- Containers
- CI/CD – Azure DevOps
- Strong programming skills in Python
- Solid understanding of cloud concepts
- Demonstrable interest in Machine Learning
- Understanding of IaC and CI/CD concepts
- Strong communication and presentation skills.
Remuneration: Best in the industry
Connect: https://www.linkedin.com/in/shweta-gupta-a361511
Outplay is building the future of sales engagement: a solution that helps sales teams personalize at scale while consistently staying on message and on task, through true multi-channel outreach including email, phone, SMS, chat and social media. Outplay is the only tool your sales team will ever need to crush their goals. Funded by Sequoia and headquartered in the US. Sequoia not only led a $2 million seed round in Outplay early this year but also followed with a $7.3 million Series A recently. The team is spread remotely all over the globe.
Perks of being an Outplayer :
• Fully remote job - You can be on the mountains or at the beach, and still work with us. Outplay is a 100% remote company.
• Flexible work hours - We believe mental health is way more important than a 9-5 job.
• Health Insurance - We are a family, and we take care of each other - we provide medical insurance coverage to all employees and their family members. We also provide an additional benefit of doctor consultation along with the insurance plan.
• Annual company retreat - we work hard, and we party harder.
• Best tools - we buy you the best tools of the trade
• Celebrations - No, we never forget your birthday or anniversary (be it work or wedding) and we never leave an opportunity to celebrate milestones and wins.
• Safe space to innovate and experiment
• Steady career growth and job security
About the Role:
We are looking for a Senior Data Scientist to help research, develop and advance the charter of AI at Outplay and push the threshold of conversational intelligence.
Job description :
• Lead AI initiatives that dissects data for creating new feature prototypes and minimum viable products
• Conduct product research in natural language processing, conversation intelligence, and virtual assistant technologies
• Use independent judgment to enhance product by using existing data and building AI/ML models
• Collaborate with teams, provide technical guidance to colleagues and come up with new ideas for rapid prototyping. Convert prototypes into scalable and efficient products.
• Work closely with multiple teams on projects using textual and voice data to build conversational intelligence
• Prototype and demonstrate AI augmented capabilities in the product for customers
• Conduct experiments to assess the precision and recall of language processing modules and study the effect of such experiments on different application areas of sales
• Assist business development teams in the expansion and enhancement of a feature pipeline to support short and long-range growth plans
• Identify new business opportunities and prioritize pursuits of AI for different areas of conversational intelligence
• Build reusable and scalable solutions for use across a varied customer base
• Participate in long range strategic planning activities designed to meet the company’s objectives and revenue goals
Required Skills :
• Bachelor's or Master's in a quantitative field such as Computer Science, Statistics, Mathematics, Operations Research, or a related field, with a focus on applied Machine Learning, AI, NLP, and data-driven statistical analysis & modelling.
• 4+ years of experience applying AI/ML/NLP/Deep Learning/ data-driven statistical analysis & modelling solutions to multiple domains. Experience in the Sales and Marketing domain is a plus.
• Experience in building Natural Language Processing (NLP), Conversational Intelligence, and Virtual Assistants based features.
• Excellent grasp on programming languages like Python. Experience in GoLang would be a plus.
• Proficient in analysis using python packages like Pandas, Plotly, Numpy, Scipy, etc.
• Strong and proven programming skills in machine learning and deep learning with experience in frameworks such as TensorFlow/Keras, Pytorch, Transformers, Spark etc
• Excellent communication skills to explain complex solutions to stakeholders across multiple disciplines.
• Experience in SQL, RDBMS, Data Management and Cloud Computing (AWS and/or Azure) is a plus.
• Extensive experience of training and deploying different Machine Learning models
• Experience in monitoring deployed models to proactively capture data drifts, low performing models, etc.
• Exposure to Deep Learning, Neural Networks or related fields
• Passion for solving AI/ML problems for both textual and voice data.
• Fast learner with great written and verbal communication skills, able to work independently as well as in a team environment
Strong knowledge in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, text mining, etc.
Sound knowledge of querying databases and using statistical computer languages: R, Python, SQL, etc.
Strong understanding of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
Job description:
- Selecting features, building and optimizing classifiers using machine learning techniques
- Mining data as and when required
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance (a minimal sketch follows this list)
- Efficient stakeholder management
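As a purely illustrative aside on the anomaly-detection point above, one common starting point is an unsupervised detector such as scikit-learn's IsolationForest; the metric columns and contamination rate below are hypothetical, not taken from the posting.

    # Sketch of an automated anomaly detector over tabular metrics (illustrative only).
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    FEATURES = ["latency_ms", "error_rate"]  # hypothetical monitored columns

    def fit_detector(history: pd.DataFrame) -> IsolationForest:
        # Fit on historical data that is assumed to be mostly normal.
        return IsolationForest(contamination=0.01, random_state=42).fit(history[FEATURES])

    def flag_anomalies(detector: IsolationForest, current: pd.DataFrame) -> pd.DataFrame:
        # IsolationForest labels anomalies as -1 and normal points as 1.
        return current[detector.predict(current[FEATURES]) == -1]

In practice the "constant tracking" part of the bullet means re-running such a check on a schedule and alerting when the flagged fraction rises.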
Skills and Qualifications
- Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Experience with common data science toolkits.
- Great communication skills
- Experience with data visualisation tools
- Proficiency in using query languages such as SQL
- Good scripting and programming skills
- Data-oriented personality
- B.Tech, M.Tech, B.S., M.S., MBA
Requirement / Desired Skills
Data Scientist -- Data mining skills, SQL, Advanced ML Techniques, NLP (Natural Language Processing)
- Work closely with your business to identify issues and use data to propose solutions for effective decision making
- Build algorithms and design experiments to merge, manage, interrogate and extract data to supply tailored reports to colleagues, customers or the wider organisation.
- Creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc
- Querying databases and using statistical computer languages: R, Python, SQL, etc.
- Visualizing/presenting data through various Dashboards for Data Analysis, Using Python Dash, Flask etc.
- Test data mining models to select the most appropriate ones for use on a project
- Work in a POSIX/UNIX environment to run/deploy applications
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
- Assess the effectiveness of data sources and data-gathering techniques and improve data collection methods
- Horizon scan to stay up to date with the latest technology, techniques and methods
- Coordinate with different functional teams to implement models and monitor outcomes.
- Stay curious and enthusiastic about using algorithms to solve problems and enthuse others to see the benefit of your work.
General Expectations:
- Able to create algorithms to extract information from large data sets
- Strong knowledge of Python, R, Java or another scripting/statistical languages to automate data retrieval, manipulation and analysis.
- Experience with extracting and aggregating data from large data sets using SQL or other tools
- Strong understanding of various NLP and NLU techniques like Named Entity Recognition, Summarization, Topic Modeling, Text Classification, Lemmatization and Stemming (a short spaCy sketch appears after this list).
- Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, etc.
- Experience with Python libraries such as Pandas, NumPy, SciPy, Scikit-Learn
- Experience with Jupyter / Pandas / Numpy to manipulate and analyse data
- Knowledge of Machine Learning techniques and their respective pros and cons
- Strong Knowledge of various Data Science Visualization Tools like Tableau, PowerBI, D3, Plotly, etc.
- Experience using web services: Redshift, AWS, S3, Spark, DigitalOcean, etc.
- Proficiency in using query languages, such as SQL, Spark DataFrame API, etc.
- Hands-on experience in HTML, CSS, Bootstrap, JavaScript, AJAX, jQuery and Prototyping.
- Hands-on experience with C#, JavaScript, .NET
- Experience in understanding and analyzing data using statistical software (e.g., Python, R, KDB+ and other relevant libraries)
- Experienced in building applications that meet enterprise needs – secure, scalable, loosely coupled design
- Strong knowledge of computer science, algorithms, and design patterns
- Strong oral and written communication and other soft skills critical to collaborating and engaging with teams
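To make the NLP/NLU item above concrete, here is a short, hedged sketch of two of the listed techniques (named entity recognition and lemmatization) using spaCy; the sentence is an arbitrary example and assumes the en_core_web_sm model is installed.

    # NER and lemmatization with spaCy (requires: python -m spacy download en_core_web_sm)
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Acme Corp opened a new office in Bangalore in January 2023.")

    # Named Entity Recognition: spans tagged ORG, GPE, DATE, ...
    print([(ent.text, ent.label_) for ent in doc.ents])

    # Lemmatization: each token reduced to its dictionary form ("opened" -> "open")
    print([(token.text, token.lemma_) for token in doc])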
About the job
Our goal
We are reinventing the future of MLOps. The Censius Observability platform enables businesses to gain greater visibility into how their AI makes decisions. We enable explanations of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: build the best ML monitoring tool.)
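As one deliberately simplified illustration of what "continuous monitoring of drift" can mean in practice (and not a description of the Censius platform itself), a feature's live distribution can be compared against its training baseline with a two-sample Kolmogorov–Smirnov test; the data below is synthetic.

    # Simplified drift check: has this feature's distribution shifted since training?
    import numpy as np
    from scipy.stats import ks_2samp

    def feature_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
        # Reject the "same distribution" hypothesis when the p-value is small.
        _, p_value = ks_2samp(baseline, live)
        return p_value < alpha

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, size=5_000)  # feature values seen at training time
    live = rng.normal(0.4, 1.0, size=5_000)      # shifted values seen in production
    print(feature_drifted(baseline, live))        # True: the distribution has moved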
The culture
We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time we recognize that, as an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.
The role:
Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.
On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).
By joining our team, you will get exposure to a swath of modern technologies while building an enterprise-grade ML platform in one of the most promising areas.
Responsibilities
- Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
- Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
- Author high-quality, highly performant, unit-tested code running in a distributed environment using containers.
- Continually evaluate and improve DevOps processes for a cloud-native codebase.
- Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
- Leverage your industry experience to champion engineering best practices within the organization.
Qualifications
Work Experience
- 3+ years of industry experience (2+ years in a senior engineering role), preferably with some exposure to leading remote development teams.
- Proven track record building large-scale, high-throughput, low-latency production systems with at least 3+ years working with customers, architecting solutions, and delivering end-to-end products.
- Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for 3+ years.
- 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
- Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
- (Bonus: worked with big data in data lakes/warehouses).
- (Bonus: built an end-to-end ML pipeline)
Skills
- Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
- Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
- Strong independent contributor as well as a team player.
- Working knowledge of ML and familiarity with concepts of MLOps
Benefits
- Competitive Salary
- Work Remotely
- Health insurance
- Unlimited Time Off
- Support for continual learning (free books and online courses)
- Reimbursement for streaming services (think Netflix)
- Reimbursement for gym or physical activity of your choice
- Flex hours
- Leveling Up Opportunities
You will excel in this role if
- You have a product mindset. You understand, care about, and can relate to our customers.
- You take ownership, collaborate, and follow through to the very end.
- You love solving difficult problems, stand your ground, and get what you want from engineers.
- Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
We are looking for a Senior Developer who has experience in developing complex end-to-end solutions.
● Take responsibility for developing product features
● Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep the platform ahead of market scenarios.
● Design and develop using Node.js/Feather.js, React, AWS ML stack
● Develop and utilize your skills as a mentor and leader. Grow your team’s capacity by mentoring other engineers and interviewing candidates.
Sounds like you?
● Experience: 4+ years of industry experience in a software engineering role, preferably building a SaaS product. You can demonstrate the significant impact that your work has had on the product and/or the team.
● Technically strong: Deep knowledge of a high-level programming language (for example, Javascript, JAVA, Python, etc.) but it doesn’t need to be a language that we use here! Great people are effective and learn what we use quickly (or introduce us to better ways of working)
● Product focused: You take pride in building an elegant and beautiful product
● Problem solver: You excel at understanding and solving complex problems. You have astonishing attention to detail.
● Quality communicator: You can confidently break down tricky topics in writing and in person.
● Surprisingly efficient: You get a lot done quickly, and can translate your skills into new processes that your team will follow.
● Leadership: You’re ready to rapidly grow into a company leader. You will build company culture and help shape our future.
● Motivation and drive: You volunteer for new challenges without waiting to be asked.
6-8 yrs experience
Fully Remote position
Max compensation - 45 LPA per annum (Full in hand)
Key Responsibilities
- Design, implement and maintain software to the demanding standards of a real time, highly concurrent distributed system.
- Working in conjunction with the rest of the development team, you will architect and build highly performant, scalable and extensible external APIs
- Collaborate with customers and internal stakeholders, at all levels, to continuously improve our product in a measured data-driven approach
- Learn quickly, adapt, and invent based on changing company needs and priorities
- Contribute to code reviews, tech talks, innovation drives and patents
Minimum Qualifications
- Excellent problem solving skills
- Bachelors in a computer science or other equivalent field
- Proficiency in deploying production systems using a major programming language like Java, Python, NodeJS or similar
- Excellent command over object oriented design and system design
- Experience building distributed systems and scaling them with high availability
- Ability to exercise autonomy rather than needing detailed direction and proactively get things done
Preferred Qualifications
- Experience in customer facing software development
- Proficiency building unit and performance tests to ensure reliability and scalability
- Experience in Artificial Intelligence, Machine Learning (ML) models, Natural Language Processing or Deep Learning is a plus
- Experience with cloud infrastructure such as AWS, GCP is a plus
Why work with us
- A small collaborative and excited team
- We value autonomy, allowing you to choose the configuration that makes you most productive
- Able to work remotely anywhere in Indian Standard Time
- Continuous learning and up-skill opportunities
- We love ideas, innovation and experiments!
- Competitive salary
About Us :
Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.
As a Senior Machine Learning Engineer, you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.
Responsibilities :
- You will be designing and building systems that help Docsumo process visual data, i.e., PDFs and images of documents.
- You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
- You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
- Should have hands-on experience applying advanced statistical learning techniques to different types of data.
- Should be able to design, build, and work with RESTful web services in JSON and XML formats (Flask preferred; a minimal sketch follows this list).
- Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
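For context on the RESTful services item above, a minimal Flask sketch is shown below; the /extract endpoint, payload shape, and token-count placeholder are illustrative assumptions rather than Docsumo's actual API.

    # Minimal Flask JSON service (illustrative endpoint and payload).
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/extract", methods=["POST"])
    def extract():
        payload = request.get_json(force=True)
        text = payload.get("text", "")
        # Placeholder for a real extraction model; here we only count tokens.
        return jsonify({"num_tokens": len(text.split())})

    if __name__ == "__main__":
        app.run(port=5000)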
Skills / Requirements :
- Minimum 3+ years experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
- Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
- Working with OpenCV, TensorFlow and Keras
- Working with Python: Numpy, Scikit-learn, Matplotlib, Pandas
- Familiarity with Version Control tools such as Git
- Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
- Must be self-motivated, flexible, collaborative, with an eagerness to learn
Work Timings: 4:00 PM to 11:30 PM
Fulltime WFH
6+ years in data science
Strong experience in ML regression, classification, anomaly detection, NLP, deep learning, predictive analytics, predictive maintenance, and Python; data visualization is an added advantage
What we do:
Have you ever tried creating a good video? It's hard. Really hard. We know a thing or two about it because our team is comprised of filmmakers and engineers who have spent the last 20 years of our lives in the motion picture, advertising, and television industry. Today, stitching together a production-ready video poses a litany of challenges: uploading assets from multiple sources, soliciting input from multiple departments, maintaining version control, and getting sign-off on final releases. As you can guess, a lot of this workflow has been happening across multiple tools. Desynova unifies the assets and the conversation. We're aiming to become the media collaboration layer for the entire internet.
Culture:
Rule number one: be a good person. We have a passion for technology and Film/Video making. This is not just a job, it's an opportunity to do what you love every day, stimulate your mind, be challenged, and build an app that allows thousands of creators all over the world to connect with one another and create stunning films, television shows, and works of art.
As part of the Product Management Team, you will help strategize and design our products, manage the end-to-end execution of product features, and contribute to product vision and strategy.
Responsibilities
- Understanding and analyzing user needs. Define and validate use case scenarios with statistics and metrics.
- Maintaining a deep understanding of the competitive landscape and trends
- Validating and pitching new product ideas
- Defining features through detailed specifications and user stories
- Creating elaborate mock-ups and wireframes with industry standard prototyping tools (Optional)
- Working with design and engineering teams through feature implementations
- Maintaining timelines and keeping all stakeholders updated
- Defining success metrics and analyzing product performance
- Basic understanding of how to read code, code comments and verify business logic during code reviews.
- Basic understanding of APIs: how to read and understand them, and how to verify API responses against product and business requirements
- Ability to talk extensively about features and product during live demos
Qualification
- Deep passion for building software products and services
- Basic understanding of how to read code, code comments and verify business logic during code reviews.
- Uncanny product sense on how to marry technology and design to solve user needs in a practical manner
- Excellent problem solving and analytical skills combined with a strong business and technical acumen
- Bias for action and can break down complex problems into steps that help drive product development
- Immaculate oral and written communication skills
Helpful Skills
- Worked on an iOS app, from design to development to testing to deployment
- Worked on an Android app, from design to development to testing to deployment
- Worked with AI/ML projects such as Face Detection Recognition, NER, NLP, etc
- Worked with elastic search, able to write, read and modify elastic queries
- Worked with Jira
- Worked with GitHub
- Worked with Google MS Sheets
- Good with presentations.
- Good to know python, mongoDB, Js, basic HTML/CSS
We are looking for an ambitious Co-founder and CTO to join and support our growth at Spacenos. The person we are looking for will fill an entrepreneurial C-level role and will enrich our team with product development and delivery skills.
Skills Required
- 3+ years of experience as a technology leader.
- Skilled in Java, ReactJs, Firebase, Google Cloud.
- Applied knowledge of Cloud Security, AWS, Machine Learning, AI and Tensorflow.
- Strong communication and team management skills.
- Excellent resource handling and understanding of scaling principles.
Roles and Responsibilities
- Develop and understand technological roadmap
- Develop feature rollout systems and processes
- Hire and manage outside/contracted engineering talent
- Structure and lead teams, individuals, and cross-functional relationships in a way that ensures productivity and communication.
- Collaborate with other teams to optimize user experience of products.
- Drive modern engineering principles and practices by implementing relevant techniques, processes, and best practices
- Clearly articulate plans and projects to both senior level executives and more junior team members.
About Us:
Spacenos is the fastest-growing start-up, innovating in the finance, edtech, and marketing domains since 2015, and has won multiple awards and recognitions from 40+ MNCs and Fortune 500 companies. Our clients are based in the U.S.A. and Australia. We are funded and supported by the Government of Karnataka, angel investors, and international grants.
Hiring Process:
- Apply, and your CV and past work will be reviewed.
- Receive a telephonic interview or assessment upon filling in the final-step form.
- Receive an offer letter if selected.
Hiring Duration:
Our hiring process takes less than 24 hours from the time you receive the Final Step form.
Validity: Up to Dec 2023
- Apply soon; earlier applicants are preferred over later ones.
Carsome’s Data Department is on the lookout for a Data Scientist/Senior Data Scientist who has a strong passion in building data powered products.
The Data Science function under the Data Department is responsible for standardising methods (including code libraries and documentation), mentoring a team of data scientists and interns, quality assurance of outputs, and modeling techniques and statistics, leveraging a variety of technologies, open-source languages, and cloud computing platforms.
You will get to lead & implement projects such as price optimization/prediction, enabling iconic personalization experiences for our customer, inventory optimization etc.
Job Descriptions
- Identifying and integrating datasets that can be leveraged through our product, and working closely with the data engineering team to develop data products.
- Execute analytical experiments methodically to help solve various problems and make a true impact across functions such as operations, finance, logistics, marketing.
- Identify, prioritize, and design testing opportunities that will inform algorithm enhancements.
- Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models and clean and validate data for uniformity and accuracy.
- Unlock insights by analyzing large amounts of complex website traffic and transactional data.
- Implement analytical models into production by collaborating with data analytics engineers.
Technical Requirements
- Expertise in model design, training, evaluation, and implementation. ML algorithm expertise: k-nearest neighbors, Random Forests, Naive Bayes, regression models, t-SNE, and gradient boosting. Tooling: PyTorch, TensorFlow, Keras, Python, PySpark, SQL, R, AWS SageMaker/Personalize, etc. (A minimal training/evaluation sketch follows this list.)
- Machine Learning / Data Science Certification
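As a hedged illustration of the model design, training, and evaluation cycle listed above, the sketch below trains two of the named algorithms on a scikit-learn toy dataset and compares them on ROC-AUC; the dataset and metric are stand-ins, not project specifics.

    # Train and compare two of the listed algorithms on a toy dataset (illustrative only).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    for model in (KNeighborsClassifier(n_neighbors=5), RandomForestClassifier(n_estimators=200)):
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(type(model).__name__, round(auc, 3))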
Experience & Education
- Bachelor’s in Engineering / Master’s in Data Science / Postgraduate Certificate in Data Science.
client of peoplefirst consultants
Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python
Location: Chennai
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
· To be part of Digital Manufacturing and Industrie 4.0 projects across Saint Gobain group of companies
· Design and develop AI/ML models to be deployed across SG factories
· Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
· Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
· Prior experience in developing AI and ML models is required
· Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
· Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
· Multitasking, good communication necessary
· Entrepreneurial attitude.
Responsibilities include:
- Convert machine learning models into application program interfaces (APIs) so that other applications can use them
- Build AI models from scratch and help the different components of the organization (such as product managers and stakeholders) understand what results they gain from the model
- Build data ingestion and data transformation infrastructure
- Automate infrastructure that the data science team uses
- Perform statistical analysis and tune the results so that the organization can make better-informed decisions
- Set up and manage AI development and product infrastructure
- Be a good team player, as coordinating with others is a must
JOB SKILLS & QUALIFICATIONS
WHAT YOU'LL DO
- Design model serving solutions and develop machine learning-based applications, services, and APIs to productionise machine learning models.
- Set and maintain engineering standards that help the team grow and go far.
- Partner with the Data Scientists (those who actually build, train and evaluate ML models) to provide an end-to-end solution for machine learning-based projects.
- Foster the technological evolution of services and improve their end-to-end quality attributes.
- Be committed to Continuous Integration and Continuous Deployment.
Preferred Skills
- Familiarity with the engineering aspects of some popular machine learning practices, libraries, and platforms (e.g. MLflow, Kubeflow, Mleap, Michelangelo, Feast, HopsWorks, MetaFlow, Zipline, Databricks, Spark, MLlib, PyTorch, TensorFlow, and Scikit-learn among others).
- Comfortable dealing with trade-offs between project delivery and quality, especially those involving latency, throughput, and transactions.
- Proven experience with Continuous Integration & Continuous Deployment processes and platforms, software design patterns, and APIs.
- A person that enjoys staying on top of all the best practices and tools of modern software engineering, while being an advocate of code quality and continuous improvement.
- Someone interested in large-scale systems and passionate about solving complex problems while being open and comfortable with changes in the tech stack the teams use.
Principal Accountabilities:
1. Good communication and the ability to convert business requirements into functional requirements
2. Develop data-driven insights and machine learning models to identify and extract facts from sales, supply chain and operational data
3. Sound knowledge and experience in statistical and data mining techniques: Regression, Random Forest, Boosting Trees, Time Series Forecasting, etc.
4. Experience in SOTA Deep Learning techniques to solve NLP problems.
5. End-to-end data collection, model development and testing, and integration into production environments.
6. Build and prototype analysis pipelines iteratively to provide insights at scale.
7. Experience in querying different data sources
8. Partner with developers and business teams for business-oriented decisions
9. Looking for someone who dares to move on even when the path is not clear and is creative in overcoming challenges in the data.
2. Build large datasets that will be used to train the models
3. Empirically evaluate related research works
4. Train and evaluate deep learning architectures on multiple large scale datasets
5. Collaborate with the rest of the research team to produce high-quality research
Causality Biomodels is an Indo-German life science informatics company that focuses on the development of data-based solutions in the bioinformatics sector. Specifically, we work using semantic integration & information extraction methods, knowledge & data organization, and advanced statistical & machine learning techniques in the context of life sciences.
The Causality Biomodels team is searching for a full-stack developer with a strong focus on Python, capable of taking on a lead developer role.
You will be mainly focusing on the following areas:
- Implementing new features by modifying our backend system and UI according to the product backlog and discussions with the team.
- Rapid prototyping to explore new directions based on current research developments.
- Design, development and maintenance of APIs, as well as product and add-on components.
- Maintenance of code integrity and organization.
The requirements are:
- Successfully completed bachelor’s or master’s degree in computer science or in related fields such as Bioinformatics.
- At least 2 years of professional software engineering experience.
- High proficiency in Python and ability to write clean and well-documented code (must).
- Experience with cloud-based development using AWS (preferred), GCP or Azure.
- Experience with Docker and container-based deployment.
- Proficiency in JavaScript.
- Experience with at least one database system (SQL or no-SQL).
- High familiarity with Git.
- Experience with agile development practices.
- Experience with CI/CD and automated testing.
- Very strong English skills (both verbal and written).
Bonus points for:
- Knowledge about machine learning or data science.
- Experience with Python packages SpaCy, scikit-learn, flask and fastapi.
- Experience with JavaScript libraries React and Redux/Context.
- Experience with Gitlab CI/CD pipelines.
- Experience working with knowledge graph data.
- Knowledge and experience in bioinformatics methods.
Job Responsibilities:-
- Develop robust, scalable and maintainable machine learning models to answer business problems against large data sets.
- Build methods for document clustering, topic modeling, text classification, named entity recognition, sentiment analysis, and POS tagging.
- Perform elements of data cleaning, feature selection and feature engineering and organize experiments in conjunction with best practices.
- Benchmark, apply, and test algorithms against success metrics. Interpret the results in terms of relating those metrics to the business process.
- Work with development teams to ensure models can be implemented as part of a delivered solution replicable across many clients.
- Knowledge of Machine Learning, NLP, Document Classification, Topic Modeling and Information Extraction with a proven track record of applying them to real problems.
- Experience working with big data systems and big data concepts.
- Ability to provide clear and concise communication both with other technical teams and non-technical domain specialists.
- Strong team player; ability to provide both a strong individual contribution but also work as a team and contribute to wider goals is a must in this dynamic environment.
- Experience with noisy and/or unstructured textual data.
- Knowledge of knowledge graphs and NLP, including summarization, topic modelling, etc.
- Strong coding ability with statistical analysis tools in Python or R, and general software development skills (source code management, debugging, testing, deployment, etc.)
- Working knowledge of various text mining algorithms and their use-cases, such as keyword extraction, PLSA, LDA, HMM, CRF, deep learning & recurrent ANN, word2vec/doc2vec, and Bayesian modeling (a toy LDA sketch follows this list).
- Strong understanding of text pre-processing and normalization techniques, such as tokenization, POS tagging, and parsing, and how they work at a low level.
- Excellent problem solving skills.
- Strong verbal and written communication skills
- Master's or higher in data mining or machine learning, or equivalent practical analytics/modelling experience
- Practical experience in using NLP related techniques and algorithms
- Experience in open source coding and communities desirable.
- Able to containerize models and associated modules and work in a microservices environment
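Purely as an illustration of one technique named above (LDA topic modelling), the sketch below fits a two-topic model on a tiny made-up corpus with scikit-learn; the documents and topic count are placeholders, not project data.

    # Toy LDA topic model over a made-up corpus (illustrative only).
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "the patient was prescribed a new drug for hypertension",
        "quarterly revenue grew while operating costs fell",
        "the trial measured blood pressure and heart rate",
        "investors reacted to the earnings report and share price",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vectorizer.get_feature_names_out()
    for topic_idx, weights in enumerate(lda.components_):
        top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {topic_idx}: {top_terms}")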
A firm which works with US clients. Permanent WFH.
This person MUST have:
- B.E Computer Science or equivalent
- 5 years experience with the Django framework
- Experience with building APIs (REST or GraphQL)
- Strong Troubleshooting and debugging skills
- React.js knowledge would be an added bonus
- Understanding of how to use a database like Postgres (preferred choice), SQLite, MongoDB, or MySQL.
- Sound knowledge of object-oriented design and analysis.
- A strong passion for writing simple, clean and efficient code.
- Proficient understanding of code versioning tools such as Git.
- Strong communication skills.
Experience:
- Min 5 year experience
- Startup experience is a must.
Location:
- Remote developer
Timings:
- 40 hours a week but with 4 hours a day overlapping with client timezone. Typically clients are in California PST Timezone.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives etc.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Responsibilities:
• Develop computer vision systems for enterprises to be used by hundreds of our customers
• Enhance existing computer vision systems to achieve high performance
• Prototype new algorithms rapidly, iterating to achieve high levels of performance
• Package these prototypes as robust models written in production-level code to be integrated into the product
• Work closely with the ML engineers to explore and enhance new product features leading to new areas of business
Requirements:
• Strong understanding of linear algebra, optimisation, probability, statistics
• Experience in the data science methodology, from exploratory data analysis, feature engineering, and model selection to deployment of the model at scale and model evaluation
• Background in machine learning with experience in large-scale training and convolutional neural networks
• Deep understanding of evaluation metrics for different computer vision tasks (a minimal IoU sketch follows this list)
• Knowledge of common architectures for various computer vision tasks like object detection, recognition, and semantic segmentation
• Experience with model quantization is a plus
• Experience with Python web frameworks (Django/Flask/FastAPI) and machine learning frameworks like TensorFlow/Keras/PyTorch
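For the evaluation-metrics point above, here is a minimal, assumption-laden sketch of one common detection metric: intersection-over-union (IoU) between two axis-aligned boxes given as (x1, y1, x2, y2). The boxes in the example are arbitrary.

    # Intersection-over-union for two axis-aligned boxes (x1, y1, x2, y2).
    def iou(box_a, box_b):
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143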
Product Based
We have an excellent job opportunity for an "Applied Machine Learning Engineer" with one of the product-based organizations, available in remote working mode or at the Mumbai location.
Job Responsibilities:
- Apply your knowledge of ML and statistics to conceptualise, experiment, develop & deploy machine learning & deep learning systems.
- Understanding the business objectives & defining the right target metrics to track performance & progress.
- Defining & building datasets with the appropriate representation techniques for learning.
- Training & tuning models. Running evaluation & test experiments on the models.
- Build ML pipelines end to end. (Everything MLOps.)
- Building pipelines for the various stages.
- Deploying models.
- Troubleshooting issues with models in production.
- Reporting results of model performance in production.
- Retraining, performance logging & maintenance.
- Help the business with insights for better decision-making. You will build many predictive models for internal business operations and derive insights from the trained models & data to help the product & business teams make better decisions.
Requirements:
- 2+ years of work experience as an ML engineer or Data Scientist, with a Bachelor's degree in Computer Science or a related field
- Theoretical & practical knowledge of Machine Learning, Deep Learning and Statistical methods. (NLP Tasks, Recommender Systems, Predictive Modelling etc)
- Since Pepper is a content company, you will work on many interesting text based problems. Solid understanding of Natural Language Processing techniques with Deep Learning is a must for this role.
- Familiarity with the popular NLP applications and text representation architectures & techniques: text classification, machine translation, named entity recognition, summarisation, question answering, zero-shot learning, etc.; Bag of Words, TF-IDF, Word2vec, GloVe, BERT, ELMo, GPT, etc. (A TF-IDF baseline sketch follows this list.)
- Experience with ML frameworks (like Tensorflow, Keras, PyTorch) & libraries like Sklearn.
- Experience with ML infrastructure & shipping models.
- Excellent programming & algorithmic skills. Good understanding of Data Structures and algorithms (fluent in at least one object oriented programming language). Proficiency in Python is a must.
- Strong understanding of database systems & schema design. Proficient in SQL
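To ground the text-representation item above, the sketch below builds a tiny TF-IDF + logistic regression baseline with scikit-learn; the corpus, labels, and query are invented for illustration only.

    # Tiny TF-IDF text classification baseline (corpus and labels are made up).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "how do I reset my password",
        "the invoice amount looks wrong",
        "I cannot log in to my account",
        "please correct the billing charge",
    ]
    labels = ["account", "billing", "account", "billing"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["cannot access my account"]))  # likely ['account']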
If you are interested in the above opening, please let us know your:
Current CTC :
Expected CTC :
Notice Period :
Relevant experience in Machine Learning :
Relevant experience in Deep Learning:
Relevant experience in NLP Applications:
Regards
Ashwini
US-based ecommerce platform which unites designers with customers
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have exposure and working knowledge in AI environment with Machine learning experience
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc
Summary:
Smart Joules is looking for a Chief Technology Officer (CTO) to drive the evolution of our technology platform (DeJoule) to make continuous energy optimization simple and profitable at scale. DeJoule is designed on the latest IOT and web technologies to continuously identify and correct for hidden inefficiencies in dynamic energy systems such as air conditioning and compressed air. It 1) collects data in real time from buildings and factories, 2) analyses real time and historical data using heuristic and machine learning algorithms to find hidden energy inefficiencies and optimum set points, and 3) continuously adjusts (controls) set points of various equipment installed in buildings and factories to minimize energy use. We are looking to build new capabilities in data-driven intelligence, user delight, universal compatibility and continuous optimization through full automation.
You will engage deeply with Smart Joules’ leadership and management team and customers to connect our software and its capabilities to our mission. The technology we build together will save 30% or more of the energy consumed in India’s most prominent buildings and factories that stand today and are yet to be built, and will displace the largest multi-nationals as the highest selling automation product in the Indian and other developing markets.
If you are passionate about stopping climate change, a believer in collaborative and multi-disciplinary innovation, capable of building and leading young and passionate technology teams, an expert in IoT and ready to commit yourself wholeheartedly to build the company that will manage the largest number of Joules in India by 2025, we’d like to meet you.
Overall Responsibilities:
- Build DeJoule into a product that can outcompete any other globally on automatic and continuous performance optimization at scale, user engagement and cost.
- Recruit, motivate, train and lead India’s #1 energy tech team.
- Strengthen the company culture around innovation and excellence so that DeJoule consistently remains ahead of the pack.
- Develop the company’s image as a global leader in the digital energy transition.
The Right Candidate:
- Is ready and hungry to work on the defining project of their professional career.
- Is able to cite specific experiences from their personal life and professional career where they have demonstrated the ability to make tough decisions, to fight, and to win.
- Has 7+ years of experience building IOT & cloud technology platforms with mastery in System Design and Architecture, Database Administration, Data Structuring and Algorithms, Javascript frameworks (Angular.js, Node.js), AWS managed services (SNS,SQS, IoT core, Dynamo DB, Lambdas, Kinesis and others), Python and related technologies.
- Is humble and collaborative, with a deep-seated ambition to have a significant impact at a global scale.
- Has worked in a start-up or in start-up-like conditions with volatility, uncertainty, complexity and ambiguity.
About Smart Joules:
Smart Joules is India’s leading energy efficiency company on a mission to stop climate change by making continuous energy optimization simple and profitable at scale for buildings and factories. We have pioneered the servitization model in the energy efficiency space in India and established a leadership position in the healthcare industry with long-term projects in almost all states. Our clients are saving up to 70% in energy costs and have modernized their facilities with industry-leading technologies without making any up-front investments.
Led by an MIT/Berkeley Alumnus, an award-winning engineer from the Indian Navy, and a financial wizard, our team of 100+ professionals has a comprehensive set of capabilities spanning project development, design, financing, execution, cutting-edge IOT technologies, analytics and operations. We have won national and international awards, including recognition as a “Champion of Change” by the Prime Minister’s Office.
Our financial supporters include Asian Development Bank, Sangam Ventures, Max Ventures & Industries, Raintree Family Office, Echoing Green, Harvard University, TATA Trusts, David & Lucile Packard Foundation, TATA Cleantech Capital, Yes Bank, World Bank, SIDBI and some of India’s most prominent business families.
We like to work hard and play hard. Inspired by MIT’s motto “Mens et Manus”, we believe in using our minds and hands to develop and utilize technology for practical application. If you are curious to learn more about us, visit our LinkedIn page and the links we have posted.