29+ Machine Learning (ML) Jobs in Hyderabad | Machine Learning (ML) Job openings in Hyderabad
Apply to 29+ Machine Learning (ML) Jobs in Hyderabad on CutShort.io. Explore the latest Machine Learning (ML) Job opportunities across top companies like Google, Amazon & Adobe.
Location: Hyderabad, Hybrid
We seek an experienced Lead Data Scientist to support our growing team of talented engineers in the Insurtech space. As a proven leader, you will guide, mentor, and drive the team to build, deploy, and scale machine learning models that deliver impactful business solutions. This role is crucial for ensuring the team provides real-world, scalable results. In this position, you will play a key role, not only by developing models but also by leading cross-functional teams, structuring and understanding data, and providing actionable insights to the business.
What We’re Looking For
● Proven experience as a Lead Machine Learning Engineer, Data Scientist, or a similar role, with extensive leadership experience managing and mentoring engineering teams.
● Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
● Demonstrated success in leading teams that have designed, deployed, and scaled machine learning models in production environments, delivering tangible business results.
● Expertise in machine learning algorithms, deep learning, and data mining techniques, ideally in an enterprise setting.
● Hands-on experience with natural language processing (NLP) and data extraction tools.
● Proficiency in Python (or other programming languages), and with libraries such as Scikit-learn, TensorFlow, PyTorch, etc.
● Strong experience with MLOps processes, including model development, deployment, and monitoring at scale.
● Strong leadership and analytical skills, capable of structuring and interpreting data to provide meaningful business insights and guide decisions.
● A practical understanding of aligning machine learning initiatives with business objectives.
● Industry experience in insurance is a plus but not required.
What You’ll Be Doing
● Lead and mentor a team of strong Machine Learning Engineers, synthesizing their skills and ideas into scalable, impactful solutions.
● Have a proven track record in successfully leading teams in the development, deployment, and scaling of machine learning models to solve real-world business problems.
● Provide leadership in structuring, analyzing, and interpreting data, delivering insights to the business side to support informed decision-making.
● Oversee the full lifecycle of machine learning models – from development and training to deployment and monitoring in production environments.
● Collaborate closely with cross-functional teams to align AI efforts with business goals and ensure measurable impacts.
● Leverage deep expertise in machine learning, data science, or related fields to enhance document interpretation tasks such as text extraction, classification, and semantic analysis.
● Utilize natural language processing (NLP) to improve our AI-based Intelligent Document Processing platform.
● Implement MLOps best practices, ensuring smooth deployment, monitoring, and maintenance of models in production.
● Continuously explore and experiment with new technologies and techniques to push the boundaries of AI solutions within the organization.
● Lead by example, fostering a culture of innovation and collaboration within the team while effectively communicating technical insights to both technical and non-technical stakeholders.
Candidates not located in Hyderabad will not be taken into consideration.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline model development and model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components across the machine learning development life cycle using a model repository (either of): MLflow, Kubeflow Model Registry (a minimal sketch follows this list)
- Develop MLOps components across the machine learning development life cycle using machine learning services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant end-to-end ML PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
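For illustration, here is a minimal sketch of the kind of experiment-tracking and model-registration workflow referenced above, assuming an MLflow tracking server is available; the experiment name, registered-model name, and toy dataset are hypothetical, not part of the job description.

```python
# Illustrative sketch only: a minimal MLflow experiment-tracking and
# model-registry flow. Experiment/model names and data are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Model tracking and experimentation: log parameters and metrics.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)

    # Register the trained model so it can be promoted and deployed later.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-model",  # hypothetical registry entry
    )
```

Registering the model creates a new version in the registry, which is what enables the deployment and monitoring steps the role describes.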
Required Qualifications
- 3-5 years of experience building production-quality software.
- B.E./B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline model development and model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components across the machine learning development life cycle using a model repository (either of): MLflow, Kubeflow Model Registry
- Develop MLOps components using machine learning services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant end-to-end ML PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E./B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management, and communication skills, along with creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Data engineers:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. This also includes developing and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity (a minimal PySpark sketch appears after this list of responsibilities).
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, utilities, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.
Writing unit/integration tests and contributing to documentation.
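As a rough illustration of the batch ETL and pipeline work described above, here is a minimal PySpark sketch; the bucket paths and column names are hypothetical, not taken from the posting.

```python
# Illustrative sketch only: a minimal PySpark batch ETL job.
# The S3 paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw JSON events from a source location.
raw = spark.read.json("s3://raw-bucket/events/")

# Transform: drop incomplete records and derive a daily aggregate.
clean = (
    raw.dropna(subset=["user_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)
daily = clean.groupBy("event_date", "user_id").agg(F.count("*").alias("events"))

# Load: write partitioned Parquet for downstream analytics and ML.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-bucket/daily_events/"
)

spark.stop()
```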
Must have:
6 to 8 years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
High proficiency in Scala/Java/Python, API frameworks/Swagger, and Spark for applied large-scale data processing.
Expertise with big data technologies and API development (Flask), including Spark, Data Lake, Delta Lake, and Hive.
Solid understanding of batch and streaming data processing techniques.
Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.
Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
Experience with RDBMS and OLAP databases like MySQL, Redshift.
Familiarity with Agile methodologies.
Obsession for service observability, instrumentation, monitoring, and alerting.
Knowledge or experience in architectural best practices for building data pipelines.
Good to Have:
Passion for testing strategy, problem-solving, and continuous learning.
Willingness to acquire new skills and knowledge.
Possess a product/engineering mindset to drive impactful data solutions.
Experience working in distributed environments with teams scattered geographically.
Role Overview
We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies.
This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.
Responsibilities
- Lead the development of fintech solutions, focusing on fraud prevention and AML, using TypeScript, ReactJS, Python, and SQL databases.
- Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
- Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
- Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
- Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
- Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
Requirements
- 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
- Expertise in TypeScript and ReactJS, and familiarity with Python.
- Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
- Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
- Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
- A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
Why Join Us?
- Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
- Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
- Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
- Work in an environment that values innovation, leadership, and the long-term success of its employees.
at Livello India Private Limited
At Livello we are building machine-learning-based demand-forecasting tools as well as computer-vision-based multi-camera product-recognition solutions that detect people and products and track inserted/removed items on shelves based on users' hand movements. We are building models to determine real-time inventory levels and user behaviour, and to predict how much of each product needs to be reordered so that the right products are delivered to the right locations at the right time to fulfil customer demand.
Responsibilities
- Lead the CV and DS Team
- Work in the area of Computer Vision and Machine Learning, with a focus on product (primarily food) and people recognition (position, movement, age, gender; DSGVO/GDPR compliant).
- Your work will include the formulation and development of machine learning models to solve the underlying problems.
- Help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, and challenge the status quo.
- Statistical data modelling and machine learning research.
- Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes
- Conduct applied research to model complex dependencies, statistical inference and predictive modelling
- Technological conception, design and implementation of new features
- Quality assurance of the software through planning, creation and execution of tests
- Work with a cross-functional team to define, build, test, and deploy applications
Requirements:
- Master's/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science, or any related field.
- 3-4 years of experience with computer vision and data science.
- Relevant Data Science experience, deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
- Experience building production-quality and well-engineered Computer Vision and Data Science products.
- Experience in image processing, algorithms and neural networks.
- Knowledge of the tools, libraries, and cloud services for data science, ideally Google Cloud Platform
- Solid Python engineering skills and experience with TensorFlow and Docker
- Cooperative and independent work, analytical mindset, and willingness to take responsibility
- Fluency in English, both written and spoken.
JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
Company Description
Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
Job Responsibilities:
- Design and implement machine learning, information extraction, probabilistic matching algorithms and models
- Research and develop innovative, scalable and dynamic solutions to hard problems
- Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
- Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Be a valued contributor in shaping the future of our products and services
- You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
- Be part of a fast-paced, fun-focused, agile team
Job Requirement:
- 4+ years of industry experience
- Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
- Strong mathematics, statistics, and data analytics
- Solid coding and engineering skills, preferably in machine learning (not mandatory)
- Proficient in Java, Python, and Scala
- Industry experience building and productionizing end-to-end systems
- Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
- Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.
Position Summary
We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect the following to fulfil this role.
- Building accurate machine learning models is the main goal of a machine learning engineer
- Linear Algebra, Applied Statistics and Probability
- Building Data Models
- Strong knowledge of NLP
- Good understanding of multithreaded and object-oriented software development
- Mathematics, Mathematics and Mathematics
- Collaborate with Data Engineers to prepare data models required for machine learning models
- Collaborate with other product team members to apply state-of-the-art AI methods that include dialogue systems, natural language processing, information retrieval, and recommendation systems
- Build large-scale software systems and work on numerical computation topics
- Use predictive analytics and data mining to solve complex problems and drive business decisions
- Should be able to design accurate end-to-end ML architecture, including data flows, algorithm scalability, and applicability
- Tackle situations where both the problem and the solution are unknown
- Solve analytical problems, and effectively communicate methodologies and results to the customers
- Adept at translating business needs into technical requirements and translating data into actionable insights
- Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.
Benefits
- Competitive salary for a startup
- Gain experience rapidly
- Work directly with the executive team
- Fast-paced work environment
About Phenom People
At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.
Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.
We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.
Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.
We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.
Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.
Kindly explore more about the company Phenom: https://www.phenom.com/
YouTube: https://www.youtube.com/c/PhenomPeople
LinkedIn: https://www.linkedin.com/company/phenompeople/
About Thriving Springs:
Have you ever wondered what it takes to succeed at the workplace? Is it only technical skills, or do behavioral skills also play a role?
Research suggests that 85% of job success depends on soft and behavioral skills, and this is where Thriving Springs plays a critical role: helping organizations, teams, and individuals build these skills, unlock their highest potential, and achieve all-round success. It does this by assessing current levels of soft and behavioral skills and then growing them through Thriving Springs' innovative smart platform, which combines the power of emotional intelligence (EI) with artificial intelligence (AI) and machine learning (ML).
Why join Thriving Springs?
While there are many companies, those built with a clear purpose and mission to take humanity forward are rare. Thriving Springs, an EdTech startup, is one of the pioneers in driving human productivity, fulfillment, and success forward by empowering millions of working professionals globally with soft and behavioral skills and assisting them in the flow of work. Thriving Springs is leading a worldwide, purpose-led movement to drive success at the workplace, and we invite you to join this fun and adventurous journey with us.
Culture, Rewards and Benefits:
A great culture is created when everybody in the company has great opportunities, creates meaningful impact and contributes to the good of society - Larry Page - Founder - Google
The founders of Thriving Springs come from Google and have built a Google-like culture within Thriving Springs where every team member experiences freedom and autonomy to pursue their ideas that have a deep impact and will shape the future of workplace learning. The culture at Thriving Springs is deeply rooted in emotional intelligence where there is an emphasis on collaboration, empathy and winning together as one team.
The candidate will receive an attractive total compensation package including a competitive fixed salary, performance bonuses and ESOPs translating into non-incremental gains as the company grows into a potential unicorn. In addition, every member will receive medical and health insurance.
Location: Hyderabad, India
Qualifications:
- BTech/BE in computer science, electrical, electronics, or related fields
- 5+ years of full-stack design and development experience
- High emotional intelligence, empathy, and a collaborative approach
- Experience with Angular, JavaScript frameworks, CSS, HTML5, NodeJS, ExpressJS, and MongoDB to handle full-stack web development
- Experience developing rich, dynamic front-end applications using Angular and CSS frameworks like Bulma CSS, Angular Material, Bootstrap, etc.
- Knowledge of GraphQL would be an added advantage
- Knowledge of cloud services like AWS, Heroku, and Azure is preferable
- Should be a quick learner who keeps up with the pace of the ever-changing world of technology, as the candidate will get excellent exposure to the latest and trending cloud-based SaaS technologies and best practices while working with varied customers based across the globe
Responsibilities:
- Develop web applications covering the end-to-end software development life cycle, from writing UI code in Angular to backend API code in NodeJS and managing databases like MongoDB, MySQL, etc.
- Manage the full-stack code workflow, from Git check-ins to running automated builds and deployments using DevOps practices, deploying to public cloud services like AWS, Azure, Heroku, etc.
- Handle the full-stack web development workflow from front end to backend to CI/CD.
- Design and develop the tech architecture, working closely with the CEO and CTO of the company.
- Drive and guide the work of other engineers on the team.
This is a leadership role, and the candidate is expected to wear multiple technical hats, including customer interactions and investor discussions.
What are the Key Responsibilities:
- Design NLP applications
- Select appropriate annotated datasets for Supervised Learning methods
- Use effective text representations to transform natural language into useful features
- Find and implement the right algorithms and tools for NLP tasks
- Develop NLP systems according to requirements
- Train the developed model and run evaluation experiments
- Perform statistical analysis of results and refine models
- Extend ML libraries and frameworks to apply in NLP tasks
- Remain updated in the rapidly changing field of machine learning
What are we looking for:
- Proven experience as an NLP Engineer or similar role
- Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
- Ability to effectively design software architecture
- Deep understanding of text representation techniques (such as n-grams, bag-of-words, sentiment analysis, etc.), statistics, and classification algorithms (see the sketch after this list)
- Knowledge of Python, Java, and R
- Ability to write robust and testable code
- Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Strong communication skills
- An analytical mind with problem-solving abilities
- Degree in Computer Science, Mathematics, Computational Linguistics, or similar field
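As a small illustration of the text-representation and classification techniques listed above, here is a minimal scikit-learn sketch; the tiny in-line dataset is hypothetical.

```python
# Illustrative sketch only: bag-of-words / n-gram features feeding a
# classifier, built with scikit-learn. The tiny dataset is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "great product, works well",
    "terrible support, very slow",
    "works as described",
    "slow and unreliable",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

clf = Pipeline([
    # Unigrams and bigrams as a sparse TF-IDF representation.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
clf.fit(texts, labels)
print(clf.predict(["support was slow"]))  # expected: the negative class
```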
at Panamax InfoTech Ltd.
Preferred Qualifications
- NLU, Text, Speech, Image Processing
- Excited about Machine Learning and AI
- Want to work on a highly scalable, performance-optimized infrastructure that elastically handles customer needs
- Innovate, Design, and prototype solutions for Digital Assistant and Bots platform to handle heavy loads
- Build and maintain our platform and automation frameworks to ensure maximum uptime and predictability while preventing outages and service interruptions or degradations
- Analyze system failures and develop rapid response solutions to ensure such failures do not reoccur
- Work cross-functionally with product development, Product Management, Program Management and Cloud Infra operations teams
- Partner with Engineering to provide the infrastructure and services required to enable innovation
- Predict and provide notice of potential system vulnerabilities for current and future solutions and implementations. Provide specific recommendations and guidance to address such vulnerabilities
- Analyze, build and maintain all automation tools and processes to ensure the highest standards of reliability and robustness
- Fully understand our customer's service needs and ensure we meet these needs
- No matter your role in our team, you will find yourself in an exciting and challenging environment where every person is empowered to show initiative, be outspoken, and be proactive rather than reactive. Oracle is dedicated to the continual growth and development of its staff, striving constantly to strengthen our expertise as well as develop new skills. Our team is spread around the world across four continents - we provide a full range of opportunities and challenges to apply your skills and grow your career in this new and exciting arena.
A Bachelor’s degree in data science, statistics, computer science, or a similar field
2+ years of industry experience working in a data science role, such as statistics, machine learning, deep learning, quantitative financial analysis, data engineering, or natural language processing
Domain experience in Financial Services (banking, insurance, risk, funds) is preferred
Experience in producing and rapidly delivering minimum viable products; results-focused, with the ability to prioritize the most impactful deliverables
Strong applied statistics capabilities, including an excellent understanding of machine learning techniques and algorithms
Hands-on experience, preferably in implementing scalable machine learning solutions using Python/Scala/Java on Azure, AWS, or Google Cloud Platform
Experience with storage frameworks like Hadoop, Spark, Kafka, etc.
Experience in building and deploying unsupervised, semi-supervised, and supervised models, and knowledge of various ML algorithms such as regression models, tree-based algorithms, ensemble learning techniques, distance-based ML algorithms, etc.
Ability to track down complex data quality and data integration issues, evaluate different algorithmic approaches, and analyse data to solve problems.
Experience in implementing parallel processing and in-memory frameworks such as H2O.ai
o Strong Python development skills, with 7+ years of experience with SQL.
o A bachelor or master’s degree in Computer Science or related areas
o 5+ years of experience in data integration and pipeline development
o Experience in implementing Databricks Delta Lake and data lakes
o Expertise in designing and implementing data pipelines using a modern data engineering approach and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark (a minimal Delta Lake sketch follows this list)
o Experience in working with multiple file formats (Parquet, Avro, Delta Lake) and APIs
o Experience with AWS Cloud on data integration with S3.
o Hands-on development experience with Python and/or Scala.
o Experience with SQL and NoSQL databases.
o Experience in using data modeling techniques and tools (focused on Dimensional design)
o Experience with micro-service architecture using Docker and Kubernetes
o Have experience working with one or more of the public cloud providers i.e. AWS, Azure or GCP
o Experience in effectively presenting and summarizing complex data to diverse audiences through visualizations and other means
o Excellent verbal and written communications skills and strong leadership capabilities
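For illustration, here is a minimal PySpark/Delta Lake sketch of the pipeline pattern mentioned above. It assumes a Spark session configured with the Delta Lake extensions (for example, a Databricks cluster); the mount paths are hypothetical.

```python
# Illustrative sketch only: writing a Parquet source out as a Delta table
# and reading it back. Assumes Spark is configured with Delta Lake
# (e.g. a Databricks cluster); the /mnt paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

orders = spark.read.parquet("/mnt/raw/orders/")   # Parquet source files
(
    orders.write.format("delta")
    .mode("overwrite")
    .save("/mnt/delta/orders")                    # Delta Lake table
)

# Downstream consumers read the same path as a Delta table.
latest = spark.read.format("delta").load("/mnt/delta/orders")
print(latest.count())
```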
Skills:
ML
Modelling
Python
SQL
Azure Data Lake, Data Factory, Databricks, Delta Lake
Job Description
Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India’s largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG and IITB), has worked extensively in the construction space globally, and is funded by top Indian VCs.
Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.
Our solutions need complex problem solving in computer vision that require robust, efficient, well tested, and clean solutions. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals. Analogously, the candidate is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.
In this role, you will:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures / architectures
- Proficiency in at least one major machine learning framework
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Good understanding of and applied experience in classic 2D image processing and segmentation (a minimal OpenCV sketch follows this list)
- Robust semantic object detection under different lighting conditions
- Segmentation of non-rigid contours in challenging/low contrast scenarios
- Sub-pixel accurate refinement of contours and features
- Experience in image quality assessment
- Experience with in depth failure analysis of algorithms
- Highly skilled in at least one scripting language such as Python or Matlab and solid experience in C++
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
- Mentor and support other technical team members in the organization
- Create, improve, and refine workflows and processes for delivering quality software on time and with carefully calculated debt
- Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
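As a small illustration of the classic 2D image processing and segmentation mentioned in the list above, here is a minimal OpenCV sketch; the input image path is hypothetical.

```python
# Illustrative sketch only: classic 2D segmentation with OpenCV
# (Otsu thresholding + contour extraction). The image path is hypothetical.
import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu's method picks a global threshold; contours describe the segments.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} contours")
```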
How you will do this:
- You’re part of an agile, multidisciplinary team.
- You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
- You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
- You experiment, test, try, fail, and learn continuously.
- You don’t do things just because they were always done that way, you bring your experience and expertise with you and help the team make the best decisions.
For this role, you must have:
- Strong knowledge of and experience with the functional programming paradigm.
- Experience conducting code reviews, providing feedback to other engineers.
- Great communication skills and a proven ability to work as part of a tight-knit team.
JOB DESCRIPTION
- 2 to 6 years of experience in imparting technical training/ mentoring
- Must have very strong concepts of Data Analytics
- Must have hands-on and training experience on Python, Advanced Python, R programming, SAS and machine learning
- Must have good knowledge of SQL and Advanced SQL
- Should have basic knowledge of Statistics
- Should be good with operating systems (GNU/Linux) and network fundamentals
- Must have knowledge of MS Office (Excel/Word/PowerPoint)
- Self-Motivated and passionate about technology
- Excellent analytical and logical skills; a team player
- Must have exceptional communication and presentation skills
- Good aptitude skills preferred
Responsibilities:
- Ability to quickly learn any new technology and impart the same to other employees
- Ability to resolve all technical queries of students
- Conduct training sessions and drive placement-driven quality in the training
- Must be able to work independently without the supervision of a senior person
- Participate in reviews/ meetings
Qualification:
- UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
- PG: MCA/MS/MSC – Computer Science
- Any Graduate/ Post graduate, provided they are certified in similar courses
ABOUT EDUBRIDGE
EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.
Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth on relevant life skills and industry-specific skills and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.
To know more about EduBridge, please visit: http://www.edubridgeindia.com/
You can also visit us on Facebook (https://www.facebook.com/Edubridgelearning/) and LinkedIn (https://www.linkedin.com/company/edubridgelearning/) for our latest initiatives and products.
Responsibilities Description:
Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to:
- Design machine learning projects to address specific business problems determined in consultation with business partners.
- Work with data sets of varying degrees of size and complexity, including both structured and unstructured data.
- Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis.
- Implement batch and real-time model scoring to drive actions.
- Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions.
- Develop sophisticated visualizations of analysis output for business users.
Experience Requirements:
BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related Quantitative disciplines. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.
Required Technical Skill Set:
- Full cycle of building machine learning solutions,
o Understanding of a wide range of algorithms and the problems they solve
o Data preparation and analysis
o Model training and validation
o Model application to the problem
- Experience using the full range of open-source programming tools and utilities
- Experience in working in end-to-end data science project implementation.
- 2+ years of experience with development and deployment of Machine Learning applications
- 2+ years of experience with NLP approaches in a production setting
- Experience in building models using bagging and boosting algorithms
- Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
- Ability to write efficient code with good understanding of core Data Structures/algorithms is critical
- Strong Python skills, following software engineering best practices
- Experience in using code versioning tools like Git, Bitbucket
- Experience in working in Agile projects
- Comfort and familiarity with SQL and the Hadoop ecosystem of tools, including Spark
- Experience managing big data with efficient query programs is good to have
- Good to have experience in training ML models in tools like SageMaker, Kubeflow, etc.
- Good to have experience in frameworks to depict interpretability of models using libraries like LIME, SHAP, etc. (see the sketch after this list)
- Experience with Health care sector is preferred
- MS/M.Tech or PhD is a plus
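For illustration, here is a minimal sketch of a boosting model with SHAP-based interpretability, as referenced in the list above. It assumes the shap package is installed and uses a synthetic dataset.

```python
# Illustrative sketch only: a gradient-boosting classifier explained with
# SHAP. Assumes the `shap` package is installed; the data is synthetic.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:20])
print(shap_values.shape)  # (20, 10): one contribution per sample and feature
```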
at Meslova Systems Pvt Ltd
Artificial Intelligence (AI) Researchers and Developers
The successful candidate will be part of highly productive teams working on implementing core AI algorithms, cryptography libraries, AI-enabled products, and intelligent 3D interfaces. Candidates will work on cutting-edge products and technologies in highly challenging domains and will need to have the highest level of commitment and interest in learning new technologies and domain-specific subject matter very quickly. Successful completion of projects will require travel and working in remote locations with customers for extended periods.
Education Qualification: Bachelor's, Master's, or PhD degree in Computer Science, Mathematics, Electronics, or Information Systems from a reputed university, and/or equivalent knowledge and skills.
Location : Hyderabad, Bengaluru, Delhi, Client Location (as needed)
Skillset and Expertise
• Strong software development experience using Python
• Strong background in mathematical, numerical and scientific computing using Python.
• Knowledge in Artificial Intelligence/Machine learning
• Experience working with SCRUM software development methodology
• Strong experience with implementing Web services, Web clients and JSON protocol is required
• Experience with Python Meta programming
• Strong analytical and problem-solving skills
• Design, develop and debug enterprise grade software products and systems
• Software systems testing methodology, including writing and execution of test plans, debugging, and testing scripts and tools
• Excellent written and verbal communication skills; proficiency in English. Verbal communication in Hindi and other local Indian languages
• Ability to effectively communicate product design, functionality and status to management, customers and other stakeholders
• Highest level of integrity and work ethic
Frameworks
1. Scikit-learn
2. Tensorflow
3. Keras
4. OpenCV
5. Django
6. CUDA
7. Apache Kafka
Mathematics
1. Advanced Calculus
2. Numerical Analysis
3. Complex Function Theory
4. Probability
Concepts (One or more of the below)
1. OpenGL based 3D programming
2. Cryptography
3. Artificial Intelligence (AI) Algorithms: a) statistical modelling, b) DNN, c) RNN, d) LSTM, e) GAN, f) CNN
Luxury e-commerce platform well-established organisation
Experience: 3 to 8 years
Skill Set
- Experience in algorithm development with a focus on signal processing, pattern recognition, machine learning, classification, data mining, and other areas of machine intelligence.
- Ability to analyse data streams from multiple sensors and develop algorithms to extract accurate and meaningful sport metrics.
- Should have a deep understanding of IMU sensors and biosensors like HRM and ECG
- A good understanding of power and memory management on embedded platforms
- Expertise in the design of multitasking, event-driven, real-time firmware using C and understanding of RTOS concepts
- Knowledge of machine learning, analytical and methodical approaches to data analysis and verification, and Python
- Prior experience in fitness algorithm development using IMU sensors
- Interest in fitness activities and knowledge of human body anatomy
- Performs analytics to extract insights from the organization's raw historical data.
- Generates usable training datasets for any/all MV projects with the help of annotators, if needed.
- Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
- Tests the short/long-term impact of productized MV models on those trends.
- Skills: NumPy, Pandas, Apache Spark, PySpark, and ETL are mandatory.
at Aganitha Cognitive Solutions
As a Lead Solutions Architect at Aganitha, you will:
* Engage and co-innovate with customers in BioPharma R&D
* Design and oversee implementation of solutions for BioPharma R&D
* Manage Engineering teams using Agile methodologies
* Enhance reuse with platforms, frameworks and libraries
Applying candidates must have demonstrated expertise in the following areas:
1. App dev with modern tech stacks of Python, ReactJS, and fit for purpose database technologies
2. Big data engineering with distributed computing frameworks
3. Data modeling in scientific domains, preferably in one or more of: Genomics, Proteomics, Antibody engineering, Biological/Chemical synthesis and formulation, Clinical trials management
4. Cloud and DevOps automation
5. Machine learning and AI (Deep learning)
MTX Group Inc. is seeking a motivated Technical Lead - AI to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX's very own artificial intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built artificial neural networks designed to leverage the power of machine learning. The Maverick platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Extensive research and development of new AI/ML techniques that enable learning the semantics of data (images, video, text, audio, speech, etc.)
- Improving the existing ML and DNN models and products through R&D on cutting edge technologies
- Collaborate with Machine Learning teams to drive innovation of complex and accurate cognitive systems
- Collaborate with Engineering and Core team to drive innovation of scalable ML and AI serving production platforms
- Create POCs to quickly test a new model architecture and create improvement over an existing methodology
- Introduce major innovations that can result in better product features and develop strategies and plans required to drive these
- Lead a team and collaborate with product managers, review complex technical implementations, and provide optimisation best practices
What you will bring:
- 4-6 years of Experience
- Experience in neural networks, graphical models, reinforcement learning, and natural language processing
- Experience in Computer Vision techniques and image detection neural network models like semantic segmentation, instance segmentation, object detection, etc
- In-depth understanding of benchmarking, parallel computing, distributed computing, machine learning, and AI
- Programming experience in one or more of the following: Python, C, C++, C#, Java, R, and toolkits such as Tensorflow, Keras, PyTorch, Caffe, MxNet, SciPy, SciKit, etc
- Ability to perform research that is justified and guided by business opportunities
- Demonstrated successful implementation of industry-grade AI solutions in the past
- Ability to lead a team of AI engineers in an agile development environment
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover upto two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of upto Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
Required skill
- Around 6-8.5 years of experience, with around 4+ years in the AI/machine learning space
- Extensive experience in designing large-scale machine learning solutions for ML use cases, large-scale deployments, and establishing continuous automated improvement/retraining frameworks.
- Strong experience in Python and Java is required.
- Hands-on experience with scikit-learn, Pandas, NLTK
- Experience in handling time-series data and associated techniques like Prophet and LSTM (a minimal Keras LSTM sketch follows this list)
- Experience in regression, clustering, and classification algorithms
- Extensive experience in building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feedforward) is required.
- Experience in AutoML tools like TPOT or others
- Must have strong hands-on experience in deep learning frameworks like Keras, TensorFlow, or PyTorch
- Knowledge of capsule networks, reinforcement learning, or SageMaker is a desirable skill
- Understanding of the financial domain is a desirable skill
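For illustration, here is a minimal Keras LSTM sketch for the time-series modelling mentioned in the list above; the synthetic sine-wave data and window size are hypothetical.

```python
# Illustrative sketch only: a tiny LSTM forecaster in Keras/TensorFlow.
# The synthetic sine-wave series and window length are hypothetical.
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(0, 100, 0.1)).astype("float32")
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)   # short run, just to show the flow
print(model.predict(X[:1]).shape)      # (1, 1): next-step forecast
```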
Responsibilities
- Design and implementation of solutions for ML Use cases
- Productionize systems and maintain them
- Lead and implement the data acquisition process for ML work
- Learn new methods and models quickly and utilize them in solving use cases
Tiger Analytics is a global AI & analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modeled around expertise and mutual respect with a team first mindset. Working at Tiger, you’ll be at the heart of this AI revolution. You’ll work with teams that push the boundaries of what-is-possible and build solutions that energize and inspire.
We are headquartered in the Silicon Valley and have our delivery centres across the globe. The below role is for our Chennai or Bangalore office, or you can choose to work remotely.
About the Role:
As an Associate Director - Data Science at Tiger Analytics, you will lead the data science aspects of end-to-end client AI & analytics programs. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Work closely with internal teams and client stakeholders to design analytical approaches to solve business problems
• Develop and enhance a broad range of cutting-edge data analytics and machine learning problems across a variety of industries.
• Work on various aspects of the ML ecosystem: model building, ML pipelines, logging & versioning, documentation, scaling, deployment, monitoring, and maintenance, etc.
• Lead a team of data scientists and engineers to embed AI and analytics into the client business decision processes.
Desired Skills:
• High level of proficiency in a structured programming language, e.g. Python, R.
• Experience designing data science solutions to business problems
• Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems.
• Comfortable with large-scale data processing and distributed computing
• Excellent written and verbal communication skills
• 10+ years of experience, of which 8 years is relevant data science experience, including hands-on programming.
Designation will be commensurate with expertise/experience. Compensation packages among the best in the industry.
at Transportation | Warehouse Optimization
Job Description
Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.
Seeking an ML Engineer with a high aptitude toward development. We will also consider coders with a high aptitude in ML. Years of experience is important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of ThinkDeeply, as well as having a significant amount of responsibility.
Experience
10+ Years
Location
Bozeman/Hyderabad
Skills
Required Skills:
Bachelor's/Master's or PhD in Computer Science, or related industry experience
3+ years of Industry Experience in Deep Learning Frameworks in PyTorch or TensorFlow
7+ Years of industry experience in scripting languages such as Python, R.
7+ years in software development doing at least some level of Researching / POCs, Prototyping, Productizing, Process improvement, Large-data processing / performance computing
Familiar with non-neural-network methods such as Bayesian methods, SVM, AdaBoost, random forests, etc.
Some experience in setting up large scale training data pipelines.
Some experience in using Cloud services such as AWS, GCP, Azure
Desired Skills:
Experience in building deep learning models for Computer Vision and Natural Language Processing domains
Experience in productionizing/serving machine learning in an industry setting
Understand the principles of developing cloud native applications
Responsibilities
Collect, Organize and Process data pipelines for developing ML models
Research and develop novel prototypes for customers
Train, implement and evaluate shippable machine learning models
Deploy and iterate improvements of ML Models through feedback