50+ Machine Learning (ML) Jobs in Mumbai
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You'll help automate and streamline model development and model operations. You'll build and maintain tools for deployment, monitoring, and operations. You'll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry (a minimal MLflow sketch follows this list)
- Develop MLOps components in the machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
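A minimal sketch of the model-tracking and registry work mentioned above, assuming MLflow and scikit-learn are installed; the experiment name, registered model name, and the RandomForest model itself are hypothetical stand-ins, not Fractal's actual setup.

```python
# Hypothetical sketch: log a run and register the model in the MLflow Model Registry.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)

mlflow.set_experiment("demo-experiment")          # hypothetical experiment name
with mlflow.start_run():
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)         # track experiment parameters
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(                     # store and register the model artifact
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",  # hypothetical registry name
    )
```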
Required Qualifications
- 3-5 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavor of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You'll help automate and streamline model development and model operations. You'll build and maintain tools for deployment, monitoring, and operations. You'll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry
- Develop MLOps components using Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines (see the sketch after this list)
- Database programming using any flavor of SQL
- Knowledge of Git for source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management, communication skills and creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
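As a hedged illustration of a CI/CD component for an ML pipeline, the sketch below shows a pytest-style quality gate that a Jenkins or GitHub Actions job could run before promoting a model; the dataset, model, and accuracy threshold are all synthetic assumptions, not a prescribed Fractal workflow.

```python
# Hypothetical sketch: a model-quality gate that a CI job could run with pytest before promotion.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.80  # hypothetical release threshold


def candidate_model_accuracy() -> float:
    # Synthetic data stands in for the real training set pulled by the pipeline
    X, y = make_classification(n_samples=1_000, n_features=20, random_state=7)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
    model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))


def test_candidate_model_meets_threshold():
    # The CI job fails the build, and so blocks deployment, if the candidate model regresses
    assert candidate_model_accuracy() >= MIN_ACCURACY
```

A CI job would simply invoke `pytest` on this file and stop the release when the assertion fails.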
Fynd is India's largest omnichannel platform and a multi-platform tech company with expertise in retail tech and products in AI, ML, big data ops, gaming + crypto, image editing and the learning space. It was founded in 2012 by three IIT Bombay alumni: Farooq Adam, Harsh Shah and Sreeraman MG. We are headquartered in Mumbai, have 1000+ brands under management and more than 10k stores, and service 23k+ pin codes.
We want new, ambitious research members to join our Machine Learning Research group. We are looking for people with a proven track record in computer vision research. In this role, we create new models, algorithms and approaches to solve challenging machine learning problems. Some of our problem areas include Image Restoration, Image Enhancement, Generative Models and 3D Computer Vision. You will work on various state-of-the-art techniques to improve and optimize neural networks and also use computer vision approaches to solve various problems. You will be required to have in-depth knowledge of image processing and convolutional neural networks. You will tackle our challenging problems and also have the freedom to try out your ideas and pursue research topics of your interest.
What you will do:
- Engage in advanced research to push the boundaries of computer vision technology. Leverage core image processing and computer vision fundamentals, with a focus on transformer-based models, generative adversarial networks (GANs), diffusion models, and other advanced deep learning methods.
- Develop and enhance models for practical applications, including image processing, object detection, image segmentation, and scene understanding, leveraging recent innovations in the field.
- Apply contemporary methodologies to address complex, real-world challenges and create innovative, deployable solutions that enhance our products and services.
- Collaborate with cross-functional teams to integrate new computer vision technologies and solutions into our product suite.
- Remain current with the latest developments in computer vision and deep learning, including emerging techniques and best practices.
- Document and communicate research findings through publications in top-tier conferences and journals, and contribute to the academic community.
Skills needed:
- A BS, MS, PhD, or equivalent experience in Computer Science, Engineering, Applied Mathematics, or a related quantitative field.
- Extensive research experience in computer vision, with a focus on deep learning techniques such as transformers, GANs, diffusion models, and other emerging approaches.
- Proficiency in Python, with experience in deep learning frameworks like TensorFlow and PyTorch (a minimal PyTorch sketch follows this list).
- Deep understanding of mathematical principles relevant to computer vision, including linear algebra, optimization, and statistics.
- Proven ability to bridge the gap between theoretical research and practical application, effectively translating innovative ideas into deployable solutions.
- Strong problem-solving skills and an innovative mindset to address complex challenges and apply contemporary research effectively.
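A minimal PyTorch sketch in the spirit of the image-restoration work described above, assuming torch is installed; the tiny residual denoiser and tensor shapes are illustrative only, not one of Fynd's production models.

```python
# Hypothetical sketch: a tiny residual convolutional denoiser in PyTorch.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection: the network predicts a correction


model = TinyDenoiser()
noisy = torch.rand(1, 3, 64, 64)          # a random image standing in for a noisy input
restored = model(noisy)
print(restored.shape)                     # torch.Size([1, 3, 64, 64])
```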
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also take an external course to upskill and grow; we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Access to an experienced therapist for better mental health, improved productivity and work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
About the company
DCB Bank is a new generation private sector bank with 442 branches across India. It is a scheduled commercial bank regulated by the Reserve Bank of India. DCB Bank's business segments are Retail Banking, Micro SME, SME, Mid-Corporate, Agriculture, Government, Public Sector, Indian Banks, Co-operative Banks and Non-Banking Finance Companies.
Job Description
Department: Risk Analytics
CTC: Max 18 Lacs
Grade: Sr Manager/AVP
Experience: Min 4 years of relevant experience
We are looking for a Data Scientist to join our growing team of Data Science experts and manage the processes and people responsible for accurate data collection, processing, modelling, analysis, implementation, and maintenance.
Responsibilities
- Understand, monitor and maintain existing financial scorecards (ML-based) and make changes to the model when required.
- Perform statistical analysis in R and assist the IT team with deployment of ML models and analytical frameworks in Python (a minimal Python scorecard sketch follows this list).
- Should be able to handle multiple tasks and must know how to prioritize the work.
- Lead cross-functional projects using advanced data modelling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Develop clear, concise and actionable solutions and recommendations for the client's business needs, and actively explore the client's business to formulate solutions/ideas that can help the client cut costs efficiently or achieve growth/revenue/profitability targets faster.
- Build, develop and maintain data models, reporting systems, data automation systems, dashboards and performance metrics that support key business decisions.
- Design and build technical processes to address business issues.
- Oversee the design and delivery of reports and insights that analyse business functions and key operations and performance metrics.
- Manage and optimize processes for data intake, validation, mining, and engineering as well as modelling, visualization, and communication deliverables.
- Communicate results and business impacts of insight initiatives to the Management of the company.
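A minimal sketch of an application-scorecard-style model in Python, assuming pandas and scikit-learn are available; all data here is synthetic and the features are hypothetical, so this only illustrates the shape of the work, not DCB Bank's actual scorecards.

```python
# Hypothetical sketch: a logistic-regression scorecard on synthetic application data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 2_000),
    "utilization": rng.uniform(0, 1, 2_000),
    "delinquencies": rng.poisson(0.3, 2_000),
})
# Synthetic default flag loosely driven by the features
logit = -2 + 3 * df["utilization"] + 0.8 * df["delinquencies"] - 0.00002 * df["income"]
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

features = ["income", "utilization", "delinquencies"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["default"], test_size=0.3, random_state=42
)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```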
Requirements
- Industry knowledge
- 4 years or more of experience in the financial services industry, particularly the retail credit industry, is a must.
- Candidates should have either worked in the banking sector (banks/HFCs/NBFCs) or in consulting organizations serving these clients.
- Experience in credit risk model building such as application scorecards, behaviour scorecards, and/ or collection scorecards.
- Experience in portfolio monitoring, model monitoring, model calibration
- Knowledge of ECL/ Basel preferred.
- Educational qualification: Advanced degree in finance, mathematics, econometrics, or engineering.
- Technical knowledge: Strong data handling skills in databases such as SQL and Hadoop. Knowledge of data visualization tools such as SAS VI/Tableau/Power BI is preferred.
- Expertise in either R or Python; SAS knowledge will be a plus.
Soft skills:
- Ability to quickly adapt to the analytical tools and development approaches used within DCB Bank
- Ability to multi-task; good communication and team-working skills.
- Ability to manage day-to-day written and verbal communication with relevant stakeholders.
- Ability to think strategically and make changes to data when required.
Client based in Bangalore.
Data Scientist with LLM and Healthcare Expertise
Keywords: Data Scientist, LLM, Radiology, Healthcare, Machine Learning, Deep Learning, AI, Python, TensorFlow, PyTorch, Scikit-learn, Data Analysis, Medical Imaging, Clinical Data, HIPAA, FDA.
Responsibilities:
· Develop and deploy advanced machine learning models, particularly focusing on Large Language Models (LLMs) and their application in the healthcare domain.
· Leverage your expertise in radiology, visual images, and text data to extract meaningful insights and drive data-driven decision-making.
· Collaborate with cross-functional teams to identify and address complex healthcare challenges.
· Conduct research and explore new techniques to enhance the performance and efficiency of our AI models.
· Stay up-to-date with the latest advancements in machine learning and healthcare technology.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
· 6+ years of hands-on experience in data science and machine learning.
· Strong proficiency in Python and popular data science libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
· Deep understanding of LLM architectures, training methodologies, and applications.
· Expertise in working with radiology images, visual data, and text data.
· Experience in the healthcare domain, particularly in areas such as medical imaging, clinical data analysis, or patient outcomes.
· Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
· PhD in Computer Science, Data Science, or a related field.
· Experience with cloud platforms (e.g., AWS, GCP, Azure).
· Knowledge of healthcare standards and regulations (e.g., HIPAA, FDA).
· Publications in relevant academic journals or conferences.
at Accrete
Responsibilities:
- Collaborating with data scientists and machine learning engineers to understand their requirements and design scalable, reliable, and efficient machine learning platform solutions.
- Building and maintaining the applications and infrastructure to support end-to-end machine learning workflows, including inference and continual training (a minimal serving sketch follows this list).
- Developing systems for the definition, deployment, and operation of the different phases of the machine learning and data life cycles.
- Working within Kubernetes to orchestrate and manage containers, ensuring high availability and fault tolerance of applications.
- Documenting the platform's best practices, guidelines, and standard operating procedures and contributing to knowledge sharing within the team.
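A minimal sketch of the kind of inference service such a platform might host, assuming FastAPI, uvicorn, and joblib are installed; the model artifact path and endpoint name are hypothetical.

```python
# Hypothetical sketch: a small model-serving endpoint for a serialized scikit-learn model.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path produced by training


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]  # single-row inference
    return {"prediction": float(prediction)}

# Local run (assumption: this file is named serving.py): uvicorn serving:app --port 8080
```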
Requirements:
- 3+ years of hands-on experience in developing and managing machine learning or data platforms
- Proficiency in programming languages commonly used in machine learning and data applications such as Python, Rust, Bash, Go
- Experience with containerization technologies, such as Docker, and container orchestration platforms like Kubernetes.
- Familiarity with CI/CD pipelines for automated model training and deployment. Basic understanding of DevOps principles and practices.
- Knowledge of data storage solutions and database technologies commonly used in machine learning and data workflows.
Responsibilities
- Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
- Optimize and debug existing codes to make them scalable and improve performance
- Design, development, and delivery of tested code and machine learning models into production environments
- Work effectively in teams, managing and leading teams
- Provide effective, constructive feedback to the delivery leader
- Manage client expectations and work with an agile mindset with machine learning and AI technology
- Design and prototype data-driven solutions
Eligibility
- Highly experienced in designing, building, and shipping scalable, production-quality machine learning algorithms in Python applications
- Working knowledge of and experience in core NLP components (NER, entity disambiguation, etc.); a minimal NER sketch follows this list
- In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
- Expertise in writing scalable APIs for machine learning models
- Experience with maintaining code logs, task schedulers, and security
- Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
- Experience with at least one of the following: Keras, Tensorflow, Caffe, or PyTorch
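A minimal NER sketch, assuming spaCy and its en_core_web_sm model are installed; it only illustrates the NLP component named above, not the team's actual pipeline.

```python
# Hypothetical sketch: named entity recognition with spaCy's small English model.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Apple ORG, U.K. GPE, $1 billion MONEY
```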
Lifespark is looking for individuals with a passion for impacting real lives through technology. Lifespark is one of the most promising startups in the Assistive Tech space in India, and has been honoured with several national and international awards. Our mission is to create seamless, persistent and affordable healthcare solutions. If you are someone who is driven to make a real impact in this world, we are your people.
Lifespark is currently building solutions for Parkinson's Disease, and we are looking for an ML Lead to join our growing team. You will be working directly with the founders on high-impact problems in the Neurology domain. You will be solving some of the most fundamental and exciting challenges in the industry and will have the ability to see your insights turned into real products every day.
Essential experience and requirements:
1. Advanced knowledge in the domains of computer vision, deep learning
2. Solid understanding of statistical/computational concepts like Hypothesis Testing, Statistical Inference, Design of Experiments and production-level ML system design
3. Experienced with proper project workflow
4. Good at collating multiple datasets (potentially from different sources)
5. Good understanding of setting up production level data pipelines
6. Ability to independently develop and deploy ML systems to various platforms (local and cloud)
7. Fundamentally strong with time-series data analysis, cleaning, featurization and visualisation (a minimal featurization sketch follows this list)
8. Fundamental understanding of model and system explainability
9. Proactive at constantly unlearning and relearning
10. Documentation ninja - can understand others documentation as well as create good documentation
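A minimal time-series featurization sketch with pandas, using synthetic data as a stand-in for a wearable sensor stream; the sampling rate and window sizes are assumptions, not Lifespark's actual pipeline.

```python
# Hypothetical sketch: simple cleaning and rolling-window featurization of a sensor signal.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=500, freq="20ms")   # ~50 Hz sampling (assumed)
signal = pd.Series(
    np.sin(np.linspace(0, 30, 500)) + np.random.normal(0, 0.1, 500),
    index=idx,
)

features = pd.DataFrame({
    "rolling_mean": signal.rolling("1s").mean(),   # 1-second rolling statistics
    "rolling_std": signal.rolling("1s").std(),
    "first_diff": signal.diff(),
}).dropna()
print(features.head())
```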
Responsibilities :
1. Develop and deploy ML based systems built upon healthcare data in the Neurological domain
2. Maintain deployed systems and upgrade them through online learning
3. Develop and deploy advanced online data pipelines
Role: Principal Software Engineer
We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role involves managing, processing and analyzing large amounts of raw information in scalable databases. It also involves developing unique data structures and writing algorithms for an entirely new set of products. The candidate is required to have critical thinking and problem-solving skills, must be experienced in software development with advanced algorithms, and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration and agile scrum processes.
Responsibilities:
• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule
• Software Development that creates data driven intelligence in the products which deals with Big Data backends
• Exploratory analysis of the data to be able to come up with efficient data structures and algorithms for given requirements
• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development
• Managing data in large-scale data stores (such as NoSQL DBs, time series DBs, Geospatial DBs etc.)
• Creating metrics and evaluating algorithms for better accuracy and recall
• Ensuring efficient access and usage of data through the means of indexing, clustering etc.
• Collaborate with engineering and product development teams.
Requirements:
• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or related field from top-tier school
• OR a Master's degree or higher in Statistics or Mathematics, with a hands-on background in software development.
• Experience of 8 to 10 years with product development, having done algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS based products and services.
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines.
Skill set required:
• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Having worked with Agile project management practices
• Experience with data processing, analytics and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.); a brief exploratory-analysis sketch follows this list
• Strong understanding of SQL and of querying NoSQL databases (e.g. MongoDB, Cassandra, Redis)
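A brief exploratory-analysis sketch with pandas and matplotlib on synthetic data; the column names and distribution are hypothetical and only illustrate the tooling listed above.

```python
# Hypothetical sketch: group-wise summary statistics and a histogram on synthetic readings.
import matplotlib
matplotlib.use("Agg")               # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "sensor_id": rng.choice(["A", "B", "C"], 1_000),
    "reading": rng.gamma(2.0, 1.5, 1_000),
})
print(df.groupby("sensor_id")["reading"].agg(["mean", "std", "count"]))

df["reading"].hist(bins=40)
plt.xlabel("reading")
plt.ylabel("frequency")
plt.savefig("readings_hist.png")    # write the plot to disk
```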
Team: We are a team of 9 data scientists working on Video Analytics and Data Analytics projects, both for the internal AI requirements of Reliance Industries and for external business. At any time, we make progress on multiple projects (at least 4) in Video Analytics or Data Analytics.
RESPONSIBILITIES:
• You will be involved in directly driving application of machine learning and AI to solve various product and business problems including ML model lifecycle management with ideation, experimentation, implementation, and maintenance.
• Your responsibilities will include enabling the team in moving forward with ML/AI solutions to optimise various components across our music streaming platforms.
• Your work would impact millions of users and the way they consume music and podcasts; it would involve solving cold-start problems, understanding user personas, optimising ranking and improving recommendations to serve relevant content to users (a minimal recommender sketch follows this list).
• We are looking for a seasoned engineer to orchestrate our recommendations and discovery projects and also be involved in tech management within the team.
• The team of talented, passionate people in which you’ll work will include ML engineers and data scientists.
• You'll be reporting directly to the head of engineering and will be instrumental in discussing and explaining progress to top management and other stakeholders.
• You’ll be expected to have regular conversations with product leads and other engineering leads to understand the requirements, and also similar and more frequent conversation with your own team.
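A minimal sketch of the idea behind matrix-factorization recommenders, trained with plain SGD on a synthetic implicit-feedback matrix; the hyperparameters and data are arbitrary assumptions, not the production ranking system.

```python
# Hypothetical sketch: tiny matrix-factorization recommender on synthetic play data.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
interactions = (rng.random((n_users, n_items)) < 0.1).astype(float)  # 1 = user played the track

U = rng.normal(0, 0.1, (n_users, k))     # user factors
V = rng.normal(0, 0.1, (n_items, k))     # item factors
lr, reg = 0.05, 0.01

for _ in range(20):                      # plain SGD over observed interactions
    for u, i in zip(*interactions.nonzero()):
        err = interactions[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

scores = U[0] @ V.T                      # predicted affinity of user 0 for every item
print("top-5 items for user 0:", np.argsort(-scores)[:5])
```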
REQUIREMENTS:
• A machine learning software engineer with a passion for working on exciting, user impacting product and business problems
• Stay updated with the latest research in machine learning, especially recommender systems and audio signals
• Have taken scalable ML services to production, maintained and managed their lifecycle
• Good understanding of foundational mathematics associated with machine learning such as statistics, linear algebra, optimization, probabilistic models
Minimum Qualifications
• 13+ years of industry experience doing applied machine learning
• 5+ years of experience in tech team management
• Fluent in one or more object oriented languages like Python, C++, Java
• Knowledgeable about core CS concepts such as common data structures and algorithms
• Comfortable conducting design and code reviews
• Comfortable in formalising a product or business problem as a ML problem
• Master’s or PhD degree in Computer Science, Mathematics or related field
• Industry experience with large scale recommendation and ranking systems
• Experience in managing a team of 10-15 engineers
• Hands on experience with Spark, Hive, Flask, Tensorflow, XGBoost, Airflow
at TSG Global Services Private Limited
Experience: Minimum 10 years
Location: Mumbai
Salary: Negotiable
Skills: Power BI, Tableau, QlikView
Solution Architect/Technology Lead – Data Analytics
Role
Looking for a Business Intelligence Lead (BI Lead) with hands-on experience in BI tools (Tableau, SAP Business Objects, Financial and Accounting modules, Power BI), SAP integration, and database knowledge including one or more of Azure Synapse/Data Factory, SQL Server, Oracle, and cloud-based DBs such as Snowflake. Good knowledge of AI/ML and Python is also expected.
- You will be expected to work closely with our business users. The development will be performed using an Agile methodology which is based on scrum (time boxing, daily scrum meetings, retrospectives, etc.) and XP (continuous integration, refactoring, unit testing, etc) best practices. Candidates must therefore be able to work collaboratively, demonstrate good ownership, leadership and be able to work well in teams.
- Responsibilities :
- Design, development and support of multiple/hybrid data sources and data visualization frameworks using Power BI, Tableau, SAP Business Objects etc., and using ETL tools, scripting, Python scripting etc.
- Implementing DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Git.
Requirements
- 10+ years working as a hands-on developer in Information Technology across Database, ETL and BI (SAP Business Objects, integration with SAP Financial and Accounting modules, Tableau, Power BI) & prior team management experience
- Tableau/PowerBI integration with SAP and knowledge of SAP modules related to finance is a must
- 3+ years of hands-on development experience in Data Warehousing and Data Processing
- 3+ years of Database development experience with a solid understanding of core database concepts and relational database design, SQL, Performance tuning
- 3+ years of hands-on development experience with Tableau
- 3+ years of Power BI experience, including parameterized reports and publishing them to the Power BI Service
- Excellent understanding and practical experience delivering under an Agile methodology
- Ability to work with business users to provide technical support
- Ability to get involved in all the stages of the project lifecycle, including analysis, design, development and testing
Good-to-have skills:
- Experience with other Visualization tools and reporting tools like SAP Business Objects.
- You're proficient in the latest AI/Machine Learning technologies
- You're proficient in GPT-3 based algorithms
- You have a passion for writing code as well as understanding and crafting the ways systems interact
- You believe in the benefits of agile processes and shipping code often
- You are pragmatic and work to coalesce requirements into reasonable solutions that provide value
Responsibilities
- Deploy well-tested, maintainable and scalable software solutions
- Take end-to-end ownership of the technology stack and product
- Collaborate with other engineers to architect scalable technical solutions
- Embrace and improve our standards and processes to reduce friction and unlock efficiency
Current Ecosystem :
ShibaSwap: https://shibaswap.com/#/
Metaverse: https://shib.io/#/
NFTs: https://opensea.io/collection/theshiboshis
Game : Shiba Eternity on iOS and Android
Roles and Responsibilities
- Managing available resources such as hardware, data, and personnel so that deadlines are met.
- Analyzing the ML and Deep Learning algorithms that could be used to solve a given problem and ranking them by their success probabilities
- Exploring data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Defining validation framework and establish a process to ensure acceptable data quality criteria are met
- Supervising the data acquisition and partnership roadmaps to create stronger product for our customers.
- Defining feature engineering process to ensure usage of meaningful features given the business constraints which may vary by market
- Devise self-learning strategies through analysis of errors from the models
- Understand business issues and context, devise a framework for solving unstructured problems and articulate clear and actionable solutions underpinned by analytics.
- Manage multiple projects simultaneously while demonstrating business leadership to collaborate & coordinate with different functions to deliver the solutions in a timely, efficient and effective manner.
- Manage project resources optimally to deliver projects on time; drive innovation using residual resources to create a strong solution pipeline; provide direction, coaching, training and feedback to project team members to enhance performance, support development and encourage value-aligned behaviour; provide inputs for periodic performance appraisals of project team members.
Preferred Technical & Professional expertise
- Undergraduate Degree in Computer Science / Engineering / Mathematics / Statistics / economics or other quantitative fields
- At least 2 years of experience managing Data Science projects with specialization in Machine Learning
- In-depth knowledge of cloud analytics tools.
- Able to drive Python code optimization; ability to review code and provide inputs to improve code quality
- Ability to evaluate hardware selection for running ML models for optimal performance
- Up to date with Python libraries and versions for machine learning; Extensive hands-on experience with Regressors; Experience working with data pipelines.
- Deep knowledge of math, probability, statistics and algorithms; Working knowledge of Supervised Learning, Adversarial Learning and Unsupervised learning
- Deep analytical thinking with excellent problem-solving abilities
- Strong verbal and written communication skills with a proven ability to work with all levels of management; effective interpersonal and influencing skills.
- Ability to manage a project team through effectively allocation of tasks, anticipating risks and setting realistic timelines for managing the expectations of key stakeholders
- Strong organizational skills and an ability to balance and handle multiple concurrent tasks and/or issues simultaneously.
- Ensure that the project team understand and abide by compliance framework for policies, data, systems etc. as per group, region and local standards
- Provide insights based on data to business teams
- Develop framework, solutions and recommendations for business problems
- Build ML models for predictive solutions
- Use advance data science techniques to build business solutions
- Automation/optimization of new/existing models, ensuring smooth, timely and accurate execution with the lowest possible TAT.
- Design & maintenance of response tracking, measurement, and comparison of success parameters of various projects.
- Ability to handle large volumes of data with ease using multiple software packages like Python, R, etc.
- Experience in modeling techniques and hands-on experience in building logistic regression models, Random Forest, K-means clustering, NLP, decision trees, boosting techniques, etc.
- Good at data interpretation and reasoning skills
This profile will include the following responsibilities:
- Develop parsers for XML and JSON data sources/feeds (a minimal parsing sketch appears at the end of this listing)
- Write Automation Scripts for product development
- Build API Integrations for 3rd Party product integration
- Perform Data Analysis
- Research on Machine learning algorithms
- Understand AWS cloud architecture and work with 3rd-party vendors for deployments
- Resolve issues in the AWS environment
We are looking for candidates with:
Qualification: BE/BTech/Bsc-IT/MCA
Programming Language: Python
Web Development: Basic understanding of Web Development. Working knowledge of Python Flask is desirable
Database & Platform: AWS/Docker/MySQL/MongoDB
Basic Understanding of Machine Learning Models & AWS Fundamentals is recommended.
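A minimal parsing sketch using only the Python standard library; the JSON and XML feed contents are hypothetical and only illustrate the parser-development work in the responsibilities above.

```python
# Hypothetical sketch: parse a small JSON feed and an XML feed with the standard library.
import json
import xml.etree.ElementTree as ET

json_feed = '{"devices": [{"id": "d1", "status": "up"}, {"id": "d2", "status": "down"}]}'
for device in json.loads(json_feed)["devices"]:
    print(device["id"], device["status"])

xml_feed = "<devices><device id='d1' status='up'/><device id='d2' status='down'/></devices>"
for node in ET.fromstring(xml_feed):
    print(node.get("id"), node.get("status"))
```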
Position: Manager/Sr. Manager - Presales
Location: Mumbai / Delhi
About NxtGen
NxtGen is an emerging leader in data center and cloud-based services that help power businesses to grow by cutting through complexity and saving on cost.
Our advanced solutions can be provided both from our own High Density Data Center (HDDC™) facilities or deployed at On-Premise Data Centers (OPDC™) that are managed centrally.
Our Enterprise Cloud Services™ (ECS) provide private, public or hybrid cloud infrastructure hosted on OPDC™ or HDDC™. NxtGen's advanced infrastructure enables companies to simplify IT infrastructure, reduce running costs, and enable business growth by creating additional capacity from existing infrastructure. By providing the right public or private infrastructure, NxtGen helps companies meet dynamic business demands.
With the support and trust of Intel Capital, Axon Partners, and International Finance Corporation (IFC), NxtGen continues to push the boundaries of the IT infrastructure domain with its unique and cost-effective solution.
To know more about us log on to www.nxtgen.com , https://www.youtube.com/watch?v=s6GNShHFRRg
Job Overview
- Responsible for technical and business requirements, discovery, proposal preparation support and technical presentations to customers for NxtGen's full suite of products and solutions.
- Serves as technical consultant for: Cloud, Hosting, Virtualization, and Managed Services.
- The presales manager is also responsible for providing technical training for the sales team.
Job Responsibilities
- Performs customer discovery discussions to understand and document business needs and design requirements necessary for the formulation of optimal solutions
- Creatively designs solutions for customers using the best mix of NxtGen's products. Alters the design as needed to result in the customer choosing the NxtGen solution
- Determine client requirements and provide designs for Hosting Services, Cloud, Virtualization (VMware, KVM, Hyper-V), Outsourced infrastructure solutions (managed and un-managed), and specialized Enterprise application suites
- Conceptual and working knowledge of AWS/Azure/Google Cloud.
- Provide technical expertise during requirements gathering and analysis, document technical designs and support manuals, and drive test and deployment processes for Hadoop Cluster
- Conceptual knowledge of various distributions of Hadoop.
- Working knowledge of various backup technologies like Symantec, Commvault.
- Creatively design DR solutions for customers, understanding their current infrastructure, by using third-party tools or in-built replication technologies.
- Possess knowledge of Docker, Containers & DevOps.
- Understand power, cooling, and other environmental constraints on a client's outsourced infrastructure
- Develop and grow technical knowledge base in Hosting Services, Cloud, Virtualization, Outsourced infrastructure solutions (managed and unmanaged), specialized Enterprise application suites
- Documents, via diagrams and writing, and presents the solution to the customer, describing the benefits of the solution
- Builds relationships with customers, serving as the technical liaison from sales to post-sales
- Responsible for growing strategic product sales and revenue through proactive engagement with customers
- Participates in strategic and tactical account planning
- Follows industry technology trends through self-study and formal training and shares that knowledge with customers
- Clearly communicates the customer design to the teams responsible for ordering, implementation, and ongoing support
- Provides technical training and development support to the local branch
- Leads internal cross-functional teams to obtain required approvals of non-standard designs for customers
- Provide sales engineering support remotely across multiple time zones
- Travel for customer and team meetings
NxtGen is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, national origin, disability, age, military status or veteran status, genetic information, or any other status protected by applicable law.
Who Are We
A research-oriented company with expertise in computer vision and artificial intelligence at its core, Orbo is a comprehensive platform offering an AI-based visual enhancement stack. This way, companies can find a suitable product as per their need, where deep learning-powered technology can automatically improve their imagery.
ORBO's solutions are helping with digital transformation in the BFSI and beauty and personal care industries, and with image retouching in e-commerce, in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Set up an end-to-end Deep Learning pipeline for data ingestion, preparation, model training, validation and deployment
- Tune the models to achieve high accuracy rates and minimum latency
- Deploying developed computer vision models on edge devices after optimization to meet customer requirements
Requirements:
- Bachelor’s degree
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- 4+ years of industrial experience in computer vision and/or deep learning
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNXruntime and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with any machine/deep learning frameworks like Tensorflow, and PyTorch.
- Proficient understanding of code versioning tools, such as Git
Our perfect candidate is someone that:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
Cogoport Story
Do you prefer to get speeding tickets or parking tickets?
Because at Cogoport we are speeding ahead to do something remarkable for the world. We are trying to solve the Trade Knowledge and Execution Gap, which is currently widening and preventing trade to the tune of $3.4 trillion annually. This Gap has enormous economic as well as human impact and disproportionately hits small and medium businesses globally.
The team at Cogoport is working on developing a new category, the Global Trade Platform, that helps companies discover and connect with appropriate trade partners, optimize shipping and logistics options and costs, and improve cash management and cash flow.
Cogoport is currently in hypergrowth mode. We are proud to have been named an Asia-Pacific High-Growth Company by the Financial Times and an Indian Growth Champion by the Economic Times. We are aiming to reach an annualized revenue of $1 billion (7700 Crores INR) by this summer and are hiring over 500 additional employees. We are currently hiring in Mumbai, Gurgaon, Chennai and Bangalore.
Cogoport Culture: We have two core values at Cogoport—Intrapreneurship and Customer-centricity. If you share these values and are a hard worker who is willing to take risks (and occasionally get a speeding ticket), you can make a huge impact and propel your career in an endless number of directions with Cogoport.
Cogoport Leadership
https://www.linkedin.com/in/purnendushekhar/
https://www.linkedin.com/in/amitabhshankar/
Life at Cogoport: It’s rare to be able to join a company that can give you the resources, support and technology you need to break new ground and see your ideas come to life. You’ll be surrounded by some of the smartest engineers and commercial specialists in India and the Asia Pacific Region.
With huge growth and the right entrepreneurial mindset, comes huge opportunities! So, wherever you join us, you’ll be able to dream, deliver better and brighter solutions, and speed ahead with the possibility to propel your career forward in endless directions as our company continues to grow and expand.
For more insights about the company: https://cogoport.com/about
Why Cogoport?
International Trade can be complicated at times and every day brings new challenges and opportunities to learn. When we simplify international trade, it empowers and affects every human being on the face of this earth. Seven billion people - one common problem.
As a part of the Talent team at Cogoport, you will get an opportunity to be a part of an industry-wide revolution in the world of shipping and logistics by collaborating with other brilliant minds to resolve real world on-ground challenges. You will have a direct impact on the revenue and profitability growth for the organization.
Areas of Impact for you
- Hands-on management with deep-dive into the details of software design, implementation and debugging.
- Guide your teams in developing roadmaps and systems to drive product growth, then identify, plan, and execute projects to support that growth.
- Manage multiple projects across a wide breadth of technologies, coordinate dependencies, and interactions with the internal teams and external partners.
- Collaborate with stakeholders from across functions to keep the development team in sync with all functions and overall business objectives.
- Develop large multi-tenant applications in Rails.
- Understand Rails best practices and religiously introduce those to codebase.
- Set up, create and manage strong best practices/architecture to ensure reliable, secure, bug-free, and performant software is released on-time.
Desirable Skills and Experience
- Loves coding.
- 4-6 years of experience managing technology teams.
- Demonstrated ability to build complex scalable technology products.
- Should have prior experience of working with ROR, React, PostgreSQL and cloud infra.
- Understanding of scaling strategies for high-traffic Rails applications.
- Understanding of OAuth2 or JWT (JSON Web Token) authentication mechanisms.
- Experience in using ActiveRecordSerialize, RSpec and ActiveInteraction.
- Knowledge about Asynchronous Networking in Ruby; Refactoring ActiveRecord Models; Background Job processing using Redis and Sidekiq; Writing automated Deployment Scripts using Capistrano, Ansible etc.
- Expertise in Data Science and Machine Learning is a plus.
- Expertise in Jenkins, Kubernetes, Docker and cloud technology is a plus.
Cogoport is an equal opportunity employer. We are a welcoming place for everyone, and we do our best to make sure all people feel supported and respected at work.
Cogoport Story
Do you prefer to get speeding tickets or parking tickets?
Because at Cogoport we are speeding ahead to do something remarkable for the world. We are trying to solve the Trade Knowledge and Execution Gap, which is currently widening and preventing trade to the tune of $3.4 trillion annually. This Gap has enormous economic as well as human impact and disproportionately hits small and medium businesses globally.
The team at Cogoport is working on developing a new category, the Global Trade Platform, that helps companies discover and connect with appropriate trade partners, optimize shipping and logistics options and costs, and improve cash management and cash flow.
Cogoport is currently in hypergrowth mode. We are proud to have been named an Asia-Pacific High-Growth Company by the Financial Times and an Indian Growth Champion by the Economic Times. We are aiming to reach an annualized revenue of $1 billion (7700 Crores INR) by this summer and are hiring over 500 additional employees. We are currently hiring in Mumbai, Gurgaon, Chennai and Bangalore.
Cogoport Culture: We have two core values at Cogoport—Intrapreneurship and Customer-centricity. If you share these values and are a hard worker who is willing to take risks (and occasionally get a speeding ticket), you can make a huge impact and propel your career in an endless number of directions with Cogoport.
Cogoport Leadership
https://www.linkedin.com/in/purnendushekhar/
https://www.linkedin.com/in/amitabhshankar/
Life at Cogoport: It’s rare to be able to join a company that can give you the resources, support and technology you need to break new ground and see your ideas come to life. You’ll be surrounded by some of the smartest engineers and commercial specialists in India and the Asia Pacific Region.
With huge growth and the right entrepreneurial mindset, comes huge opportunities! So, wherever you join us, you’ll be able to dream, deliver better and brighter solutions, and speed ahead with the possibility to propel your career forward in endless directions as our company continues to grow and expand.
For more insights about the company: https://cogoport.com/about
Why Cogoport?
International Trade can be complicated at times and every day brings new challenges and opportunities to learn. When we simplify international trade, it empowers and affects every human being on the face of this earth. Seven billion people - one common problem.
As a part of the Talent team at Cogoport, you will get an opportunity to be a part of an industry-wide revolution in the world of shipping and logistics by collaborating with other brilliant minds to resolve real world on-ground challenges. You will have a direct impact on the revenue and profitability growth for the organization.
Areas of Impact for you
- Hands-on management with deep-dive into the details of software design, implementation and debugging.
- Guide your teams in developing roadmaps and systems to drive product growth, then identify, plan, and execute projects to support that growth.
- Manage multiple projects across a wide breadth of technologies, coordinate dependencies, and interactions with the internal teams and external partners.
- Collaborate with stakeholders from across functions to keep the development team in sync with all functions and overall business objectives.
- Develop large multi-tenant applications in Rails.
- Understand Rails best practices and religiously introduce those to the codebase.
- Set up, create and manage strong best practices/architecture to ensure reliable, secure, bug-free, and performant software is released on-time.
Desirable Skills and Experience
- Loves coding.
- 2-4 years of experience building scalable & complex products from scratch.
- Demonstrated ability to build complex scalable technology products.
- Should have prior experience of working with ROR, React, PostgreSQL and cloud infra.
- Understanding of scaling strategies for high-traffic Rails applications.
- Understanding of OAuth2 or JWT (JSON Web Token) authentication mechanisms.
- Experience in using ActiveRecordSerialize, RSpec and ActiveInteraction.
- Knowledge about Asynchronous Networking in Ruby; Refactoring ActiveRecord Models; Background Job processing using Redis and Sidekiq; Writing automated Deployment Scripts using Capistrano, Ansible etc.
- Expertise in Data Science and Machine Learning is a plus.
- Expertise in Jenkins, Kubernetes, Docker and cloud technology is a plus.
Cogoport is an equal opportunity employer. We are a welcoming place for everyone, and we do our best to make sure all people feel supported and respected at work.
About Quidich
Quidich Innovation Labs (http://www.quidich.com) pioneers products and customized technology solutions for the Sports Broadcast & Film industry. With a mission to bring machines and machine learning to sports, we use camera technology to develop services using remote controlled systems like drones and buggies that add value to any broadcast or production. Quidich provides services to some of the biggest sports & broadcast clients in India and across the globe. A few recent projects include Indian Premier League, ICC World Cup for Men and Women, Kaun Banega Crorepati, Bigg Boss, Gully Boy & Sanju.
What’s Unique About Quidich?
- Your work will be consumed by millions of people within months of your joining and will impact consumption patterns of how live sport is viewed across the globe.
- You work with passionate, talented, and diverse people who inspire and support you to achieve your goals.
- You work in a culture of trust, care, and compassion.
- You have the autonomy to shape your role, and drive your own learning and growth.
Opportunity
- You will be a part of world class sporting events
- Your contribution to the software will help shape the final output seen on television
- You will have an opportunity to work in live broadcast scenarios
- You will work in a close knit team that is driven by innovation
Role
We are looking for a tech enthusiast who can work with us to help further the development of our Augmented Reality product, Spatio (https://www.quidich.com/services/spatio), to keep us ahead of the technology curve. We are one of the few companies in the world currently offering this product for live broadcast. We have a tight product roadmap that needs enthusiastic people to solve problems in the realm of software development and computer vision systems. Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for candidates that are passionate about the product and embody our values.
Responsibilities
- Working with the research team to develop, evaluate and optimize various state of the art algorithms.
- Deploying high performance, readable, and reliable code on edge devices or any other target environments.
- Continuously exploring new frameworks and identifying ways to incorporate those in the product.
- Collaborating with the core team to bring ideas to life and keep pace with the latest research in Computer Vision, Deep Learning etc.
Minimum Qualifications, Skills and Competencies
- B.E/B.Tech or Masters in Computer Science, Mathematics or relevant experience
- 3+ years of experience with computer vision algorithms such as SfM/SLAM, optical flow, and visual-inertial odometry (a minimal optical-flow sketch follows this list)
- Experience in sensor fusion (camera, IMU, LiDAR) and in probabilistic filters - EKF, UKF
- Proficiency in programming - C++ and algorithms
- Strong mathematical understanding - linear algebra, 3d-geometry, probability.
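A minimal dense optical-flow sketch with OpenCV's Farneback method, using two synthetic frames; this is only an illustration of one of the algorithms named above, not Quidich's actual tracking stack.

```python
# Hypothetical sketch: dense optical flow between two synthetic frames (Farneback method).
import cv2
import numpy as np

prev = np.zeros((240, 320), dtype=np.uint8)
curr = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(prev, (100, 120), 20, 255, -1)
cv2.circle(curr, (108, 120), 20, 255, -1)        # the blob moved ~8 px to the right

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
print("mean horizontal flow inside the blob:", flow[..., 0][prev > 0].mean())
```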
Preferred Qualifications, Skills and Competencies
- Proven experience in optical flow, multi-camera geometry, 3D reconstruction
- Strong background in Machine Learning and Deep Learning frameworks.
Reporting To: Product Lead
Joining Date: Immediate (Mumbai)
- Writing efficient, reusable, testable, and scalable code
- Understanding, analyzing, and implementing – Business needs, feature modification requests, conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of application
- Working with Python libraries like Pandas, NumPy, etc.
- Creating predictive models for AI and ML-based features
- Keeping abreast with the latest technology and trends
- Fine-tune and develop AI/ML-based algorithms based on results
Technical Skills-
Good proficiency in,
- Python frameworks like Django, etc.
- Web frameworks and RESTful APIs
- Core Python fundamentals and programming
- Code packaging, release, and deployment
- Database knowledge
- Loops, conditional and control statements
- Object-relational mapping
- Code versioning tools like Git, Bitbucket
Fundamental understanding of,
- Front-end technologies like JS, CSS3 and HTML5
- AI, ML, Deep Learning, Version Control, Neural networking
- Data visualization, statistics, data analytics
- Design principles that are executable for a scalable app
- Creating predictive models
- Libraries like Tensorflow, Scikit-learn, etc
- Multi-process architecture
- Basic knowledge about Object Relational Mapper libraries
- Ability to integrate databases and various data sources into a unified system
JOB DESCRIPTION
- 2 to 6 years of experience in imparting technical training/ mentoring
- Must have very strong concepts of Data Analytics
- Must have hands-on and training experience on Python, Advanced Python, R programming, SAS and machine learning
- Must have good knowledge of SQL and Advanced SQL
- Should have basic knowledge of Statistics
- Should be good in operating systems (GNU/Linux) and network fundamentals
- Must have knowledge on MS office (Excel/ Word/ PowerPoint)
- Self-Motivated and passionate about technology
- Excellent analytical and logical skills and team player
- Must have exceptional Communication Skills/ Presentation Skills
- Good aptitude skills are preferred
- Exceptional communication skills
Responsibilities:
- Ability to quickly learn any new technology and impart the same to other employees
- Ability to resolve all technical queries of students
- Conduct training sessions and drive the placement driven quality in the training
- Must be able to work independently without the supervision of a senior person
- Participate in reviews/ meetings
Qualification:
- UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
- PG: MCA/MS/MSC – Computer Science
- Any Graduate/ Post graduate, provided they are certified in similar courses
ABOUT EDUBRIDGE
EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.
Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan-India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban & economically underprivileged youth in relevant life skills and industry-specific skills and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.
To know more about EduBridge please visit: http://www.edubridgeindia.com/
You can also visit us on Facebook (https://www.facebook.com/Edubridgelearning/) and LinkedIn (https://www.linkedin.com/company/edubridgelearning/) for our latest initiatives and products.
A global business process management company
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning/NLP/Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian/UiPath/Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Work with multiple internal teams and stakeholders, such as the Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 Years in BPO/KPO/ ITES/BFSI/Retail/Travel/Utilities/Service Industry
Good to have
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.NET, C# (C-Sharp), SQL Server, web services
Qualification -
- Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B.Tech, MCA or MBA
With a leading Business Process Management (BPM) company
Azure Chatbot Developer
Location: Bangalore / Mumbai / Pune
Job Description
Build and maintain bots on the Azure platform, including integration with Active Directory and Web API-based integration with external systems. Train and integrate bots as per users' requirements. Work in line with the design guidelines, best practices and standards for bot deliverables. Bring a creative approach to conversation flow design, the human aspects of bot responses, and sentiment.
Qualifications
a) 5 years of experience in software development with clear understanding of the project life cycle
b) Min 2-3 years of hands-on experience in Microsoft Azure Bot Framework, LUIS and other Cognitive Services offered by Azure (see the sketch after this list)
c) Hands on experience with Machine Learning based chat bots
d) Experience with Azure bot services like Text Analytics etc.
e) Strong database skills and hands-on experience with databases like SQL Server/Oracle
f) Strong experience with Azure Active Directory and Adaptive Cards integration in chatbots.
g) Strong experience designing and working with service-oriented architectures (SOA) and Web APIs.
h) Strong experience with Microsoft Azure, ASP.NET/MVC and programming languages such as C#/VB.NET
i) Knowledge of Python and NodeJS is a plus
j) Ability to design and optimize SQL Server 2008 stored procedures.
k) Experience with jQuery, CSS3, HTML5 or similar technologies.
l) Ability to adapt quickly to an existing, complex environment.
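For point (b) above, a minimal echo bot built with the Bot Framework Python SDK (botbuilder-core) might look like the sketch below; the class name and reply text are illustrative assumptions, and a production bot would add LUIS/Cognitive Services calls, Azure Active Directory auth and Adaptive Cards.

```python
# Minimal sketch of a Microsoft Bot Framework bot (Python SDK, botbuilder-core).
# Illustrative echo bot only; assumed class name and reply text.
from botbuilder.core import ActivityHandler, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the user's utterance back; a production bot would route the
        # text to LUIS (or another NLU service) and branch on the intent.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")
```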
About CarWale: CarWale's mission is to bring delight to car buying. We offer a bouquet of reliable tools and services to help car consumers decide on buying the right car, at the right price and from the right partner. CarWale has always strived to serve car buyers and owners in the most comprehensive and convenient way possible. We provide a platform where car buyers and owners can research, buy, sell and come together to discuss and talk about their cars. We aim to empower Indian consumers to make informed car buying and ownership decisions with exhaustive and unbiased information on cars through our expert reviews, owner reviews, detailed specifications and comparisons. We understand that a car is by and large the second-most expensive asset a consumer associates his lifestyle with! Together with CarTrade & BikeWale, we are the market leaders in the personal mobility media space.
About the Team: We are a bunch of enthusiastic analysts assisting all business functions with their data needs. We deal with huge but diverse datasets to find relationships, patterns and meaningful insights. Our goal is to help drive growth across the organization by creating a data-driven culture.
We are looking for an experienced Data Scientist who likes to explore opportunities and know their way around data to build world class solutions making a real impact on the business.
Skills / Requirements –
- 3-5 years of experience working on Data Science projects
- Experience doing statistical modelling of big data sets
- Expert in Python and R, with deep knowledge of ML packages
- Expert at fetching data with SQL
- Ability to present and explain data to management
- Knowledge of AWS would be beneficial
- Demonstrated structural and analytical thinking
- Ability to structure and execute data science projects end to end
Education –
Bachelor's degree in a quantitative field (Maths, Statistics, Computer Science); a Master's degree will be preferred.
Positions : 2-3
CTC Offering : 40,000 to 55,000/month
Job Location: Remote for 6-12 months due to the pandemic, then Mumbai, Maharashtra
Required experience:
Minimum 1.5 to 2 years of experience in web & backend development using Python and Django, with experience in some form of Machine Learning (ML) algorithms
Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical computation using libraries such as NumPy and Pandas is a must, as is experience with some form of Machine Learning. We require candidates who have working experience with the Django framework and DRF.
Key Skills required (Items in Bold are mandatory keywords) :
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Strong experience in building web applications using Django
4. Experience building REST APIs using DRF or Flask (see the sketch after this list)
5. Experience with some form of Machine Learning (ML)
6. Experience in using libraries such as NumPy and Pandas
7. Some form of experience with NLP and Deep Learning using any of PyTorch, TensorFlow, Keras, scikit-learn or similar
8. Hands on experience with RDBMS such as Postgres or MySQL
9. Comfort with Git repositories, branching and deployment using Git
10. Working experience with Docker
11. Basic working knowledge of ReactJs
12. Experience in deploying Django applications to AWS, DigitalOcean or Heroku
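For skills 3-4 above, a minimal Django REST Framework endpoint might look like the following sketch; the model and field names are hypothetical and only illustrate the pattern, assuming an existing Django project and app.

```python
# Minimal DRF sketch (assumed model/field names, inside an existing Django app).
from django.db import models
from rest_framework import serializers, viewsets


class Prediction(models.Model):
    label = models.CharField(max_length=64)
    score = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)


class PredictionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Prediction
        fields = ["id", "label", "score", "created_at"]


class PredictionViewSet(viewsets.ModelViewSet):
    # Exposes list/create/retrieve/update/delete for Prediction records.
    queryset = Prediction.objects.all()
    serializer_class = PredictionSerializer
```

Registered with a DRF DefaultRouter in urls.py, a viewset like this would yield the standard CRUD endpoints.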
KRAs includes :
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data.
8. Creating backend modules in Python by building or reusing existing modules so as to provide optimal deliveries on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery (see the sketch after this list)
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
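For KRA 11, a minimal Celery setup for batch task processing could look like the sketch below; the Redis broker URL and the task body are assumptions for illustration.

```python
# Minimal sketch of a Celery batch task (assumed Redis broker URL).
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")


@app.task
def analyse_batch(record_ids):
    # Placeholder batch job: in practice this would load the records,
    # run the mathematical analysis and persist the results.
    return {"processed": len(record_ids)}
```

A worker would then be started with `celery -A tasks worker`, and jobs enqueued with `analyse_batch.delay([1, 2, 3])`.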
Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
Leading Multinational Co
ML Engineer-Analyst/ Senior Analyst
Job purpose:
To design and develop machine learning and deep learning systems, run machine learning tests and experiments, and implement appropriate ML algorithms. Works cross-functionally with Data Scientists, software application developers and business groups on the development of innovative ML models. Uses Agile experience to work collaboratively with other Managers/Owners in geographically distributed teams.
Accountabilities:
- Work with Data Scientists and Business Analysts to frame problems in a business context. Assist all the processes from data collection, cleaning, and preprocessing, to training models and deploying them to production.
- Understand business objectives and develop models that help to achieve them, along with metrics to track their progress.
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world.
- Define validation strategies, preprocess or feature engineering to be done on a given dataset and data augmentation pipelines.
- Analyze the errors of the model and design strategies to overcome them.
- Collaborate with data engineers to build data and model pipelines, manage the infrastructure and data pipelines needed to bring code to production and demonstrate end-to-end understanding of applications (including, but not limited to, the machine learning algorithms) being created.
Qualifications & Specifications
- Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. A Master's degree in a relevant specialization will be given first preference
- Experience with machine learning algorithms and libraries
- Understanding of data structures, data modeling and software architecture.
- Deep knowledge of math, probability, statistics and algorithms
- Experience with machine learning platforms such as Microsoft Azure, Google Cloud, IBM Watson, and Amazon
- Big data environment: Hadoop, Spark
- Programming languages: Python, R, PySpark
- Supervised & unsupervised machine learning: linear regression, logistic regression, k-means clustering, ensemble models, random forest, SVM, gradient boosting (see the sketch after this list)
- Sampling data: bagging & boosting, bootstrapping
- Neural networks: ANN, CNN, RNN related topics
- Deep learning: Keras, Tensorflow
- Experience with AWS Sagemaker deployment and agile methodology
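To make the supervised-learning expectations above concrete, here is a hedged scikit-learn sketch that cross-validates a random forest and a gradient boosting model on a toy dataset; the data and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch: comparing two supervised models on a toy dataset.
# Dataset and hyperparameters are assumptions, not a prescribed workflow.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    # 5-fold cross-validation gives a quick, comparable accuracy estimate.
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, "mean CV accuracy:", scores.mean().round(3))
```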
Job Type: Full-time
CTC Offering : 3.6L PA to 6L PA
Job Location: Remote for 6-9 months due to the pandemic, then Mumbai, Maharashtra
Required experience:
- Minimum 1.5 to 2 years of experience in web & backend development using Python, with experience in some form of Machine Learning (ML) algorithms
Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical computation using libraries such as NumPy and Pandas is a must, as is experience with some form of Machine Learning. We require candidates who have working experience with the Django framework.
Key Skills required (Items in Bold are mandatory keywords) :
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Experience with some form of Machine Learning (ML)
4. Experience in using libraries such as NumPy and Pandas
5. Some form of experience with NLP and Deep Learning using any of PyTorch, TensorFlow, Keras, scikit-learn or similar
6. Hands on experience with RDBMS such as Postgres or MySQL
7. Experience building REST APIs using DRF or Flask
8. Comfort with Git repositories, branching and deployment using Git
9. Working experience with Docker
10. Basic working knowledge of ReactJs
11. Experience in deploying Django applications to AWS, DigitalOcean or Heroku
KRAs includes:
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data.
8. Creating backend modules in Python by building or reusing existing modules so as to provide optimal deliveries on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
at nymbleUP
Responsibilities
- Create data funnels to feed into models via web, structured and unstructured data
- Maintain coding standards using SDLC, Git, AWS deployments, etc.
- Keep abreast of developments in the field
- Deploy models in production and monitor them
- Documentation of processes and logic
- Take ownership of the solution from code to deployment and performance
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.
What You’ll Do will include (But not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipate and build solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale.
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1; see the sketch after this list)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
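As an illustration of the ML-performance metrics mentioned above, precision, recall and F1 can be computed with scikit-learn as in the sketch below; the label arrays are dummy values standing in for real evaluation data.

```python
# Illustrative sketch: computing ML performance metrics for a deployed model.
# y_true / y_pred are dummy values standing in for real evaluation data.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```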
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred (see the sketch after this list).
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
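For the time-series anomaly detection experience mentioned above, one common pattern is to flag large residuals from an ARIMA fit; the sketch below uses statsmodels on synthetic data, and the series, ARIMA order and 3-sigma threshold are assumptions for illustration.

```python
# Illustrative sketch: flagging anomalies as large residuals from an ARIMA fit.
# The series, ARIMA order and 3-sigma threshold are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.1, 200)
series[120] += 2.0  # inject one obvious anomaly

fit = ARIMA(series, order=(2, 0, 1)).fit()
resid = fit.resid
anomalies = np.where(np.abs(resid) > 3 * resid.std())[0]
print("anomalous indices:", anomalies)
```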
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala or Go-Lang programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com
#LI-Remote
We are looking for an engineer with ML/DL background.
Ideal candidate should have the following skillset
1) Python
2) Tensorflow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras is all useful
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) Ideal candidate should have developed their own DL architectures apart from using open-source architectures.
8) Ideal candidate would have extensive experience with computer vision applications
Candidates would be responsible for building Deep Learning models to solve specific problems. The workflow would look as follows (an illustrative sketch follows the list):
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
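As a hedged sketch of step 4, a transfer-learning setup in Keras might reuse a pretrained backbone and train only a new head; the base model, input size and class count below are assumptions, not a prescribed architecture.

```python
# Illustrative transfer-learning sketch with Keras: reuse a pretrained
# MobileNetV2 backbone and train only a new classification head.
# Class count, input size and optimizer settings are assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
```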
Candidate should have experience working on Deep Learning with an engineering degree from a top tier institute (preferably IIT/BITS or equivalent)
at Magic9 Media and Consumer Knowledge Pvt. Ltd.
Job Description
This requirement is to service our client which is a leading big data technology company that measures what viewers consume across platforms to enable marketers make better advertising decisions. We are seeking a Senior Data Operations Analyst to mine large-scale datasets for our client. Their work will have a direct impact on driving business strategies for prominent industry leaders. Self-motivation and strong communication skills are both must-haves. Ability to work in a fast-paced work environment is desired.
Problems being solved by our client:
Measure consumer usage of devices linked to the internet and home networks including computers, mobile phones, tablets, streaming sticks, smart TVs, thermostats and other appliances. There are more screens and other connected devices in homes than ever before, yet there have been major gaps in understanding how consumers interact with this technology. Our client uses a measurement technology to unravel dynamics of consumers’ interactions with multiple devices.
Duties and responsibilities:
- The successful candidate will contribute to the development of novel audience measurement and demographic inference solutions.
- Develop, implement, and support statistical or machine learning methodologies and processes.
- Build and test new features and concepts, and integrate them into the production process
- Participate in ongoing research and evaluation of new technologies
- Exercise your experience in the development lifecycle through analysis, design, development, testing and deployment of this system
- Collaborate with teams in Software Engineering, Operations, and Product Management to deliver timely and quality data. You will be the knowledge expert, delivering quality data to our clients
Qualifications:
- 3-5 years relevant work experience in areas as outlined below
- Experience in extracting data using SQL from large databases
- Experience in writing complex ETL processes and frameworks for analytics and data management. Must have experience in working on ETL tools.
- Master’s degree or PhD in Statistics, Data Science, Economics, Operations Research, Computer Science, or a similar degree with a focus on statistical methods. A Bachelor’s degree in the same fields with significant, demonstrated professional research experience will also be considered.
- Programming experience in scientific computing language (R, Python, Julia) and the ability to interact with relational data (SQL, Apache Pig, SparkSQL). General purpose programming (Python, Scala, Java) and familiarity with Hadoop is a plus.
- Excellent verbal and written communication skills.
- Experience with TV or digital audience measurement or market research data is a plus.
- Familiarity with systems analysis or systems thinking is a plus.
- Must be comfortable with analyzing complex, high-volume and high-dimension data from varying sources
- Excellent verbal, written and computer communication skills
- Ability to engage with Senior Leaders across all functional departments
- Ability to take on new responsibilities and adapt to changes
- Handling Survey Scripting Process through the use of survey software platform such as Toluna, QuestionPro, Decipher.
- Mining large & complex data sets using SQL, Hadoop, NoSQL or Spark.
- Delivering complex consumer data analysis through the use of software like R, Python, Excel, etc.
- Working on basic statistical analysis such as t-tests & correlation (see the sketch after this list)
- Performing more complex data analysis through machine learning techniques such as:
- Classification
- Regression
- Clustering
- Text analysis
- Neural networks
- Creating interactive dashboards using software like Tableau or any other software you are able to use
- Working on Statistical and mathematical modelling, application of ML and AI algorithms
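As an illustration of the basic statistical analysis above, a t-test and a Pearson correlation can be run with SciPy as in the sketch below; the two samples are synthetic stand-ins for real survey measures.

```python
# Illustrative sketch: independent-samples t-test and Pearson correlation.
# The two groups are synthetic stand-ins for real survey measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.0, scale=1.0, size=100)
group_b = rng.normal(loc=5.4, scale=1.0, size=100)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
r, r_p = stats.pearsonr(group_a, group_b)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"Pearson r = {r:.2f}, p = {r_p:.3f}")
```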
What you need to have:
- Bachelor's or Master's degree in a highly quantitative field (CS, machine learning, mathematics, statistics, economics) or equivalent experience.
- An opportunity for someone eager to prove their data analytics skills with one of the biggest FMCG market players.
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.
- upGrad was awarded the Best Tech for Education by IAMAI for 2018-19
- upGrad was also ranked as one of the LinkedIn Top Startups 2018: The 25 most sought-after startups in India
- upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.
- We were also covered by the Financial Times along with other disruptors in Ed-Tech
- upGrad is the official education partner for Government of India - Startup India program
- Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning
Role Summary
Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education?
If this is you, come join us and become a part of the upGrad technology team. At upGrad the technology team enables all the facets of the business - whether it's bringing efficiency to our marketing and sales initiatives, enhancing our student learning experience, empowering our content, delivery and student success teams, or aiding our students in achieving their desired career outcomes. We play the part of bringing together data & tech to solve these business problems and opportunities at hand.
We are looking for a highly skilled, experienced and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate would be someone who has worked in a Data Science role before, is comfortable working with unknowns, can evaluate the data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. Someone with a strong math, stats and data-science background, comfortable handling data (structured + unstructured), as well as strong engineering know-how to implement and support such data products in a production environment.
Ours is a highly iterative and fast-paced environment, hence being flexible, communicating well and attention-to-detail are very important too. The ideal candidate should be passionate about the customer impact and comfortable working with multiple stakeholders across the company.
Roles & Responsibilities-
- 3+ years of experience in analytics, data science, machine learning or comparable role
- Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or related discipline
- Experience in building and deploying Machine Learning models in Production systems
- Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to the business problem or opportunity at hand
- Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; Databases - SQL and noSQL
- Strong communication skills: ability to both formulate/understand the business problem at hand as well as ability to discuss with non data-science background stakeholders
- Comfortable dealing with ambiguity and competing objectives
Skills Required
- Experience in Text Analytics, Natural Language Processing
- Advanced degree in Data Science/Data Analytics or Math/Statistics
- Comfortable with data-visualization tools and techniques
- Knowledge of AWS and Data Warehousing
- Passion for building data products for Production systems - a strong desire to impact the product through data-science techniques
Nactus is at the forefront of education reinvention, helping the educator and learner community at large through innovative solutions in the digital era. We are looking for an experienced AI specialist to join our revolution using deep learning and artificial intelligence. This is an excellent opportunity to take advantage of emerging trends and technologies to make a real-world difference.
Role and Responsibilities
- Manage and direct research and development (R&D) and processes to meet the needs of our AI strategy.
- Understand company and client challenges and how integrating AI capabilities can help create educational solutions.
- Analyse and explain AI and machine learning (ML) solutions while setting and maintaining high ethical standards.
Skills Required
- Knowledge of algorithms, object-oriented and functional design principles
- Demonstrated artificial intelligence, machine learning, mathematical and statistical modelling knowledge and skills.
- Well-developed programming skills – specifically in SAS or SQL and other packages with statistical and machine learning application, e.g. R, Python
- Experience with machine learning fundamentals, parallel computing and distributed systems fundamentals, or data structure fundamentals
- Experience with C, C++, or Python programming
- Experience with debugging and building AI applications.
- Analyse conclusions for robustness and productivity.
- Develop a human-machine speech interface.
- Verify, evaluate, and demonstrate implemented work.
- Proven experience with ML, deep learning, Tensorflow, Python
We are building a global content marketplace that brings companies and content creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we’ve worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands.
Our mission is to become the world’s largest content creation and distribution platform for all kinds of content creators and brands.
Our Team
We are a 25+ member company and are scaling up rapidly in both team size and ambition.
If we were to define the kind of people and the culture we have, it would be -
a) Individuals with an Extreme Sense of Passion About Work
b) Individuals with Strong Customer and Creator Obsession
c) Individuals with Extraordinary Hustle, Perseverance & Ambition
We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.
Our Investors
We are fortunate to be backed by some of the industry’s most prolific angel investors - Kunal Bahl and Rohit Bansal (Snapdeal founders), YourStory Media. (Shradha Sharma); Dr. Saurabh Srivastava, Co-founder of IAN and NASSCOM; Slideshare co-founder Amit Ranjan; Indifi's Co-founder and CEO Alok Mittal; Sidharth Rao, Chairman of Dentsu Webchutney; Ritesh Malik, Co-founder and CEO of Innov8; Sanjay Tripathy, former CMO, HDFC Life, and CEO of Agilio Labs; Manan Maheshwari, Co-founder of WYSH; and Hemanshu Jain, Co-founder of Diabeto.
Backed by Lightspeed Venture Partners
Job Responsibilities:
● Design, develop, test, deploy, maintain and improve ML models
● Implement novel learning algorithms and recommendation engines
● Apply Data Science concepts to solve routine problems of target users
● Translate business analysis needs into well-defined machine learning problems, and select appropriate models and algorithms
● Create an architecture, and implement, maintain and monitor data source pipelines that can be used across different types of data sources
● Monitor performance of the architecture and conduct optimization
● Produce clean, efficient code based on specifications
● Verify and deploy programs and systems
● Troubleshoot, debug and upgrade existing applications
● Guide junior engineers for productive contribution to the development
ML and NLP Engineer
The ideal candidate must have -
● 4 or more years of experience in ML Engineering
● Proven experience in NLP
● Familiarity with generative language models such as GPT-3
● Ability to write robust code in Python
● Familiarity with ML frameworks and libraries
● Hands on experience with AWS services like Sagemaker and Personalize
● Exposure to state of the art techniques in ML and NLP
● Understanding of data structures, data modeling, and software architecture
● Outstanding analytical and problem-solving skills
● Team player with the ability to work cooperatively with other engineers
● Ability to make quick decisions in high-pressure environments with limited information.
Well Funded & Growing Medical Technological Organization
- We are looking for a Tech Lead to provide sound technical leadership in all aspects of our business. You will communicate with employees, stakeholders and customers to ensure our company's technologies are used appropriately.
- Strategic thinking and strong business acumen are essential in this role. We expect you to be well-versed in current technological trends and familiar with a variety of business concepts. Strong In-depth and hands-on development experience is a must.
Responsibilities :
- Develop technical aspects of the company's strategy to ensure alignment with its business goals
- Discover and implement new technologies that yield competitive advantage
- Help departments use technology profitably
- Supervise system infrastructure to ensure functionality and efficiency
- Build quality assurance and data protection processes
- Monitor KPIs and IT budgets to assess technological performance
- Use stakeholders' feedback to inform necessary improvements and adjustments to technology
- Communicate technology strategy to partners and investors
Requirements :
- Proven experience as a Tech Lead or similar leadership role
- Knowledge of technological trends to build a strategy
- Understanding of budgets and business-planning
- Ability to conduct technological analyses and research
- Excellent communication skills
- Leadership and organizational abilities
- Strategic thinking
- Problem-solving aptitude
Skills - Python, Android, iOS, MERN Stack, AWS, ML
Note - Working days: Mon-Fri, with Saturday a half day; should be comfortable with work from office only.
at Transportation | Warehouse Optimization
Company Profile and Job Description
About us:
AthenasOwl (AO) is our “AI for Media” solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite meant for the media and entertainment industry. Clients use AthenasOwl's context-adapted technology for redesigning content, making better targeting decisions, automating hours of post-production work and monetizing massive content libraries.
For more details visit: www.athenasowl.tv
Role: Senior Machine Learning Engineer
Experience Level: 4-6 years of experience
Work location: Mumbai (Malad W)
Responsibilities:
- Develop cutting edge machine learning solutions at scale to solve computer vision problems in the domain of media, entertainment and sports
- Collaborate with media houses and broadcasters across the globe to solve niche problems in the field of post-production, archiving and viewership
- Manage a team of highly motivated engineers to deliver high-impact solutions quickly and at scale
The ideal candidate should have:
- Strong programming skills in any one or more programming languages like Python and C/C++
- Sound fundamentals of data structures, algorithms and object-oriented programming
- Hands-on experience with any one popular deep learning framework like TensorFlow, PyTorch, etc.
- Experience in implementing Deep Learning Solutions (Computer Vision, NLP etc.)
- Ability to quickly learn and communicate the latest findings in AI research
- Creative thinking for leveraging machine learning to build end-to-end intelligent software systems
- A pleasantly forceful personality and charismatic communication style
- Someone who will raise the average effectiveness of the team and has demonstrated exceptional abilities in some area of their life. In short, we are looking for a “Difference Maker”
2. Should understand the importance and know-how of taking the machine-learning-based solution to the consumer.
3. Hands-on experience with statistical, machine-learning tools and techniques
4. Good exposure to Deep learning libraries like Tensorflow, PyTorch.
5. Experience in implementing Deep Learning techniques, Computer Vision and NLP. The candidate should be able to develop the solution from scratch with Github codes exposed.
6. Should be able to read research papers and pick ideas to quickly reproduce research in the most comfortable Deep Learning library.
7. Should be strong in data structures and algorithms. Should be able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert level coding experience in Python.
9. Technologies: Backend - Python (Programming Language)
10. Should have the ability to think about long-term solutions, modularity, and reusability of the components.
11. Should be able to work in a collaborative way. Should be open to learning from peers as well as constantly bring new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, and able to take it positively. Ready to be held accountable for the responsibilities undertaken.
We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes.
This is a role for highly technical machine learning & data engineers who combine outstanding oral and written communication skills, and the ability to code up prototypes and productionalize using a large range of tools, algorithms, and languages. Most importantly they need to have the ability to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship machine learning models that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company, and help build a foundation of tools and practices used by quantitative staff across the company.
What you will achieve:
- Define the research vision for data science, and oversee planning, staffing, and prioritization to make sure the team is advancing that roadmap
- Invest in your team’s skills, tools, and processes to improve their velocity, including working with engineering counterparts to shape the roadmap for machine learning needs
- Hire, retain, and develop talented and diverse staff through ownership of our data science hiring processes, brand, and functional leadership of data scientists
- Evangelise machine learning and AI internally and externally, including attending conferences and being a thought leader in the space
- Partner with the executive team and other business leaders to deliver cross-functional research work and models
Required Skills:
- Strong background in classical machine learning and machine learning deployments is a must, preferably with 4-8 years of experience
- Knowledge of deep learning & NLP
- Hands-on experience in TensorFlow/PyTorch, Scikit-Learn, Python, Apache Spark & Big Data platforms to manipulate large-scale structured and unstructured datasets
- Experience with GPU computing is a plus
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization. This could be through technical leadership with ownership over a research agenda, or developing a team as a personnel manager in a new area at a larger company
- Expert-level experience with a wide range of quantitative methods that can be applied to business problems
- Evidence you’ve successfully been able to scope, deliver and sell your own research in a way that shifts the agenda of a large organization
- Excellent written and verbal communication skills on quantitative topics for a variety of audiences: product managers, designers, engineers, and business leaders
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
Qualifications
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization
- Expert-level experience with machine learning that can be applied to business problems
- Evidence you’ve successfully been able to scope, deliver and sell your own work in a way that shifts the agenda of a large organization
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
- Degree in a field that has very applicable use of data science / statistics techniques (e.g. statistics, applied math, computer science, OR a science field with direct statistics application)
- 5+ years of industry experience in data science and machine learning, preferably at a software product company
- 3+ years of experience managing data science teams, incl. managing/grooming managers beneath you
- 3+ years of experience partnering with executive staff on data topics
FreightCrate Technologies is a Logistics Technology company that aims to transform international shipping through innovative and cost-effective tech solutions that can mutually benefit exporters, importers and logistics service providers.
Our web-platform, FreightCrate offers online tech-enabled quote and shipment management solutions for export and import businesses to and from more than 50,000 global locations.
We have had more than 50 media coverages including Entrepreneur, LoadStar UK, Maritime Gateway, Cargotalk, BW Disrupt, INC42, Silicon India, Tech Observer and several other newspapers and online magazines.
Our vision is to utilize cutting edge technology, engineering, deep learning and artificial intelligence to create an intelligent freight management system that can automate and optimize international trade operations for global businesses.
FOUNDERS:
Samir Lambay is the Co-Founder and CEO with an MSc in Shipping, Trade and Finance from Cass Business School, London, UK with Distinction.
Samir has worked in the shipping and logistics industry across both family businesses and multinational companies. His most recent job was at DB Schenker Logistics (part of Deutsche Bahn Group - Fortune 200), where he was the youngest Vertical Manager globally and won two awards, including Vertical Manager of the Year and a Global Award.
Ruchi Dogra is a Co-Founder & Director and is in charge of sales and marketing. She has worked for over 15 years across the top 3 logistics/supply chain companies globally. She set up the hotel, wines and beverages vertical at Kuehne & Nagel and DB Schenker. Her latest assignment was at DHL Global Forwarding, where she worked as a Vertical Manager for Engineering and Manufacturing, handling top Indian and multinational accounts including several Fortune 100 companies. She brings with her extensive knowledge in strategy, sales, operations and customer service.
Title:
CTO
Description
The CTO role is to ensure the successful execution of the company’s business services via the web application. This requires envisioning the company’s service offerings as a web-based business, leading implementation of web applications, and planning for risk and growth.
Compensation:
As this is a new business the compensation is negotiable and we will make an offer depending on the experience of the candidate which will include a salary and ESOP package.
Responsibilities:
In partnership with the company’s founders, identify opportunities and risks for delivering the company’s services as a web-based business, including identification of competitive services, opportunities for innovation, and assessment of marketplace obstacles and technical hurdles to the business success. As our Product is already developed we expect the CTO to work with product development to implement new product features, apply new technologies and set up the engineering and development team to maintain, update and improve the web-application. Some specific responsibilities are listed below:
Strategy, planning, and design:
• Take end-to-end ownership of the product, identify technology requirements, define the future product vision, create preliminary design concepts for add-on modules and shape overall technology and product roadmap by collaborating with the founders, business development, and marketing team.
• Ensure user-oriented design is the primary approach to product development across multiple screens, based on user behaviour data and direct customer feedback.
Implementation and deployment:
• Manage Product Release, QA cycles, feature implementation and on time delivery through in-house team and vendors.
• Collaborate with team and customers to define use cases.
• Creation of wireframes/prototypes, site maps and user-flows for web and mobile platforms.
Operational management :
• Support marketing by implementing technical requirements for SEO/product analytics.
• Establish and supervise a quality assurance process, including integration & system testing.
• Rigorously monitor key performance metrics and coordinate with various teams to take corrective actions if needed.
• Establish and forecast ROI of features and succinctly articulate competitive advantage.
• Set up data collection and analysis systems in collaboration with the CEO to track key performance metrics.
Skill Sets:
• PHP/Python
• AWS & Cloud-computing
• Mysql/MongoDB/CouchDB
• Memcache/Redis
• Frontend technologies HTML, CSS, JS, Angular, ReactJS
• Frameworks like CodeIgniter/CakePHP/Symfony, Flask/Django
• DevOps
• ELK
• Tools like Datadog, NewRelic, etc
• Shell-scripting
• Setting & managing dev/beta/prod environments
• Tools like Jenkins, Chef, Puppet, Ansible, etc.
• Experience with MEAN stack will be an advantage
• Knowledge of Solr/Elastic search will be an advantage
Strong fundamentals in computer science/engineering and algorithm design.
Practical knowledge of computer software algorithms in machine/deep learning, NLP, Computer Vision etc.
Personal Requirements:
• Minimum of 7+ years of hands-on experience in web app development, payment gateway implementation, architecture design, product management, databases and UI/UX in consumer-facing applications.
• Experience on projects involving engineering and algorithmic functions, machine learning, deep learning and artificial intelligence is very advantageous.
• Creative self-starter who is comfortable with both taking initiative and working in teams.
• Previous experience working on logistics technology products is advantageous but not mandatory.
• Strong communication skills.
• Willingness to learn and utilize emerging technologies.
• Sincere passion to use disruptive technologies that can be globally significant.
------------------------
Solve problems in the speech and NLP domain using advanced Deep Learning and Machine Learning techniques. A few examples of the problems are -
* Limited resource Speaker Diarization on mono-channel recordings in noisy environment.
* Speech Enhancement to improve accuracy of downstream speech analytics tasks.
* Automated Speech Recognition for accent heavy audio with a noisy background.
* Speech analytic tasks, which include: emotions, empathy, keyword extraction.
* Text analytic tasks, which include: topic modeling, entity and intent extraction, opinion mining, text classification, and sentiment detection on multilingual data (see the sketch below).
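As a minimal sketch of the text-classification side of these tasks, a TF-IDF baseline with scikit-learn could look like the code below; the toy corpus and labels are assumptions, and real data would be larger and multilingual.

```python
# Illustrative baseline for text classification / sentiment detection.
# The toy corpus and labels are assumptions; real data would be multilingual
# transcripts and much larger.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great call, very helpful agent",
         "terrible experience, nothing was resolved",
         "the agent was kind and solved my issue",
         "long wait and rude responses"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["helpful and quick resolution"]))
```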
A typical day at work
-----------------------------
You will work closely with the product team to own a business problem. You will then model the business problem as a Machine Learning problem. Next, you will do a literature review to identify approaches to solve the problem, test these approaches, identify the best approach, add your own insights to improve the performance, and ship it to production!
What should you know?
---------------------------------
* Solid understanding of Classical Machine Learning and Deep Learning concepts and algorithms.
* Experience with literature review either in academia or industry.
* Proficiency in at least one programming language such as Python, C, C++, Java, etc.
* Proficiency in Machine Learning tools such as TensorFlow, Keras, Caffe, Torch/PyTorch or Theano.
* Advanced degree in Computer Science, Electrical Engineering, Machine Learning, Mathematics, Statistics, Physics, or Computational Linguistics
Why DeepAffects?
--------------------------
* You’ll learn insanely fast here.
* ESOPs and competitive compensation.
* Opportunity and encouragement for publishing research at top conferences, with paid trips to attend workshops and conferences where you have published.
* Independent work, flexible timings and sense of ownership of your work.
* Mentorship from distinguished researchers and professors.