Keras Jobs in Pune
Graas is a technology solution provider using predictive AI to turbo-charge growth for eCommerce businesses — "Growth-as-a-Service". Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics — all of which impact a brand's bottom line, driving profitable growth.
Roles & Responsibilities:
- Work on the implementation of real-time and batch data pipelines for disparate data sources.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with the planning, scoping, and creation of technical solutions for new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures and their respective trade-offs.
- Ability to write robust code in Python and SQL for data processing. Experience with libraries such as Pandas is a must; knowledge of a framework such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka and Redis is preferred.
- Experience designing and implementing real-time and batch pipelines. Knowledge of Airflow is preferred.
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
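To illustrate the kind of data-processing-plus-ML work this role describes, here is a minimal, hypothetical sketch using Pandas and scikit-learn (both named in the requirements). The dataset, column names, and task are invented for the example — real pipelines would pull from the AWS sources listed above.

```python
# Hypothetical sketch: tabular data in pandas feeding a scikit-learn
# classifier. Columns and values are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for data extracted from a warehouse via SQL.
df = pd.DataFrame({
    "sessions":  [3, 1, 8, 2, 9, 7, 1, 6, 10, 2],
    "cart_adds": [1, 0, 4, 0, 5, 3, 0, 2, 6, 1],
    "purchased": [0, 0, 1, 0, 1, 1, 0, 1, 1, 0],
})

X = df[["sessions", "cart_adds"]]
y = df["purchased"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Train a simple purchase-propensity model on the toy data.
model = LogisticRegression().fit(X_train, y_train)
print(round(model.score(X_train, y_train), 2))
```

In practice the DataFrame would be materialized by the batch or streaming pipeline (e.g. Kinesis into Redshift/Snowflake) rather than constructed inline.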
Location: Ahmedabad / Pune
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.
We have worked, and continue to work, on AI and algorithm-heavy projects with applications spanning the finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports, and computer gaming domains. All of this is based on the core concepts of data science, computer vision, machine learning (with an emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems, and the Internet of Things.
● Applying machine learning, deep learning, and signal processing to large datasets (audio, sensor, image, video, and text data) to develop models.
● Architecting large scale data analytics/modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework/pipeline.
● Analyzing data collected from various sources.
● Evaluating and validating the analysis with statistical methods, and presenting the results in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adapting them for our ML framework/pipeline.
● Feature engineering to add new features that improve model performance.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing, with at least 3 years of professional experience working on real-world data.
● Strong programming background (e.g. Python, C/C++, R, Java) and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries such as TensorFlow, JAX, Keras, scikit-learn, and PyTorch.
● Excellent mathematical skills and background, e.g. accuracy metrics, significance tests, visualization, and advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment within defined time frames.
● Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/Ph.D., preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.
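The mathematical-skills requirement above mentions significance tests alongside modeling work. As a small, hypothetical illustration of that kind of analysis, here is a two-sample significance test with SciPy; the data are synthetic and the effect size is invented.

```python
# Hypothetical sketch of a two-sample significance test on synthetic
# data — e.g. comparing a metric between two experimental groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=200)
treatment = rng.normal(loc=0.3, scale=1.0, size=200)  # assumed +0.3 shift

# Welch's t-test: does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Presenting such a result "in a lucid form", as the responsibilities above put it, would mean reporting the effect size and p-value together rather than the test statistic alone.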
Data Scientist - Product Development
Employment Type: Full Time, Permanent
Experience: 3-5 Years as a Full Time Data Scientist
We are looking for an exceptional Data Scientist who is passionate about data and motivated to build large-scale machine learning solutions that power our data products. This person will contribute to the analysis of data for insight discovery and to the development of machine learning pipelines that support modeling of terabytes (TB) of daily data for various use cases.
Location: Pune (currently remote due to the pandemic; relocation will be required later)
About the Organization: A funded product development company headquartered in Singapore, with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.
Qualifications:
- 3+ years relevant working experience
- Master's or Bachelor's degree in computer science or engineering
- Working knowledge of Python, Spark / Pyspark, SQL
- Experience working with large-scale data
- Experience in data manipulation, analytics, visualization, model building, model deployment
- Proficiency in various ML algorithms for supervised and unsupervised learning
- Experience working in Agile/Lean model
- Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, TensorFlow, PyTorch, Keras, and the GCP ML stack
- Exposure to MLOps tools such as MLflow, Airflow
- Exposure to modern Big Data tech such as Cassandra/Scylla, Snowflake, Kafka, Ceph, Hadoop
- Exposure to IAAS platforms such as AWS, GCP, Azure
- Experience with Java and Golang is a plus
- Experience with BI toolkit such as Superset, Tableau, Quicksight, etc is a plus
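The qualifications above list data manipulation and analytics ahead of model building. As a minimal, hypothetical sketch of that first step, here is a Pandas aggregation; the table, regions, and figures are invented (at the scale this posting describes, the same shape of query would typically run in Spark/PySpark or SQL instead).

```python
# Hypothetical sketch of routine data manipulation with pandas.
# Table and column names are invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "region":  ["APAC", "APAC", "EU", "EU", "US"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 90.0],
})

# Aggregate revenue per region — a typical step before visualization
# or model building.
summary = (orders.groupby("region", as_index=False)["revenue"]
                 .sum()
                 .sort_values("revenue", ascending=False)
                 .reset_index(drop=True))
print(summary)
```

The equivalent PySpark code would swap `groupby`/`sum` for `groupBy`/`agg` on a Spark DataFrame, which is one reason the posting lists both.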
Note: Looking for someone who can join immediately or within a month, has experience with product development companies, and has dealt with streaming data. Experience working in a product development team is desirable. AWS experience is a must. Strong experience in Python and its related libraries is required.
Advanced degree in computer science, math, statistics, or a related discipline (a Master's degree is required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important): linear algebra; differential equations (ODEs and numerical methods); theory of statistics; numerical analysis (numerical linear algebra and quadrature); intermediate analysis (point-set topology)
Strong written and verbal communication skills
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their practical application.
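Two of the techniques listed above — text mining and random forests — combine naturally in one pipeline. Here is a minimal, hypothetical sketch with scikit-learn; the documents, labels, and hyperparameters are invented for illustration.

```python
# Hypothetical sketch: TF-IDF text features feeding a random forest,
# combining the text mining and tree-ensemble techniques listed above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Invented toy corpus of customer messages with sentiment labels.
docs = [
    "refund my order", "cancel subscription and refund",
    "great product love it", "excellent service very happy",
    "refund request pending", "love the new features",
]
labels = ["complaint", "complaint", "praise", "praise",
          "complaint", "praise"]

clf = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=50, random_state=0),
).fit(docs, labels)

print(clf.predict(["please refund me"])[0])
```

Swapping the final estimator for `LogisticRegression` would give the GLM variant mentioned in the same bullet; the vectorizer stage stays identical.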
- Partner with business stakeholders to translate business objectives into clearly defined analytical projects.
- Identify opportunities for text analytics and NLP to enhance the core product platform, select the best machine learning techniques for the specific business problem and then build the models that solve the problem.
- Own the end-to-end process, from recognizing the problem to implementing the solution.
- Define the variables and their inter-relationships and extract the data from our data repositories, leveraging infrastructure including Cloud computing solutions and relational database environments.
- Build predictive models that are accurate and robust and that help our customers to utilize the core platform to the maximum extent.
Skills and Qualification
- 12 to 15 yrs of experience.
- An advanced degree in predictive analytics, machine learning, or artificial intelligence; or a degree in programming with significant experience in text analytics/NLP. The candidate should have a strong background in machine learning (unsupervised and supervised techniques) and, in particular, an excellent understanding of machine learning techniques and algorithms such as k-NN, Naive Bayes, SVMs, decision forests, logistic regression, MLPs, RNNs, etc.
- Experience with text mining, parsing, and classification using state-of-the-art techniques.
- Experience with information retrieval, Natural Language Processing, Natural Language Understanding, and Neural Language Modeling.
- Ability to evaluate the quality of ML models and to define the right performance metrics for models in accordance with the requirements of the core platform.
- Experience with the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, Gensim, etc.
- Excellent verbal and written communication skills, particularly the ability to present technical results and recommendations to both technical and non-technical audiences.
- Ability to perform high-level work both independently and collaboratively as a project member or leader on multiple projects.