Support vector machine Jobs in Ahmedabad

Explore top Support Vector Machine job opportunities in Ahmedabad from top companies and startups. All jobs are added by verified employees who can be contacted directly.

IT Product-based Org.

Agency job
via OfficeDay Innovation
Ahmedabad
3 - 5 yrs
₹10L - ₹12L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
  • 3+ years of experience applying AI/ML, NLP, deep learning, and data-driven statistical analysis and modelling solutions.
  • Programming skills in Python and knowledge of statistics.
  • Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
  • Experience in the financial domain and familiarity with financial models are highly desirable.
  • Experience in image processing and computer vision.
  • Experience working with building data pipelines.
  • Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
  • Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines (see the sketch after this list), Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence
  • Should have experience working with large data sets in a PostgreSQL database.
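
As a hedged illustration of the support-vector-machine experience this posting asks for, the sketch below trains a scikit-learn SVM classifier on a bundled dataset; the dataset, hyperparameters, and train/test split are illustrative choices, not part of the job description.

    # Minimal SVM classification sketch (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Scale features: SVMs are sensitive to feature magnitudes.
    scaler = StandardScaler().fit(X_train)

    # RBF kernel with placeholder hyperparameters; in practice C and gamma
    # would come from a parameter search (e.g. GridSearchCV).
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(scaler.transform(X_train), y_train)
    print("test accuracy:", clf.score(scaler.transform(X_test), y_test))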

 


Leading Grooming Platform

Agency job
via Qrata by Blessy Fernandes
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
Spotfire
QlikView
Tableau
Power BI
Data Visualization
  • Extensive exposure to at least one Business Intelligence platform, preferably QlikView/Qlik Sense; if not Qlik, knowledge of an ETL tool such as Informatica or Talend
  • At least one data query language, e.g. SQL or Python (see the query sketch after this list)
  • Experience in creating insightful visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
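
As a hedged sketch of the "data query language" requirement, the snippet below runs a SQL aggregation from Python against an in-memory SQLite database; the table and columns are hypothetical stand-ins for a real BI data source.

    # Minimal SQL-from-Python sketch using an in-memory SQLite database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("West", 120.0), ("West", 80.0), ("East", 200.0)])

    # Aggregate revenue by region, the kind of query a dashboard would issue.
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)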

Consulting and Services company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
5 - 10 yrs
₹5L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Python
PySpark

Data Engineer 

  

Mandatory Requirements  

  • Experience in AWS Glue 
  • Experience in Apache Parquet  
  • Proficient in AWS S3 and data lake  
  • Knowledge of Snowflake 
  • Understanding of file-based ingestion best practices. 
  • Scripting languages: Python and PySpark (see the ingestion sketch after this list)
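
As a hedged illustration of the Glue/Parquet/S3 stack above, the sketch below uses PySpark to read raw CSV data, apply a trivial transformation, and write partitioned Parquet; the bucket names and columns are hypothetical, and a real deployment would typically run inside an AWS Glue job.

    # Minimal PySpark ingestion sketch: CSV in, partitioned Parquet out.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

    # Hypothetical input location; a Glue job would receive this as an argument.
    df = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

    # Trivial transformation: derive a load-date column for partitioning.
    df = df.withColumn("load_date", F.current_date())

    # Parquet is columnar and splittable, which suits S3-based data lakes.
    (df.write.mode("append")
       .partitionBy("load_date")
       .parquet("s3://example-lake-bucket/orders/"))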

 

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS  
  • Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems, and implement ingestion and processing with the help of Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to ensure that the right data enters the platform and to verify the results of calculations (a minimal check sketch follows this list)
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data. 
  • Define process improvement opportunities to optimize data collection, insights and displays. 
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible  
  • Identify and interpret trends and patterns from complex data sets  
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.  
  • Key participant in regular Scrum ceremonies with the agile teams   
  • Proficient at developing queries, writing reports and presenting findings  
  • Mentor junior members and bring best industry practices  
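
As a hedged sketch of an automated data quality check, the snippet below validates a PySpark DataFrame for null and duplicate keys before a batch is allowed into the platform; the column name is an illustrative assumption.

    # Minimal data-quality gate for a PySpark DataFrame (illustrative only).
    from pyspark.sql import DataFrame

    def quality_gate(df: DataFrame, key: str = "order_id") -> None:
        """Reject a batch that has null or duplicate key values."""
        total = df.count()
        if df.filter(df[key].isNull()).count() > 0:
            raise ValueError(f"null {key} values found")
        if df.select(key).distinct().count() != total:
            raise ValueError(f"duplicate {key} values found")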

 

QUALIFICATIONS 

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline 
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake   
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools.  
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.  
  • Good written and oral communication skills and ability to present results to non-technical audiences  
  • Knowledge of business intelligence and analytical tools, technologies and techniques. 

 

Familiarity and experience in the following is a plus:  

  • AWS certification 
  • Spark Streaming  
  • Kafka Streaming / Kafka Connect  
  • ELK Stack  
  • Cassandra / MongoDB  
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Consulting & implementation services in the area of Oil & Gas, Mining and Manufacturing industries

Agency job
via Jobdost by Sathish Kumar
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
Data Engineer

 Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON

Mandatory Requirements  

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages: Python and PySpark

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS 
  • Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems, and implement ingestion and processing with the help of Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to ensure that the right data enters the platform and to verify the results of calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 

QUALIFICATIONS 

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow (a minimal Airflow sketch follows this list), Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.
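
As a hedged sketch of the Airflow orchestration mentioned above, the snippet below defines a two-task DAG that runs an ingestion step and then a validation step daily; the task callables, DAG id, and schedule are illustrative assumptions.

    # Minimal Airflow DAG sketch: ingest, then validate, once a day.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        print("pull raw data into the lake")   # placeholder for real ingestion

    def validate():
        print("run data-quality checks")       # placeholder for real checks

    with DAG(
        dag_id="daily_ingest_sketch",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        validate_task = PythonOperator(task_id="validate", python_callable=validate)
        ingest_task >> validate_task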

  

Familiarity and experience in the following is a plus:  

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Product- and service-based company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake (a minimal load sketch follows this list)

  • Understanding of file-based ingestion best practices.

  • Scripting languages: Python and PySpark
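
As a hedged sketch of loading staged files into Snowflake, the snippet below uses the snowflake-connector-python package to run a COPY INTO statement; the account, stage, and table names are hypothetical, and credentials would come from a secrets manager in practice.

    # Minimal Snowflake load sketch (illustrative names and credentials).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",   # hypothetical
        user="example_user",         # hypothetical
        password="***",              # use a secrets manager in practice
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        # Load Parquet files staged in S3 into a raw table.
        conn.cursor().execute(
            "COPY INTO raw_orders "
            "FROM @orders_stage "
            "FILE_FORMAT = (TYPE = PARQUET) "
            "MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
        )
    finally:
        conn.close()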

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Ingest data from sources that expose it through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems, and implement ingestion and processing with the help of Big Data technologies

  • Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform

  • Develop automated data quality checks to ensure that the right data enters the platform and to verify the results of calculations

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

    • Data mining/programming tools (e.g. SAS, SQL, R, Python)

    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools


Reputed firm providing world-class consulting & implementation

Agency job
via Jobdost by Saida Jabbar
Remote, Ahmedabad, Hyderabad, Pune, Delhi
5 - 10 yrs
₹25L - ₹30L / yr
Amazon Web Services (AWS)
AWS Lambda
PySpark
Data engineering
Big Data

Mandatory Requirements 


  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages: Python and PySpark

 

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems, and implement ingestion and processing with the help of Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to ensure that the right data enters the platform and to verify the results of calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.

 

Familiarity and experience in the following is a plus:

  • AWS certification
  • Spark Streaming (see the streaming sketch after this list)
  • Kafka Streaming / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
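
As a hedged sketch of the Spark Streaming / Kafka combination listed above, the snippet below consumes a Kafka topic with Spark Structured Streaming and writes the decoded messages to the console; the broker address and topic name are hypothetical.

    # Minimal Spark Structured Streaming sketch: Kafka topic to console.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Hypothetical broker and topic; requires the spark-sql-kafka package.
    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka values arrive as bytes; cast to string for downstream parsing.
    decoded = events.selectExpr("CAST(value AS STRING) AS json_value")

    query = (decoded.writeStream
             .format("console")
             .outputMode("append")
             .start())
    query.awaitTermination()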

 


at TIGI HR Solution Pvt. Ltd.

Posted by Dhara Raval
Ahmedabad
5 - 9 yrs
₹12L - ₹15L / yr
Data Science
Keras
Python
Java
TensorFlow

Position: Data Scientist

Experience: 5+ Years

 
Required Skillset:
 
• 5+ Years of hands-on development experience with AI/ML technologies
• Programming experience in Python, R, or Java
• Extensive data modeling and data architecture skills
• Experience working with modern frameworks like Keras, TensorFlow, PyTorch, and MXNet (see the Keras sketch after this list)
• Experience with Container Services & Registries to Serve ML Models in the Cloud
• Experience with the Amazon Web Services platform and associated machine learning services (Polly, Transcribe, Lex, Rekognition, Comprehend, Translate, etc.)
• Experience with open-source application development stacks
• Use Terraform, Ansible, Jenkins, AWS-CDK (or similar) to set up automated CI/CD pipelines
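
As a hedged sketch of the Keras/TensorFlow experience above, the snippet below defines, compiles, and fits a small feed-forward binary classifier on random data; the architecture and data are placeholders, not a production model.

    # Minimal Keras sketch: a small binary classifier on random data.
    import numpy as np
    from tensorflow import keras

    # Placeholder data standing in for real features and labels.
    X = np.random.rand(256, 20).astype("float32")
    y = np.random.randint(0, 2, size=(256,))

    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))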
 
Desired Skill Set:
 
• Advanced math skills (linear algebra, Bayesian statistics, group theory)
• Theoretical understanding of model architectures for object classification and object detection, recommender systems, NLP, text and voice processing, and 3D models
• Knowledge of Hadoop or other distributed computing systems
• Understanding of performance and accuracy metrics for different classes of neural networks. Familiarity with industry-standard models and datasets and with neural network tuning is a plus.

at InFoCusp

Posted by Kanchan Gangotri
Pune, Ahmedabad
2 - 8 yrs
₹10L - ₹30L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
Machine Learning Engineer

Location: Ahmedabad / Pune
Team: Technology

Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.
We have worked on / are working on AI projects / algorithms-heavy projects with applications ranging in finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this is based on the core concepts of data science,
computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems and the Internet of Things.

PRIMARY RESPONSIBILITIES:

● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics/modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework/pipeline.
● Analyzing data collected from various sources.
● Evaluating and validating the analysis with statistical methods, and presenting the results in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adapting them for our purposes.
● Feature engineering to add new features that improve model performance.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing with at least 3 years of professional work experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries such as TensorFlow, JAX, Keras, scikit-learn, and PyTorch.
● Excellent mathematical skills and background, e.g. accuracy, significance tests (see the sketch after this list), visualization, advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
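
As a hedged sketch of the significance testing mentioned above, the snippet below compares per-fold accuracies of two hypothetical models with a paired t-test from SciPy; the accuracy values are made up for illustration.

    # Minimal significance-test sketch: paired t-test on per-fold accuracies.
    from scipy import stats

    # Hypothetical cross-validation accuracies for two models.
    model_a = [0.81, 0.79, 0.83, 0.80, 0.82]
    model_b = [0.84, 0.82, 0.85, 0.83, 0.86]

    t_stat, p_value = stats.ttest_rel(model_a, model_b)
    print(f"t={t_stat:.3f}, p={p_value:.4f}")
    # A small p-value suggests the accuracy difference is unlikely to be noise.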

EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.

at InFoCusp

Posted by Kanchan Gangotri
Pune, Ahmedabad
3 - 8 yrs
₹15L - ₹40L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
TensorFlow
Machine Learning Engineer

Location: Ahmedabad / Pune
Team: Technology

Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.
We have worked on / are working on AI projects / algorithms-heavy projects with applications ranging in finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this is based on the core concepts of data science,
computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems and the Internet of Things.

PRIMARY RESPONSIBILITIES:

● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics / modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework / pipeline.
● Analyzing data collected from various sources.
● Evaluating and validating the analysis with statistical methods, and presenting the results in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adopting them for our purposes.
● Feature engineering to add new features that improve model performance.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing with at least 3 years of professional work experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries such as TensorFlow, JAX, Keras, scikit-learn, and PyTorch.
● Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.

EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc. (see the Beam sketch after this list)
2. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
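
As a hedged sketch of the Apache Beam experience listed above, the snippet below builds a tiny in-memory pipeline with the Beam Python SDK that parses records and sums amounts per key; the records are invented for illustration.

    # Minimal Apache Beam sketch: parse records and sum amounts per key.
    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (pipeline
         | "Create" >> beam.Create(["west,120", "west,80", "east,200"])
         | "Parse" >> beam.Map(lambda line: (line.split(",")[0],
                                             int(line.split(",")[1])))
         | "SumPerKey" >> beam.CombinePerKey(sum)
         | "Print" >> beam.Map(print))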
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
Looking for an Alternative Data Programmer for an equity fund
The programmer should be proficient in Python and able to work fully independently. They should also be comfortable working with databases, and capable of fetching data from various sources, organising it, and identifying useful information through efficient code.
Some examples of work: 
Text search on earnings transcripts for keywords to identify future trends (see the sketch after this list).
Integration of internal and external financial databases.
Web scraping to capture, clean, and organize data.
Automatic updating of our financial models by importing data from machine-readable formats such as XBRL.
Fetching data from public databases such as RBI, NSE, BSE and DGCA, and processing the same.
Back-testing of data, both to test historical cause-and-effect relationships on market/portfolio performance and to back-test our screener criteria when devising strategies.
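
As a hedged sketch of the transcript keyword search described above, the snippet below counts occurrences of chosen keywords across a folder of plain-text earnings transcripts; the directory path and keyword list are hypothetical.

    # Minimal keyword-count sketch over plain-text earnings transcripts.
    import re
    from collections import Counter
    from pathlib import Path

    KEYWORDS = ["capex", "guidance", "margin"]   # hypothetical watch-list
    TRANSCRIPT_DIR = Path("transcripts")         # hypothetical folder of .txt files

    counts = Counter()
    for path in TRANSCRIPT_DIR.glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for word in KEYWORDS:
            # Whole-word matches only, so "margin" does not match "marginal".
            counts[word] += len(re.findall(rf"\b{re.escape(word)}\b", text))

    for word, n in counts.most_common():
        print(f"{word}: {n}")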