R Language Jobs in Ahmedabad


Explore top R language job opportunities in Ahmedabad from top companies and startups. All jobs are added by verified employees who can be contacted directly.

at TIGI HR Solution Pvt. Ltd.

Posted by Dhara Raval
Ahmedabad
5 - 9 yrs
₹12L - ₹15L / yr
Data Science
Keras
Python
Java
TensorFlow
+9 more

Position: Data Scientist

Experience: 5+ Years

 
Required Skillset:
 
• 5+ years of hands-on development experience with AI/ML technologies
• Programming experience in Python, R, or Java
• Extensive data modeling and data architecture skills
• Experience working with modern frameworks such as Keras, TensorFlow, PyTorch, and MXNet
• Experience with container services and registries to serve ML models in the cloud
• Experience with the Amazon Web Services platform and associated machine learning services (Polly, Transcribe, Lex, Rekognition, Comprehend, Translate, etc.)
• Experience with open-source application development stacks
• Experience using Terraform, Ansible, Jenkins, AWS CDK, or similar tools to set up automated CI/CD pipelines
 
Desired Skill Set:
 
• Advanced math skills (linear algebra, Bayesian statistics, group theory)
• Theoretical understanding of model architectures for object classification and object detection, recommender systems, NLP/text and voice processing models, and 3D models
• Knowledge of Hadoop or other distributed computing systems
• Understanding of performance and accuracy metrics for different classes of neural networks; familiarity with industry-standard models and datasets and with neural network tuning is a plus
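As a rough illustration of the "performance and accuracy metrics" skill this listing asks for, here is a minimal, self-contained sketch of precision, recall, and F1 for a binary classifier. The labels below are invented for demonstration; in practice these metrics would be computed by a library such as scikit-learn.

```python
# Toy metrics for a binary classifier: precision, recall, and F1.
# All data here is fabricated for illustration.

def binary_metrics(y_true, y_pred):
    """Return (precision, recall, f1) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r, f)
```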

Leading Grooming Platform

Agency job
via Qrata by Blessy Fernandes
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more
  • Extensive exposure to at least one Business Intelligence platform (preferably QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
  • At least one data query language – SQL/Python
  • Experience in creating breakthrough visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
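To illustrate the "data query language – SQL/Python" requirement, here is a minimal sketch using Python's built-in sqlite3 module. The table and values are invented for the example.

```python
# Minimal SQL-from-Python example using the stdlib sqlite3 module.
# Table name and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("West", 120.0), ("East", 80.0), ("West", 50.0)])
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('East', 80.0), ('West', 170.0)]
```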

Brand Manufacturer for Bearded Men

Agency job
via Qrata by Prajakta Kulkarni
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
+2 more
Analytics Head

Technical must haves:

● Extensive exposure to at least one Business Intelligence platform (preferably QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
● At least one data query language – SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
● A technical degree such as BE/B.Tech is a must

Technical Ideal to have:

● Exposure to our tech stack – PHP
● Microsoft workflows knowledge

Behavioural Pen Portrait:

● Must have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command over spoken and written English
● Ideal: Ability to collaborate

Preferred location is Ahmedabad; however, if we find exemplary talent, we are open to a remote working model – this can be discussed.

Reputed firm providing world-class consulting

Agency job
via Jobdost by Saida Jabbar
Remote, Ahmedabad, Hyderabad, Delhi, Pune
12 - 15 yrs
₹30L - ₹40L / yr
Amazon Web Services (AWS)
NodeJS (Node.js)
MongoDB
TypeScript
RESTful APIs

As part of a demonstrated commitment, we are investing in digital innovation, emerging products, and new ways to help our customers with their financial wellbeing. This role is on our industry-leading credit cards platform.

We are looking for a Solution Architect to join our team. You will work with our product, design and engineering teams to plan, design, and develop web, mobile applications & services for credit cards. We offer an opportunity to work in a collaborative and inclusive environment with people who value their work and who welcome fresh ideas.


Who you are:

·        You are driven to build intuitive experiences that start with the customer first

·        You embrace change and innovation, always seeking to push the boundaries and expand your knowledge and skill set

What you will do:

·        Provide architecture direction and guidance to technology and business partners of Credit Card, starting and ending with a focus on delivering world-class customer experiences.

·        Build strong, meaningful relationships with business & technical partners.

·        Work with the Credit Cards and Enterprise teams on defining technology strategy and best practices, as well as identifying opportunities to increase efficiency and resiliency throughout the organization.

·        Lead the definition of system architecture and detailed solution designs that are scalable and extensible.

·        Become a subject matter expert for backend and middleware systems.


What we’re looking for:

·        Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around.

·        A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time.

·        A love for working in ambiguity—and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don’t form an opinion and sell it as if it’s gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts.

·        Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone.

·        Flexible and willing to accept changes in priorities as necessary

·        Demonstrable passion for technology (e.g., personal projects, open-source involvement)

·        Enthusiastic embrace of DevOps culture and collaborative software engineering

·        Ability and desire to work in a dynamic, fast paced, and agile team environment

·        Enthusiasm for cloud computing platforms such as AWS or Azure


Basic Qualifications:

·        Minimum B.S./M.S. in Computer Science or a related discipline from an accredited college or university

·        12+ years of experience designing, developing, and delivering frontend and backend applications and services with Node.js, TypeScript, JavaScript, RESTful APIs, etc.

·        At least 10 years of experience building internet facing applications

·        At least 6 years of experience with AWS and/or OpenShift

·        Proficient in the following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.

·        Proficient in building and consuming RESTful APIs

·        Proficient in managing multiple tasks and consistently meeting established timelines

·        Experience integrating APIs with front-end and/or mobile-specific frameworks

·        Strong collaboration skills

·        Excellent written and verbal communications skills


Preferred Qualifications:

·        Financial services experience and credit card experience are a big plus.

·        Experience with Apache Cordova framework

·        Experience developing and deploying applications within Kubernetes based containers

·        Experience in Agile and SCRUM development techniques

 


World-class consulting & implementation firm

Agency job
via Jobdost by Saida Jabbar
Remote, Ahmedabad, Hyderabad, Pune, Delhi
5 - 10 yrs
₹25L - ₹30L / yr
Amazon Web Services (AWS)
AWS Lambda
PySpark
Data engineering
Big Data
+9 more

Mandatory Requirements 


  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficient in AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages – Python & PySpark

 

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from different data sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
  • Develop an infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns from complex data sets
  • Construct a framework utilizing data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Be a key participant in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices
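The "automated data quality check" responsibility above can be sketched as a small file-ingestion gate. The column names and thresholds here are invented; a production version would typically run inside Glue/Spark rather than over an in-memory CSV string.

```python
# Hypothetical data-quality gate for file-based ingestion:
# reject files with missing columns, empty keys, or too few rows.
import csv
import io

REQUIRED_COLUMNS = {"customer_id", "balance"}  # invented schema

def quality_check(csv_text, min_rows=1):
    """Return (passed, reason) for a CSV payload."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if not REQUIRED_COLUMNS.issubset(reader.fieldnames or []):
        return False, "missing required columns"
    rows = list(reader)
    if len(rows) < min_rows:
        return False, "too few rows"
    if any(not r["customer_id"] for r in rows):
        return False, "null customer_id"
    return True, "ok"

print(quality_check("customer_id,balance\n1,10\n2,20\n"))
```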

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of one of: Java, Scala, Python, C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

 

Familiarity and experience with the following are a plus:

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

 


Consulting and Services company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
5 - 10 yrs
₹5L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Python
PySpark

Data Engineer 

  

Mandatory Requirements  

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficient in AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages – Python & PySpark

 

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS
  • Ingest data from different data sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
  • Develop an infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns from complex data sets
  • Construct a framework utilizing data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Be a key participant in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices

 

QUALIFICATIONS 

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of one of: Java, Scala, Python, C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

 

Familiarity and experience with the following are a plus:

  • AWS certification 
  • Spark Streaming  
  • Kafka Streaming / Kafka Connect  
  • ELK Stack  
  • Cassandra / MongoDB  
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
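The S3/data-lake requirement above usually implies date-partitioned object keys for Parquet files. Here is a hedged sketch of that layout; the bucket-relative dataset name and file name are invented.

```python
# Illustrative date-partitioned data-lake key layout (Hive-style
# year=/month=/day= partitions), commonly used for Parquet on S3.
# Dataset and file names below are made up.
from datetime import date

def partition_key(dataset, event_date, filename):
    """Build an S3-style key like dataset/year=2024/month=01/day=15/file."""
    return (f"{dataset}/year={event_date.year:04d}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/{filename}")

print(partition_key("loans", date(2024, 1, 15), "part-000.parquet"))
# loans/year=2024/month=01/day=15/part-000.parquet
```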

IT Product based Org.

Agency job
via OfficeDay Innovation by OFFICEDAY INNOVATION
Ahmedabad
3 - 5 yrs
₹10L - ₹12L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more
  • 3+ years of experience applying AI/ML/NLP/deep learning/data-driven statistical analysis and modelling solutions
  • Programming skills in Python and knowledge of statistics
  • Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forests, neural networks, feature selection/reduction, clustering, parameter tuning, etc.); familiarity with reinforcement learning is highly desirable
  • Experience in the financial domain and familiarity with financial models are highly desirable
  • Experience in image processing and computer vision
  • Experience building data pipelines
  • Good understanding of data preparation, model planning, model training, model validation, model deployment, and performance tuning
  • Hands-on experience with some of these methods: regression, decision trees, CART, random forests, boosting, evolutionary programming, neural networks, support vector machines, ensemble methods, association rules, principal component analysis, clustering, artificial intelligence
  • Experience working with large data sets in a Postgres database
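As a toy stand-in for the supervised-learning work listed above, here is a minimal fit/predict loop (a nearest-centroid classifier). The points and class labels are fabricated; real work would use scikit-learn or similar.

```python
# Toy supervised fit/predict loop: nearest-centroid classification.
# Data and class names are invented for illustration.

def nearest_centroid_fit(points, labels):
    """Average the 2-D points of each class into a centroid."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def nearest_centroid_predict(centroids, point):
    """Predict the class whose centroid is closest to the point."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

centroids = nearest_centroid_fit(
    [(0, 0), (1, 0), (9, 9), (10, 10)], ["a", "a", "b", "b"])
print(nearest_centroid_predict(centroids, (8, 8)))  # b
```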

 


at InFoCusp

Posted by Kanchan Gangotri
Pune, Ahmedabad
3 - 8 yrs
₹15L - ₹40L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
TensorFlow
+3 more
Machine Learning Engineer

Location: Ahmedabad / Pune
Team: Technology

Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.
We have worked on / are working on AI projects and algorithm-heavy projects with applications across finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports, and computer gaming domains. All of this is based on the core concepts of data science, computer vision, machine learning (with an emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems, and the Internet of Things.

PRIMARY RESPONSIBILITIES:

● Applying machine learning, deep learning, and signal processing on large datasets (audio, sensors, images, videos, text) to develop models.
● Architecting large-scale data analytics / modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework / pipeline.
● Analyzing data collected from various sources.
● Evaluating and validating the analysis with statistical methods, and presenting it in a lucid form to people not familiar with the domain of data science / computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adopting them for our purposes.
● Feature engineering to add new features that improve model performance.
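The "evaluate and validate with statistical methods" responsibility above typically starts with a resampling scheme such as k-fold cross-validation. A minimal index-splitting sketch (no real data, purely illustrative):

```python
# Minimal k-fold index splitter for cross-validated model evaluation.
# A real pipeline would use e.g. scikit-learn's KFold; this shows the idea.

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(10, 5))
print(len(folds), folds[0])  # 5 ([2, 3, 4, 5, 6, 7, 8, 9], [0, 1])
```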

KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing, with at least 3 years of professional work experience on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries: TensorFlow, JAX, Keras, scikit-learn, PyTorch.
● Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment within defined time frames.
● Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.

EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow.
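The workflow-management skill above rests on one core idea: tasks form a DAG and run only after their upstream dependencies. A hedged, stdlib-only sketch of that ordering (task names are invented; Airflow itself handles this, plus scheduling and retries):

```python
# Sketch of DAG-style task ordering, the idea behind schedulers like
# Apache Airflow. Assumes the dependency graph is acyclic.

def topo_order(deps):
    """deps maps task -> set of upstream tasks; return a runnable order."""
    order, done = [], set()
    def visit(task):
        if task in done:
            return
        for upstream in deps.get(task, set()):
            visit(upstream)   # run dependencies first
        done.add(task)
        order.append(task)
    for task in deps:
        visit(task)
    return order

pipeline = {"extract": set(), "transform": {"extract"},
            "load": {"transform"}, "report": {"load"}}
print(topo_order(pipeline))  # ['extract', 'transform', 'load', 'report']
```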

at Lendingkart

Posted by Mohammed Nayeem
Bengaluru (Bangalore), Ahmedabad
2 - 5 yrs
₹2L - ₹13L / yr
Python
Data Science
SQL
Roles and Responsibilities:
● Mining large volumes of credit behavior data to generate insights around product holdings and monetization opportunities for cross-sell
● Using data science to size the opportunity and product potential for the launch of any new product/pilot
● Building propensity models using heuristics and campaign performance to maximize efficiency
● Conducting portfolio analysis and establishing key metrics for cross-sell partnerships
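The heuristic propensity modeling mentioned above can be sketched as a logistic score over hand-tuned weights. The feature names, weights, and bias below are entirely made up for illustration; a real model would be fitted to campaign data.

```python
# Toy heuristic propensity score: logistic function over hand-tuned
# weights. Features, weights, and bias are invented for illustration.
import math

WEIGHTS = {"recent_txn_count": 0.8, "tenure_years": 0.3, "has_card": 1.2}
BIAS = -2.0

def propensity(features):
    """Map a feature dict to a cross-sell propensity in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

score = propensity({"recent_txn_count": 2, "tenure_years": 1, "has_card": 1})
print(round(score, 3))
```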

Desired profile/skills:
● 2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics, or Mathematics
● Excellent problem-solving and comprehensive analytical skills – ability to structure ambiguous problem statements, perform detailed analysis, and derive crisp insights
● Solid experience in using Python and SQL
● Prior work experience in the financial services space would be highly valued

Location: Bangalore/ Ahmedabad
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
+1 more
Looking for an Alternative Data Programmer for an equity fund
The programmer should be proficient in Python and able to work fully independently. They should also have the skills to work with databases, and a strong ability to fetch data from various sources, organize it, and identify useful information through efficient code.
Some examples of work:
• Text search on earnings transcripts for keywords to identify future trends
• Integration of internal and external financial databases
• Web scraping to capture, clean, and organize data
• Automatic updating of our financial models by importing data from machine-readable formats such as XBRL
• Fetching data from public databases such as RBI, NSE, BSE, and DGCA, and processing it
• Back-testing of data, either to test historical cause-and-effect relations on market/portfolio performance, or to back-test our screener criteria when devising strategy
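The first work example, keyword search on earnings transcripts, can be sketched in a few lines of stdlib Python. The transcript text and keyword list below are invented for the example.

```python
# Toy keyword search over an earnings-call transcript.
# Transcript and keyword set are fabricated for illustration.
from collections import Counter
import re

KEYWORDS = {"capex", "expansion", "guidance"}

def keyword_hits(transcript):
    """Count occurrences of tracked keywords in a transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    return dict(Counter(w for w in words if w in KEYWORDS))

text = ("We are raising guidance for the year; capex on the new plant "
        "supports capacity expansion, and guidance assumes stable demand.")
print(keyword_hits(text))
```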