Founded 1987  •  Product  •  500-1000 employees  •  Profitable
Amazon Web Services (AWS)
Big Data
Apache Kafka
Apache Hive
Spark
Scala
Python
Athena
EMR
AWS Lambda
Amazon DynamoDB
CloudWatch
Remote only
3 - 5 yrs
₹8L - ₹15L / yr

Primary Responsibilities: Develop solutions on the AWS cloud using big data technologies and AWS services.

Position description:
Hands-on experience with AWS services (S3, AWS Lambda, Athena, Step Functions, AWS Glue, Kinesis, Amazon DynamoDB (AWS serverless technologies), Fargate, ECS, Load Balancers, CloudWatch, EMR, etc.)
Hands-on experience with Big Data (Hadoop, Spark, Hive, HDFS, Kafka, ZooKeeper, etc.)
Hands-on programming experience (Python, PySpark, Scala)
Experience in creating data pipelines: ingestion, processing, and orchestration.
Experience working in a Linux environment and with command-line tools, including scripting to automate common tasks.
Understanding of data engineering concepts (near-/real-time streaming, data structures, metadata, and workflow management).
Familiarity with Big Data challenges, such as large data sizes (e.g. depth/width) and achieving even data distribution.
Apply advanced troubleshooting techniques to resolve issues affecting service availability, performance, and resiliency.
Monitor and optimize performance using AWS dashboards and logs. Partner with engineering leaders and peers to deliver technology solutions that meet business requirements.
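As a rough sketch of the serverless ingestion pattern this listing describes, the snippet below shows a minimal S3-triggered AWS Lambda handler. The bucket and key names are hypothetical, and a real handler would add validation, error handling, and a write to a downstream store:

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal S3-triggered Lambda sketch: extract the object that fired
    the event and return a summary record for downstream pipeline steps."""
    records = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        records.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps({"ingested": records})}

# Local smoke test with a fake S3 event (no AWS account needed).
fake_event = {"Records": [{"s3": {"bucket": {"name": "raw-data"},
                                  "object": {"key": "2021/01/sales%2Bjan.csv"}}}]}
result = handler(fake_event, None)
```

In a deployed pipeline the same handler would be wired to an S3 event notification, with the decoded key passed on to Glue, Kinesis, or a DynamoDB write.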

Required Skills:


AWS certification would be preferred.
2+ years of solid experience with AWS services: S3, AWS Lambda, Athena, Step Functions, AWS Glue, Kinesis, ECS, Amazon DynamoDB (AWS serverless technologies), Fargate, Load Balancers, CloudWatch, EMR.
2+ years of solid experience with Big Data technologies: Hadoop, Spark, Hive, HDFS, Kafka, ZooKeeper.
Good experience with data structures and programming in PySpark, Python, Golang, or Scala.


Job posted by
Sailee Dorle
Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snowflake schema
Amazon Redshift
Big Data
Serverless
AWS Lambda
PySpark
EMR
Data storage
Google Cloud Storage
Amazon S3
Amazon Glacier
Tableau
PowerBI
Qlik
Predictive modelling
Python
Scikit-Learn
k-means clustering
Artificial Intelligence (AI)
SaaS
Chennai
5 - 10 yrs
₹10L - ₹30L / yr

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable, productive working style that fits a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.

- Experience developing end to end ML solutions from business hypothesis to deployment / understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience with source control management and CI/CD.
- Proficient in designing relevant architecture/microservices to support application integration, model monitoring, training/re-training, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like AWS Lambda, Azure Functions, and/or Google Cloud Functions.
- Orchestration services like Azure Data Factory, AWS Data Pipeline, and/or Google Cloud Dataflow.
- Data science workbench/managed services like Azure Machine Learning, Amazon SageMaker, and/or Google AI Platform.
- Data warehouse services like Snowflake, Amazon Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, and/or Databricks.
- Data storage services like Google Cloud Storage, Amazon S3, Azure Blob Storage, and/or S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.).
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.).
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production, including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
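As one illustration of the techniques listed above, here is a minimal NumPy-only sketch of K-Means (Lloyd's algorithm). In production one would typically use a library implementation such as scikit-learn's KMeans; the synthetic blob data below is made up:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: alternate point assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points;
        # keep the old centroid if a cluster happens to be empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Two well-separated synthetic blobs: the algorithm should recover them.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
centroids, labels = kmeans(X, k=2)
```

The same clustering could be served behind a batch or real-time API, as the responsibilities above require.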

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Technical solutioning skills with a deep understanding of technical API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.

Strong stakeholder relationship management skills: able to influence and manage the expectations of senior executives.
Strong networking skills, with the ability to build and maintain strong relationships with business, operations, and technology teams, internally and externally.

Provide software design and programming support to projects.

 

Qualifications & Experience:

Engineering graduates or postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.

 

Job posted by
Vijay Hemnath
Founded 2017  •  Product  •  0-20 employees  •  Raised funding
Data Science
R Programming
Python
Machine Learning (ML)
ML stack
Bengaluru (Bangalore)
3 - 5 yrs
₹11L - ₹25L / yr
Data Scientist
  1. 3-5 years of practical DS experience working with varied data sets. Experience with retail banking data is preferred but not necessary.
  2. Strong grasp of statistical modelling concepts; we are particularly looking for practical knowledge learnt from work experience (should be able to give "rule of thumb" answers).
  3. Strong problem-solving skills and the ability to articulate ideas clearly.
  4. Ideally, the data scientist should have interfaced with data engineering and model deployment teams to bring models/solutions live in production.
  5. Strong working knowledge of the Python ML stack is very important here.
  6. Willing to work on a diverse range of tasks in building ML-related capability on the Corridor Platform as well as client work.
  7. Someone with a strong interest in the data engineering aspects of ML is highly preferred, i.e., someone who can play the dual role of Data Scientist and also write robust code for a module on our Corridor Platform.
Job posted by
Niti Anand
at They provide both wholesale and retail funding. (PM1)
Agency job
Teradata
Vertica
Python
DBA
Redshift
Synapse
Snowflake
DynamoDB
UDX
FSLDM
Cosmos DB
OLAP
Data modeling
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
  • Key responsibility is to design, develop, and maintain efficient data models for the organization, ensuring optimal query performance for the consumption layer.
  • Develop, deploy, and maintain a repository of UDXs written in Java/Python.
  • Develop optimal data model designs, analyze complex distributed data deployments, and make recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
  • Perform periodic database health checks and maintenance.
  • Design collections in a NoSQL database for efficient performance.
  • Document and maintain a data dictionary from various sources to enable data governance.
  • Coordinate with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions: exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
  • Implement data governance processes and ensure data security.

Requirements

  • Extensive working experience in Designing & Implementing Data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc).
  • Programming experience using Python / Java.
  • Working knowledge in developing & deploying User-defined Functions (UDXs) using Java / Python.
  • Strong understanding & extensive working experience in OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc) Architecture.
  • Strong knowledge in Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model.
  • Extensive technical experience in SQL including code optimization techniques.
  • Strong knowledge of database performance tuning and troubleshooting.
  • Knowledge of collection design in any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.), along with implementation of best practices.
  • Ability to understand business functionality, processes, and flows.
  • Good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented, with the ability to work independently.
  • Any OLAP DWH DBA experience and user management experience will be an added advantage.
  • Knowledge of financial-industry-specific data models such as FSLDM, the IBM Financial Data Model, etc. will be an added advantage.
  • Experience with Snowflake will be an added advantage.
  • Working experience in BFSI/NBFC and an understanding of loan/mortgage data will be an added advantage.

Functional knowledge

  • Data Governance & Quality Assurance
  • Modern OLAP Database Architecture & Design
  • Linux
  • Data structures, algorithm & data modeling techniques
  • NoSQL database architecture
  • Data Security

 

Job posted by
Sapna Deb
Big Data
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Hadoop
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Bigdata components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent Cloud Bigdata components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NO-SQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, Map Reduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Bigdata components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by
Sanjay Biswakarma
at Metadata Technologies, North America
Agency job
Data engineering
Java
Go Programming (Golang)
Network
Multithreading
Apache Kafka
Spark
Data Analytics
Real time media streaming
Distributed Systems
Shared services
Cassandra
Concurrency
Real Time Data
Data Pipeline
Shared Nothing Architecture
RTB
Remote only
4 - 8 yrs
₹15L - ₹45L / yr

 

We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that will be used to fuel our Analytics and Machine Learning products. This person will be contributing to the architecture, operation, and enhancement of:

Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insights extraction for various use cases.

About the Organisation:

 

- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.

 

- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom, and India.

 

- You will gain work experience in a global environment. Our team speaks over 20 different languages, comes from more than 16 different nationalities, and over 42% of our staff are multilingual.


Job Description

Position:
Software Developer, Data Engineering team
Location: Pune (initially 100% remote for the coming year due to COVID-19)

 

  • Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.

You:

  • Have at least 4 years of experience.
  • Deep technical understanding of Java or Golang.
  • Production experience with Python is a big plus and an extremely valuable supporting skill for us.
  • Exposure to modern Big Data tech: Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid, etc., while at the same time understanding that certain problems may require completely novel solutions.
  • Exposure to one or more modern ML tech stacks: Spark MLlib, TensorFlow, Keras, the GCP ML stack, AWS SageMaker - is a plus.
  • Experience working in an Agile/Lean model.
  • Experience supporting and troubleshooting large systems.
  • Exposure to configuration management tools such as Ansible or Salt.
  • Exposure to IaaS platforms such as AWS, GCP, and Azure.
  • Good addition: experience working with large-scale data.
  • Good addition: experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines.

**** Note: we are not looking for a Big Data Developer / Hadoop Developer.

Job posted by
Biswadeep RS
Founded 2005  •  Services  •  100-1000 employees  •  Profitable
Python
Django
Big Data
Pune
3 - 6 yrs
₹5L - ₹9L / yr

Position Name: Software Developer

Required Experience: 3+ Years

Number of positions: 4

Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).

Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting

Compensation - As per industry standards.

Joining - Immediate joining is preferable.

 

Required Skills:

 

  • Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
  • Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
  • Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, and dc.js, and data visualization tools like Plotly and ggplot
  • Experience handling large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, and Redshift
  • Experience in building APIs and multi-threaded tasks on the Linux platform
  • Exposure to finance and capital markets will be an added advantage.
  • Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts.
  • Worked on building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications.
  • Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3
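As a small illustration of the Pandas-style analytics this role calls for, the sketch below computes per-ticker daily returns and a summary table. The tickers and prices are invented; a real workflow would read from one of the stores listed above:

```python
import pandas as pd

# Hypothetical daily closing prices for two tickers.
prices = pd.DataFrame({
    "ticker": ["AAA"] * 4 + ["BBB"] * 4,
    "close":  [100.0, 102.0, 101.0, 105.0, 50.0, 49.0, 51.0, 52.0],
})

# Per-ticker daily returns (pct_change restarts at each group boundary
# because of the groupby), then a simple mean/volatility summary table.
prices["ret"] = prices.groupby("ticker")["close"].pct_change()
summary = prices.groupby("ticker")["ret"].agg(["mean", "std"]).round(4)
```

The resulting `summary` frame is the kind of table that would feed a Highcharts or Plotly chart in a finance dashboard.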

 

Company Description:

Reval Analytical Services is a wholly owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and web-enabled information delivery. The company's unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

 

Website: www.virtuaresearch.com

Job posted by
Jyoti Nair
at fintech
Agency job
ETL
Druid Database
Java
Scala
SQL
Tableau
Python
Remote only
2 - 6 yrs
₹9L - ₹30L / yr
● Education in a science, technology, engineering, or mathematics discipline, preferably a bachelor's degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts such as windowing functions
● Knowledge of popular scripting languages for data processing, such as Python, as well as familiarity with common frameworks such as Pandas
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache Beam, Google Cloud Dataflow, dbt, and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, dbt, or custom scripts
● Experience working with messaging systems such as Apache Kafka (and hosted equivalents such as Amazon MSK) and Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
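As an aside on the windowing-functions concept mentioned above: a SQL expression like `RANK() OVER (PARTITION BY dept ORDER BY salary DESC)` has a direct pandas analogue. A small sketch with made-up data:

```python
import pandas as pd

emp = pd.DataFrame({
    "dept":   ["eng", "eng", "eng", "sales", "sales"],
    "name":   ["a", "b", "c", "d", "e"],
    "salary": [90, 120, 120, 80, 95],
})

# Equivalent of: RANK() OVER (PARTITION BY dept ORDER BY salary DESC)
# method="min" matches SQL RANK semantics: ties share a rank, then skip.
emp["rank"] = (emp.groupby("dept")["salary"]
                  .rank(method="min", ascending=False)
                  .astype(int))

# Top earner(s) per department, as a window query would select them.
top_earners = emp[emp["rank"] == 1].sort_values("dept")
```

The window operates within each `dept` partition, so ranks restart per department rather than running over the whole table.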
Job posted by
Raksha Pant
Founded 2003  •  Product  •  500-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Architecture
Deep Learning
OpenCV
Image Processing
Chennai
13.5 - 28 yrs
₹15L - ₹37L / yr

GREETINGS FROM CODEMANTRA !!!

 

EXCELLENT OPPORTUNITY FOR DATA SCIENCE/AI AND ML ARCHITECT !!!

 

Skills and Qualifications

 

* Strong hands-on experience in Python programming.

* Working experience with Computer Vision models: object detection and image classification.

* Good experience in feature extraction, feature selection techniques, and transfer learning.

* Working experience in building deep learning NLP models for text classification and image analytics (CNN, RNN, LSTM).

* Working experience in any of the AWS/GCP cloud platforms; exposure to fetching data from various sources.

* Good experience in exploratory data analysis, data visualisation, and other data pre-processing techniques.

* Knowledge of any one of the DL frameworks, such as TensorFlow, PyTorch, Keras, or Caffe.

* Good knowledge of statistics and data distributions, and of supervised and unsupervised machine learning algorithms.

* Exposure to OpenCV; familiarity with GPUs and CUDA; experience with NVIDIA software for cluster management and provisioning, such as NVSM, DCGM, and DeepOps.

We are looking for a candidate with 9+ years of relevant experience who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:

* Experience with big data tools: Hadoop, Spark, Kafka, etc.
* Experience with AWS cloud services: EC2, RDS, AWS SageMaker (added advantage).
* Experience with object-oriented/functional scripting languages, any of: Python, Java, C++, Scala, etc.

Responsibilities
* Selecting features, building, and optimizing classifiers using machine learning techniques
* Data mining using state-of-the-art methods
* Enhancing data collection procedures to include information that is relevant for building analytic systems
* Processing, cleansing, and verifying the integrity of data used for analysis
* Creating automated anomaly detection systems and constantly tracking their performance
* Assembling large, complex data sets that meet functional/non-functional business requirements
* Securing and managing GPU cluster resources for events when needed
* Writing comprehensive internal feedback reports and finding opportunities for improvements
* Managing GPU instances/machines to increase the performance and efficiency of ML/DL models

 

Regards

Ranjith PR
Job posted by
Ranjith PR
Founded 2012  •  Products & Services  •  100-1000 employees  •  Bootstrapped
Decision Science
Data modeling
Statistical Modeling
Python
Machine Learning (ML)
Data Science
Artificial Intelligence (AI)
Predictive analytics
Mumbai
- yrs
₹3L - ₹15L / yr
About us: Quantiphi is a category-defining Data Science and Machine Learning software and services company focused on helping organizations translate the big promise of Big Data and Machine Learning technologies into quantifiable business impact. We were founded on the belief that machine learning and artificial intelligence are transformative technologies that will create the next quantum gain in customer experience and unit economics of businesses. Quantiphi helps clients find and capture hidden value from data through a unique blend of business acumen, big data, machine learning, and intuitive information design.

AthenasOwl (AO) is our "AI for Media" solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite meant for the media and entertainment industry. Clients use AthenasOwl's context-adapted technology for redesigning content, taking better targeting decisions, automating hours of post-production work, and monetizing massive content libraries. Please find the attached fact sheet for your reference. For more details visit: www.quantiphi.com ; www.athenasowl.tv

Job Description:
- Developing high-level solution architecture related to different use-cases in the media industry
- Leveraging both structured and unstructured data from external sources and our proprietary AI/ML models to build solutions and workflows that can be used to give data-driven insights
- Developing sophisticated yet easy-to-digest interpretations and communicating insights to clients that lead to quantifiable business impact
- Building deep relationships with clients by understanding their stated and, more importantly, latent needs
- Working closely with the client-side delivery managers to ensure a seamless communication and delivery cadence

Essential Skills and Qualifications:
- Hands-on experience with statistical tools and techniques in Python
- Great analytical skills, with expertise in analytical toolkits such as Logistic Regression, Cluster Analysis, Factor Analysis, Multivariate Regression, statistical modelling, and predictive analysis
- Advanced knowledge of supervised and unsupervised machine learning algorithms like Random Forest, Boosting, SVM, Neural Networks, Collaborative Filtering, etc.
- Ability to think creatively and work well both as part of a team and as an individual contributor
- Critical eye for the quality of data and a strong desire to get it right
- Strong communication skills
- Should be able to read a paper and quickly implement ideas from scratch
- A pleasantly forceful personality and charismatic communication style
Job posted by
Bhavisha Mansukhani