Data Engineer

at A logistic Company

Agency job
via Anzy
Bengaluru (Bangalore)
5 - 7 yrs
₹18L - ₹25L / yr
Full time
Skills
Data engineering
ETL
SQL
Hadoop
Apache Spark
Spark
Cassandra
MongoDB
Snowflake schema
Amazon Redshift
Apache Hadoop
Apache Kafka
Java
Python
Scala
Airflow
Hevo
Flink
Key responsibilities:
• Create and maintain data pipelines
• Build and deploy ETL infrastructure for optimal data delivery
• Work with various teams, including product, design and executive teams, to troubleshoot data-related issues
• Create tools for data analysts and scientists to help them build and optimise the product
• Implement systems and processes for data access controls and guarantees
• Distill knowledge from experts in the field outside the org and optimise internal data systems
Preferred qualifications/skills:
• 5+ years of experience
• Strong analytical skills


• Degree in Computer Science, Statistics, Informatics, Information Systems
• Strong project management and organisational skills
• Experience supporting and working with cross-functional teams in a dynamic environment
• SQL guru with hands-on experience on various databases
• NoSQL databases like Cassandra, MongoDB
• Experience with Snowflake, Redshift
• Experience with tools like Airflow, Hevo
• Experience with Hadoop, Spark, Kafka, Flink
• Programming experience in Python, Java, Scala
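Tools like Airflow and Hevo orchestrate exactly this extract-transform-load pattern. As a minimal, hypothetical sketch of such a pipeline (table and field names are illustrative, not from this posting), using Python with sqlite3 standing in for a warehouse such as Redshift or Snowflake:

```python
import sqlite3

def extract(raw_rows):
    """Extract: pull raw shipment records (an in-memory list stands in for a source system)."""
    return [r for r in raw_rows if r.get("weight_kg") is not None]

def transform(rows):
    """Transform: normalise units and derive a billing field."""
    return [
        {
            "shipment_id": r["shipment_id"],
            "weight_kg": round(float(r["weight_kg"]), 2),
            "billable": float(r["weight_kg"]) * r.get("rate_per_kg", 10.0),
        }
        for r in rows
    ]

def load(rows, conn):
    """Load: write the cleaned rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS shipments (shipment_id TEXT, weight_kg REAL, billable REAL)"
    )
    conn.executemany(
        "INSERT INTO shipments VALUES (:shipment_id, :weight_kg, :billable)", rows
    )
    return conn.execute("SELECT COUNT(*) FROM shipments").fetchone()[0]

def run_pipeline(raw_rows, conn):
    return load(transform(extract(raw_rows)), conn)
```

In production this kind of function chain would be split into Airflow tasks with retries and scheduling; the shape of the code is the same.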

Similar jobs

Data Scientist

at Aidetic

Founded 2018  •  Services  •  20-100 employees  •  Profitable
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
PyTorch
pandas
NumPy
TensorFlow
Natural Language Toolkit (NLTK)
recommendation algorithm
Bengaluru (Bangalore)
1 - 5 yrs
₹8L - ₹15L / yr

Title:

Data Scientist


Location:

Bengaluru, Karnataka.


About us: 

We are an emerging Artificial Intelligence-based startup catering to the needs of industries that employ cutting-edge technologies for their operations. Currently, we provide services to disruptive sectors such as drone tech, video surveillance, human-computer interaction, etc. In general, we believe that AI has the ability to shape the future of humanity, and we aim to work towards spearheading this transition.


About the role:

We are looking for a highly motivated data scientist with a strong algorithmic mindset and problem-solving propensity.

Since we are operating in a highly competitive market, every opportunity to increase efficiency and cut costs is critical and the candidate should have an eye for such opportunities. We are constantly innovating – working on novel hardware and software – so a high level of flexibility and celerity in learning is expected.

Responsibilities:

  • Design machine learning / deep learning models for products and client projects.
  • Create and manage data pipelines.
  • Explore new SOTA models and data-handling techniques.
  • Coordinate with software development teams to implement models and monitor outcomes.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
  • Explore promising new technologies and implement them to create awesome stuff.


Technology stack:

  • Must have:
    • Python
    • pandas, NumPy
    • TensorFlow, PyTorch
    • scikit-learn
  • Good to have:
    • SciPy
    • OpenCV
    • spaCy, NLTK
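The must-have stack above centres on model building with pandas/NumPy and a deep-learning framework. As a framework-agnostic illustration of the underlying fitting loop (a toy linear model, not anything Aidetic ships), here is batch gradient descent in plain NumPy:

```python
import numpy as np

def train_linear(X, y, lr=0.1, epochs=200):
    """Minimal batch gradient descent for linear regression (NumPy only).
    Stands in for the kind of training step TensorFlow/PyTorch automate."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = X @ w + b
        err = pred - y
        w -= lr * (X.T @ err) / len(y)   # gradient of MSE w.r.t. weights
        b -= lr * err.mean()             # gradient of MSE w.r.t. bias
    return w, b
```

Fitting `y = 2x + 1` on four points recovers the slope and intercept to within rounding after a few thousand steps.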

What’s in it for you:

  • Opportunity to work on a lot of new cutting-edge technologies: We promise you rapid growth on your skillset by providing you with a steep learning curve.
  • Opportunity to work closely with our experienced founding members who are experts in developing scalable practical AI products and software architecture development.
Job posted by
Vibha Atal

DATA SCIENTIST-MACHINE LEARNING

at Gormalone LLP

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
TensorFlow
Machine Learning (ML)
Artificial Intelligence (AI)
Data Science
Natural Language Processing (NLP)
Computer Vision
Data Analytics
EDA
ETL
recommendation algorithm
MLFlow
Airflow
Cloud Technologies
MLOps
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹30L / yr

DATA SCIENTIST-MACHINE LEARNING                           

GormalOne LLP. Mumbai IN

 

Job Description

GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.

 

GormalOne is looking for a machine learning engineer to join its team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.

 

Location: Bangalore

 

Roles & Responsibilities

  • Individual contributor
  • Developing and maintaining an end-to-end data science project
  • Deploying scalable applications on different platforms
  • Ability to analyze and enhance the efficiency of existing products

 

What are we looking for?

  • 3 to 5 Years of experience as a Data Scientist
  • Skilled in Data Analysis, EDA, Model Building, and Analysis.
  • Basic coding skills in Python
  • Decent knowledge of Statistics
  • Creating pipelines for ETL and ML models.
  • Experience in the operationalization of ML models

 

 

Basic Qualifications

  • Tech/BE in Computer Science or Information Technology
  • Certification in AI, ML, or Data Science is preferred.
  • Masters/Ph.D. in a relevant field is preferred.

 

 

Preferred Requirements

  • Experience with tools and packages like TensorFlow, MLflow, and Airflow
  • Exposure to cloud technologies
  • Operationalization of ML models
  • Good understanding and exposure to MLOps

 

 

Kindly note: Salary shall be commensurate with qualifications and experience

 

 

 

 

Job posted by
Dhwani Rambhia

Data Engineer

at REConnect Energy

Founded 2010  •  Products & Services  •  100-1000 employees  •  Raised funding
Python
pandas
NumPy
ETL
Data Structures
luigi
Renewable Energy
Weather
Linux/Unix
Data integration
Internet of Things (IOT)
PySpark
SQL
NoSQL databases
C++
C
Data Warehouse (DWH)
Informatica
Airflow
Bengaluru (Bangalore)
1 - 3 yrs
₹7L - ₹10L / yr

Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence.

Responsibilities:

  • Manage all real-time and batch ETL pipelines with complete ownership
  • Develop systems for integration, storage and accessibility of multiple data streams from SCADA, IoT devices, Satellite Imaging, Weather Simulation Outputs, etc.
  • Support team members on product development and mentor junior team members

Expectations:

  • Ability to work on broad objectives and move from vision to business requirements to technical solutions
  • Willingness to assume ownership of effort and outcomes
  • High levels of integrity and transparency

Requirements:

  • Strong analytical and data-driven approach to problem solving
  • Proficiency in Python programming and working with numerical and/or imaging data
  • Experience working in Linux environments
  • Industry experience in building and maintaining ETL pipelines
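A typical pre-processing step in such sensor-data ETL is smoothing a noisy stream before storage. A small NumPy sketch (the window size and data are illustrative, not from REConnect's actual pipelines):

```python
import numpy as np

def rolling_mean(values, window):
    """Smooth a noisy sensor series (e.g. SCADA power readings) with a
    trailing moving average -- a common pre-processing step before an
    ETL pipeline writes the stream to storage."""
    v = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where the full window fits
    return np.convolve(v, kernel, mode="valid")
```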
Job posted by
Bhavya Das

Data Scientist

at Disruptive Fintech Startup

Agency job
via Unnati
Data Science
Data Analytics
R Programming
Python
Investment analysis
credit rating
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹12L / yr
If you are interested in joining a purpose-driven community that is dedicated to creating ambitious and inclusive workplaces, then be a part of a high growth startup with a world-class team, building a revolutionary product!
 
Our client is a vertical fintech play focused on solving industry-specific financing gaps in the food sector through the application of data. The platform provides skin-in-the-game growth capital to much-loved F&B brands. Founded in 2019, they’re VC-funded and based out of Singapore and Bangalore, India.
 
The founders are alumni of IIT-D, IIM-B and Wharton, with 12+ years of experience spanning venture capital and corporate entrepreneurship at DFJ, Vertex and InMobi, a VP role at Snyder UAE, investment banking at Unitus Capital (leading the financial services practice), and institutional equities at Kotak. They have assembled a team of high-quality professionals coming together for this mission to disrupt the convention.
 
 
As a Data Scientist, you will develop a first-of-its-kind risk engine for revenue-based financing in India and automate investment appraisals for the company's different revenue-based financing products.

What you will do:
 
  • Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
  • Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
  • Back-testing investment appraisal models at regular intervals to improve the same
  • Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
  • Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
  • Identifying relevant sub-sector criteria to score and rate investment opportunities internally
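Back-testing an appraisal model, as described above, amounts to comparing projected figures against realised ones. A minimal sketch using MAPE as the error metric (the metric choice is illustrative, not the company's actual risk methodology):

```python
def backtest_appraisals(predicted, actual):
    """Back-test an appraisal model: mean absolute percentage error (MAPE)
    between projected and realised monthly revenues. Lower is better."""
    assert len(predicted) == len(actual) and len(actual) > 0, "need paired observations"
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)
```

Running this at regular intervals over closed investments gives the model-improvement signal the bullet above refers to.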

 


Desired Candidate Profile
What you need to have:
 
  • Bachelor’s degree with at least 3 years of relevant work experience, plus CA/MBA (mandatory)
  • Experience in working in lending/investing fintech (mandatory)
  • Strong Excel skills (mandatory)
  • Previous experience in credit rating or credit scoring or investment analysis (preferred)
  • Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
  • Proficiency in data analysis (preferred)
  • Good verbal and written communication skills
Job posted by
Sarika Tamhane

ML Ops Engineer

at Top Management Consulting Company

DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Docker
Kubernetes
Jenkins
GitHub
Git
Python
MySQL
PostgreSQL
SQL server
Oracle
Terraform
Argo
Airflow
Kubeflow
Machine Learning (ML)
Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
Greetings!!

We are looking for a technically driven ML Ops Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address our clients' most complex challenges.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
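The Python 3 plus pytest requirement can be illustrated with a toy API handler and its tests; all names here are hypothetical, not from the client's codebase:

```python
def health_endpoint(model_loaded: bool) -> dict:
    """A tiny, hypothetical API handler of the kind these pipelines test and deploy."""
    return {"status": "ok" if model_loaded else "degraded"}

# pytest-style tests: pytest auto-collects any function named test_*
def test_health_ok():
    assert health_endpoint(True) == {"status": "ok"}

def test_health_degraded():
    assert health_endpoint(False)["status"] == "degraded"
```

In a CI/CD pipeline (Azure DevOps, GitLab CI, etc.) a `pytest` step would run these tests on every commit before deployment.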
Job posted by
Naveed Mohd

Big Data

at NoBroker

Founded 2014  •  Products & Services  •  100-1000 employees  •  Raised funding
Java
Spark
PySpark
Data engineering
Big Data
Hadoop
Selenium
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
You will build, set up and maintain some of the best data pipelines and MPP frameworks for our datasets.
• Translate complex business requirements into scalable technical solutions meeting data design standards
• Demonstrate a strong understanding of analytics needs and be proactive in building generic solutions to improve efficiency
• Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
• Collaborate with multiple cross-functional teams
Job posted by
noor aqsa

Data Engineer

at Service based company

pandas
PySpark
Big Data
Data engineering
Performance optimization
OO concepts
SQL
Python
Remote only
3 - 8 yrs
₹8L - ₹13L / yr
Data pre-processing, data transformation, data analysis, and feature engineering.
The candidate must have expertise in ADF (Azure Data Factory) and be well versed with Python.
Performance optimization of scripts (code) and productionizing of code (SQL, pandas, Python or PySpark, etc.)
Required skills:
Bachelor's degree in Computer Science, Data Science, Computer Engineering, IT or equivalent
Fluency in Python (pandas), PySpark, SQL, or similar
Azure Data Factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently, with demonstrated experience in project or program management
Azure experience, with the ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment
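The performance-optimization and productionizing requirement often comes down to replacing row-wise loops with vectorised operations. A small pandas sketch (the column names and discount rule are illustrative):

```python
import pandas as pd

def slow_discount(df):
    """Row-wise loop: the kind of data-scientist prototype this role productionises."""
    out = []
    for _, row in df.iterrows():
        out.append(row["price"] * 0.9 if row["qty"] > 10 else row["price"])
    return pd.Series(out, index=df.index)

def fast_discount(df):
    """Vectorised equivalent: same result, no Python-level loop.
    where() keeps values where the condition holds, replaces them elsewhere."""
    return df["price"].where(df["qty"] <= 10, df["price"] * 0.9)
```

On large frames the vectorised version is typically orders of magnitude faster, which is exactly the kind of technical root-cause fix the role describes.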
Job posted by
Sonali Kamani

Data Engineer

at Aptus Data LAbs

Founded 2014  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
Apache Spark
Python
Elastic Search
Kibana
Cisco Certified Network Associate (CCNA)
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
  2. Deep understanding of Linux, from kernel mechanisms through user-space management
  3. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins)
  4. Using monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure
  5. Wide understanding of IP networking as well as data centre infrastructure

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production quality. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
  4. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming
     • orchestration platforms: Mesos and Kubernetes
     • data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
     • core presentation technologies: Kibana and Grafana
  5. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
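The Kafka-as-message-queue pattern mentioned above can be sketched with the standard library. Note this is not the real Kafka client API (e.g., confluent-kafka's poll/commit loop); `queue.Queue` only stands in for a topic partition to show the consume-process-commit shape:

```python
import queue

def consume_batch(q, max_messages):
    """Drain up to max_messages from an in-memory queue and 'process' them."""
    processed = []
    for _ in range(max_messages):
        try:
            msg = q.get_nowait()
        except queue.Empty:
            break                      # no more messages; stop the batch
        processed.append(msg.upper())  # stand-in for real stream processing
        q.task_done()                  # stand-in for committing the offset
    return processed
```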

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.

Job posted by
Merlin Metilda

Data Scientist

at DataToBiz

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
Algorithms
ETL
Python
Machine Learning (ML)
Deep Learning
Statistical Modeling
Data Structures
DevOps
Chandigarh
2 - 5 yrs
₹4L - ₹6L / yr
Job Summary

DataToBiz is an AI and Data Analytics Services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are trying to solve some very exciting business challenges by applying cutting-edge Machine Learning and Deep Learning technology. Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of Data Science and Machine Learning. The desired candidate will have a passion for finding patterns in large datasets, an ability to quickly understand the underlying domain, and the expertise to apply Machine Learning tools and techniques to create insights from the data.

Responsibilities and Duties

As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning. One of the primary responsibilities will be to understand the business need and translate it into an actionable analytical plan in consultation with the team, ensuring that the analytical plan aligns with the customer's overall strategic need. You will understand and identify the appropriate data sources required for solving the business problem at hand, and explore, diagnose and resolve any data discrepancies, including but not limited to any ETL that may be required, and missing-value and extreme-value/outlier treatment using appropriate methods. You will execute the project plan to meet requirements and timelines, identify success metrics and monitor them to ensure high-quality output for the client, deliver production-ready models that can be deployed in the production system, and create relevant output documents as required (PowerPoint decks, Excel files, data frames, etc.).

You will also handle overall project management: creating a project plan and timelines for the project and obtaining sign-off, monitoring project progress in conjunction with the project plan, and reporting risks, scope creep, etc. in a timely manner. You will identify and evangelize new and upcoming analytical trends in the market within the organization, and implement the applications of these algorithms/methods/techniques in R/Python.

Required Experience, Skills and Qualifications

• 3+ years of experience working on Data Mining and Statistical Modeling for predictive and prescriptive enterprise analytics
• 2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks like TensorFlow, Caffe, scikit-learn, MXNet, CNTK
• Exposure to ML techniques and algorithms to work with different data formats, including structured data, unstructured data, and natural language
• Experience working with data retrieval and manipulation tools for various data sources like REST/SOAP APIs, relational (MySQL) and NoSQL databases (MongoDB), IoT data streams, cloud-based storage, and HDFS
• Strong foundation in algorithms and data science theory
• Strong verbal and written communication skills with other developers and business clients
• Knowledge of the Telecom and/or FinTech domain is a plus
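The extreme-value/outlier treatment mentioned in the responsibilities is often done with Tukey's IQR fences; a minimal NumPy sketch (the fence multiplier `k=1.5` is the conventional default, not a DataToBiz-specific choice):

```python
import numpy as np

def iqr_clip(values, k=1.5):
    """Cap extreme values at the Tukey fences (Q1 - k*IQR, Q3 + k*IQR),
    a common treatment for outliers before modeling."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return np.clip(v, q1 - k * iqr, q3 + k * iqr)
```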
Job posted by
Ankush Sharma

Machine learning

at Sagacito Technologies

Founded 2016  •  Product  •  20-100 employees  •  Profitable
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Data Science
icon
NCR (Delhi | Gurgaon | Noida)
icon
3 - 9 yrs
icon
₹6L - ₹18L / yr
Location: Gurgaon

Role:
• The person will be part of the data science team, working closely with the business analysts and the technology team to deliver the Data Science portion of the project and product.
• Data Science contribution to a project can range between 30% and 80%.
• Day-to-day activities will include data exploration to solve a specific problem, researching methods to be applied as solutions, setting up ML processes to be applied in the context of a specific engagement/requirement, contributing to building a DS platform, coding the solution, interacting with clients on explanations, integrating the DS solution with the technology solution, data cleaning and structuring, etc.
• The nature of the work will depend on the stage of a specific engagement, available engagements and individual skill.

At least 2-6 years of experience in:
• Machine Learning (including deep learning methods): algorithm design, analysis, development and performance improvement
  • Strong understanding of statistical and predictive modeling concepts, machine-learning approaches, clustering, classification, regression techniques, and recommendation (collaborative filtering) algorithms
  • Time series analysis
  • Optimization techniques and work experience with solvers for MILP and global optimization
• Data Science
  • Good experience in exploratory data analysis and feature design & development
  • Experience applying and evaluating ML algorithms in practical predictive modeling scenarios in various verticals including (but not limited to) FMCG, Media, E-commerce and Hospitality
• Proficiency in programming in Python (must have) and PySpark (good to have); parallel ML algorithm design, development and usage for maximal performance on multi-core, distributed and/or GPU architectures
• Must be able to write production-ready code with reusable components and integration into the data science platform
• Strong inclination to write structured code as per prevailing coding standards and best practices
• Ability to design a data science architecture for repeatability of solutions
• Preparedness to manage the whole cycle from data preparation to algorithm design to client presentation at an individual level
• Comfort working on AWS, including managing data science AWS servers
• Team player with good communication and interpersonal skills
• Good experience in Natural Language Processing and its applications (good to have)
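The recommendation (collaborative filtering) requirement can be sketched at toy scale with user-based cosine similarity in NumPy; the data and scoring rule are illustrative only:

```python
import numpy as np

def recommend(ratings, user, top_n=1):
    """Item indices for `user` via user-based collaborative filtering:
    cosine similarity between users' rating rows, then a similarity-weighted
    average of other users' ratings; already-rated items are excluded."""
    R = np.asarray(ratings, dtype=float)
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R @ R.T) / (norms @ norms.T)   # user-user cosine similarity
    sims[user, user] = 0.0                 # ignore self-similarity
    scores = sims[user] @ R / (np.abs(sims[user]).sum() + 1e-9)
    scores[R[user] > 0] = -np.inf          # don't re-recommend seen items
    return np.argsort(scores)[::-1][:top_n].tolist()
```

With three users where user 2 mirrors user 0's taste, the model recommends user 0's other liked item to user 2.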
Job posted by
Neha Verma