Senior Machine Learning Engineer | Permanent WFH

at Delivery Solutions

Posted by Ayyappan Paramasivam
Remote only | 3 - 6 yrs | ₹7L - ₹19L / yr | Full time
Skills: Python, NumPy, pandas, SQL, Docker
Title: Senior Software Engineer - AI/ML

  • Minimum 3 years of technical experience in AI/ML (internships & freelance work count towards this)
  • Excellent proficiency in Python (NumPy, pandas)
  • Experience working with SQL/NoSQL databases
  • Experience working with AWS and Docker
  • Experience working with large datasets
  • Familiarity with ML model building and deployment on AWS
  • Good communication and strong problem-solving skills

Perks & Benefits @Delivery Solutions: 

  • Permanent Remote work - (Work from anywhere)
  • Broadband reimbursement
  • Flexi work hours - (Login/Logout flexibility)
  • 21 Paid leaves in a year (Jan to Dec) and 7 COVID leaves
  • Two appraisal cycles in a year
  • Encashment of unused leaves on gross salary
  • RNR - Amazon gift vouchers
  • Employee referral bonus
  • Technical & soft skills training
  • Sodexo meal card
  • Surprise gifts for birthdays, service anniversaries, weddings, and new babies
  • Annual trip

About Delivery Solutions

Delivery Solutions provides out-of-the-box solutions that let retailers offer customer experiences such as curbside delivery, same-day delivery, shipping, in-store pickup, and post-purchase pickup. The company collaborates with some of the most recognizable names in the retail industry, such as Michael's, Sephora, Loblaw, GameStop, Office Depot, Sally Beauty, Total Wine, Belk, and Abercrombie & Fitch.


Its SaaS-based solution is highly configurable and works in combination with e-commerce sites, warehouse management systems, order management systems, and point-of-sale systems to deliver a highly scalable experience and a base of delighted customers. The company has direct connections to the most prominent businesses in same-day delivery, such as DoorDash, Uber, Postmates, and Shipt, in addition to the most prominent shipping firms, including UPS, FedEx, and USPS.


Founded 2015  •  Product  •  100-500 employees  •  Profitable

Similar jobs

Senior Backend Engineer

at a secure data and intelligence sharing platform for enterprises that believes data security and privacy are paramount for AI and machine learning to truly evolve and embed into the world

Agency job
via HyrHub
Skills: Python, Data Structures, RESTful APIs, Design patterns, Django, Apache Kafka, pandas, TensorFlow, RabbitMQ, Amazon Web Services (AWS), Machine Learning (ML), DevOps, Airflow
Bengaluru (Bangalore) | 2 - 4 yrs | ₹13L - ₹25L / yr
As part of an early-stage startup:
Expectations

  • Good experience writing quality, mature Python code; familiar with Python design patterns, OOP, refactoring patterns, and writing async tasks and heavy background tasks.
  • Understands authentication/authorization; ideally has worked on auth mechanisms in Python. Familiarity with Auth0 is preferred.
  • Understands how to secure API endpoints.
  • Familiar with AWS concepts: EC2, VPC, RDS, and IAM (or any cloud equivalent).
  • Basic DevOps experience engineering and supporting services in a modern containerized cloud stack.
  • Experience with and understanding of Docker and docker-compose.

Backend Engineer @Eder Labs
Responsibilities

  • Own backend design, architecture, implementation, and delivery of features and modules.
  • Take ownership of the database: write migrations, maintain, and manage the database (Postgres, MongoDB).
  • Collaborate with a generalist team to develop, test, and launch new features. Be a generalist and find ways to bring up your team, product, and eventually the business.
  • Refactor when needed, and keep hunting for new tools that can help us as a business (not just the engineering team).
  • Develop data pipelines, from data sourcing and wrangling (cleaning) to transformations and eventual use.
  • Develop MLOps systems to take in data, analyze it, pass it through any models, and process results - DevOps for machine learning.
  • Follow modern Git-oriented dev workflows: versioning, CI/CD automation, and testing.
Ideal candidate will have:

  • 2 years of full-time experience working as a data infrastructure / core backend engineer in a team environment.
  • Understanding of machine learning technologies, frameworks, and the paradigms involved there.
  • Experience with the following tools:
    ○ FastAPI / Django
    ○ Airflow
    ○ Kafka / RabbitMQ
    ○ TensorFlow / pandas / Jupyter Notebook
    ○ pytest / asyncio
  • Experience setting up and managing the ELK stack.
  • In-depth understanding of database systems, in terms of scaling compute efficiently.
  • Good understanding of data streaming services and the networking involved.
Job posted by
Shwetha Naik

Data Analyst

at SafeHouse Tech

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Skills: SQL, Python, Big Data, PowerBI, Amazon Redshift, BigQuery
Remote only | 3 - 8 yrs | ₹8L - ₹20L / yr

About The Position:

We are looking for a sharp and highly analytical Data Analyst to join our fast-growing company and help us make data-driven decisions.

As a Data Analyst, you will play a key role in shaping SafeHouse's future by identifying business, product, and marketing opportunities while creating interactive dashboards for the different departments (product, marketing, R&D, sales, etc.).

You will work across the organization and provide critical insights to multiple departments (Marketing, Product, Finance, Business Strategy, CX, and more), helping them focus on what's important.

 

Responsibilities

  • Work with various managers to learn business insight needs, including tracking metrics, dashboards, and reports.
  • Act as a bridge between units' business needs and the BI department, ensuring high execution quality, data integrity of analyzed insights, and quality reporting deliverables.
  • Establish reporting methodologies and cross-department standards.
  • Go beyond the numbers to communicate, influence, support, and execute business decisions.

 

Qualifications

  • 3+ years of experience in quantitative analysis
  • Excellent communication skills with the ability to deliver results of analyses clearly and effectively
  • Excellent SQL skills - a must
  • Experience with Google Data Studio
  • Strong Python capabilities - a major advantage
  • Experience working with big data tools: Redshift, BigQuery, Databricks, etc. - a must
  • A highly independent and creative team player who works well with others
  • BSc/BA in a highly quantitative field such as mathematics, statistics, economics, or engineering
Job posted by
Ruchir Shukla

Data Engineer

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Skills: Big Data, Hadoop, Apache Hive, Architecture, Data engineering, Java, Python, Scala, ETL
Mumbai | 5 - 12 yrs | ₹10L - ₹20L / yr
JD of Data Engineer
As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain, and engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and the flexibility of your solutions to scale and respond to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You’ll Do
● Understand the business problem and translate it into data services and engineering outcomes
● Explore new technologies and learn new techniques to solve business problems creatively
● Think big and drive the strategy for better data quality for the customers
● Collaborate with many teams - engineering and business - to build better data products

What We’re Looking For
● 1-3 years of experience with:
○ Hands-on experience of any one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Pig, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL, and application development
Job posted by
Reena Bandekar

Azure Data Engineer

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Skills: Big Data, Spark, PySpark, Data engineering, Data Warehouse (DWH), Windows Azure, Python, SQL, Scala, Azure Databricks
Remote, Bengaluru (Bangalore) | 3 - 6 yrs | ₹10L - ₹20L / yr

Azure – Data Engineer

  • At least 2 years of hands-on experience working with an Agile data engineering team on big data pipelines using Azure in a commercial environment.
  • Dealing with senior stakeholders/leadership
  • Understanding of Azure data security and encryption best practices [ADFS/ACLs]

Databricks – experience writing in and using Databricks, using Python to transform and manipulate data.

Data Factory – experience using Data Factory in an enterprise solution to build data pipelines. Experience calling REST APIs.

Synapse/data warehouse – experience using Synapse/data warehouse to present data securely and to build & manage data models.

Microsoft SQL Server – we'd expect the candidate to have come from a SQL/data background and progressed into Azure.

PowerBI – experience with this is preferred.

Additionally

  • Experience using Git as a source control system
  • Understanding of DevOps concepts and application
  • Understanding of Azure cloud costs/management and running platforms efficiently
Job posted by
Vishal Sharma

Event & Unstructured Data

at a company that provides both wholesale and retail funding

Agency job
via Multi Recruit
Skills: AWS Kinesis, Data engineering, AWS Lambda, DynamoDB, data pipeline, Data governance, Data processing, Amazon Web Services (AWS), Athena, Audio, Linux/Unix, Python, SQL, WebLogic
Mumbai | 5 - 7 yrs | ₹20L - ₹25L / yr
  • Key responsibility is to design & develop a data pipeline for real-time data integration, processing, executing the model (if required), and exposing output via MQ / API / NoSQL DB for consumption
  • Provide technical expertise to design efficient data ingestion solutions to store & process unstructured data, such as documents, audio, images, weblogs, etc.
  • Develop API services to provide data as a service
  • Prototype solutions for complex data processing problems using AWS cloud-native services
  • Implement automated audit & quality assurance checks in the data pipeline
  • Document & maintain data lineage from various sources to enable data governance
  • Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.

Skills

  • Programming experience using Python & SQL
  • Extensive working experience in data engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing
  • Experience & expertise in implementing complex data pipelines
  • Strong familiarity with the AWS toolset for storage & processing; able to recommend the right tools/solutions to address specific data processing problems
  • Hands-on experience in unstructured (audio, image, documents, weblogs, etc.) data processing
  • Good analytical skills with the ability to synthesize data to design and deliver meaningful information
  • Know-how of any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.) will be an advantage
  • Ability to understand business functionality, processes, and flows
  • Good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented with the ability to work independently

Functional knowledge

  • Real-time Event Processing
  • Data Governance & Quality assurance
  • Containerized deployment
  • Linux
  • Unstructured Data Processing
  • AWS Toolsets for Storage & Processing
  • Data Security

 

Job posted by
Sapna Deb

SQL Developer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  Profitable
Skills: Data Warehouse (DWH), Informatica, ETL, SQL, SSIS
Remote only | 5 - 7 yrs | ₹10L - ₹18L / yr
SQL Developer with 7 years of relevant experience and strong communication skills.

Key responsibilities:

  • Creating, designing, and developing data models
  • Preparing plans for all ETL (Extract/Transform/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Developing and preparing a schedule for a new data warehouse
  • Analyzing large databases and recommending appropriate optimizations
  • Administering all requirements and designing various functional specifications for data
  • Providing support to the software development life cycle
  • Preparing various code designs and ensuring their efficient implementation
  • Evaluating all code and ensuring the quality of all project deliverables
  • Monitoring data warehouse work and providing subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills

Experience range: 5 - 10 years
Function: Information Technology
Must-have skills: SQL

Hard skills for a Data Warehouse Developer:

  • Hands-on experience with ETL tools, e.g., DataStage, Informatica, Pentaho, Talend
  • Sound knowledge of SQL
  • Experience with SQL databases such as Oracle, DB2, and SQL Server
  • Experience using data warehouse platforms, e.g., SAP, Birst
  • Experience designing, developing, and implementing data warehouse solutions
  • Project management and system development methodology
  • Ability to proactively research solutions and best practices

Soft skills for Data Warehouse Developers:

  • Excellent analytical skills
  • Excellent verbal and written communication
  • Strong organization skills
  • Ability to work on a team, as well as independently
Job posted by
Sandhya JD

Machine Learning Engineer

at CES IT

Founded 1996  •  Services  •  1000-5000 employees  •  Profitable
Skills: Machine Learning (ML), Deep Learning, Python, Data modeling
Hyderabad | 7 - 12 yrs | ₹5L - ₹15L / yr
  • A critical-thinking mind who likes to solve complex problems, loves programming, and cherishes working in a fast-paced environment
  • Strong Python development skills, with 7+ years of experience with SQL
  • A bachelor's or master's degree in Computer Science or related areas
  • 5+ years of experience in data integration and pipeline development
  • Experience implementing Databricks Delta Lake and data lakes
  • Expertise designing and implementing data pipelines using a modern data engineering approach and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark
  • Experience working with multiple file formats (Parquet, Avro, Delta Lake) & APIs
  • Experience with AWS cloud on data integration with S3
  • Hands-on development experience with Python and/or Scala
  • Experience with SQL and NoSQL databases
  • Experience using data modeling techniques and tools (focused on dimensional design)
  • Experience with micro-service architecture using Docker and Kubernetes
  • Experience working with one or more of the public cloud providers, i.e. AWS, Azure, or GCP
  • Experience effectively presenting and summarizing complex data to diverse audiences through visualizations and other means
  • Excellent verbal and written communication skills and strong leadership capabilities

Skills: ML, Modelling, Python, SQL, Azure Data Lake, Data Factory, Databricks, Delta Lake
Job posted by
Yash Rathod

Big Data Engineer

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
Skills: Big Data, Hadoop, Apache Hive, Spark, Data engineering, Pig, Data Warehouse (DWH), SQL
Pune | 2.5 - 6 yrs | ₹1L - ₹8L / yr
Job Title/Designation: Big Data Engineers - Hadoop, Pig, Hive, Spark
Employment Type: Full Time, Permanent

Job Description:
 
Work Location - Pune
Work Experience - 2.5 to 6 Years
 
Note - Candidates with short notice periods will be given preference.
 
Mandatory Skills:
  • Working knowledge and hands-on experience of Big Data / Hadoop tools and technologies.
  • Experience of working in Pig, Hive, Flume, Sqoop, Kafka etc.
  • Database development experience with a solid understanding of core database concepts, relational database design, ODS & DWH.
  • Expert level knowledge of SQL and scripting preferably UNIX shell scripting, Perl scripting.
  • Working knowledge of Data integration solution and well-versed with any ETL tool (Informatica / Datastage / Abinitio/Pentaho etc).
  • Strong problem solving and logical reasoning ability.
  • Excellent understanding of all aspects of the Software Development Lifecycle.
  • Excellent written and verbal communication skills.
  • Experience in Java will be an added advantage
  • Knowledge of object-oriented programming concepts
  • Exposure to ISMS policies and procedures.
Job posted by
Nikita Aher

Senior Data Scientist

at Kaleidofin

Founded 2018  •  Products & Services  •  100-1000 employees  •  Profitable
Skills: Data Science, Machine Learning (ML), Python, SQL, Natural Language Processing (NLP)
Chennai, Bengaluru (Bangalore) | 3 - 8 yrs | Best in industry
  • 4+ years' experience in advanced analytics, model building, and statistical modeling
  • Solid technical / data-mining skills and ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
  • Be data-driven and outcome-focused
  • Must have good business judgment with demonstrated ability to think creatively and strategically
  • Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
  • Takes personal ownership; self-starter; ability to drive projects with minimal guidance and focus on high-impact work
  • Learns continuously; seeks out knowledge, ideas, and feedback
  • Looks for opportunities to build own skills, knowledge, and expertise
  • Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive)
  • Experience in risk and credit score domains preferred
  • Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Job posted by
Poornima B

Data Platform Engineer

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Skills: Python, Java, Scala, Apache Kafka, Data warehousing, Data Warehouse (DWH), Hadoop, Data migration, API, Spark, NoSQL databases, data engineer
Remote, Bengaluru (Bangalore) | 5 - 7 yrs | ₹15L - ₹30L / yr
At Hypersonix, our platform technology aims to solve regular and persistent problems in the data platform domain. We've established ourselves as a leading developer of innovative software solutions. We're looking for a highly skilled data platform engineer to join our program and platform design team. Our ideal candidate will have expert knowledge of software development processes and solid experience designing, developing, evaluating, and troubleshooting data platforms and data-driven applications. If finding issues and fixing them with beautiful, meticulous code is among the talents that make you tick, we'd like to hear from you.

Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms, as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to come up with better design/development approaches for the application and platform to scale and serve needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute full software development life cycles (SDLC) for each assigned project, adhering to company standards and expectations

Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management, and monitoring of the company's 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns; write code and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations
• Manage large volumes of data and process them in real-time and batch orientation as needed, while leveraging the global technology stack and making localized improvements
• Track, document, and maintain software system functionality - both internally and externally - leveraging opportunities to improve engineering productivity
• Code review, Git operations, CI/CD; mentor and assign tasks to junior team members

Responsibilities:
• Writing reusable, testable, and efficient code
• Design and implementation of low-latency, high-availability, and performant applications
• Integration of user-facing elements developed by front-end developers with server-side logic
• Implementation of security and data protection
• Integration of data storage solutions

Skills and Qualifications
• Bachelor's degree in software engineering or information technology
• 5-7 years' experience engineering software and networking platforms
• 5+ years of professional experience with Python, Java, or Scala
• Strong experience in API development and API integration
• Proven knowledge of data migration, platform migration, CI/CD processes, and orchestration workflows like Airflow, Luigi, or Azkaban
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop, and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment
• Proven ability to document design processes, including development, tests, analytics, and troubleshooting
• Experience with rapid development cycles in a web-based/multi-cloud environment
• Strong scripting and test automation abilities

Good-to-have Qualifications:
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker, and Kubernetes
• Proven experience designing interactive applications and large-scale platforms
• Desire to continue growing professional capabilities through ongoing training and educational opportunities
Job posted by
Manu Panwar