Big Data Engineer

at Netmeds.com

Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr (ESOP available)
Full time
Skills
Big Data
Hadoop
Apache Hive
Scala
Spark
Data Warehousing
Machine Learning (ML)
Deep Learning
SQL
Data modeling
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional / non-functional business requirements and foster data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide a high level of operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipelines, Big Data analytics and Data Warehousing.
  • Experience with SQL/NoSQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization (see the sketch after this list).
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Working knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on both on-premises and cloud-based infrastructure.
  • Good understanding of the machine learning landscape and its concepts.
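
For illustration, a minimal PySpark sketch of the kind of DataFrame work described above. This is a generic example, not Netmeds code: the session settings, paths, and column names are all hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders-daily-aggregate")
        .config("spark.sql.shuffle.partitions", "64")  # tune shuffle width to the cluster
        .getOrCreate()
    )

    # Read raw events, clean them, and aggregate revenue per city per day.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path
    daily_revenue = (
        orders
        .filter(F.col("status") == "DELIVERED")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date", "city")
        .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    # explain() is the first stop for query tuning; write partitioned output.
    daily_revenue.explain(mode="formatted")
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/marts/daily_revenue/"
    )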

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the certifications listed below:

    AZ-900 - Azure Fundamentals

    DP-200, DP-201, DP-203, AZ-204 - Data Engineering

    AZ-400 - DevOps Certification


About Netmeds.com

Netmeds.com is a digital pharma platform that delivers authorized prescription and over-the-counter (OTC) medicines, along with other health-related products. With over 5.7 million customers in more than 670 cities and towns throughout India, Netmeds aims to enable the speedy online purchase and quick delivery of verified prescriptions across the country. The website gives customers a straightforward way to shop for health products online. To guarantee that only the appropriate medication is delivered, a team of licensed pharmacists examines each prescription to verify its legitimacy and determine the appropriate dosage. Through Netmeds, users get access to over 70,000 prescription medications for chronic and recurring disorders, as well as lifestyle drugs and thousands of non-prescription goods.


Netmeds.com is promoted by Dadha Pharma, a Chennai-based business. The Dadha family has a long history in the pharmaceutical sector: they opened their first store selling pharmaceutical products in 1914 and branched out into the production of medications in 1972.

Founded
2015
Type
Product
Size
500-1000 employees
Stage
Raised funding

Similar jobs

Data Scientist - Kofax Accredited Developers

at a global Business Process Management company

Agency job
via Jobdost
Kofax
RPAS
Machine Learning (ML)
Data Science
Statistical Analysis
Natural Language Processing (NLP)
Appian
Mumbai, Pune, Bengaluru (Bangalore), Gurugram, Nashik, Chennai
7 - 11 yrs
₹18L - ₹25L / yr

Job Description:

Role Summary:

The Robotic Process Automation Business Analyst helps define the business case for the proposed automation of business processes by reviewing the current process, identifying its automation potential, and estimating the potential FTE takeout. The process architect, working with the customer's subject matter experts and the technical architect, designs the steps in the process that can be automated (with or without reengineering), which serves as the basis for the development team to implement the robotics.

The business analyst will also review the design at the design stage and validate the developed automation to ensure it meets the intended design and delivers the business benefits.

 

B1 – Data Scientist - Kofax Accredited Developers

 

Total Experience – 7-10 Years

Mandatory –

  • Accreditation of Kofax KTA / KTM
  • Experience in Kofax Total Agility Development – 2-3 years minimum
  • Ability to develop and translate functional requirements to design
  • Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
  • Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
  • Application development using Kofax and KTM modules
  • Good/advanced understanding of Machine Learning, NLP and Statistics
  • Exposure to or understanding of RPA/OCR/Cognitive Capture tools such as Appian, UiPath, Automation Anywhere, etc.
  • Excellent communication skills and collaborative attitude
  • Ability to work with multiple teams and stakeholders, such as Analytics, RPA, Technology and Project Management teams
  • Good understanding of compliance, data governance and risk control processes

 

Good to have

  • Previous experience working in an Agile & hybrid delivery environment
  • Knowledge of VB.Net, C# (C-Sharp), SQL Server and web services

 

Qualification -

  • Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B.Tech, MCA or MBA, with 15+ years of full-time education

 

Job posted by
Saida Jabbar

Data Engineer

at Top startup of India - News App

Agency job
via Jobdost
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
Dataflow
BigQuery
NOSQL Databases
Google Cloud Platform (GCP)
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional business requirements.
● Build and optimize 'big data' pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identify trends, perform follow-up analysis and prepare visualizations.
● Create daily, weekly and monthly reports of product KPIs.
● Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement.

Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science.
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred) and Linux is a must.
● Experience building and optimizing 'big data' pipelines, architectures and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, Dataproc, etc., or similar cloud-based platforms (a minimal BigQuery example follows this list).
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
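
As a hedged illustration of the BigQuery experience mentioned above, a small Python sketch using the google-cloud-bigquery client. The project, dataset and table names are invented, and credentials are assumed to be configured in the environment.

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up project and credentials from the environment

    # Hypothetical table: daily event counts for a news app.
    sql = """
        SELECT DATE(event_time) AS day, COUNT(*) AS events
        FROM `example-project.analytics.app_events`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 30
    """

    for row in client.query(sql).result():
        print(row.day, row.events)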
Job posted by
Sathish Kumar

GCP Data Engineer, WFH

at Multinational Company

Agency job
via Telamon HR Solutions
Data engineering
Google Cloud Platform (GCP)
Python
Remote only
5 - 15 yrs
₹27L - ₹30L / yr

• The incumbent should have hands-on experience in data engineering and GCP data technologies.

• Work with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.

• Work with Agile and DevOps techniques and implementation approaches in the delivery.

• Showcase your GCP data engineering experience when communicating with clients on their requirements, turning these into technical data solutions.

• Build and deliver data solutions using GCP products and offerings.
• Hands-on experience with Python and SQL or MySQL; experience with Looker is an added advantage.

Job posted by
Praveena Sagar

Data Warehouse Architect


Agency job
via The Hub
Data Warehouse (DWH)
ETL
Hadoop
Apache Spark
Spark
Big Data
Teradata
Agile/Scrum
SQL server
Mumbai
8 - 10 yrs
₹20L - ₹23L / yr
• You will work alongside Project Management to ensure alignment of plans with what is being delivered.
• You will utilize your configuration management and software release experience, as well as change management concepts, to drive the success of the projects.
• You will partner with senior leaders to understand business needs and translate them into IT requirements, and consult with the customer's Business Analysts on their data warehouse requirements.
• You will assist the technical team in the identification and resolution of Data Quality issues.
• You will manage small to medium-sized projects relating to the delivery of applications or application changes.
• You will use Managed Services or 3rd-party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the enterprise to resolve issues.
• Recommend and advocate different approaches and designs to meet the requirements.
• Write technical design docs.
• Execute data modelling.
• Provide solution inputs for the presentation layer.
• You will craft and generate summary, statistical, and presentation reports, as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned.

Preferred Qualifications

• Strong interpersonal, teamwork, organizational and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities, as well as an exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills, including the ability to prioritize tasks efficiently with a high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market
• Proven ability to drive continuous improvement
Job posted by
Sridevi Viswanathan

Data Engineer & Sr Data Engineer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Python
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
Job Description:

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from a Data Lake into data streams for business consumption (see the sketch after this list)
• Good customer communication skills
• Good analytical skills
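
A hedged sketch of the ELT pattern named above: register a data-lake table with Spark and apply a business rule in Spark SQL. The paths, table and column names, and the rule itself are invented for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("elt-demo").getOrCreate()

    # Expose a data-lake dataset to Spark SQL as a temporary view.
    spark.read.parquet("/datalake/payments/").createOrReplaceTempView("payments")

    # Business rule: band successful transactions by value for downstream consumers.
    flagged = spark.sql("""
        SELECT txn_id,
               customer_id,
               amount,
               CASE WHEN amount > 100000 THEN 'HIGH_VALUE' ELSE 'STANDARD' END AS value_band
        FROM payments
        WHERE txn_status = 'SUCCESS'
    """)

    flagged.write.mode("append").parquet("/datalake/curated/payments_banded/")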
Job posted by
Vamsikrishna G

AGM Data Engineering

at ACT FIBERNET

Founded 2008  •  Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Hadoop
Informatica
Qlikview
Data pipeline
Bengaluru (Bangalore)
9 - 14 yrs
₹20L - ₹36L / yr

Key Responsibilities:

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the organization
  • Management of technical resources around data technologies, including relational databases, NoSQL DBs, business intelligence databases, scripting languages, big data tools and technologies, and visualization tools
  • Creation of a project plan, including timelines and critical milestones, in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective

 

Skill Requirements:

  • Experience with data pipeline processes and tools
  • Well versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an established ETL tool, e.g. Informatica or Ab Initio
  • Deep understanding of big data systems such as Hadoop, Spark, YARN, Hive, Ranger and Ambari
  • Deep knowledge of the Qlik ecosystem: QlikView, Qlik Sense, and NPrinting
  • Python, or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes :

  • Independence: Must have the ability to work on his/her own without constant direction or supervision. He/she must be self-motivated and possess a strong work ethic, continually striving to put forth extra effort.
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. You must be a strategic thinker/solution seller and should be able to think of integrated solutions (with field force apps, customer apps, CCT solutions, etc.), approaching each unique situation/challenge in different ways using the same tools.
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as an incentive to move forward toward fulfilling commitments and achieving personal and team goals.
Job posted by
Sumit Sindhwani

Senior Engineer - Artificial Intelligence / Computer Vision

at MulticoreWare

Founded 2009  •  Products & Services  •  100-1000 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
C
C++
Embedded C++
Data processing
Deep Learning
recommendation algorithm
sensor fusion
Chennai
3 - 6 yrs
₹7L - ₹12L / yr

Senior Engineer  – Artificial Intelligence / Computer Vision
(Business Unit – Autonomous Vehicles & Automotive - AVA)


We are seeking an exceptional, experienced senior engineer with deep expertise in Computer Vision, Neural Networks, 3D Scene Understanding and Sensor Data Processing. The expectation is to lead a growing team of engineers to help them build and deliver customized solutions for our clients. A solid engineering as well as team management background is a must.


About MulticoreWare Inc
MulticoreWare Inc is a software and solutions development company with top-notch talent and skill in a variety of micro-architectures, including multi-thread, multi-core, and heterogeneous hardware platforms. It works in sectors including High Performance Computing (HPC), Media & AI Analytics, Video Solutions, Autonomous Vehicle and Automotive software, all of which are rapidly expanding. The Autonomous Vehicles & Automotive business unit specializes in delivering optimized solutions for sophisticated sensor fusion intelligence and the design of algorithms & implementation of software to be deployed on a variety of automotive grade hardware platforms.


Role Responsibilities
● Lead a team to solve problems in a perception / autonomous-systems scope and turn ideas into code & products
● Drive all technical elements of development, such as project requirements definition, design, implementation, unit testing, integration, and software delivery
● Implement cutting-edge AI solutions on embedded platforms and optimize them for performance, with hardware-architecture-aware algorithm design and development
● Contribute to the vision and long-term strategy of the business unit


Required Qualifications (Must Have)
● 3 - 7 years of experience with real world system building, including design, coding (C++/Python) and evaluation/testing (C++/Python)
● Solid experience in 2D / 3D Computer Vision algorithms, Machine Learning and Deep Learning fundamentals – Theory & Practice. Hands-on experience with Deep Learning frameworks like Caffe, TensorFlow or PyTorch
● Expert-level knowledge in any of the areas related to signal/data processing or autonomous/robotics software development (perception, localization, prediction, planning), multi-object tracking and sensor fusion algorithms, and familiarity with Kalman filters, particle filters, clustering methods, etc. (a toy example follows this list)
● Good project management and execution capabilities, as well as good communication and coordination ability
● Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related fields
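
For flavor, a toy one-dimensional Kalman filter update of the kind referenced above, written in Python for brevity even though production code here would more likely be C++; the noise values are arbitrary.

    def kalman_step(x, P, z, R=1.0, Q=0.01):
        # One predict/update cycle for a 1-D state with an identity motion model:
        # x = state estimate, P = estimate variance, z = new measurement,
        # R = measurement noise, Q = process noise.
        P = P + Q                # predict: uncertainty grows by process noise
        K = P / (P + R)          # Kalman gain
        x = x + K * (z - x)      # blend prediction with the measurement
        P = (1 - K) * P          # shrink uncertainty after the update
        return x, P

    x, P = 0.0, 1.0
    for z in [0.9, 1.1, 1.0, 0.95]:
        x, P = kalman_step(x, P, z)
    print(round(x, 3), round(P, 3))  # the estimate moves toward ~1.0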


Preferred Qualifications (Nice-to-Have)
● GPU architecture and CUDA programming experience, as well as knowledge of AI inference optimization using quantization, compression or model pruning
● Track record of research excellence with prior publication on top-tier conferences and journals

Job posted by
Amritha Baskaran

ML Engineer

at a company that combines artificial intelligence and neuroscience (BS1)

Agency job
via Multi Recruit
Machine Learning (ML)
MRI Image
Python Programming
Bengaluru (Bangalore)
2 - 6 yrs
₹20L - ₹25L / yr

We are looking for an ML engineer (Neuroscience) and would like them to

  • Build ML algorithms on MRI/medical data sets
  • Design and build intelligent agents using continual learning and deep reinforcement learning techniques
  • Study and transform data science prototypes
  • Design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tune using test results
  • Train and retrain systems when necessary
  • Extend existing ML libraries and frameworks
  • Keep abreast of developments in the field
  • Collaborate on technical proposals to grow and define artificial intelligence research for intelligent systems

Must have:

  • Knowledge of MRI image processing and inferencing
  • Hands-on machine learning expertise, with intensive knowledge of hyperparameter optimization and of statistical assumptions and their implications (a generic sketch follows this list)
  • Experience in deep learning models
  • Excellent Python programming skills for ML coding
  • Understanding of ML integration with our software
  • Good understanding of neuroscience/ clinical data
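
As a hedged, generic illustration of the hyperparameter optimization mentioned above, a scikit-learn grid search on synthetic tabular data; it is deliberately unrelated to any specific MRI pipeline.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    # Synthetic stand-in for a real feature matrix and labels.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
        cv=5,
        scoring="roc_auc",
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))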

Bonus if you:

  • Are enthusiastic about all things brain science and/or mental well-being
  • We are a company that highly values the ability to communicate well. We all take turns at the blog roster, so writing experience and/or enthusiasm is appreciated
  • Our core values encourage empathy and innovation, you are gold if you share these

 

Job posted by
Ragul Ragul

Data Scientist

at Pricelabs

Founded 2014  •  Product  •  20-100 employees  •  Bootstrapped
Data Science
Data Scientist
R Programming
Python
Data Structures
Data Analytics
Data Visualization
Remote only
4 - 7 yrs
₹15L - ₹40L / yr

PriceLabs (https://www.chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is cloud-based software that helps vacation and short-term rentals dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.

We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work and help us design and develop long-lasting solutions to improve them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.

 

Responsibilities:

  • Design, develop and enhance our pricing algorithms to enable new capabilities (a toy illustration follows this list).
  • Process, analyze, model, and visualize findings from our market level supply and demand data.
  • Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
  • Take ownership of product ideas and design discussions.
  • Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
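
To make the pricing-algorithm work concrete, a deliberately toy sketch of demand-based price adjustment; this is not PriceLabs' actual algorithm, and every number below is invented.

    def dynamic_price(base_rate, occupancy, target=0.7, sensitivity=0.5):
        # Raise the nightly rate when projected occupancy exceeds the target,
        # lower it otherwise, flooring at 50% of the base rate.
        adjustment = 1.0 + sensitivity * (occupancy - target)
        return round(base_rate * max(adjustment, 0.5), 2)

    print(dynamic_price(100.0, occupancy=0.9))  # strong demand -> 110.0
    print(dynamic_price(100.0, occupancy=0.4))  # soft demand   -> 85.0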

Requirements:

  • Bachelor's, Master's or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science or other quantitative/engineering fields.
  • Strong understanding of analysis of algorithms, data structures and statistics.
  • Solid programming experience, including being able to quickly prototype an idea and test it out.
  • Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
  • Experience with relational databases and strong knowledge of SQL.
  • Experience building data heavy analytical models in the travel industry.
  • Experience in the vacation rental industry.
  • Experience developing dynamic pricing models.
  • Prior experience working at a fast paced environment.
  • Willingness to wear many hats.
Job posted by
Shareena Fernandes

Machine learning Developer

at Chariot Tech

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Machine Learning (ML)
Big Data
Data Science
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹15L - ₹16L / yr
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with our next-generation Universal IoT platform.

Responsibilities:
• Design and build machines that learn, predict and analyze data.
• Build and enhance tools to mine data at scale.
• Enable the integration of Machine Learning models in the Chariot IoT Platform.
• Ensure the scalability of Machine Learning analytics across millions of networked sensors.
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications.
• Develop generalizable APIs so other engineers can use our work without needing to be a machine learning expert.
Job posted by
Raj Garg