Senior Data Engineer

at Fragma Data Systems

Agency job
Remote only
7 - 13 yrs
₹15L - ₹35L / yr
Full time
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
SQL
Python
Microsoft SQL Server DBA
ELT
Experience
Experience Range: 2 - 10 years
Function: Information Technology
Desired Skills
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Strong grasp of ELT architecture: business-rules processing and extraction of data from the data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
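
A minimal PySpark sketch of the kind of work described above – DataFrame transformations plus the equivalent Spark SQL. The table names, column names, and data-lake paths are hypothetical placeholders, not the employer's actual pipeline.

```python
# Hedged sketch: illustrative only; paths and schema are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Read raw records from the data lake (placeholder path)
orders = spark.read.parquet("s3://data-lake/raw/orders/")

# DataFrame API: apply a business rule, then aggregate
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
          .groupBy("order_date")
          .agg(F.sum("amount").alias("revenue"))
)

# The same logic expressed in Spark SQL over a temporary view
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")

# Publish the curated output for business consumption (placeholder path)
daily_revenue.write.mode("overwrite").parquet("s3://data-lake/curated/daily_revenue/")
```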
Education
Education Type: Engineering
Degree / Diploma: Bachelor of Engineering, Bachelor of Computer Applications, or any engineering degree
Specialization / Subject: Any specialisation
Job Type: Full Time
Job ID: 000018
Department: Software Development

About Fragma Data Systems

Fragma is a leading Big Data, AI and advanced analytics company providing services to global clients.

Founded
2015
Type
Products & Services
Size
employees
Stage
Profitable

Similar jobs

Data Engineer

at PayU

Founded 2002  •  Product  •  500-1000 employees  •  Profitable
Python
ETL
Data engineering
Informatica
SQL
Spark
Snowflake schema
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr

Role: Data Engineer  
Company: PayU

Location: Bangalore / Mumbai

Experience: 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally and the company has already invested $400 million in this region in the last 4 years. PayU in its next phase of growth is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through 3 mechanisms: build; co-build/partner; and select strategic investments.

PayU supports 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and huge growth potential for merchants.

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems serving everything from machine learning workloads to customer-facing web and mobile applications built on cutting-edge open-source frameworks, highly available RESTful services, and back-end Java-based systems
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed
  • Build data pipelines, including implementing, testing, and maintaining infrastructural components of the data engineering stack
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge of and experience in Python, pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling and Informatica
  • Strong experience with scalable compute and data platforms such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc. (a minimal Airflow sketch follows this list)
  • Strong experience with data engineering practices (e.g. data ingestion pipelines and ETL)
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI)
  • Experience with designing and implementing tools that support sharing of data, code, and practices across organizations at scale
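
A minimal sketch of the kind of workflow orchestration mentioned above, written as an Airflow 2.x DAG. The task logic, schedule, and output location are hypothetical placeholders; a real pipeline would load into a warehouse such as Snowflake rather than a local file.

```python
# Hedged sketch: illustrative Airflow DAG, not the company's actual pipeline.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder extract: a real job would pull from a payments data source
    df = pd.DataFrame({"txn_id": [1, 2], "amount": [250.0, 125.5]})
    # Placeholder load: a real job would write to a warehouse (e.g. Snowflake)
    df.to_csv("/tmp/transactions.csv", index=False)


with DAG(
    dag_id="payments_daily_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```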
Job posted by
Vishakha Sonde

Web Scraping Data Engineer

at Transform Solutions (P) Limited

Founded 2002  •  Services  •  20-100 employees  •  Profitable
Javascript
Python
Selenium
Web Scraping
HTML/CSS
XPath
Surat
3 - 5 yrs
₹3L - ₹7L / yr
  • Design and build web crawlers to scrape data and URLs
  • Integrate the data crawled and scraped into our databases
  • Create more/better ways to crawl relevant information
  • Strong knowledge of web technologies (HTML, CSS, Javascript, XPath, Regex)
  • Understanding of data privacy policies (esp. GDPR) and personally identifiable information
  • Develop automated and reusable routines for extracting information from various data sources
  • Prepare requirement summary and re-confirm with Operation team
  • Translate business requirements into specific solutions
  • Ability to relay technical information to non-technical users
  • Demonstrate effective problem-solving and analytical skills
  • Attention to detail, proactivity, critical thinking, and accuracy are essential
  • Ability to work to deadlines and give realistic estimates

    

 

Skills & Expertise

  • 2+ years of web scraping experience
  • Experience with two or more of the following web scraping frameworks and tools: Selenium, Scrapy, Import.io, Webhose.io, ScrapingHub, ParseHub, Phantombuster, Octoparse, Puppeteer, etc. (see the minimal Scrapy sketch after this list)
  • Basic knowledge of data engineering (database ingestion, ETL, etc.) 
  • Solution orientation and "can do" attitude - with a desire to tackle complex problems.
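
A minimal sketch of a Scrapy spider of the kind this role calls for. The target site, selectors, and field names below are hypothetical examples (quotes.toscrape.com is a public practice site), not the company's actual scraping targets.

```python
# Hedged sketch: illustrative spider; run with `scrapy runspider spider.py -o out.json`.
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://quotes.toscrape.com/"]  # demo/practice site

    def parse(self, response):
        # Extract items with XPath, as the skills list above mentions
        for quote in response.xpath("//div[@class='quote']"):
            yield {
                "text": quote.xpath("./span[@class='text']/text()").get(),
                "author": quote.xpath(".//small[@class='author']/text()").get(),
            }
        # Follow pagination links so the whole listing gets crawled
        next_page = response.xpath("//li[@class='next']/a/@href").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```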
Job posted by
Sowmiya Venkat

Tableau Engineer

at Aideo Technologies

Founded 2009  •  Product  •  100-500 employees  •  Bootstrapped
Tableau
Natural Language Processing (NLP)
Computer Vision
Python
RESTful APIs
Microservices
Flask
SQL
Mumbai, Navi Mumbai
3 - 8 yrs
₹4L - ₹22L / yr

We are establishing infrastructure for internal and external reporting using Tableau and are looking for someone with experience building visualizations and dashboards in Tableau and using Tableau Server to deliver them to internal and external users. 

 

Required Experience 

  • Implementation of interactive visualizations using Tableau Desktop  
  • Integration with Tableau Server and support of production dashboards and embedded reports with it 
  • Writing and optimization of SQL queries  
  • Proficient in Python including the use of Pandas and numpy libraries to perform data exploration and analysis 
  • 3 years of experience working as a Software Engineer / Senior Software Engineer
  • Bachelor's in Engineering – can be Electronics and Communication, Computer, or IT
  • Well versed in basic data structures, algorithms, and system design
  • Capable of working well in a team and possessing very good communication skills
  • Self-motivated, organized, and fun to work with
  • Productive and efficient working remotely 
  • Test driven mindset with a knack for finding issues and problems at earlier stages of development 
  • Interest in learning and picking up a wide range of cutting edge technologies 
  • Should be curious and interested in learning some Data science related concepts and domain knowledge 
  • Work alongside other engineers on the team to elevate technology and consistently apply best practices 

 

Highly Desirable 

  • Data Analytics 
  • Experience in AWS cloud or any cloud technologies 
  • Experience in Big Data and streaming technologies such as PySpark and Kafka is a big plus
  • Shell scripting
  • Preferred tech stack – Python, REST APIs, microservices, Flask/FastAPI, pandas, numpy, Linux, shell scripting, Airflow, PySpark (see the sketch after this list)
  • Strong backend experience, having worked with microservices and REST APIs (Flask, FastAPI) and with relational and non-relational databases
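
A minimal sketch in the preferred stack above: a small Flask REST endpoint serving a pandas aggregation. The route, dataset, and field names are hypothetical; a real service would query a database rather than an in-memory frame.

```python
# Hedged sketch: illustrative Flask + pandas endpoint, not the company's actual API.
import numpy as np
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory dataset standing in for a real database query
sales = pd.DataFrame({
    "region": ["north", "south", "north", "east"],
    "amount": np.array([120.0, 80.5, 200.0, 45.25]),
})


@app.route("/api/sales/summary")
def sales_summary():
    # Aggregate with pandas and return JSON for a dashboard or report
    summary = sales.groupby("region")["amount"].sum().round(2)
    return jsonify(summary.to_dict())


if __name__ == "__main__":
    app.run(debug=True)
```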
Job posted by
Akshata Alekar
Tableau
SQL
Problem solving
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹12L / yr
  • Hands-on development/maintenance experience in Tableau: Developing, maintaining, and managing advanced reporting, analytics, dashboards and other BI solutions using Tableau
  • Reviewing and improving existing Tableau dashboards and data models/ systems and collaborating with teams to integrate new systems
  • Provide support and expertise to the business community to assist with better utilization of Tableau
  • Understand business requirements, conduct analysis and recommend solution options for intelligent dashboards in Tableau
  • Experience with Data Extraction, Transformation and Load (ETL) – knowledge of how to extract, transform and load data
  • Execute SQL data queries across multiple data sources in support of business intelligence reporting needs, and format query results / reports in various ways (a small example follows this list)
  • Participate in QA testing, liaise with other project team members, and be responsive to clients' needs, all with an eye for detail in a fast-paced environment
  • Perform and document data analysis, data validation, and data mapping/design
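
A small sketch of running a SQL query from Python and formatting the result with pandas, as mentioned above. The in-memory SQLite table, data, and query are hypothetical stand-ins for the organisation's real data sources.

```python
# Hedged sketch: query + formatting against a throwaway in-memory database.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES ('north', 120.0), ('south', 80.5), ('north', 200.0);
""")

query = """
    SELECT region, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM orders
    GROUP BY region
    ORDER BY total_amount DESC
"""
report = pd.read_sql_query(query, conn)
print(report.to_string(index=False))  # one of many ways to format the result
```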

 

  • Extensive experience in developing, maintaining and managing Tableau driven dashboards & analytics and working knowledge of Tableau administration /architecture.
  • A solid understanding of SQL, relational databases, and normalization
  • Proficiency in use of query and reporting analysis tools
  • Competency in Excel (macros, pivot tables, etc.)
  • Degree in Mathematics, Computer Science, Information Systems, or related field.
Job posted by
Jerrin Thomas

Senior consultant

at An IT Services Major, hiring for a leading insurance player.

Agency job
via Indventur Partner
Big Data
Hadoop
Apache Kafka
Apache Hive
Microsoft Windows Azure
Hbase
Chennai
3 - 5 yrs
₹5L - ₹10L / yr

Client: An IT Services Major, hiring for a leading insurance player.

 

 

Position: SENIOR CONSULTANT

 

Job Description:

 

  • Azure Admin – Senior Consultant with HDInsight (Big Data)

 

Skills and Experience

 

  • Microsoft Azure Administrator certification
  • Big Data project experience with the Azure HDInsight stack and big data processing frameworks such as Spark, Hadoop, Hive, Kafka or HBase
  • Preferred: Insurance or BFSI domain experience
  • 3 to 5 years of experience is required
Job posted by
Vanshika Kaur

Product Analyst

at MX Player

Founded 2011  •  Product  •  500-1000 employees  •  Profitable
Product Analyst
Product Analysis
Business Intelligence (BI)
Data Analytics
Data Analysis
SQL
Python
Tableau
R Language
R
sisense
Remote, Mumbai, Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
MX Player is the world’s best video player with an install base of 500+ million worldwide and 350+ million in India. We are installed on every second smartphone in India.

We cater to a wide range of entertainment categories including video streaming, music streaming, games and short videos via our MX Player and MX Takatak apps which are our flagship products.

Both MX Player and MX Takatak iOS apps are frequently featured amongst the top 5 apps in the Entertainment category on the Indian App Store. These are built by a small team of engineers based in Mumbai.


Position : Product Analyst

 

Key Responsibilities:

 

  • Driving the collection of new data that would help build the next generation of algorithms (E.g. audience segmentation, contextual targeting)
  • Understanding user behavior and performing root-cause analysis of changes in data trends to identify corrections or propose desirable enhancements in product & across different verticals
  • Excellent problem solving skills and the ability to make sound judgments based on trade-offs for different solutions to complex problem constraints
  • Defining and monitoring KPIs for product/content/business performance and identifying ways to improve them
  • Be a strong advocate of a data-driven approach and drive analytics decisions through user testing, data analysis, and A/B testing (see the small A/B test sketch after this list)
  • Help in defining the analytics roadmap for the product
  • Prior knowledge of and experience in the ad tech industry or other advertising platforms will be preferred
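
A small sketch of analysing a hypothetical A/B test in Python with a two-proportion z-test, illustrating the kind of analysis mentioned above. The conversion counts are made up for illustration.

```python
# Hedged sketch: toy A/B test analysis, not MX Player's actual data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]    # converted users in variants A and B (hypothetical)
exposures = [10000, 10000]  # users exposed to each variant (hypothetical)

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests a real difference
```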

 

 

Tools/ Skillset:

  • Knowledge of Google DFP (preferred)
  • SQL
  • R/Python (preferred)
  • Any BI tool such as Tableau or Sisense (preferred)
  • Go-getter attitude
  • Ability to thrive in a fast paced dynamic environment
  • Self - Starter
Job posted by
Mittal Soni

ML Engineer

at Global content marketplace

Agency job
via Qrata
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Mumbai
4 - 8 yrs
₹20L - ₹30L / yr

We are building a global content marketplace that brings companies and content creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we’ve worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands.
Our mission is to become the world’s largest content creation and distribution platform for all kinds of content creators and brands.

 

Our Team

 

We are a 25+ member company and are scaling up rapidly in both team size and ambition.

If we were to define the kind of people and the culture we have, it would be -

a) Individuals with an Extreme Sense of Passion About Work

b) Individuals with Strong Customer and Creator Obsession

c) Individuals with Extraordinary Hustle, Perseverance & Ambition

We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.

 

Our Investors

 

We are fortunate to be backed by some of the industry’s most prolific angel investors - Kunal Bahl and Rohit Bansal (Snapdeal founders), YourStory Media. (Shradha Sharma); Dr. Saurabh Srivastava, Co-founder of IAN and NASSCOM; Slideshare co-founder Amit Ranjan; Indifi's Co-founder and CEO Alok Mittal; Sidharth Rao, Chairman of Dentsu Webchutney; Ritesh Malik, Co-founder and CEO of Innov8; Sanjay Tripathy, former CMO, HDFC Life, and CEO of Agilio Labs; Manan Maheshwari, Co-founder of WYSH; and Hemanshu Jain, Co-founder of Diabeto.
Backed by Lightspeed Venture Partners



Job Responsibilities:
● Design, develop, test, deploy, maintain and improve ML models
● Implement novel learning algorithms and recommendation engines
● Apply Data Science concepts to solve routine problems of target users
● Translate business analysis needs into well-defined machine learning problems and select appropriate models and algorithms
● Create the architecture for, implement, maintain and monitor data source pipelines that can be used across different types of data sources
● Monitor performance of the architecture and conduct optimization
● Produce clean, efficient code based on specifications
● Verify and deploy programs and systems
● Troubleshoot, debug and upgrade existing applications
● Guide junior engineers for productive contribution to the development
The ideal candidate (ML and NLP Engineer) must have -
● 4 or more years of experience in ML Engineering
● Proven experience in NLP (see the illustrative sketch after this list)
● Familiarity with generative language models such as GPT-3
● Ability to write robust code in Python
● Familiarity with ML frameworks and libraries
● Hands on experience with AWS services like Sagemaker and Personalize
● Exposure to state-of-the-art techniques in ML and NLP
● Understanding of data structures, data modeling, and software architecture
● Outstanding analytical and problem-solving skills
● Team player, with the ability to work cooperatively with other engineers
● Ability to make quick decisions in high-pressure environments with limited information
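
A minimal sketch of an NLP text-classification pipeline in Python, illustrating the kind of ML/NLP work described above. The toy texts, labels, and model choice are hypothetical; production systems would use far larger datasets and possibly transformer or GPT-3-class models.

```python
# Hedged sketch: TF-IDF features + a linear classifier on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great article on marketing strategy",
    "invoice overdue, please pay immediately",
    "new blog post about content creation",
    "payment failed for your subscription",
]
labels = ["content", "billing", "content", "billing"]

# Fit a simple pipeline; swap in a transformer model for higher accuracy.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["reminder: your payment did not go through"]))
```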
Job posted by
Mrunal Kokate

Senior Software Engineer (Architect), Data

at Uber

Founded 2012  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
kafka
Spark
Apache Hive
Java
Python
Scala
Apache Hadoop
Apache Impala
Apache Drill
Apache Spark
tez
presto
Bengaluru (Bangalore)
7 - 15 yrs
₹0L / yr

Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:

 

  • Demonstrate tech expertise: demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
  • Execute large scale projects: define, plan and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
  • Collaborate across teams: act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
  • Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
  • Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.


What You’ll Do

  • Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
  • Work alongside the pioneers of big data systems such as Hive, Yarn, Spark, Presto, Kafka, Flink to build out a highly reliable, performant, easy to use software system for Uber’s planet scale of data. 
  • Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a highly performant, large-scale service while building these capabilities for Uber's engineers and operations folks.

 

What You’ll Need

  • 7+ years experience in building large scale products, data platforms, distributed systems in a high caliber environment.
  • Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
  • Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
  • Data Engineering: Expertise in one of the big data analytics technologies we currently use such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, Arrow etc. Under the hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon RedShift, Docker, Kubernetes, Mesos etc.
  • Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
  • A team player: You believe that you can achieve more on a team – that the whole is greater than the sum of its parts. You rely on others’ candid feedback for continuous improvement.
  • Business acumen: You understand requirements beyond the written word. Whether you’re working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Job posted by
Suvidha Chib

Data Scientist

at Jiva adventures

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹15L / yr
Should be experienced in building machine learning pipelines (a minimal sketch follows this description). Should be proficient in Python and scientific packages like pandas, numpy, scikit-learn, matplotlib, etc. Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, and probability & statistics. Strong problem-solving and conceptual thinking abilities. Hands-on experience in model building and in building highly customized and optimized data pipelines integrating third-party APIs and in-house data sources. Extracting features from text data using tools like spaCy. Deep learning for NLP using any modern framework.
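
A minimal sketch of a scikit-learn model-building pipeline using pandas and numpy, illustrating the skills listed above. The dataset is synthetic and the model choice is arbitrary; this is an illustration, not the company's actual workflow.

```python
# Hedged sketch: synthetic data, simple pipeline, cross-validated score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature_a": rng.normal(size=200),
    "feature_b": rng.normal(size=200),
})
# Synthetic target loosely dependent on the features
df["target"] = (df["feature_a"] + 0.5 * df["feature_b"] > 0).astype(int)

pipeline = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipeline, df[["feature_a", "feature_b"]], df["target"], cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```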
Job posted by
Bharat Chinhara

Big Data Evangelist

at UpX Academy

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
Spark
Hadoop
MongoDB
Python
Scala
Apache Kafka
Apache Flume
Cassandra
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Looking for a technically sound and excellent trainer in big data technologies. Get an opportunity to become popular in the industry and gain visibility. Host regular sessions on Big Data-related technologies and get paid to learn.
Job posted by
Suchit Majumdar