Fragma Data Systems
Fresher Data Engineer - Python+SQL (Internship + Job Opportunity)
Posted by Evelyn Charles
0 - 1 yrs
₹2.5L - ₹4L / yr
Remote, Bengaluru (Bangalore), Hyderabad
Skills
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python developer
● Hands-on work experience in SQL/PL-SQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid); a brief illustrative sketch follows this list
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and move up to Big Data and cloud technologies such as PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
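Purely to illustrate the Python, SQL, and ORM skills listed above, here is a minimal sketch using Flask with SQLAlchemy as the ORM. The Employee model, database URL, and endpoint are hypothetical assumptions made for the example, not details of the actual role.

# Minimal Flask + SQLAlchemy sketch (hypothetical model and endpoint, for illustration only)
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"  # assumption: a local SQLite file
db = SQLAlchemy(app)

class Employee(db.Model):  # ORM-mapped table (hypothetical)
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    team = db.Column(db.String(80))

@app.route("/employees/<team>")
def employees_by_team(team):
    # The ORM turns this into a parameterized SQL SELECT
    rows = Employee.query.filter_by(team=team).all()
    return jsonify([{"id": r.id, "name": r.name} for r in rows])

if __name__ == "__main__":
    with app.app_context():
        db.create_all()  # issues the CREATE TABLE statements via SQL
    app.run(debug=True)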

About Fragma Data Systems

Founded: 2015
Stage: Profitable
About

Fragma is a leading Big Data, AI, and advanced analytics company providing services to global clients.

Connect with the team
Mallikarjun Degul
Sandhya JD
Varun Reddy
Priyanka U
Simpy kumari
Minakshi Kumari
Latha Yuvaraj
Vamsikrishna G

Similar jobs

Chennai, Tirunelveli
5 - 7 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

Greetings!


We are looking for a data engineer for one of our premium clients for their Chennai and Tirunelveli locations.


Required Education/Experience


● Bachelor's degree in Computer Science or a related field

● 5-7 years’ experience in the following:

● Snowflake and Databricks management

● Python and AWS Lambda

● Scala and/or Java

● Data integration services, SQL, and Extract-Transform-Load (ETL)

● Azure or AWS for development and deployment

● Jira or similar tool during SDLC

● Experience managing a codebase using a code repository in Git/GitHub or Bitbucket

● Experience working with a data warehouse.

● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML

● Exposure to working in an agile work environment


RandomTrees
Posted by Amareswarreddt yaddula
Hyderabad
5 - 16 yrs
₹1L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
SQL

We are hiring an AWS Data Engineer expert to join our team.


Job Title: AWS Data Engineer

Experience: 5 to 10 years

Location: Remote

Notice: Immediate or Max 20 Days

Role: Permanent Role


Skillset: AWS, ETL, SQL, Python, PySpark, Postgres, Dremio.


Job Description:

Able to develop ETL jobs.

Able to help with data curation/cleanup, data transformation, and building ETL pipelines (a brief sketch follows below).

Strong Postgres experience is required; knowledge of Dremio as a data visualization/semantic layer between the database and the application is a plus.

SQL, Python, and PySpark are a must.

Good communication skills are expected.
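As a rough, hedged illustration of such an ETL job, the PySpark sketch below cleans a raw extract and loads it into Postgres over JDBC. The source path, column names, target table, and connection details are placeholders assumed for the example, not the client's actual setup.

# Hypothetical PySpark ETL sketch: curate a raw CSV extract and load it into Postgres
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-etl")
         .config("spark.jars.packages", "org.postgresql:postgresql:42.7.3")  # assumption: Postgres JDBC driver
         .getOrCreate())

raw = spark.read.option("header", True).csv("s3a://example-bucket/orders/*.csv")  # placeholder source

cleaned = (raw
           .dropDuplicates(["order_id"])                         # data curation: de-duplicate
           .filter(F.col("amount").isNotNull())                  # drop incomplete rows
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd")))

(cleaned.write
 .format("jdbc")
 .option("url", "jdbc:postgresql://localhost:5432/analytics")    # placeholder connection
 .option("dbtable", "public.orders_clean")
 .option("user", "etl_user")
 .option("password", "etl_password")
 .mode("append")
 .save())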





Read more
Disruptive Fintech Startup
Agency job
via Unnati by Sarika Tamhane
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹12L / yr
Data Science
Data Analytics
R Programming
Python
Investment analysis
If you are interested in joining a purpose-driven community that is dedicated to creating ambitious and inclusive workplaces, then be a part of a high growth startup with a world-class team, building a revolutionary product!
 
Our client is a vertical fintech play focused on solving industry-specific financing gaps in the food sector through the application of data. The platform provides skin-in-the-game growth capital to much-loved F&B brands. Founded in 2019, they are VC-funded and based out of Singapore and Bangalore, India.
 
The founders are alumni of IIT-D, IIM-B, and Wharton. They have 12+ years of experience across venture capital and corporate entrepreneurship at DFJ, Vertex, and InMobi, as VP at Snyder UAE, in investment banking at Unitus Capital (leading the financial services practice), and in institutional equities at Kotak. They have a team of high-quality professionals coming together on this mission to disrupt convention.
 
 
As a Data Scientist, you will develop a first-of-its-kind risk engine for revenue-based financing in India and automate investment appraisals for the company's different revenue-based financing products.

What you will do:
 
  • Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
  • Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
  • Back-testing investment appraisal models at regular intervals to improve the same
  • Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
  • Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
  • Identifying relevant sub-sector criteria to score and rate investment opportunities internally

 


Desired Candidate Profile

What you need to have:
 
  • Bachelor’s degree with relevant work experience of at least 3 years with CA/MBA (mandatory)
  • Experience in working in lending/investing fintech (mandatory)
  • Strong Excel skills (mandatory)
  • Previous experience in credit rating or credit scoring or investment analysis (preferred)
  • Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
  • Proficiency in data analysis (preferred)
  • Good verbal and written skills
Propellor.ai
Posted by Kajal Jain
Remote only
1 - 4 yrs
₹5L - ₹15L / yr
Python
SQL
Spark
Hadoop
Big Data

Big Data Engineer/Data Engineer


What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge

• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries

These facts may look scary but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of a data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation

At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of Projects
• Using dependable, scalable, future-proof technology solutions like Big Data solutions and cloud platforms
• Engaging with Data Scientists and Data Engineers to provide End to End Solutions leading to industrialisation of Data Science Model Development and Deployment

What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily

Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture.
• No hierarchy leads us to believe in reaching the decision maker without any hesitation, so that our actions can have fruitful and aligned outcomes.
• Each one is a CEO of their domain. So, the criterion while making a choice is that our employees and clients can succeed together!

To read more about us, click here: https://bit.ly/3idXzs0

About the role
We are building an exceptional team of data engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.

Role Description

• The role would involve big data pre-processing and reporting workflows, including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights (see the sketch after this list)
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of the SDLC, and use software engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.
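A minimal sketch of such a pre-processing and reporting workflow in PySpark is shown below; it assumes a hypothetical JSON event feed with event_ts, customer_id, and session_id fields, and all paths are illustrative only.

# Illustrative PySpark pre-processing and reporting sketch (hypothetical schema and paths)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-report").getOrCreate()

events = spark.read.json("hdfs:///data/raw/events/")        # collect and parse raw JSON events

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))   # normalize timestamps
         .groupBy("event_date", "customer_id")              # analyse per-customer daily activity
         .agg(F.count("*").alias("events"),
              F.countDistinct("session_id").alias("sessions")))

# Persist a compact reporting table that a BI/visualization layer can read
daily.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///data/reports/daily_activity")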

Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE, plus 3+ years of experience designing technological solutions to complex data problems and developing and testing modular, reusable, efficient, and scalable code to implement those solutions.

Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop ecosystem and Spark components)
• Proficiency in at least one cloud computing platform (AWS/Azure/GCP); experience with GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure is preferred
• Linux environment, SQL, and shell scripting

Desirable
• Statistical or machine learning DSL such as R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design

Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams

Immediate joiners preferred.

Rakuten
Agency job via zyoin by RAKESH RANJAN
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive

Company Overview:

Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan and the third-largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing, and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for "optimism." It means we believe in the future. It's an understanding that, with the right mindset, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications, and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website: https://www.rakuten.com/

Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds (https://www.crunchbase.com/organization/rakuten).

Company size: 10,001+ employees

Founded: 1997

Headquarters: Tokyo, Japan

Work location: Bangalore (M.G. Road)


Please find the job description below.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one, and potentially several, of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streams, Flink, etc.); a brief sketch follows this list
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, Bigtable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, Go
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross-functional engineering teams including QA, Platform Delivery, and DevOps
  • Evaluate current-state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team through the same
  • Experience with event-driven architecture and complex event processing
  • Extensive experience building and owning large-scale distributed backend systems
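To give a flavour of the first point, here is a hedged Spark Structured Streaming sketch that consumes a Kafka topic and computes windowed averages. The broker address, topic, message schema, and package version are assumptions made for the example, not details of Rakuten's stack.

# Hedged sketch: Spark Structured Streaming over Kafka (placeholder brokers, topic, and schema)
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = (SparkSession.builder
         .appName("network-metrics-stream")
         .config("spark.jars.packages", "org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0")  # assumed version
         .getOrCreate())

schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_ts", TimestampType()),
])

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
          .option("subscribe", "network-metrics")              # placeholder topic
          .load())

parsed = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("m"))
          .select("m.*"))

# Per-device average over 1-minute event-time windows; the watermark bounds streaming state
agg = (parsed
       .withWatermark("event_ts", "2 minutes")
       .groupBy(F.window("event_ts", "1 minute"), "device_id")
       .agg(F.avg("value").alias("avg_value")))

query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()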
GreedyGame
Posted by Shreyoshi Ghosh
Bengaluru (Bangalore)
1 - 2 yrs
₹4L - ₹12L / yr
MS-Excel
SQL
Data Analytics
Python
R Language

About Us:

GreedyGame is looking for a Business Analyst to join its clan. We are looking for an enthusiastic Business Analyst who likes to play with data. You'll be building insights from data, creating analytical dashboards, and monitoring KPI values. You will also coordinate with teams working on different layers of the infrastructure.

 

Job details:

 

Seniority Level: Associate

Industry: Marketing & Advertising

Employment Type: Full Time

Job Location: Bangalore

Experience: 1-2 years

 

WHAT ARE WE LOOKING FOR?

 

  • Excellent planning, organizational, and time management skills.
  • Exceptional analytical and conceptual thinking skills.
  • Previous experience working closely with Operations and Product teams.
  • Competency in Excel and SQL is a must.
  • Experience with a programming language like Python is required.
  • Knowledge of Marketing Tools is preferable.

 

 

WHAT WILL BE YOUR RESPONSIBILITIES?

 

  • Evaluating business processes, anticipating requirements, uncovering areas for improvement, developing and implementing solutions.
  • Should be able to generate meaningful insights to help the marketing team and product team in enhancing the user experience for Mobile and Web Apps.
  • Leading ongoing reviews of business processes and developing optimization strategies.
  • Performing requirements analysis from a user and business point of view
  • Combining data from multiple sources such as SQL tables, Google Analytics, and in-house analytical signals, and driving relevant insights (see the sketch after this list)
  • Deciding the success metrics and KPIs for different products and features and making sure they are achieved.
  • Acting as a quality assurance liaison prior to the release of new data analyses or applications.
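As a small, hedged illustration of combining sources and deriving a KPI, the pandas sketch below joins a SQL table with a Google Analytics export; the database, file, column names, and KPI are placeholders invented for the example.

# Hypothetical sketch: combine a SQL table with an analytics export to compute a funnel KPI
import sqlite3  # stand-in for the production SQL database
import pandas as pd

conn = sqlite3.connect("app.db")                                  # placeholder database
signups = pd.read_sql_query("SELECT user_id, signup_date, channel FROM signups", conn)

ga_sessions = pd.read_csv("ga_sessions_export.csv")               # e.g., a Google Analytics export

combined = signups.merge(ga_sessions, on="user_id", how="left")

# KPI: signup-to-session conversion rate per acquisition channel
kpi = (combined
       .assign(has_session=combined["session_id"].notna())
       .groupby("channel")["has_session"]
       .mean()
       .rename("signup_to_session_rate"))
print(kpi)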

 

Skills and Abilities:

  • Python
  • SQL
  • Business Analytics
  • BigQuery

 

WHAT'S IN IT FOR YOU?

  • An opportunity to be a part of a fast scaling start-up in the AdTech space that offers unmatched services and products.
  • To work with a team of young enthusiasts who are always upbeat and self-driven to achieve bigger milestones in shorter time spans.
  • A workspace that is wide open as per the open door policy at the company, located in the most happening center of Bangalore.
  • A well-fed stomach makes the mind work better and therefore we provide - free lunch with a wide variety on all days of the week, a stocked-up pantry to satiate your want for munchies, a Foosball table to burst stress and above all a great working environment.
  • We believe that we grow as you grow. Once you are a part of our team, your growth also becomes essential to us, and to make sure that happens, timely formal and informal feedback is given.
Srijan Technologies
Posted by PriyaSaini
Remote only
3 - 8 yrs
₹5L - ₹12L / yr
Data Analytics
Data modeling
Python
PySpark
ETL

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with data model, conduct data assessment, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and/or PySpark (a brief sketch follows this list).
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.
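A minimal sketch of the kind of validation and transformation script mentioned above, written in PySpark; the dataset path and the assumed unique key (record_id) are placeholders, not client specifics.

# Illustrative data-quality checks and a simple transform/load step in PySpark
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/raw/client_extract/")              # placeholder raw client data

total = df.count()
null_counts = df.select([F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns])
duplicates = total - df.dropDuplicates(["record_id"]).count()     # assumed unique key: record_id

null_counts.show()
print(f"rows={total}, duplicate_keys={duplicates}")

# After the checks, standardize and load the curated data
clean = (df.dropDuplicates(["record_id"])
           .withColumn("loaded_at", F.current_timestamp()))
clean.write.mode("overwrite").parquet("/data/curated/client_extract/")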

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience of deploying ETL / data pipelines and workflows in cloud technologies and architecture such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UID etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirement-gathering, and leadership skills.
  • Track record of completing projects successfully on time, within budget and as per scope

Passion Gaming
Posted by Trivikram Pathak
Panchkula
1 - 4 yrs
₹6.5L - ₹9.5L / yr
Data Science
Data Analytics
Python
SQL
NoSQL Databases

We are currently looking for a Junior Data Scientist to join our growing Data Science team in Panchkula. As a Jr. Data Scientist, you will work closely with the Head of Data Science and a variety of cross-functional teams to identify opportunities to enhance the customer journey, reduce churn, improve user retention, and drive revenue.

Experience Required

  • Medium to Expert level proficiency in either R or Python.
  • Expert level proficiency in SQL scripting for RDBMS and NoSQL DBs (especially MongoDB)
  • Tracking and insights on key metrics around User Journey, User Retention, Churn Modelling and Prediction, etc.
  • Medium to highly skilled in data structures and ML algorithms, with the ability to create efficient solutions to complex problems.
  • Experience of working on an end-to-end data science pipeline: problem scoping, data gathering, EDA, modeling, insights, visualizations, monitoring and maintenance.
  • Medium to proficient in creating beautiful Tableau dashboards.
  • Problem-solving: Ability to break the problem into small parts and apply relevant techniques to drive the required outcomes.
  • Intermediate to advanced knowledge of machine learning, probability theory, statistics, and algorithms. You will be required to discuss and use various algorithms and approaches on a daily basis.
  • Proficient in at least a few of the following: regression, Bayesian methods, tree-based learners, SVM, RF, XGBoost, time-series modelling, GLM, GLMM, clustering, deep learning, etc. (a brief sketch follows this list)
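A minimal churn-modelling sketch with scikit-learn, purely to illustrate the kind of work described; the data file, feature set, and label column are hypothetical assumptions.

# Hedged example: train and evaluate a simple churn classifier
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("user_activity.csv")             # assumed: behavioural features plus a 'churned' label
X = df.drop(columns=["user_id", "churned"])
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")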

Good to Have

  • Experience in one of the upcoming technologies like deep learning, recommender systems, etc.
  • Experience of working in the Gaming domain
  • Marketing analytics, cross-sell, up-sell, campaign analytics, fraud detection
  • Experience in building and maintaining Data Warehouses in AWS would be a big plus!

Benefits

  • PF and gratuity
  • Working 5 days a week
  • Paid leaves (CL, SL, EL, ML) and holidays
  • Parties, festivals, birthday celebrations, etc
  • Equitability: absence of favouritism in hiring & promotion
Hex Business Innovations
Posted by Dhruv Dua
Faridabad
0 - 4 yrs
₹1L - ₹3L / yr
SQL
SQL server
MySQL
MS SQLServer
C#

Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs: regular creation, modification, and validation (with testing) of stored procedures, views, and functions on MS SQL Server.
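As a hedged illustration of creating and validating a stored procedure from Python, the sketch below uses pyodbc against SQL Server; the connection details, table, and procedure name (dbo.usp_GetPendingOrders) are made up for the example, and CREATE OR ALTER assumes SQL Server 2016 SP1 or later.

# Hypothetical sketch: create a stored procedure on MS SQL Server and exercise it via pyodbc
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=erp_demo;UID=sa;PWD=example"      # placeholder connection details
)
conn.autocommit = True
cursor = conn.cursor()

# Create (or update) a simple procedure; table and column names are invented
cursor.execute("""
CREATE OR ALTER PROCEDURE dbo.usp_GetPendingOrders
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId AND Status = 'PENDING';
END
""")

# Validate it by calling it with a parameter and inspecting the result set
cursor.execute("EXEC dbo.usp_GetPendingOrders @CustomerId = ?", 1001)
for row in cursor.fetchall():
    print(row.OrderId, row.OrderDate, row.Amount)

conn.close()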
Responsibilities and Duties
Understanding the ERP software and its use cases.
Regular creation, modification, and testing of:

  • Stored Procedures
  • Views
  • Functions
  • Nested Queries
  • Table and Schema Designs

Qualifications and Skills
MS SQL

  • Procedural Language
  • Datatypes
  • Objects
  • Databases
  • Schema
Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹20L / yr
Spark
Big Data
Data Engineer
Hadoop
Apache Kafka
  • 1-5 years of experience in building and maintaining robust data pipelines, enriching data, and building low-latency, high-performance data analytics applications.
  • Experience handling complex, high-volume, multi-dimensional data and architecting data products in streaming, serverless, and microservices-based architectures and platforms.
  • Experience in data warehousing, data modeling, and data architecture.
  • Expert-level proficiency with relational and NoSQL databases.
  • Expert-level proficiency in Python and PySpark.
  • Familiarity with Big Data technologies and utilities (Spark, Hive, Kafka, Airflow); a brief sketch follows this list.
  • Familiarity with cloud services (preferably AWS).
  • Familiarity with MLOps processes such as data labeling, model deployment, data-model feedback loops, and data drift.
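A minimal Airflow DAG sketch for a daily pipeline, included only to illustrate familiarity with such utilities; the DAG id and task bodies are placeholders, and the schedule argument assumes Airflow 2.4+.

# Hypothetical daily pipeline expressed as an Airflow DAG
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")                # placeholder step

def transform_and_load():
    print("enrich the data and load it into the warehouse")      # placeholder step

with DAG(
    dag_id="daily_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",     # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)
    extract_task >> load_task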

Key Roles/Responsibilities:

  • Act as a technical leader for resolving problems, with both technical and non-technical audiences.
  • Identifying and solving issues with data pipelines regarding consistency, integrity, and completeness.
  • Lead data initiatives, architecture design discussions, and implementation of next-generation BI solutions.
  • Partner with data scientists, tech architects to build advanced, scalable, efficient self-service BI infrastructure.
  • Provide thought leadership and mentor data engineers in information presentation and delivery.

 

 
