Data Engineering Internship

at Dataweave Pvt Ltd

Posted by Megha M
Bengaluru (Bangalore)
0 - 1 yrs
Best in industry
Full time, Internship
Skills
Data engineering
Internship
Python
Looking for candidates who are strong in coding, web scraping, and problem solving.

About Dataweave Pvt Ltd

Founded: 2011
Size: 100-1000
Stage: Raised funding

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take
key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable
competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to
help businesses develop data-driven strategies and make smarter decisions.

 

Data Engineering and Delivery @DataWeave

We, the Delivery / Data Engineering team at DataWeave, deliver intelligence with actionable data to the customer. One part of the work is writing effective crawler bots to collect data across the web, which calls for reverse engineering and scalable Python code. The other part is crunching data with our big data stack and pipeline. The underpinnings are tooling, domain awareness, fast-paced delivery, and pushing the envelope.
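As a flavour of the crawler work described above, here is a minimal, hedged sketch of a polite product-page crawler in Python. The target URL, CSS selectors, and the use of requests and BeautifulSoup are illustrative assumptions, not DataWeave's actual tooling, which handles scale, retries, and anti-bot measures far beyond this.

    # Minimal sketch of a polite product-page crawler (hypothetical URL and selectors).
    import time
    import requests
    from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

    HEADERS = {"User-Agent": "example-crawler/0.1"}

    def fetch_prices(urls):
        """Fetch each page, parse product name and price, and yield one record per URL."""
        for url in urls:
            resp = requests.get(url, headers=HEADERS, timeout=10)
            resp.raise_for_status()
            soup = BeautifulSoup(resp.text, "html.parser")
            name = soup.select_one("h1.product-title")  # hypothetical selector
            price = soup.select_one("span.price")       # hypothetical selector
            yield {
                "url": url,
                "name": name.get_text(strip=True) if name else None,
                "price": price.get_text(strip=True) if price else None,
            }
            time.sleep(1)  # be polite: rate-limit requests

    if __name__ == "__main__":
        for record in fetch_prices(["https://example.com/product/123"]):
            print(record)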

Connect with the team
Rashmi Seetharam
Sandeep Sreenath
Pramod Shivalingappa S
Vikranth Ramanolla
Rashmi Ram
Sadananda Vaidya
Rahul Ramesh
Sandeep G Kurdagi
Sanket Patil
Megha M
Byomkesh Jha
Sundar Lakshmanan
Shubhodeep Das
Anshul Garg
Raghav Yadav
Pradeep Yogesh
Avinash K
Sudhanshu Chauhan
Sandesh PS
Sucheth Kumar
Suyash Soni
Purva Bansal
Mukesh Kumar
Shwet Kamal Mishra
Chandrasekar S
Abhishek Gibbidi
Piyush Singh Bora
purnendu SAHA
Mithun N
Rohit Vernekar
Aniruddha K
Mohamed Sharjeel

Similar jobs

at Kwalee
Posted by Zoheb Ahmed
Bengaluru (Bangalore)
2 - 8 yrs
Best in industry
Python
SQL
Business Intelligence (BI)
Tableau
PowerBI
  • Job Title - Product Analyst
  • Reports Into - Lead Data Analyst 
  • Applications Closing Date - 20/3/2023
  • Location - Hybrid / Bangalore


A Little Bit about Kwalee….


Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.


What’s In It For You?


  • Hybrid working - 3 days in the office, 2 days remote/ WFH is the norm
  • Flexible working hours - we trust you to choose how and when you work best
  • Profit sharing scheme - we win, you win 
  • Private medical cover - delivered through BUPA
  • Life Assurance - for long term peace of mind
  • On site gym - take care of yourself
  • Relocation support - available
  • Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
  • Pitch and make your own games on Creative Wednesdays!


Are You Up To The Challenge?


As a Product Analyst you will be responsible for optimising in-game features and design of our free to play Casual Mobile games. You will do this by analysing how players interact with our games, utilising AB testing of in-game components and developing automated data visualisation dashboards. Your insights will be regularly communicated and leveraged by development, management and data science teams, to drive the growth of our free to play Casual Mobile games. 


Your Team Mates


The Data Science team is central in developing the technology behind the growth and monetisation of our games. We are a cross functional team that consists of analysts, engineers and data scientists, and work closely with the larger engineering team to deliver products spanning our modern, cloud first, tech stack. 


You will also work closely with the Casual Mobile Games team, a multinational crew united by a common goal: making the games we work on first-class experiences from a gameplay, visual, and marketing perspective.


What Does The Job Actually Involve?


  • Analyse data to quantify the relationships between game elements, player engagement and marketing strategies, using statistical analysis and data visualisations.
  • Develop highly effective, reliable Tableau dashboards on key gameplay metrics.
  • Design analytic events by translating in-game features into optimised data structures.
  • Design experiments and AB testing plans to reveal complex interactions between game elements, in collaboration with the free to play Casual Mobile development team (a minimal sketch of such a test follows this list).
  • Regularly communicate results with development, management and data science teams.
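To make the AB-testing point concrete, here is a minimal, hedged sketch of a two-proportion significance check in Python. The conversion numbers are invented for illustration and statsmodels is an assumption; Kwalee's actual experimentation tooling may differ.

    # Minimal sketch: two-sided z-test for an A/B test on a conversion metric.
    # Sample sizes and conversion counts below are invented for illustration.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 465]     # converted players in variants A and B
    exposures = [10000, 10000]   # players exposed to each variant

    z_stat, p_value = proportions_ztest(conversions, exposures)
    print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The difference between variants is significant at the 5% level.")
    else:
        print("No significant difference detected; keep collecting data or revisit the design.")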


Your Hard Skills


  • Experience in manipulating and analysing data in a commercial or academic setting.
  • A track record of outstanding problem solving skills, understanding business challenges and translating them into robust data-driven insights.
  • Excellent knowledge of statistical testing and experiment design.
  • Experience using Python for data analysis and visualisation.
  • Experience manipulating data in SQL databases.
  • Experience in data visualisation and dashboard development using BI tools such as Tableau.
  • Strong communication skills through comprehensive reporting and presentations. 


Your Soft Skills


Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues. 


We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.


A Little More About Kwalee


Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.


Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.


We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France. 


We have a truly global team making games for a global audience, and it's paying off: Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers' Association), and our games have been downloaded in every country on earth - including Antarctica!

A large South African technology company
Agency job
via Insourcehire by Daisy Myburgh
Remote only
5 - 20 yrs
₹30L - ₹30L / yr
Data Science
Statistical Analysis
Statistical Modeling
Machine Learning (ML)
Google Cloud Platform (GCP)
+20 more

**The salary offered for this role is up to 38.5 LPA**

 

A successful South African technology group is looking to hire a full-time remote Senior Data Scientist with experience and ability in statistical modelling (price optimisation and price elasticity).

 

The Senior Data Scientist will actively lead the application of data science methods, including machine learning, deep learning, artificial intelligence, and predictive analytics, required to meet the company's business interests as well as those of its clients.

 

Requirements:

 

  • Relevant degree in Data Science, Statistics or equivalent quantitative field
  • Minimum of 5 years of experience

 

The ideal candidate would have knowledge in:

 

  • Machine learning
  • Google Cloud Platform (AI platform, BigQuery, Dataproc, Dataflow, Kubeflow, Vertex AI)
  • Deep learning (Demand Forecasting, Recommendations Engine, Image Modelling and NLP etc.)
  • Statistical modelling (Price Optimization, Price Elasticity etc.)
  • Data Modelling
  • Database management (MS SQL Server, PostgreSQL or similar)
  • Data visualisation (Data Studio, Tableau)
  • Data Science
  • Data Analysis
  • Predictive analytics
  • Python

 

They would have the following skills:

 

  • Ability to program in Python and R
  • Ability to use machine learning and deep learning tools such as TensorFlow, Keras, scikit-learn, and PyTorch
  • Ability to build predictive models and machine learning algorithms
  • Ability to manage both structured and unstructured data using SQL
  • Ability to visualise data using various tools
  • Ability to model data for prediction
  • Ability to work independently and propose solutions to business challenges
  • Ability to automate data pipelines and machine learning models
  • Ability to manage time and project deliverables

 

The main responsibilities of the role are:

 

  • Providing support and leadership to other data scientists
  • Identifying and acting on new opportunities for data-driven business in data science and analytics
  • Loading and merging data originating from diverse sources
  • Pre-processing and transforming data for model building and analysis
  • Leading the development of predictive models for business solutions
  • Performing descriptive analytics to discover trends and patterns in the data
  • Deploying predictive and other models to production
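As an illustration of the price-elasticity modelling called out in the requirements above, here is a minimal, hedged sketch that estimates a constant own-price elasticity from a log-log OLS regression. The column names and figures are hypothetical; a real model would control for seasonality, promotions, and cross-price effects.

    # Minimal sketch: estimate own-price elasticity via a log-log OLS regression.
    # Data and column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.DataFrame({
        "price":    [10.0, 10.5, 11.0, 9.5, 9.0, 10.2, 11.5, 8.8],
        "quantity": [500, 470, 430, 560, 610, 490, 400, 640],
    })

    X = sm.add_constant(np.log(df["price"]))
    y = np.log(df["quantity"])
    model = sm.OLS(y, X).fit()

    elasticity = model.params["price"]  # slope in log-log space approximates elasticity
    print(f"Estimated own-price elasticity: {elasticity:.2f}")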
at My Yoga Teacher
Posted by MYT HR
Bengaluru (Bangalore)
3 - 6 yrs
₹18L - ₹25L / yr
MySQL
MySQL DBA
Javascript
Amazon Redshift
XML
+7 more

Data Scientist


We are a growing startup in the healthcare space; our business model is mostly unexplored, and that is exciting!


Our company decisions are heavily guided by insights we get from data, so this position is core to our business growth. You can make a real impact. We are looking for a data scientist who is passionate about contributing to the growth of MyYogaTeacher and propelling the company to new heights.


We encourage you to spend some time browsing through content on our website  myyogateacher.com  and maybe even sign up for our service and try it out! 


As a Data Scientist, you’ll

  • Help collect data from a variety of sources - decipher and address quality of data, filter and cleanse data, identify missing data
  • Help measure, transform and  organize data into readily usable formats for reporting and further analysis
  • Develop and implement analytical databases and data collection systems
  • Analyze data in meaningful ways.  Use statistical methods and data mining algorithms to analyze data and generate useful insights and reports
  • Develop recommendation engines in a variety of areas
  • Identify and recommend new ways to optimize and streamline data collection processes
  • Collaborate with programmers, engineers, and organizational leaders to identify opportunities for process improvements, recommend system modifications, and develop policies for data governance

You are qualified if:

  • You hold a Bachelor's and/or Master's degree in Mathematics, Statistics, Computer Engineering, Data Science, Data Analytics, or Data Mining
  • 3+ years of experience in a data analyst role
  • 3+ years of data mining and machine learning experience
  • Great understanding of databases such as MySQL and Amazon Redshift, and strong SQL skills
  • Good knowledge of NoSQL databases such as MongoDB and ClickHouse
  • Understanding of JavaScript and of data formats such as XML and JSON
  • Knowledge of languages and tools like SQL, Oracle, R, and MATLAB; proficiency in Python and shell scripting
  • Understanding of ETL frameworks and ETL tools
  • Proficiency in statistical packages like Excel, SPSS, and SAS for dataset analysis
  • Knowledge of how to create and apply the most appropriate algorithms to datasets to find solutions

Would be nice if you also have 

  • Experience with data visualization tools such as Tableau, Business Objects, PowerBI or Qlik
  • Adept at using data processing platforms like Hadoop and Apache Spark
  • Experience handling unstructured data such as text, audio, and video, and extracting features from it

You have 

  • Excellent analytical skills - the ability to identify trends, patterns, and insights from data. You love numbers
  • Strong attention to detail
  • Great communication and presentation skills - the ability to write and speak clearly and make complex ideas easy to understand
  • Effective stakeholder management and great problem-solving skills
  • A keen desire to take ownership and get things done
  • You are a quick learner of new technologies and adapt easily to change
  • Ability to collaborate effectively and work as part of a team
  • You follow through on commitments: you live up to verbal and written agreements, regardless of personal cost
  • Enthusiasm: you exhibit passion, excitement, and positive energy at work

Here are a couple of articles about us from our CEO, Jitendra:


Why we started MyYogaTeacher: https://www.myyogateacher.com/articles/why-i-started-myyogateacher

Our mission and culture: https://www.myyogateacher.com/articles/company-mission-culture


We look forward to hearing from you!





at Klubworks
Posted by Anupam Arya
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹18L / yr
Data Analytics
MS-Excel
MySQL
Python
Business Analysis
+9 more
We are looking to hire a Senior Data Analyst to join our data team. You will take responsibility for managing our master data set, developing reports, and troubleshooting data issues. To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of the popular data analysis tools and databases.

Responsibilities
  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
  • Acquire data from primary or secondary data sources and maintain databases/data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Filter and clean data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
  • Work with the teams to prioritize business and information needs
  • Locate and define new process improvement opportunities

Requirements- 
  • Minimum of 3 years of working experience as a Data Analyst or Business Data Analyst
  • Technical expertise with data models, database design and development, data mining, and segmentation techniques
  • Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming or data technologies (XML, JavaScript, or ETL frameworks)
  • Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc)
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Excellent written and verbal communication skills for coordinating across teams.
  • A drive to learn and master new technologies and techniques.
at Crewscale
Posted by vinodh Rajamani
Remote only
2 - 6 yrs
₹4L - ₹40L / yr
Python
SQL
Amazon Web Services (AWS)
ETL
Informatica
+2 more
Crewscale – Toplyne Collaboration:

This is a Data Engineer role for the Crewscale–Toplyne collaboration.
Crewscale is an exclusive partner of Toplyne.

About Crewscale:
Crewscale is a premium technology company focused on helping companies build world-class, scalable products. We are a product-based start-up with a code assessment platform that is used by top technology disruptors across the world.

Crewscale works with premium product companies (Indian and international) such as Swiggy, ShareChat, Grab, Capillary, Uber, Workspan, Ovo, and many more. We are also responsible for managing infrastructure for Swiggy. We focus on building only world-class tech products, and our USP is building technology that can handle scale from 1 million to 1 billion hits.

We invite candidates who have a zeal to develop world-class products to come and work with us.

Toplyne

Who are we? 👋

Toplyne is a global SaaS product built to help revenue teams at businesses with a self-service motion and a large user base identify which users to spend time on, when, and for what outcome. Think self-service or freemium-led companies like Figma, Notion, Freshworks, and Slack. We do this by helping companies recognize signals across their product engagement, sales, billing, and marketing data.

Founded in June 2021, Toplyne is backed by marquee investors like Sequoia, Together Fund, and a number of well-known angels. You can read more about us at https://bit.ly/ForbesToplyne and https://bit.ly/YourstoryToplyne.

What will you get to work on? 🏗️

  • Design, develop, and maintain scalable data pipelines and a data warehouse to support continuing increases in data volume and complexity.

  • Develop and implement processes and systems to monitor data quality and data mining, ensuring production data is always accurate and available for the key partners and business processes that depend on it.

  • Perform the data analysis required to solve data-related issues and assist in their resolution.

  • Complete ownership - you'll build highly scalable platforms and services that support rapidly growing data needs at Toplyne. There's no instruction book; it's yours to write. You'll figure it out, ship it, and iterate.

What do we expect from you? 🙌🏻

  • 3-6 years of relevant work experience in a Data Engineering role.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.

  • Experience building and optimising data pipelines, architectures and data sets.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Strong analytic skills related to working with unstructured datasets.

  • A good understanding of Airflow, Spark, NoSQL databases, and Kafka is nice to have.
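Since Airflow is mentioned above, here is a minimal, hedged sketch of a daily extract-transform-load DAG using the Apache Airflow 2.x Python API. The DAG name, schedule, and task bodies are assumptions for illustration, not Toplyne's actual pipeline.

    # Minimal sketch of a daily ETL DAG in Apache Airflow (2.x style).
    # Task bodies are placeholders; a real pipeline would pull from product,
    # billing, and marketing sources as described above.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**context):
        print("pulling raw events from source systems")

    def transform(**context):
        print("cleaning and joining events into warehouse-ready tables")

    def load(**context):
        print("loading transformed tables into the data warehouse")

    with DAG(
        dag_id="daily_usage_etl",          # hypothetical DAG name
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_transform >> t_load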

They provide both wholesale and retail funding. PM1
Agency job
via Multi Recruit by Sapna Deb
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
AWS Kinesis
Data engineering
AWS Lambda
DynamoDB
data pipeline
+11 more
  • Key responsibility is to design and develop a data pipeline for real-time data integration, processing, executing the model (if required), and exposing output via MQ / API / NoSQL DB for consumption (a minimal sketch follows this list)
  • Provide technical expertise to design efficient data ingestion solutions to store and process unstructured data such as documents, audio, images, weblogs, etc.
  • Develop API services to provide data as a service
  • Prototype solutions for complex data processing problems using AWS cloud-native services
  • Implement automated audit and quality-assurance checks in the data pipeline
  • Document and maintain data lineage from the various sources to enable data governance
  • Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
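A minimal, hedged sketch of one link in such a pipeline: an AWS Lambda handler triggered by a Kinesis stream that decodes each event and persists it to DynamoDB. The table name, partition key, and event fields are hypothetical.

    # Minimal sketch: AWS Lambda handler consuming Kinesis records and writing
    # them to DynamoDB. Table name and event fields are hypothetical.
    import base64
    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("events")  # hypothetical table

    def handler(event, context):
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            table.put_item(Item={
                "event_id": record["kinesis"]["sequenceNumber"],  # assumed partition key
                "payload": json.dumps(payload),
            })
        return {"processed": len(event["Records"])}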

Skills

  • Programming experience using Python & SQL
  • Extensive working experience in Data Engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing
  • Experience and expertise in implementing complex data pipelines
  • Strong Familiarity with AWS Toolset for Storage & Processing. Able to recommend the right tools/solutions available to address specific data processing problems
  • Hands-on experience in Unstructured (Audio, Image, Documents, Weblogs, etc) Data processing.
  • Good analytical skills with the ability to synthesize data to design and deliver meaningful information
  • Know-how of any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.) will be an advantage.
  • Ability to understand business functionality, processes, and flows
  • Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently

Functional knowledge

  • Real-time Event Processing
  • Data Governance & Quality assurance
  • Containerized deployment
  • Linux
  • Unstructured Data Processing
  • AWS Toolsets for Storage & Processing
  • Data Security

 

Consulting Leader
Agency job
via Buaut Tech by KAUSHANK nalin
Pune, Mumbai
8 - 10 yrs
₹8L - ₹16L / yr
Data integration
Talend
Hadoop
Integration
Java
+1 more

 

Job Description for:

Role: Data/Integration Architect

Experience – 8-10 Years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from MuleSoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric - Application, API Integration, Data Integration. Knowledge of Talend Management Cloud, and of deployment and scheduling of jobs using TMC or Autosys.

Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop

Should have worked in an Agile environment

Sound communication skills

Should be open to learning new technologies based on business needs on the job

Additional Skills:

Awareness of other data/integration platforms like Mulesoft, Camel

Awareness of Hadoop, Snowflake, S3

at Aptus Data Labs
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service - MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
  2. Deep understanding of Linux, from kernel mechanisms through user-space management
  3. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins)
  4. Use of monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports, and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure
  5. Wide understanding of IP networking as well as data centre infrastructure

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing (a minimal sketch follows this list).
  4. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming
     • orchestration platforms: Mesos and Kubernetes
     • data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
     • core presentation technologies: Kibana and Grafana
  5. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
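As a flavour of the Kafka-plus-Spark-Streaming combination named above, here is a minimal, hedged sketch using PySpark Structured Streaming. The broker address, topic name, and event schema are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

    # Minimal sketch: consume a Kafka topic with Spark Structured Streaming and
    # print parsed events to the console. Broker, topic, and schema are hypothetical;
    # requires the spark-sql-kafka connector on the classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("kafka-events-sketch").getOrCreate()

    schema = StructType([
        StructField("event_type", StringType()),
        StructField("ts", TimestampType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                      # hypothetical topic
        .load()
    )

    events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

    query = events.writeStream.outputMode("append").format("console").start()
    query.awaitTermination()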

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.

at OpexAI
Posted by Jasmine Shaik
Hyderabad
0 - 1 yrs
₹0L - ₹1L / yr
OpenCV
Python
Deep Learning
Benefits:
  1. Working with analytics gurus and leaders
  2. Working under leaders with 25+ years of business experience
  3. Learning to market and sales-pitch a product using social media
  4. Work from home on your own schedule
  5. Experience letter from a leading analytics firm for the 3 months
  6. No stipend
at mPaani Solutions Pvt Ltd
Posted by Julie K
Mumbai
3 - 7 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Python
Data Science
Big Data
R Programming
+2 more
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
  • Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
  • Care about designing the full machine learning pipeline
  • Extend the company's data with 3rd-party sources
  • Enhance data collection procedures
  • Process, clean, and verify the data collected
  • Perform ad hoc analysis of the data and present clear results
  • Create advanced analytics products that provide actionable insights

The Individual: We are looking for a candidate with the following skills, experience, and attributes.

Required:
  • 2+ years of work experience in machine learning
  • Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
  • Knowledge of machine learning techniques and algorithms
  • Knowledge of languages and toolkits like Python, R, NumPy
  • Knowledge of data visualization tools like D3.js, ggplot2
  • Knowledge of query languages like SQL, Hive, Pig
  • Familiarity with big data architecture and tools like Hadoop, Spark, MapReduce
  • Familiarity with NoSQL databases like MongoDB, Cassandra, HBase
  • Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with a startup and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
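As a flavour of the recommendation-engine work described in this posting, here is a minimal, hedged sketch of item-based collaborative filtering using cosine similarity. The interaction matrix is invented for illustration and is not m.Paani data.

    # Minimal sketch: item-based recommendations from a user-item interaction matrix
    # using cosine similarity. The matrix below is invented for illustration.
    import numpy as np

    # rows = users, columns = items (1 = interacted, 0 = not)
    interactions = np.array([
        [1, 1, 0, 0],
        [0, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 1],
    ])

    # cosine similarity between item columns
    norms = np.linalg.norm(interactions, axis=0)
    item_sim = (interactions.T @ interactions) / np.outer(norms, norms)
    np.fill_diagonal(item_sim, 0.0)  # ignore self-similarity

    def recommend(user_idx, top_k=2):
        """Score unseen items by their similarity to items the user already has."""
        seen = interactions[user_idx]
        scores = item_sim @ seen
        scores[seen > 0] = -np.inf  # do not re-recommend seen items
        return np.argsort(scores)[::-1][:top_k]

    print("Recommended item indices for user 0:", recommend(0))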