Data Engineer
at Product and Service based company

Agency job
4 - 8 yrs
₹15L - ₹30L / yr
Hyderabad, Ahmedabad
Skills
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
Apache Hive
PostgreSQL
Cassandra
ETL
Java
Scala
C#
HDFS
YARN
CI/CD
Jenkins
JIRA
Apache Kafka

Job Description

 

Mandatory Requirements 

  • Experience with AWS Glue

  • Experience with Apache Parquet

  • Proficiency with AWS S3 and data lakes

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices

  • Scripting: Python and PySpark
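
As a rough illustration of the stack above (PySpark, AWS Glue, S3 and Parquet), a minimal Glue job sketch might look like the following; the bucket names, column names and partition key are placeholder assumptions.

```python
# Minimal AWS Glue job sketch: read raw CSV from S3, write partitioned Parquet back to the data lake.
# Bucket names, column names and the partition column are illustrative placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV files from the landing zone
raw = spark.read.option("header", "true").csv("s3://example-landing-bucket/customers/")

# Light cleanup before persisting to the curated zone
cleaned = raw.dropDuplicates().na.drop(subset=["customer_id"])

# Write back as partitioned Parquet (columnar, splittable, compressed)
cleaned.write.mode("overwrite").partitionBy("ingest_date") \
    .parquet("s3://example-curated-bucket/customers/")

job.commit()
```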

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS

  • Ingest data from different sources that expose data through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing using Big Data technologies

  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the underlying data platform

  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations (see the sketch after this list)

  • Develop infrastructure to collect, transform, combine and publish/distribute customer data

  • Define process improvement opportunities to optimize data collection, insights and displays

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible

  • Identify and interpret trends and patterns from complex data sets

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders

  • Participate actively in regular Scrum ceremonies with the agile teams

  • Develop queries, write reports and present findings

  • Mentor junior team members and bring in industry best practices
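
As an illustration of the automated data quality checks mentioned above, a minimal PySpark sketch might look like this; the dataset path, key columns and rules are placeholder assumptions, not a prescribed implementation.

```python
# Illustrative data quality gate in PySpark: validate a curated dataset before publishing it.
# Dataset path, column names and rules are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/customers/")

checks = {
    # No duplicate business keys
    "unique_customer_id": df.count() == df.select("customer_id").distinct().count(),
    # Mandatory fields must be populated
    "no_null_customer_id": df.filter(F.col("customer_id").isNull()).count() == 0,
    # Simple sanity check on a calculated column
    "non_negative_balance": df.filter(F.col("outstanding_balance") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Failing fast keeps bad data from entering the platform
    raise ValueError(f"Data quality checks failed: {failed}")
```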

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science or a related discipline

  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake

  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools
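
To illustrate the Spark Streaming and Kafka items above, a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands it in the data lake could look like the following; the broker address, topic name and S3 paths are placeholder assumptions, and the spark-sql-kafka connector is assumed to be available.

```python
# Illustrative Spark Structured Streaming job: consume a Kafka topic and land it in the data lake.
# Broker address, topic name and S3 paths are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "customer-events")
    .load()
    # Kafka delivers key/value as binary; cast the value to a string for downstream parsing
    .select(F.col("value").cast("string").alias("raw_event"), F.col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-curated-bucket/customer-events/")
    .option("checkpointLocation", "s3://example-curated-bucket/_checkpoints/customer-events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```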


Similar jobs

Carsome
Posted by Piyush Palkar
Remote, Kuala Lumpur
2 - 5 yrs
₹20L - ₹30L / yr
Python
Amazon Web Services (AWS)
Django
Flask
TensorFlow
Carsome is a growing startup that is utilising data to improve the experience of second-hand car shoppers. This involves developing, deploying and maintaining machine learning models that are used to improve our customers' experience. We are looking for candidates who are aware of the machine learning project lifecycle and can help manage ML deployments.

Responsibilities:
  • Write and maintain production-level code in Python for deploying machine learning models (see the sketch after this description)
  • Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
  • Implement alerts and monitoring for prediction accuracy and data drift detection
  • Implement automated pipelines for training and replacing models
  • Work closely with the data science team to deploy new models to production

Required Qualifications:
  • Degree in Computer Science, Data Science, IT or a related discipline
  • 2+ years of experience in software engineering or data engineering
  • Programming experience in Python
  • Experience in data profiling, ETL development, testing and implementation
  • Experience in deploying machine learning models

Good to have:
  • Experience with AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
  • Experience in deploying TensorFlow models
  • Experience in deploying and managing MLflow
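
As a rough sketch of serving a deployed model behind a REST endpoint in Python (Flask is one of the listed frameworks); the model file and feature names are placeholder assumptions:

```python
# Minimal model-serving endpoint sketch in Flask; model path and feature names are placeholders.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load a previously trained model at startup (the path is an assumption)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # Placeholder feature vector built from the request body
    features = [[payload["mileage_km"], payload["age_years"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```
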
Mumbai
10 - 15 yrs
₹8L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization

Experience: Minimum 10 years

Location: Mumbai

Salary: Negotiable

Power BI, Tableau, QlikView

 

 

Solution Architect/Technology Lead – Data Analytics

 

Role

Looking for a Business Intelligence lead (BI Lead) with hands-on experience in BI tools (Tableau, SAP Business Objects, Financial and Accounting modules, Power BI), SAP integration, and database knowledge including one or more of Azure Synapse/Data Factory, SQL Server, Oracle and the cloud-based DB Snowflake. Good knowledge of AI/ML and Python is also expected.

  • You will be expected to work closely with our business users. The development will be performed using an Agile methodology based on Scrum (time boxing, daily scrum meetings, retrospectives, etc.) and XP (continuous integration, refactoring, unit testing, etc.) best practices. Candidates must therefore be able to work collaboratively, demonstrate good ownership and leadership, and work well in teams.

Responsibilities:
  • Design, develop and support multiple/hybrid data sources and data visualization frameworks using Power BI, Tableau, SAP Business Objects etc., as well as ETL tools and Python scripting.
  • Implement DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Git.

Requirements

  • 10+ years working as a hands-on developer in Information Technology across Database, ETL and BI (SAP Business Objects, integration with SAP Financial and Accounting modules, Tableau, Power BI) & prior team management experience
  • Tableau/PowerBI integration with SAP and knowledge of SAP modules related to finance is a must
  • 3+ years of hands-on development experience in Data Warehousing and Data Processing
  • 3+ years of Database development experience with a solid understanding of core database concepts and relational database design, SQL, Performance tuning
  • 3+ years of hands-on development experience with Tableau
  • 3+ years of Power BI experience, including parameterized reports and publishing them to the Power BI Service
  • Excellent understanding and practical experience delivering under an Agile methodology
  • Ability to work with business users to provide technical support
  • Ability to get involved in all stages of the project lifecycle, including analysis, design, development and testing

Good-to-have skills:
  • Experience with other visualization and reporting tools like SAP Business Objects.

 

Crayon Data
Posted by Varnisha Sethupathi
Chennai
5 - 8 yrs
₹15L - ₹25L / yr
SQL
Python
Analytical Skills
Data modeling
Data Visualization

Role: Senior Customer Scientist

Experience: 6-8 years

Location: Chennai (Hybrid)
 
 

Who are we? 
 
 

A young, fast-growing AI and big data company, with an ambitious vision to simplify the world’s choices. Our clients are top-tier enterprises in the banking, e-commerce and travel spaces. They use our core AI-based choice engine, maya.ai (http://maya.ai/), to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You’ll find Crayon Boxes in Chennai and Singapore. But you’ll find Crayons in every corner of the world. Especially where our client projects are – UAE, India, SE Asia and pretty soon the US.
 
 

Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start. 
 

 
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all. 
 

 
 

Can you say “Yes, I have!” to the below? 
 
 

  1. Experience with exploratory analysis, statistical analysis, and model development
  2. Knowledge of advanced analytics techniques, including predictive modelling (logistic regression), segmentation, forecasting, data mining, and optimization (see the sketch after this list)
  3. Knowledge of software packages such as SAS, R and RapidMiner for analytical modelling and data management
  4. Strong experience in SQL/Python/R, working efficiently at scale with large data sets
  5. Experience in using Business Intelligence tools such as Power BI, Tableau and Metabase for business applications
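
As a rough illustration of the predictive modelling item above, a minimal logistic regression workflow in scikit-learn might look like this; the data here is synthetic placeholder data, not a real client dataset.

```python
# Illustrative logistic-regression workflow in scikit-learn on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 5))                                            # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1_000) > 0).astype(int)     # placeholder target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit the model and evaluate discrimination on held-out data
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```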
     

 
 

 

Can you say “Yes, I will!” to the below? 
 
 

  1. Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
  2. Develop creative solutions and build prototypes to business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
  3. Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess relative impact and extract trends.
  4. Coordinate individual teams to fulfil client requirements and manage deliverables.
  5. Communicate and present complex concepts to business audiences.
  6. Travel to client locations when necessary.

 

 

Crayon is an equal opportunity employer. Employment is based on a person's merit and qualifications and professional competences. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy or related.  
 
 

More about Crayon: https://www.crayondata.com/

More about maya.ai: https://maya.ai/

 

 

Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python Developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask or Pyramid)
● Knowledge of object-relational mapping (ORM) (see the sketch after this list)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Write effective, scalable code
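
As a small illustration of the object-relational mapping item above, a sketch using SQLAlchemy (2.x API assumed; the model and the in-memory SQLite database are placeholders):

```python
# Minimal ORM sketch using SQLAlchemy 2.x; model and in-memory SQLite database are placeholders.
from sqlalchemy import Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Customer(Base):
    __tablename__ = "customers"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(100))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Asha"))
    session.commit()
    # Classes and attributes map to tables and columns, so queries stay in Python
    print(session.query(Customer).filter_by(name="Asha").count())
```
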
DataMetica
Posted by Sumangali Desai
Pune
3 - 8 yrs
₹5L - ₹20L / yr
ETL
Data Warehouse (DWH)
IBM InfoSphere DataStage
DataStage
SQL

Datametica is hiring for a DataStage Developer

  • Must have 3 to 8 years of experience in ETL design and development using IBM DataStage components.
  • Should have extensive knowledge of Unix shell scripting.
  • Understanding of DW principles (fact and dimension tables, dimensional modelling and data warehousing concepts).
  • Research, develop, document and modify ETL processes as per data architecture and modelling requirements.
  • Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
  • Should be good at writing complex SQL queries.

About Us!

A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL and analytics to the cloud, leveraging automation.

 

We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica and Greenplum, along with ETL tools like Informatica, DataStage, Ab Initio and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes and cloud optimization.

 

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

 

We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

 

Why join us!

Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.

 

Benefits we Provide!

Working with Highly Technical and Passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com

 

MNC
Agency job
via Fragma Data Systems by Priyanka U
Chennai
1 - 5 yrs
₹6L - ₹12L / yr
Data Science
Natural Language Processing (NLP)
Data Scientist
R Programming
Python
Skills
  • Python coding skills
  • scikit-learn, pandas, TensorFlow/Keras experience
  • Machine learning: designing ML models and explaining them, for regression, classification, dimensionality reduction, anomaly detection, etc.
  • Implementing machine learning models and pushing them to production
  • Creating Docker images for ML models; REST API creation in Python

1) Data scientist with NLP experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of text and NLP-related projects such as text classification, text summarization, topic modelling, etc.

2) Data scientist with computer vision for documents experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of vision and deep learning for documents: CNNs, deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
MNC
Agency job
via Fragma Data Systems by geeti gaurav mohanty
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Big Data
Spark
ETL
Apache
Hadoop
Desired Skills, Experience, Qualifications and Certifications:
• 5+ years' experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years' experience with healthcare payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift and Jupyter notebooks
• Strong in Spark Scala and Python pipelines (ETL and streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL and streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design and development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice to have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
MY Remote Developer
Agency job
via Freelancer by Guruprasad R Shet
Chennai
10 - 18 yrs
₹7L - ₹12L / yr
Java
Akka
Microservices
Contract Hire - 6 Months

Scala/Akka, Microservices, Akka Streams, Play, Kafka, NoSQL. Java/J2EE, Spring, Architecture & Design, Enterprise Integration, ML/Data Analytics, Big Data technologies, Python/R programming, ML algorithms/data models, ML tools (H2O/TensorFlow/Prediction.io), real-time dashboards & batch, cluster compute (Spark/Akka), distributed data platforms (HDFS, Elasticsearch, Splunk, Cassandra), Kafka with cross-DC replication, NoSQL & in-memory DBs
I Base IT
Posted by Sravanthi Alamuri
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
Data Architect who leads a team of 5 members. Required skills: Spark, Scala, Hadoop.
SigTuple
Posted by Sneha Chakravorty
Bengaluru (Bangalore)
2 - 6 yrs
₹4L - ₹20L / yr
Data Science
R Programming
Python
Machine Learning (ML)
We are looking for highly passionate and enthusiastic players for solving problems in medical data analysis using a combination of image processing, machine learning and deep learning. As a Senior Computer Scientist at SigTuple you will have the onus of creating and leveraging state-of-the-art algorithms in machine learning, image processing and AI which will impact billions of people across the world by creating healthcare solutions that are accurate and affordable. You will collaborate with our current team of super awesome geeks in cracking super complex problems in a simple way by creating experiments, algorithms and prototypes that not only yield high accuracy but are also designed and engineered to scale. We believe in innovation - needless to say that you will be part of creating intellectual properties like patents and contributing to the research communities by publishing papers - it is something that we value the most.

What we are looking for:
  • Hands-on experience along with a strong understanding of foundational algorithms in either machine learning, computer vision or deep learning. Prior experience of applying these techniques to images and videos would be good to have.
  • Hands-on experience in building and implementing advanced statistical analysis, machine learning and data mining algorithms.
  • Programming experience in C, C++, Python.

What should you have:
  • 2 - 5 years of relevant experience in solving problems using machine learning or computer vision.
  • Bachelor's degree, Master's degree or PhD in computer science or related fields.
  • Be an innovative and creative thinker, somebody who is not afraid to try something new and inspire others to do so.
  • Thrive in a fast-paced and fun environment.
  • Work with a bunch of data scientist geeks and disruptors striving for a big cause.

What SigTuple can offer:
You will be working with an incredible team of smart and supportive people, driven by a common force to change things for the better. With an opportunity to deliver high-calibre mobile and desktop solutions integrated with hardware that will transform healthcare from the ground up, there will ultimately be different challenges for you to face. Suffice to say that if you thrive in these environments, the buzz alone will keep you energized. In short, you will snag a place at the table of one of the most vibrant start-ups in the industry!