ETL-Database Developer/Lead
Wissen Technology
Posted by Lokesh Manikappa
5 - 12 yrs
₹15L - ₹35L / yr
Bengaluru (Bangalore)
Skills
Data Warehouse (DWH)
Informatica
ETL
Data modeling
Spark
Databases
Shell Scripting
Perl
Python
KDB

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working across the full software lifecycle in Agile mode.

Experience in data modeling and/or systems architecture is good to have.
Responsibilities include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing using Python, DB2, Greenplum, Autosys, and other technologies

 

Skills/Expertise Required:

Work experience developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience writing stored procedures, integrating database processing, and tuning and optimizing database queries.

Strong knowledge of table partitioning, high-performance loading, and data processing.
Good to have hands-on experience working with Perl or Python.
Hands-on development on the Spark, KDB, or Greenplum platforms will be a strong plus.
Designing, developing, maintaining, and supporting Extract, Transform, and Load (ETL) software using Informatica, shell scripts, DB2 UDB, and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance, and developing software to turn those proposals into implementations.
Working with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills.
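As an illustration of the ETL work described above, here is a minimal Python sketch using an in-memory SQLite database. The table names, columns, and data are invented for illustration only; the actual stack in this role centres on DB2/Greenplum, Informatica, and Autosys.

```python
import sqlite3

# Minimal extract-transform-load sketch against an in-memory SQLite database.
# All names and values below are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trades (id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_trades VALUES (?, ?, ?)",
    [(1, "aapl", 100), (2, "msft", -50), (3, "aapl", 25)],
)

# Extract: pull the raw rows.
rows = conn.execute("SELECT id, symbol, qty FROM raw_trades").fetchall()

# Transform: normalize symbols and drop non-positive quantities.
clean = [(i, s.upper(), q) for (i, s, q) in rows if q > 0]

# Load: write the cleaned rows into the target table.
conn.execute("CREATE TABLE trades (id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", clean)

total = conn.execute("SELECT SUM(qty) FROM trades").fetchone()[0]
print(total)  # 125
```

In production the extract and load steps would target DB2 or Greenplum and be scheduled by Autosys, but the shape of the pipeline stays the same.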


About Wissen Technology

Founded: 2000
Type:
Size: 1000-5000
Stage: Profitable
About

Established in 2000 in the US, we have global offices in the US, India, UK, Australia, Mexico, and Canada, with best-in-class infrastructure and development facilities across the globe. We are an end-to-end solution provider in the Banking & Financial Services, Telecom, Healthcare, and Manufacturing & Energy verticals, and have successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. We have more than 3,000 highly skilled professionals. The leadership, senior management, and technologists of Wissen hold degrees from leading universities such as MIT, Wharton, the IITs, the IIMs, and BITS, and have rich work experience at some of the biggest companies in the world.

We offer an array of services that includes Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation, and Infrastructure Management. Wissen is uniquely positioned to help you build enterprise systems, implement a digital strategy, and gain competitive advantage through business transformation. Our expertise in a wide range of technologies such as Artificial Intelligence, Machine Learning, and Data Analytics allows us to help you make an informed decision and leverage the most appropriate technology for the problem. We also offer services in ERP, Salesforce, E-Commerce, and Production Support.

Wissen utilizes its multi-location facilities and industry-standard processes, such as ITIL, to provide best-in-class, cost-effective solutions that promise maximum returns on minimum IT spend.

Connect with the team: Lokesh Manikappa, Vijayalakshmi Selvaraj, Adishi Sood, Shiva Kumar J Goud

Similar jobs

Innovative Startup
Remote only
3 - 6 yrs
₹18L - ₹28L / yr
Business Intelligence (BI)
Tableau
CleverTap
Python
Analytics
Bachelor's degree in a quantitative field (e.g. Mathematics, Statistics, Computer Science)
Have 2 to 6 years of experience working in a similar role in a startup environment
SQL and Excel have no secrets for you
You love visualizing data with Tableau
Any experience with product analytics tools (Mixpanel, Clevertap) is a plus
You solve math puzzles for fun
A strong analytical mindset with a problem-solving attitude
Comfortable with being critical and speaking your mind
You can easily switch between coding (R or Python) and having a business discussion
Be a team player who thrives in a fast-paced and constantly changing environment
Red.Health
Posted by Mayur Bellapu
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Job Description: Data Engineer

We are looking for a curious Data Engineer to join our extremely fast-growing tech team at StanPlus.

 

About RED.Health (Formerly Stanplus Technologies)

Get to know the team:

Join our team and help us build the world’s fastest and most reliable emergency response system using cutting-edge technology.

Because every second counts in an emergency, we are building systems and flows with four nines of reliability to ensure that our technology is always there when people need it the most. We are looking for distributed systems experts who can help us perfect the architecture behind our key design principles: scalability, reliability, programmability, and resiliency. Our system features a powerful dispatch engine that connects emergency service providers with patients in real time.

Key Responsibilities

●     Build Data ETL Pipelines

●     Develop data set processes

●     Apply strong analytical skills to work with unstructured datasets

●     Evaluate business needs and objectives

●     Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery

●     Interpret trends and patterns

●     Work with data and analytics experts to strive for greater functionality in our data system

●     Build algorithms and prototypes

●     Explore ways to enhance data quality and reliability

●     Work with the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

●     Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

 

Key Requirements

●     At least 3 years of proven experience as a data engineer, software developer, or similar.

●     Bachelor's / Master’s degree in data engineering, big data analytics, computer engineering, or related field.

●     Experience with big data tools: Hadoop, Spark, Kafka, etc.

●     Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

●     Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

●     Experience with Azure, AWS cloud services: EC2, EMR, RDS, Redshift

●     Experience with BigQuery

●     Experience with stream-processing systems: Storm, Spark-Streaming, etc.

●     Experience with languages: Python, Java, C++, Scala, SQL, R, etc.

●     Good hands-on experience with Hive and Presto.
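To make the stream-processing requirement above concrete, here is a toy tumbling-window aggregation in plain Python. Systems such as Spark Streaming or Storm apply the same idea at scale; the events and the 10-second window size below are made up.

```python
from collections import defaultdict

# Hypothetical (timestamp_seconds, event_type) stream from a dispatch system.
events = [
    (1, "dispatch"), (4, "dispatch"), (9, "arrival"),
    (12, "dispatch"), (17, "arrival"), (23, "dispatch"),
]

WINDOW = 10  # tumbling-window size in seconds

# Bucket each event into the window containing its timestamp and count
# occurrences per (window_start, event_type).
counts = defaultdict(int)
for ts, kind in events:
    window_start = (ts // WINDOW) * WINDOW
    counts[(window_start, kind)] += 1

print(dict(counts))
```

A real stream processor adds late-event handling, checkpointing, and distribution, but the windowed grouping above is the core computation.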

 


Snapblocs
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 10 yrs
₹20L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
  • You should hold a B.Tech/M.Tech degree.
  • You should have 5 to 10 years of experience, with a minimum of 3 years working at a data-driven company/platform.
  • Competency in core Java is a must.
  • You should have worked with distributed data processing frameworks like Apache Spark, Apache Flink, or Hadoop.
  • You should be a team player with an open mind, approaching problems with the right tools and technologies while working with the team.
  • You should have knowledge of frameworks and distributed systems, and be good at algorithms, data structures, and design patterns.
  • You should have an in-depth understanding of big data technologies and NoSQL databases (Kafka, HBase, Spark, Cassandra, MongoDB, etc.).
  • Work experience with the AWS cloud platform, Spring Boot, and API development will be a plus.
  • You should have exceptional problem-solving and analytical abilities, and organisation skills with an eye for detail.
Slintel
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Big Data
ETL
Apache Spark
Spark
Data engineer
+5 more
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience with relational databases and query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
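The workflow tools mentioned above (Apache Airflow, AWS Glue) are built around dependency DAGs: each task runs only after its upstream tasks complete. A minimal standard-library sketch of that idea follows; the task names are hypothetical, and a real Airflow DAG would be declared with its own operator classes instead.

```python
from graphlib import TopologicalSorter

# A tiny ETL DAG: each key maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks in an order that respects every dependency,
# which is what a scheduler like Airflow computes before dispatching work.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```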
Deep-Rooted.co (formerly Clover)
Posted by Likhithaa D
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
Java
Python
SQL
AWS Lambda
HTTP
+5 more

Deep-Rooted.Co is on a mission to get fresh, clean, community (local farmer) produce from harvest to your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning and fun thrown in.


Founded out of Bangalore by Arvind, Avinash, Guru, and Santosh, we have raised $7.5 million to date in Seed, Series A, and debt funding from investors including Accel, Omnivore, and Mayfield. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's fruits & vegetables (F&V) space. It is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the Agri-Tech sector.


Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.  

How is this possible? It's because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and the CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it, as it touches everyday life and is fun. This will be a virtual consultation.


We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.

Purpose of the role:

* As a startup, we have data distributed across various sources like Excel, Google Sheets, and databases. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it in a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness, and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing, and Business Heads.
* Understand the business problem, solve it using technology, and take it to production - no hand-offs; the full path to production is yours.

Technical expertise:
* Good knowledge of and experience with programming languages: Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on cloud.

Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
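A small sketch of the sheet-to-warehouse normalization this role involves, using only the Python standard library. The columns and values are invented; a real pipeline would pull from the Google Sheets API rather than an inline CSV string.

```python
import csv
import io

# Stand-in for a messy export from a spreadsheet: inconsistent casing
# and stray whitespace, as often seen in hand-maintained sheets.
raw = io.StringIO("sku,city,units\nTOM-1,Bangalore, 12\nONl-2,Hyderabad,7\n")

# Transform each record into a tidy row ready for a warehouse load.
rows = []
for rec in csv.DictReader(raw):
    rows.append({
        "sku": rec["sku"].strip().upper(),
        "city": rec["city"].strip(),
        "units": int(rec["units"]),
    })

print(rows[0]["units"] + rows[1]["units"])  # 19
```

Keeping the cleaning rules explicit like this is what makes the resulting data model trustworthy for decision-making.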
Marktine
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹24L / yr
Data Science
R Programming
Python
SQL
Machine Learning (ML)
+1 more

Responsibilities:

  • Design and develop strong analytics system and predictive models
  • Managing a team of data scientists, machine learning engineers, and big data specialists
  • Identify valuable data sources and automate data collection processes
  • Undertake pre-processing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams

Requirements:

  • Proven experience as a seasoned Data Scientist
  • Good experience in data mining processes
  • Understanding of machine learning; knowledge of operations research is a value addition
  • Strong understanding of and experience in R, SQL, and Python; knowledge of Scala, Java, or C++ is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
  • Strong math skills (e.g. statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • Experience in Natural Language Processing (NLP)
  • Strong competitive coding skills
  • BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
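"Combining models through ensemble modeling," mentioned in the responsibilities above, can be as simple as a hard majority vote. A minimal Python sketch follows; the three "models" are stand-in prediction lists, not real classifiers.

```python
from collections import Counter

# Hypothetical binary predictions from three independent models
# on the same four samples.
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]

# Hard-voting ensemble: for each sample, pick the majority label.
ensemble = [
    Counter(votes).most_common(1)[0][0]
    for votes in zip(model_a, model_b, model_c)
]
print(ensemble)  # [1, 0, 1, 1]
```

Libraries like scikit-learn offer the same idea as a ready-made voting classifier, plus soft voting over predicted probabilities.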
Sopra Steria
Chennai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 8 yrs
₹2L - ₹12L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
Good hands-on experience with Spark and Scala.
Should have experience in Big Data and Hadoop.
Currently providing work-from-home.
Immediate joiners, or a notice period of up to 30 days.
AES Technologies
Posted by Ragavendra G
Dubai
2 - 4 yrs
Best in industry
Python
Windows Azure
Java
Big Data
Scala

As a Data Engineer, your role will encompass: 

  • Design and build production data pipelines from ingestion to consumption within a hybrid big data architecture, using Scala, Python, Talend, etc.
  • Gather and address technical and design requirements.
  • Refactor existing applications to optimize their performance by setting the appropriate architecture and integrating best practices and standards.
  • Participate in the entire data life-cycle, mainly focusing on coding, debugging, and testing.
  • Troubleshoot and debug ETL pipelines.
  • Document each process.

Technical Requirements: - 

  • BSc degree in Computer Science/Computer Engineering. (Masters is a plus.) 
  • 2+ years of experience as a Data Engineer. 
  • In-depth understanding of core ETL concepts, Data Modelling, Data Lineage, Data Governance, Data Catalog, etc. 
  • 2+ years of work experience in Scala, Python, Java. 
  • Good Knowledge on Big Data Tools such as Spark/HDFS/Hive/Flume, etc. 
  • Hands on experience on ETL tools like Talend/Informatica is a plus. 
  • Good knowledge in Kafka and spark streaming is a big plus. 
  • 2+ years of experience using Azure cloud and its resources/services (e.g. Azure Data Factory, Azure Databricks, Azure Synapse, Azure DevOps, Logic Apps, Power BI, Azure Event Hubs, etc.).
  • Strong experience in Relational Databases (MySQL, SQL Server)  
  • Exposure to data visualization tools like Power BI, Qlik Sense, or MicroStrategy.
  • 2+ years of experience in developing APIs (REST & SOAP protocols). 
  • Strong knowledge in Continuous Integration & Continuous Deployment (CI/CD) utilizing Docker containers, Jenkins, etc. 
  • Strong competencies in algorithms and software architecture. 
  • Excellent analytical and teamwork skills. 

 Good to have: - 

  • Previous on-prem working experience is a plus. 
  • In-depth understanding of the entire web development process (design, development, and deployment) 
  • Previous experience in automated testing including unit testing & UI testing. 

 

Quantiphi Inc.
Posted by Anwar Shaikh
Mumbai
1 - 5 yrs
₹4L - ₹15L / yr
Python
Machine Learning (ML)
Deep Learning
TensorFlow
Keras
+1 more
1. The candidate should be passionate about machine learning and deep learning.
2. Should understand the importance and know-how of taking a machine-learning-based solution to the consumer.
3. Hands-on experience with statistical and machine-learning tools and techniques.
4. Good exposure to deep learning libraries like TensorFlow and PyTorch.
5. Experience implementing deep learning techniques, Computer Vision, and NLP. The candidate should be able to develop the solution from scratch, with GitHub code exposed.
6. Should be able to read research papers and pick up ideas to quickly reproduce research in the most comfortable deep learning library.
7. Should be strong in data structures and algorithms. Should be able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert-level coding experience in Python.
9. Technologies: Backend - Python (programming language).
10. Should have the ability to think about long-term solutions, modularity, and reusability of components.
11. Should be able to work in a collaborative way. Should be open to learning from peers as well as constantly bringing new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, and able to take it positively. Ready to be held accountable for the responsibilities undertaken.
PayU
Posted by Deeksha Srivastava
Gurgaon, NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹7L - ₹15L / yr
Python
R Programming
Data Analytics
R

What you will be doing:

As a part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out analytical initiatives which will be as follows: -

  • Dive into the data and identify patterns
  • Development of end-to-end Credit models and credit policy for our existing credit products
  • Leverage alternate data to develop best-in-class underwriting models
  • Working on Big Data to develop risk analytical solutions
  • Development of Fraud models and fraud rule engine
  • Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
  • Working on cutting-edge techniques e.g. machine learning and deep learning models

Example of projects done in past:

  • LazyPay credit risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python
  • Fraud model development, deployment and rules for EMEA region

 

Basic Requirements:

  • 1-3 years of work experience as a Data Scientist (in the credit domain)
  • 2016 or 2017 batch from a premium college (e.g. B.Tech. from IITs/NITs, Economics from DSE/ISI, etc.)
  • Strong problem-solving skills and the ability to understand and execute complex analyses
  • Experience in at least one of R/Python/SAS, plus SQL
  • Experience in the credit industry (fintech/bank)
  • Familiarity with the best practices of Data Science

 

Add-on Skills : 

  • Experience in working with big data
  • Solid coding practices
  • Passion for building new tools/algorithms
  • Experience in developing Machine Learning models