IBM InfoSphere DataStage Jobs in Bangalore (Bengaluru)


Apply to 11+ IBM InfoSphere DataStage Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest IBM InfoSphere DataStage Job opportunities across top companies like Google, Amazon & Adobe.

Japanese MNC
Agency job
via CIEL HR Services by Sundari Chitra
Bengaluru (Bangalore)
5 - 10 yrs
₹7L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
IBM InfoSphere DataStage
DataStage
We are looking for an ETL DataStage developer for a Japanese MNC.

Role: ETL DataStage Developer

Experience: 5 years

Location: Bangalore (WFH as of now)

Roles:

  • Design, develop, and schedule DataStage ETL jobs that extract data from disparate source systems, transform it, and load it into the EDW for data mart consumption, self-service analytics, and data visualization tools.

  • Provide hands-on technical solutions to business challenges and translate them into process/technical solutions.

  • Conduct code reviews and communicate high-level design approaches with team members to validate that strategic business needs and architectural guidelines are met.

  • Evaluate and recommend technical feasibility and effort estimates for proposed technology solutions. Provide operational instructions for dev, QA, and production code deployments while adhering to internal Change Management processes.

  • Coordinate Control-M scheduler jobs and their dependencies. Recommend and implement ETL performance tuning strategies and methodologies. Conduct and support data validation, unit testing, and QA integration activities.

  • Compose and update technical documentation to ensure compliance with department policies and standards. Create transformation queries, stored procedures for ETL processes, and development automations (see the sketch below).
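DataStage jobs themselves are assembled in the graphical Designer rather than written as code, but the extract-transform-load pattern the bullets above describe can be sketched in plain Python. A minimal sketch, with an in-memory SQLite database standing in for both the source system and the EDW; every table and column name here is illustrative:

```python
import sqlite3

# SQLite stands in for the source system and the EDW; a real DataStage
# job would reach DB2, Oracle, etc. through stage connectors.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a hypothetical source table of raw orders.
cur.execute("CREATE TABLE src_orders (order_id INTEGER, amount REAL, country TEXT)")
cur.executemany("INSERT INTO src_orders VALUES (?, ?, ?)",
                [(1, 120.0, "in"), (2, 80.5, "jp"), (3, 99.9, "in")])

# Transform: cleanse and derive columns, the kind of logic a
# Transformer stage would carry.
rows = cur.execute("SELECT order_id, amount, country FROM src_orders").fetchall()
transformed = [(oid, round(amt, 2), ctry.upper()) for oid, amt, ctry in rows]

# Load: append into an EDW fact table for downstream data marts.
cur.execute("CREATE TABLE edw_fact_orders (order_id INTEGER, amount REAL, country_code TEXT)")
cur.executemany("INSERT INTO edw_fact_orders VALUES (?, ?, ?)", transformed)
conn.commit()
print(cur.execute("SELECT * FROM edw_fact_orders").fetchall())
```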

Interested candidates can forward their profiles.
globe teleservices
Posted by Deepshikha Thapar
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
ETL
Python
Informatica
Talend



Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as the ETL tool on Oracle and SQL Server databases.

  • Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, together with project scope, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.
  • Strong experience in Dimensional Modeling using Star and Snowflake schemas and in identifying facts and dimensions.
  • Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.
  • Developed mapping parameters and variables to support SQL override.
  • Created mapplets for reuse across different mappings.
  • Created sessions and configured workflows to extract data from various sources, transform it, and load it into the data warehouse.
  • Used Type 1 and Type 2 SCD mappings to update Slowly Changing Dimension tables (see the sketch after this list).
  • Modified existing mappings to accommodate new business requirements.
  • Involved in performance tuning at the source, target, mapping, session, and system levels.
  • Prepared migration documents to move mappings from the development to the testing and then to the production repositories.
  • Extensive experience in developing stored procedures, functions, views, and triggers, and in writing complex SQL queries using PL/SQL.
  • Experience in resolving ongoing maintenance issues and bug fixes; monitoring Informatica/Talend sessions as well as performance tuning of mappings and sessions.
  • Experience in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.
  • Extensive experience in writing UNIX shell scripts and automating ETL processes with them.
  • Experience in using automation scheduling tools like Control-M.
  • Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration, and user acceptance testing.
  • Build, operate, monitor, and troubleshoot Hadoop infrastructure.
  • Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.
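The Type 2 SCD bullet above is the most algorithmic item in this list: rather than overwriting a changed dimension row (Type 1), a Type 2 mapping expires the current version and inserts a new one. A minimal pandas sketch of that logic, assuming an invented customer dimension with illustrative housekeeping columns (effective_from, effective_to, is_current):

```python
from datetime import date
import pandas as pd

# Current dimension table: one open-ended row per business key.
dim = pd.DataFrame({
    "customer_id": [101, 102],
    "city": ["Pune", "Chennai"],
    "effective_from": [date(2020, 1, 1)] * 2,
    "effective_to": [None, None],
    "is_current": [True, True],
})

# Incoming change: customer 101 moved to Bengaluru (made-up data).
incoming = pd.DataFrame({"customer_id": [101], "city": ["Bengaluru"]})
today = date.today()

# Type 2 update: expire the current rows whose attributes changed ...
changed = dim.merge(incoming, on="customer_id", suffixes=("", "_new"))
changed = changed[changed["city"] != changed["city_new"]]
mask = dim["customer_id"].isin(changed["customer_id"]) & dim["is_current"]
dim.loc[mask, ["effective_to", "is_current"]] = [today, False]

# ... and append a fresh current version for each of them.
new_rows = changed.assign(city=changed["city_new"], effective_from=today,
                          effective_to=None, is_current=True)[dim.columns]
dim = pd.concat([dim, new_rows], ignore_index=True)
print(dim)
```

In an Informatica mapping, the Update Strategy transformation encodes the same split: DD_UPDATE to expire the old version, DD_INSERT for the new one.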

Bengaluru (Bangalore)
1 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+9 more

ROLE AND RESPONSIBILITIES

Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data and with cleansing and transforming data into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.
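As a rough, invented illustration of that extract-cleanse-transform loop, here is a pandas sketch; in practice the result would feed a Tableau/Power BI/Qlik dashboard rather than a print statement:

```python
import pandas as pd

# Hypothetical raw extract: duplicates and missing values are typical
# of what needs cleansing before visualization.
raw = pd.DataFrame({
    "region": ["South", "South", "North", "North", None],
    "sales":  [120.0, 120.0, 95.5, None, 60.0],
})

clean = (raw.drop_duplicates()             # cleanse: remove duplicate rows
            .dropna(subset=["region"])     # cleanse: drop unusable rows
            .fillna({"sales": 0.0}))       # cleanse: impute missing sales

# Transform into an insight: sales by region, ready for a BI tool.
insight = clean.groupby("region", as_index=False)["sales"].sum()
print(insight)
```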


QUALIFICATIONS AND EDUCATION REQUIREMENTS

Technical Bachelor’s Degree.

Non-Technical Degree holders should have 1+ years of relevant experience.

Compile

Posted by Sarumathi NH
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
Spark

You will be responsible for designing, building, and maintaining data pipelines that handle real-world data (RWD) at Compile. You will handle both inbound and outbound data deliveries for datasets including Claims, Remittances, EHR, SDOH, etc.

You will

  • Work on building and maintaining data pipelines (specifically RWD).
  • Build, enhance, and maintain existing pipelines in PySpark and Python, and help build analytical insights and datasets.
  • Schedule and maintain pipeline jobs for RWD.
  • Develop, test, and implement data solutions based on the design.
  • Design and implement quality checks on existing and new data pipelines (see the sketch after this list).
  • Ensure adherence to the security and compliance requirements of the products.
  • Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.
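A minimal sketch of the kind of quality check the pipelines above might run, written in PySpark since the role names it. The claims columns, sample rows, and 5% null threshold are all assumptions; a library such as Deequ (listed below) would normally formalize rules like these:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rwd-quality-check").getOrCreate()

# Hypothetical claims delivery; real RWD would arrive as vendor files.
claims = spark.createDataFrame(
    [("c1", "2024-01-03", 1200.0),
     ("c2", "2024-01-04", None),
     ("c2", "2024-01-04", None)],
    ["claim_id", "service_date", "paid_amount"],
)

total = claims.count()

# Rule 1: claim_id must be unique within a delivery.
duplicates = total - claims.dropDuplicates(["claim_id"]).count()

# Rule 2: paid_amount null rate must stay under an assumed 5% threshold.
null_rate = claims.filter(F.col("paid_amount").isNull()).count() / total

print(f"duplicates={duplicates}, paid_amount null rate={null_rate:.1%}")
# A production pipeline would fail the delivery when a rule is violated:
# assert duplicates == 0 and null_rate < 0.05
```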

You have

  • Hands-on experience with ETL processes (minimum of 5 years).
  • Excellent communication skills and the ability to work with multiple vendors.
  • High proficiency with Spark and SQL.
  • Proficiency in data modeling, validation, quality checks, and data engineering concepts.
  • Experience with big-data processing technologies such as Databricks, dbt, S3, Delta Lake, Deequ, Griffin, Snowflake, and BigQuery.
  • Familiarity with version control technologies and CI/CD systems.
  • Understanding of scheduling tools like Airflow/Prefect.
  • Minimum of 3 years of experience managing data warehouses.
  • Familiarity with healthcare datasets is a plus.

Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.         

Quicken Inc

Posted by Shreelakshmi M
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
ETL QA
+1 more
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years, with at least 2 years managing data quality for high-scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skill in analysing data and uncovering patterns using SQL or Python.
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation, and data loading (the ETL process).
  • Strong background in automation and in building automated testing frameworks for data ingestion and transformation jobs (a toy illustration follows this list).
  • Experience in big data technologies a big plus.
  • Experience in machine learning, especially in data quality applications a big plus.
  • Experience in building data quality automation frameworks a big plus.
  • Strong experience working with an Agile development team with rapid iterations. 
  • Very strong verbal and written communication, and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data; highly detail-oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
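Purely as a toy illustration of the automated-testing bullet flagged above, a pytest-style sketch guarding an invented transformation; none of these names or rules come from Quicken:

```python
import pandas as pd

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transformation under test: normalizes amounts to cents."""
    out = raw.copy()
    out["amount_cents"] = (out["amount_usd"] * 100).round().astype(int)
    return out.drop(columns=["amount_usd"])

# pytest discovers functions named test_*; run with `pytest this_file.py`.
def test_no_rows_lost():
    raw = pd.DataFrame({"amount_usd": [1.25, 2.50]})
    assert len(transform(raw)) == len(raw)

def test_amounts_are_integer_cents():
    raw = pd.DataFrame({"amount_usd": [1.25]})
    assert transform(raw)["amount_cents"].tolist() == [125]
```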
Amagi Media Labs

Posted by Rajesh C
Bengaluru (Bangalore)
5 - 7 yrs
₹10L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

Data Analyst

Job Description

 

Summary

Are you passionate about handling large and complex data problems, do you want to make an impact, and do you have the desire to work on ground-breaking big data technologies? Then we are looking for you.

 

At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi's Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, tech-savvy, energetic team members who like to think outside the box.

 

Amagi's Data Warehouse team deals with petabytes of data catering to a wide variety of real-time, near-real-time, and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing, and Engineering, enabling critical business decisions. Designing, developing, scaling, and running these big data technologies using native technologies of AWS and GCP is a core part of our daily job.

 

Key Qualifications

  • Experience in building highly cost-optimised data analytics solutions
  • Experience in designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
  • Hands-on experience in building high-quality ETL applications, data pipelines, and analytics solutions while ensuring data privacy and regulatory compliance
  • Experience in working with AWS or GCP
  • Experience with relational and NoSQL databases
  • Exposure to full-stack web development (preferably Python)
  • Expertise with data visualisation systems such as Tableau and QuickSight
  • Proficiency in writing advanced SQL queries, with expertise in performance tuning for large data volumes
  • Familiarity with ML/AI technologies is a plus
  • Strong understanding of development processes and agile methodologies
  • Strong analytical and communication skills; self-driven, highly motivated, and able to learn quickly

 

Description

Data Analytics is at the core of our work, and you will have the opportunity to:

 

  • Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc. (see the sketch after this list)
  • Lead quick prototypes by integrating data from multiple sources
  • Do advanced business analytics through ad-hoc SQL queries
  • Work on Sales/Finance reporting solutions using Tableau, HTML5, and React applications
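To give the Athena item flagged above some flavor, a hedged boto3 sketch of an ad-hoc query over S3 data; the region, database, table, and results bucket are invented placeholders:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# All identifiers below are placeholders, not Amagi's actual warehouse.
run = athena.start_query_execution(
    QueryString="SELECT region, SUM(revenue) FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll until the query finishes, then fetch results.
qid = run["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```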

 

We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.

 

Our work relies on a deep understanding of the company's needs and an ability to go through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.

 

Education & Experience

A bachelor's/master's degree in Computer Science with 5 to 7 years of experience; previous experience in data engineering is a plus.

 

PayU

Posted by Vishakha Sonde
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr
Python
ETL
Data engineering
Informatica
SQL
+2 more

Role: Data Engineer  
Company: PayU

Location: Bangalore/Mumbai

Experience: 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple, and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them, and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. PayU, in its next phase of growth, is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through three mechanisms: build; co-build/partner; and select strategic investments.

PayU supports 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and huge growth potential for merchants.

Job responsibilities:

  • Design infrastructure for data, especially for, but not limited to, consumption in machine learning applications
  • Define the database architecture needed to combine and link data, and ensure integrity across different sources
  • Ensure the performance of data systems for machine learning across customer-facing web and mobile applications built on cutting-edge open-source frameworks, highly available RESTful services, and back-end Java-based systems
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed
  • Build data pipelines, including implementing, testing, and maintaining infrastructural components related to the data engineering stack
  • Work closely with Data Engineers, ML Engineers, and SREs to gather data engineering requirements and to prototype, develop, validate, and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge of and experience in Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling, and Informatica
  • Strong experience with scalable compute solutions such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc. (a minimal DAG sketch follows this list)
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL)
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices, and frameworks
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI)
  • Experience with designing and implementing tools that support sharing of data, code, and practices across organizations at scale
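Since Airflow appears in the requirements above, here is a minimal DAG sketch of a two-step ingestion pipeline; the DAG id, schedule, and task bodies are placeholders rather than anything from PayU's stack:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull a batch from a source system.
    print("extracting")

def load():
    # Placeholder: write the transformed batch to the warehouse.
    print("loading")

with DAG(
    dag_id="example_ingestion",      # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load              # run load only after extract succeeds
```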
Wissen Technology

Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
+5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing using Python, DB2, Greenplum, Autosys, and other technologies

 

Skills/Expertise Required:

- Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).
- Good experience in writing stored procedures, integrating database processing, and tuning and optimizing database queries.
- Strong knowledge of table partitions, high-performance loading, and data processing.
- Good to have hands-on experience working with Perl or Python.
- Hands-on development using the Spark / KDB / Greenplum platform will be a strong plus.
- Designing, developing, maintaining, and supporting Data Extract, Transform, and Load (ETL) software using Informatica, shell scripts, DB2 UDB, and Autosys.
- Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance, and developing software to turn proposals into implementations.
- Working with business analysts and other project leads to understand requirements.
- Strong collaboration and communication skills.

Agency job
via CareerBabu by Tanisha Takkar
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹40L / yr
Apache Spark
Big Data
Java
Spring
Data Structures
+5 more
  • Own the end-to-end implementation of the assigned data processing components/product features, i.e. the design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices.

  • Create and optimize data engineering pipelines for analytics projects.

  • Support data and cloud transformation initiatives.

  • Contribute to our cloud strategy based on prior experience.

  • Independently work with all stakeholders across the organization to deliver enhanced functionality.

  • Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting (see the retry sketch after this list).

  • Gather and understand data requirements, and work in the team to achieve high-quality data ingestion and to build systems that can process and transform the data.

  • Understand the application of database indexes and transactions.

  • Be involved in the design and development of a Big Data predictive analytics SaaS-based customer data platform using object-oriented analysis, design and programming skills, and design patterns.

  • Implement ETL workflows for data matching, data cleansing, data integration, and management.

  • Maintain existing data pipelines, and develop new data pipelines using big data technologies.

  • Lead the effort of continuously improving the reliability, scalability, and stability of microservices and the platform.
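The error-recovery bullet flagged above is the one that maps most directly onto code. A small, generic Python sketch of retry-with-exponential-backoff around a placeholder extract step; the failure mode and limits are invented:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract_batch(attempt: int) -> list[dict]:
    # Placeholder source call; fails twice to exercise the retry path.
    if attempt < 3:
        raise ConnectionError("source unavailable")
    return [{"id": 1}, {"id": 2}]

def run_with_retries(max_attempts: int = 5, base_delay: float = 1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            rows = extract_batch(attempt)
            log.info("extracted %d rows on attempt %d", len(rows), attempt)
            return rows
        except ConnectionError as exc:
            # Exception handling + reporting: log, back off exponentially, retry.
            delay = base_delay * 2 ** (attempt - 1)
            log.warning("attempt %d failed (%s); retrying in %.0fs", attempt, exc, delay)
            time.sleep(delay)
    raise RuntimeError("extract failed after all retries")  # surfaces for alerting

run_with_retries()
```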

Rivet Systems Pvt Ltd.

Posted by Shobha B K
Bengaluru (Bangalore)
5 - 19 yrs
₹10L - ₹30L / yr
ETL
Hadoop
Big Data
Pig
Spark
+2 more
Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig.

To be considered as a candidate for a Senior Data Engineer position, a person must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They look to build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills and have the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, effectively work with business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have a demonstrated ability to perform high-quality work with innovation, both independently and collaboratively.

Yulu Bikes

Posted by Keerthana K
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹28L / yr
Big Data
Spark
Scala
Hadoop
Apache Kafka
+5 more
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
  • Working with Big Data tools and frameworks to provide requested capabilities
  • Identifying development needs in order to improve and streamline operations
  • Developing and managing BI solutions
  • Implementing ETL processes and data warehousing
  • Monitoring performance and managing infrastructure

Skills
  • Proficient understanding of distributed computing principles
  • Proficiency with Hadoop and Spark
  • Experience with building stream-processing systems, using solutions such as Kafka and Spark Streaming (see the sketch after this list)
  • Good knowledge of data querying tools: SQL and Hive
  • Knowledge of various ETL techniques and frameworks
  • Experience with Python/Java/Scala (at least one)
  • Experience with cloud services such as AWS or GCP
  • Experience with NoSQL databases such as DynamoDB and MongoDB will be an advantage
  • Excellent written and verbal communication skills
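To make the Kafka/Spark Streaming item flagged above concrete, a minimal Spark Structured Streaming sketch that counts events per minute from a Kafka topic; the broker address and topic name are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

# Requires the Kafka connector on the classpath, e.g.
# spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>
spark = SparkSession.builder.appName("ride-events").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "ride-events")                # placeholder topic
          .load())

# Kafka delivers bytes; cast the value and count events per minute.
counts = (events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```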