9+ Data Integration Jobs in Bangalore (Bengaluru)
A NASDAQ-listed IT services company
Job Summary:
As a Cloud Architect with our organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST and GraphQL APIs to support our application development needs (a minimal sketch follows this list).
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up to date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
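For a flavour of the microservices work above, here is a minimal GraphQL service sketch in Python. It uses the Ariadne library; the Order type, resolver, and returned data are hypothetical illustrations, not details from this posting.

```python
# Minimal GraphQL microservice sketch (Ariadne, ASGI).
# The schema and resolver below are illustrative only.
from ariadne import QueryType, gql, make_executable_schema
from ariadne.asgi import GraphQL

type_defs = gql("""
    type Order {
        id: ID!
        status: String!
    }
    type Query {
        order(id: ID!): Order
    }
""")

query = QueryType()

@query.field("order")
def resolve_order(_, info, id):
    # A real service would query a database or a downstream microservice here.
    return {"id": id, "status": "SHIPPED"}

schema = make_executable_schema(type_defs, query)
app = GraphQL(schema)  # serve with any ASGI server, e.g. `uvicorn app:app`
```

The same schema-first approach extends naturally to mutations and to composing several such services behind a gateway.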
Qualifications / Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps
Job Reference ID: BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
➢ 7 years of work experience with ETL, data modelling, and data architecture.
➢ Proficient in ETL optimization and in designing, coding, and tuning big data processes using PySpark.
➢ Extensive experience building data platforms on AWS using core AWS services such as Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc., and designing/developing data engineering solutions.
➢ Orchestration using Airflow.
Technical Experience:
➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.
➢ Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (a minimal Glue sketch follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will work in collaboration with other teams; good communication is a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
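For context, a minimal AWS Glue PySpark job of the kind these responsibilities describe might look like the sketch below. The catalog database, table name, and S3 paths are hypothetical placeholders.

```python
# Minimal AWS Glue ETL job sketch (PySpark). Runs inside the Glue
# environment; database, table, and S3 locations are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source table as a DynamicFrame.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Transform with plain Spark, then write partitioned Parquet to S3,
# where Athena or Redshift Spectrum can query it.
df = orders.toDF().dropDuplicates(["order_id"])
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

job.commit()
```

Writing partitioned Parquet to S3 is one common way to make curated data queryable from Athena or Redshift Spectrum, as the responsibilities above mention.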
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes.
➢ Expert-level skills in writing and optimizing SQL.
➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
XpressBees, a logistics company started in 2015, is among the fastest-growing companies in its sector. While we started off rather humbly in the space of e-commerce B2C logistics, the last five years have seen us steadily expand our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business, like 3PL, B2B Xpress, and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve into the most trusted logistics partner in India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to improve efficiencies for our clients.
Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality, agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured and unstructured), and build production pipelines for our machine learning models and for our real-time, near-real-time, and batch reporting and dashboarding requirements. You will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection, and prediction.
What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding, and machine learning models. These pipelines will productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness, and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical data model and implement the physical model to support business needs. Produce logical and physical database designs across platforms (MPP, MapReduce, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision and implement the data modelling, physical design, and performance-optimization approach each problem requires (a small PySpark sketch follows this list).
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
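As a small illustration of the pipeline and physical-design work above, here is a hedged PySpark sketch (Spark itself is an assumption; the posting names the Hive/MPP ecosystem). All paths, columns, and table names are hypothetical.

```python
# Minimal sketch: batch-aggregate raw shipment events into a
# date-partitioned mart table. All names and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shipment-rollup").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/shipment_events/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "hub_id")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("shipment_id").alias("shipments"),
    )
)

# Partitioning by date keeps scans cheap for reporting and
# dashboarding queries that filter on recent days.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_hub_rollup/"
)
```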
Qualifications & Experience relevant for the role
• A bachelor's degree in Computer Science or a related field, with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing, and micro-batching, in order to make technology and design choices.
• Strong experience in system integration, application development, ETL, and data-platform projects; talented across technologies used in the enterprise space.
• Software development experience, including:
  • Expertise in relational and dimensional modelling
  • Exposure across the full SDLC
  • Experience in cloud architecture (AWS)
• A proven track record of keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• The characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• A knack for helping an organization understand application architectures and integration approaches, architect advanced cloud-based solutions, and help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.
Global Media Agency - A client of Merito
Job Description
- Lead the Data Solutions function and be the point of contact for all agency digital and data leads, liaising on client requirements and ensuring the appropriate solutions are rolled out.
- Provide general agency pitch support for data as needed, and more proactively create consulting frameworks around clients' first-party data architecture, with guidance and recommendations on CDP vs. DMP implementation and the mar-tech stack.
- Establish working relationships and SOPs for implementation delivery on mar-tech projects.
- Contribute to team projects as the executive sponsor for client data initiatives, and support the agencies in audience planning strategies utilising all data assets.
- Own setup elements such as first-party data, tag integration (tag management systems, custom data integrations, CRM connections, etc.), and custom data partner integration development.
- Work with data partners to integrate their data sets into [m]insights (onboarding and testing data sets).
- Manage the mar-tech partners, working on JBPs to achieve shared goals of client enablement.
- Work closely with the global and regional product teams to test and roll out new features, especially with the cookieless future of audience planning tools.
Requirements
- 10+ years of experience in Technology Consulting role within Adtech/Martech
- Bachelor’s or master’s degree in a relevant field (Data, Technology, IT, Marketing)
- Certifications (or advanced knowledge) in DMP, CDP, Mar-tech & digital analytics platforms
- A strong understanding of all digital marketing channels
- Familiar with advanced analytics and data management concepts
- Proven ability to build and grow relationships with key stakeholders
- Excellent written and verbal communication skills.
- Flexibility to work in a cross-functional team but also have the initiative to problem-solve independently.
- Highly organised with demonstrated project management capabilities
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8 to 9 years of development experience in Data Engineering; B1/BII role.
- Minimum of 4 to 5 years in AWS data integration, with very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, covering not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience delivering at least 4 data warehouse or data lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a minimal sketch follows this list).
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning, and cost modelling; AWS certifications are an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- A Life Science & Healthcare domain background is a plus.
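To illustrate the Glue-plus-Lambda combination named above, a minimal sketch might look like the following; the Glue job name, trigger shape, and job arguments are hypothetical.

```python
# Minimal AWS Lambda handler sketch that kicks off a Glue job as one
# step of an ingestion workflow. All names are placeholders.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Assume an S3 put-notification trigger; pass the arriving object
    # to a (hypothetical) curation job as job arguments.
    record = event["Records"][0]["s3"]
    run = glue.start_job_run(
        JobName="curate-orders",
        Arguments={
            "--source_bucket": record["bucket"]["name"],
            "--source_key": record["object"]["key"],
        },
    )
    return {"JobRunId": run["JobRunId"]}
```

In a fuller design, a Step Functions state machine would typically sit above this, sequencing the Lambda trigger, the Glue run, and any validation steps.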
Qualifications
BE/BTech/ME/MTech
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list)
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
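For illustration, a minimal Airflow DAG of the kind such workflow tools manage could look like the sketch below; the task logic, names, and schedule are hypothetical.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform chain.
# Task bodies and names are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and load into the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # The >> operator declares the dependency that makes this a DAG.
    extract_task >> transform_task
```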
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, and detail-oriented, with teamwork spirit, excellent communication skills, and the ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka (a minimal streaming sketch follows this list)
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
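As a small taste of Spark-plus-Kafka stream processing, here is a minimal Structured Streaming sketch; the broker address, topic, and checkpoint path are hypothetical, and the spark-sql-kafka connector must be on the classpath.

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker, topic, and checkpoint path are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as bytes; cast the value to a string
# before any downstream parsing or aggregation.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/kafka-stream")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```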
- Several years of experience in designing web applications
- 4-7 years of Industry experience in IT or consulting organizations
- 3+ years of experience defining and delivering Informatica Cloud Data Integration & Application Integration enterprise applications in a lead developer role
- Must have working knowledge on integrating with Salesforce, Oracle DB, JIRA Cloud
- Must have working scripting knowledge (Windows or Node.js)
Soft Skills
- Superb interpersonal skills, both written and verbal, in order to effectively develop materials appropriate for a variety of audiences across business and technical teams
- Strong presentation skills; able to successfully present and defend a point of view to business and IT audiences
- Excellent analysis skills and the ability to rapidly learn and take advantage of new concepts, business models, and technologies
A 15-year-old US-based product company
- Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
- Must have strong skills in Data Analysis, Data Mapping for ETL processes, and Data Modeling.
- Experience with the SIF framework including real-time integration
- Should have experience in building C360 Insights using Informatica
- Should have good experience in creating performant designs using Mapplets, Mappings, and Workflows for Data Quality (cleansing) and ETL.
- Should have experience in building different data warehouse architectures, like Enterprise, Federated, and Multi-Tier architecture.
- Should have experience in configuring Informatica Data Director in reference to the data governance of users, IT Managers, and Data Stewards.
- Should have good knowledge of developing complex PL/SQL queries.
- Should have working experience on UNIX and shell scripting to run the Informatica workflows and to control the ETL flow.
- Should know about Informatica Server installation and have knowledge of the Administration console.
- Working experience with the Developer tool, in addition to Administration, is an added advantage.
- Working experience in Amazon Web Services (AWS) is an added advantage, particularly S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR (a short boto3 sketch follows this list).
- Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment.
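As a brief illustration of the AWS SDK work mentioned above, here is a hedged boto3 sketch; the bucket, stream, file, and record shape are hypothetical placeholders.

```python
# Minimal boto3 sketch: land a file in S3 and publish a record to
# Kinesis for downstream consumers. All names are illustrative.
import json

import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

# Upload an extract produced by an upstream ETL run.
s3.upload_file("daily_extract.csv", "example-bucket", "landing/daily_extract.csv")

# Publish a notification record announcing the new object.
kinesis.put_record(
    StreamName="etl-events",
    Data=json.dumps(
        {"object": "landing/daily_extract.csv", "status": "ready"}
    ).encode("utf-8"),
    PartitionKey="daily_extract",
)
```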