Amazon Redshift Job Openings in Hyderabad


The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions.
You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.
What You’ll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Architect and optimize data models and storage solutions for analytics and operational use.
- Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
- Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake); a minimal orchestration sketch follows this list.
- Write complex, high-performance SQL queries to support reporting and analytics needs.
- Implement observability, alerting, and data quality monitoring for critical pipelines.
- Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
- Contribute to the evolution of our next-generation data lakehouse and BI architecture.
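To make the pipeline and orchestration bullets concrete, here is a minimal sketch of a daily Airflow DAG with a trailing data-quality gate. It assumes Airflow 2.4+ is installed; the DAG name, task names, and placeholder callables are hypothetical, not part of this role's actual stack.

```python
# Minimal sketch: a daily ETL DAG with a data-quality gate at the end.
# Assumes Airflow 2.4+ (for the `schedule` argument); names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    """Pull raw rows from the source system (placeholder)."""


def transform_orders():
    """Clean and model the extracted rows (placeholder)."""


def check_quality():
    """Raise if the output looks wrong, so the run fails loudly (placeholder)."""


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    quality = PythonOperator(task_id="quality_check", python_callable=check_quality)

    extract >> transform >> quality
```

Putting the quality check inside the DAG means a bad load fails the run itself, which is the cheapest form of the observability and alerting mentioned above.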
What We’re Looking For
Minimum Qualifications
- 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
- Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
- Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
- Experience with BI tools such as Looker Studio, Power BI, or Tableau.
- Experience in building and maintaining robust ETL/ELT pipelines in production.
- Understanding of data quality, observability, and governance best practices.
Bonus Points
- Experience with dbt, Terraform, or Kubernetes.
- Familiarity with real-time data processing or streaming architectures.
- Understanding of data privacy, compliance, and security best practices in analytics and reporting.
Why You’ll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software.
- Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
- Modern tooling: Leverage the best of open-source and cloud-native technologies.
- Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

About the Role
We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable.
What You’ll Do
- Build and maintain reliable data pipelines and ETL/ELT workflows.
- Develop and optimize data models for analytics and internal tools.
- Work with team members to deliver clean, trusted datasets.
- Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake.
- Monitor data pipelines for quality, performance, and reliability (see the sketch after this list).
- Write clear documentation and contribute to test coverage and CI/CD processes.
- Help shape our data lakehouse architecture and platform roadmap.
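As a rough illustration of the monitoring bullet above, here is a minimal data-quality check sketch. It assumes a Redshift warehouse reachable over the PostgreSQL protocol via psycopg2; the table, columns, and connection details are hypothetical.

```python
# Minimal sketch: fail loudly if today's load is empty or has null keys.
# Table, column, and connection details below are hypothetical placeholders.
import psycopg2

ROWS_TODAY_SQL = """
    SELECT COUNT(*) FROM analytics.orders
    WHERE loaded_at::date = CURRENT_DATE;
"""
NULL_KEYS_SQL = "SELECT COUNT(*) FROM analytics.orders WHERE order_id IS NULL;"


def run_checks(conn) -> None:
    with conn.cursor() as cur:
        cur.execute(ROWS_TODAY_SQL)
        if cur.fetchone()[0] == 0:
            raise RuntimeError("analytics.orders: no rows loaded today")
        cur.execute(NULL_KEYS_SQL)
        nulls = cur.fetchone()[0]
        if nulls:
            raise RuntimeError(f"analytics.orders: {nulls} null order_id values")


if __name__ == "__main__":
    conn = psycopg2.connect(
        host="example-cluster.redshift.amazonaws.com",  # hypothetical endpoint
        port=5439, dbname="analytics", user="etl_user", password="...",
    )
    try:
        run_checks(conn)
    finally:
        conn.close()
```

Wired into a scheduler, a raised exception turns silent data drift into a visible failed run.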
What You Need
- 2–4 years of experience in data engineering or a backend data-related role.
- Strong skills in Python or another backend programming language.
- Experience working with SQL and distributed data systems (e.g., Spark, Kafka).
- Familiarity with NoSQL stores such as HBase.
- Comfortable writing efficient queries and building data workflows.
- Understanding of data modeling for analytics and reporting.
- Exposure to tools like Airflow or other workflow schedulers.
Bonus Points
- Experience with dbt, Databricks, or real-time data pipelines.
- Familiarity with cloud infrastructure tools like Terraform or Kubernetes.
- Interest in data governance, ML pipelines, or compliance standards.
Why Join Us?
- Work on data that supports meaningful software security outcomes.
- Use modern tools in a cloud-first, open-source-friendly environment.
- Join a team that values clarity, learning, and autonomy.
If you're excited about building impactful software and helping others do the same, this role offers room to grow as a technical leader and make a meaningful contribution.
Qualifications:
1. 10+ years of experience, with 3+ years as a Database Architect or in a related role
2. Technical expertise in data schemas, Amazon Redshift, Amazon S3, and Data Lakes
3. Analytical skills in data warehouse design and business intelligence
4. Strong problem-solving and strategic thinking abilities
5. Excellent communication skills
6. Bachelor's degree in Computer Science or related field; Master's degree preferred
Skills Required:
1. Database architecture and design
2. Data warehousing and business intelligence
3. Cloud-based data infrastructure (Amazon Redshift, S3, Data Lakes)
4. Data governance and security
5. Analytical and problem-solving skills
6. Strategic thinking and communication
7. Collaboration and team management
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark (see the sketch after this list).
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
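As a loose sketch of the PySpark ETL item flagged above: read raw CSV from S3, clean it, and write partitioned Parquet back to S3 for a downstream warehouse load. The bucket, paths, and column names are hypothetical.

```python
# Minimal PySpark ETL sketch: S3 CSV in, deduplicated/typed Parquet out.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

curated = (
    raw.dropDuplicates(["order_id"])                      # drop duplicate keys
       .withColumn("order_date", F.to_date("order_date"))  # string -> date
       .filter(F.col("amount").isNotNull())               # reject rows missing amount
)

curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```

Writing Parquet to S3 rather than straight into the warehouse keeps the Spark job decoupled; Redshift can then ingest the files with a COPY command.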
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks (see the sketch after this list).
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
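A hedged sketch of the event-driven responsibilities above: an S3-triggered Lambda handler that starts a Step Functions execution for each new object. The state machine ARN, environment variable, and event wiring are hypothetical.

```python
# Minimal Lambda sketch: on S3 ObjectCreated events, kick off the ETL
# state machine. The ARN env var and event wiring are hypothetical.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["ETL_STATE_MACHINE_ARN"]  # hypothetical env var


def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(payload),
        )
    return {"executions_started": len(records)}
```

Keeping the Lambda thin and pushing the actual ETL steps into the state machine makes retries and branching visible in the Step Functions console.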
Experience: 8+ Years
Work Location: Hyderabad
Mode of work: Work from Office
Senior Data Engineer / Architect
Summary of the Role
The Senior Data Engineer / Architect is a key role within the data and technology team, responsible for engineering and building data solutions that enable seamless use of data across the organization.
Core Activities
- Work closely with the business teams and business analysts to understand and document data usage requirements
- Develop designs relating to data engineering solutions, including data pipelines, ETL, data warehouse, data mart, and data lake solutions (a warehouse-loading sketch follows this list)
- Develop data designs for reporting and other data use requirements
- Develop data governance solutions that provide services such as data security, data quality, and data lineage
- Lead implementation of data use and data quality solutions
- Provide operational support for users for the implemented data solutions
- Support development of solutions that automate reporting and business intelligence requirements
- Support development of machine learning and AI solutions using large-scale internal and external datasets
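To make the warehouse-loading activity above concrete, here is a hedged sketch of loading curated Parquet files from S3 into Redshift with a COPY statement issued from Python. The table, S3 path, and IAM role ARN are hypothetical placeholders.

```python
# Minimal sketch: Redshift COPY of curated Parquet files from S3.
# Table name, S3 path, and IAM role ARN are hypothetical placeholders.
import psycopg2

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-bucket/curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
"""


def load_orders(conn) -> None:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)
    conn.commit()
```

COPY is Redshift's bulk-ingest path and is usually far faster than row-by-row INSERTs from the ETL job.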
Other activities
- Work on and manage technology projects as and when required
- Provide user and technical training on data solutions
Skills and Experience
- 5-8 years of experience in a senior data engineer / architect role
- Strong experience with AWS-based data solutions, including Amazon Redshift, analytics, and data governance services
- Strong experience with industry standard data governance / data quality solutions
- Strong experience managing a PostgreSQL data environment
- Background as a software developer working in AWS / Python will be beneficial
- Experience with BI tools like Power BI and Tableau
- Strong written and oral communication skills

- Excellent knowledge of Core Java (J2SE) and J2EE technologies.
- Hands-on experience with RESTful services and API design is a must.
- Knowledge of microservices architecture is a must.
- Knowledge of design patterns is a must.
- Strong knowledge of exception handling and logging mechanisms is a must.
- Agile Scrum participation experience; work experience with several agile teams on an application built with microservices and event-based architectures, deployed on hybrid (on-prem/cloud) environments.
- Good knowledge of the Spring framework (MVC, Cloud, Data, Security, etc.) and ORM frameworks like JPA/Hibernate.
- Experience managing a source code base through version control tools like SVN, GitHub, Bitbucket, etc.
- Experience using and configuring continuous integration tools such as Jenkins, Travis, GitLab, etc.
- Experience in design and development of SaaS/PaaS-based architectures and tenancy models.
- Experience in SaaS/PaaS-based application development serving a high volume of subscribers/customers.
- Awareness and understanding of data security and privacy.
- Experience performing Java code reviews using tools like SonarQube.
- Good understanding of the end-to-end software development lifecycle; able to read and understand requirements and design documents.
- Good analytical skills; self-driven.
- Good communication and interpersonal skills.
- Open to learning new technologies and domains.
- A good team player, ready to take up new challenges, with active communication and coordination with clients and internal stakeholders.
Requirements: Skills and Qualifications
6-8 years of experience developing Java/J2EE-based enterprise web applications
Languages: Java, J2EE, and Python
Databases: MySQL, Oracle, SQL Server, PostgreSQL, Redshift, MongoDB
DB Scripting: SQL and PL/SQL
Frameworks: Spring, Spring Boot, Jersey, Hibernate, and JPA
OS: Windows, Linux/Unix
Cloud Services: AWS and Azure
Version Control / DevOps tools: Git, Bitbucket, and Jenkins
Message brokers: RabbitMQ and Kafka
Deployment: Tomcat, Docker, and Kubernetes
Build Tools: Gradle/Maven