11+ Teradata Jobs in Hyderabad
Apply to 11+ Teradata jobs in Hyderabad on CutShort.io. Explore the latest Teradata job opportunities across top companies like Google, Amazon & Adobe.
We are looking for a Teradata developer for one of our premium clients. Kindly contact me if you are interested.
Business Analyst (EDI Domain)
Role Summary:
Acts as the bridge between business users and the technical team, gathering requirements and ensuring alignment with migration goals.
Key Responsibilities:
- Document functional requirements for EDI migration.
- Analyze customer-specific configurations and needs.
- Facilitate workshops with SMEs and stakeholders.
- Validate technical solutions against business expectations.
- Support change management and training.
Skills Required:
- Experience in logistics and transportation EDI systems.
- Strong understanding of customer-specific EDI needs.
- Requirements gathering and documentation skills.
- Excellent stakeholder communication and facilitation skills.
- Familiarity with process mapping tools.

Consulting & implementation services in the Oil & Gas, Mining, and Manufacturing industries
- Data Engineer
Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices (a minimal sketch follows this list).
- Scripting languages: Python and PySpark
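As a rough illustration of the file-based ingestion skills above, here is a minimal PySpark sketch that reads Parquet files from an S3 data lake and writes a partitioned copy; the bucket paths and the `order_date` column are hypothetical, not a specific client layout.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-ingestion").getOrCreate()

# Read raw Parquet files from S3 (the s3:// scheme works on Glue/EMR;
# open-source Spark typically needs s3a:// plus the hadoop-aws package).
orders = spark.read.parquet("s3://example-datalake/raw/orders/")

# Partition by a date column on write, a layout that plays well with
# engines such as Athena and Snowflake external tables.
(orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-datalake/curated/orders/"))
```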
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources that expose data through various technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from proprietary systems; implement ingestion and processing using Big Data technologies
- Data processing/transformation using technologies such as Spark and cloud services; understand your part of the business logic and implement it in a language supported by the base data platform
- Develop automated data quality checks to ensure the right data enters the platform, and verify the results of calculations (see the ingestion and quality-check sketch after this list)
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior team members and promote industry best practices
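As a hedged illustration of the ingestion and quality-check responsibilities above, here is a minimal PySpark sketch; the JDBC URL, table, credentials, `customer_id` column, and S3 path are assumptions, and the PostgreSQL JDBC driver is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-and-check").getOrCreate()

# Hypothetical RDBMS ingestion over JDBC; URL, credentials, and
# table name are placeholders for whatever source system applies.
customers = (spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/crm")
    .option("dbtable", "public.customers")
    .option("user", "etl_user")
    .option("password", "change-me")
    .load())

# Simple automated quality checks before the batch enters the platform:
# reject it if the key column has nulls or duplicates.
total = customers.count()
null_keys = customers.filter(F.col("customer_id").isNull()).count()
distinct_keys = customers.select("customer_id").distinct().count()

if null_keys > 0:
    raise ValueError(f"Quality check failed: {null_keys} null customer_id rows")
if distinct_keys != total:
    raise ValueError(f"Quality check failed: {total - distinct_keys} duplicate rows")

# Land the validated batch in the data lake as Parquet.
customers.write.mode("append").parquet("s3://example-datalake/raw/customers/")
```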
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g., SAS, SQL, R, Python)
  - Database technologies (e.g., PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization tools (e.g., Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Job Description
Job Title: Data Engineer
Location: Hyderabad, India
Job Type: Full Time
Experience: 5 – 8 Years
Working Model: On-Site (No remote or work-from-home options available)
Work Schedule: Mountain Time Zone (3:00 PM to 11:00 PM IST)
Role Overview
The Data Engineer will be responsible for designing and implementing scalable backend systems, leveraging Python and PySpark to build high-performance solutions. The role requires a proactive and detail-oriented individual who can solve complex data engineering challenges while collaborating with cross-functional teams to deliver quality results.
Key Responsibilities
- Develop and maintain backend systems using Python and PySpark.
- Optimise and enhance system performance for large-scale data processing (see the sketch after this list).
- Collaborate with cross-functional teams to define requirements and deliver solutions.
- Debug, troubleshoot, and resolve system issues and bottlenecks.
- Follow coding best practices to ensure code quality and maintainability.
- Utilise tools like Palantir Foundry for data management workflows (good to have).
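For the performance item above, here is a minimal PySpark sketch of two common optimisations: caching a DataFrame that several aggregations reuse, and coalescing output to avoid many tiny files. The paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-sketch").getOrCreate()

# Hypothetical input; the schema and column names are placeholders.
events = spark.read.parquet("s3://example-bucket/events/")

# Cache a DataFrame that is reused by several downstream aggregations,
# so the source is scanned once instead of once per action.
events_clean = events.filter(F.col("event_type").isNotNull()).cache()

daily = events_clean.groupBy("event_date").count()
by_type = events_clean.groupBy("event_type").count()

# Coalesce before writing to avoid producing thousands of tiny files,
# a common large-scale processing bottleneck.
daily.coalesce(1).write.mode("overwrite").parquet("s3://example-bucket/daily/")
by_type.coalesce(1).write.mode("overwrite").parquet("s3://example-bucket/by_type/")
```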
Qualifications
- Strong proficiency in Python backend development.
- Hands-on experience with PySpark for data engineering.
- Excellent problem-solving skills and attention to detail.
- Good communication skills for effective team collaboration.
- Experience with Palantir Foundry or similar platforms is a plus.
Preferred Skills
- Experience with large-scale data processing and pipeline development.
- Familiarity with agile methodologies and development tools.
- Ability to optimise and streamline backend processes effectively.
Description:
We are looking for a highly motivated Full Stack Backend Software Intern to join our team. The ideal candidate should have a strong interest in AI, LLM (Large Language Models), and related technologies, along with the ability to work independently and complete tasks with minimal supervision.
Responsibilities:
- Research and gather requirements for backend software projects.
- Develop, test, and maintain backend components of web applications.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Optimize applications for maximum speed and scalability.
- Implement security and data protection measures.
- Stay up-to-date with emerging technologies and industry trends.
- Complete tasks with minimal hand-holding and supervision.
- Assist with frontend tasks using JavaScript and React if required.
Requirements:
- Proficiency in backend development languages such as Python or Node.js
- Familiarity with frontend technologies like HTML, CSS, JavaScript, and React.
- Experience with relational and non-relational databases.
- Understanding of RESTful APIs and microservices architecture (a minimal endpoint example follows this list).
- Knowledge of AI, LLM, and related technologies is a plus.
- Ability to work independently and complete tasks with minimal supervision.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
- Currently pursuing or recently completed a degree in Computer Science or related field.
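As a minimal illustration of the RESTful API requirement above, here is a hedged Flask sketch of a small JSON resource endpoint; the `tasks` resource, its fields, and the in-memory store are invented for the example.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database.
tasks = {}
next_id = 1

@app.route("/tasks", methods=["POST"])
def create_task():
    """Create a task from a JSON payload and return it with a 201 status."""
    global next_id
    payload = request.get_json(force=True)
    task = {"id": next_id, "title": payload.get("title", "")}
    tasks[next_id] = task
    next_id += 1
    return jsonify(task), 201

@app.route("/tasks/<int:task_id>", methods=["GET"])
def get_task(task_id):
    """Fetch a single task by id, or return 404 if it does not exist."""
    task = tasks.get(task_id)
    if task is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(task)

if __name__ == "__main__":
    app.run(debug=True)
```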
Benefits:
- Opportunity to work on cutting-edge technologies in AI and LLM.
- Hands-on experience in developing backend systems for web applications.
- Mentorship from experienced developers and engineers.
- Flexible working hours and a supportive work environment.
- Possibility of a full-time position based on performance.
If you are passionate about backend development, AI, and LLM, and are eager to learn and grow in a dynamic environment, we would love to hear from you. Apply now to join our team as a Full Stack Backend Software Intern.
Experience: 3+ years in Cloud Architecture
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
Cloud Architect / Lead
Role Overview
- Senior engineer with a strong background in cloud-related technologies and architectures, able to design target cloud architectures to transform existing ones together with the in-house team, and to configure and build cloud architectures hands-on while guiding others.
Key Knowledge
- 3-5+ years of experience with AWS, GCP, or Azure technologies
- Certification on one or more of the major cloud platforms is preferred
- Strong hands-on experience with technologies such as Terraform, Kubernetes, Docker, and container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry or similar experience with critical operations
Job Location: Pune / Bangalore / Hyderabad / Indore
- Very good knowledge of MuleSoft components.
- Prior work experience in setting up a Center of Excellence (COE) using MuleSoft integration software.
- Good understanding of various integration patterns.
- Ability to deliver projects independently with little or no supervision.
- Previous experience working in a multi-geographic team.
- Previous experience with best programming practices.
- Good written and oral communication skills – English.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience: 7+ years
Location: Pune / Hyderabad
Skills :
- Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Participate and contribute to solution design and solution architecture for implementing Big Data projects on-premises and in the cloud
- Hands-on technical experience in the design, coding, development, and management of large Hadoop implementations
- Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume, and Sqoop on large Big Data and Data Warehousing projects, with a Java-, Python-, or Scala-based Hadoop programming background (see the sketch after this list)
- Proficient with various development methodologies such as waterfall, agile/scrum, and iterative
- Good interpersonal skills and excellent communication skills for US- and UK-based clients
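To make the Hive/Spark SQL skills above concrete, here is a minimal PySpark sketch of a Spark SQL query against a Hive table; the `warehouse.orders` database, table, and columns are placeholders, not a specific client schema.

```python
from pyspark.sql import SparkSession

# Hive support lets Spark SQL resolve tables from the Hive metastore.
spark = (SparkSession.builder
    .appName("hive-sql-sketch")
    .enableHiveSupport()
    .getOrCreate())

# A typical analytical query over a Hive-backed table: monthly order volume.
monthly_volume = spark.sql("""
    SELECT date_format(order_ts, 'yyyy-MM') AS month,
           COUNT(*)                         AS orders
    FROM   warehouse.orders
    GROUP  BY date_format(order_ts, 'yyyy-MM')
    ORDER  BY month
""")

monthly_volume.show()
```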
About Us!
A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud through automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETL tools such as Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers: Google, Microsoft, Amazon, and Snowflake.
We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican – Automated Data Validation Product, which helps automate and accelerate data migration to the cloud
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
1. Strong fundamentals: OOP concepts, exception handling, coding standards, and logging
2. Creating custom, general-use modules and components that extend the elements and modules of core Angular
3. Creating configuration, build, and test scripts for Continuous Integration environments
4. Communicating with external web services and processing data
5. Experience with offline storage, threading, and performance tuning
6. Reviewing code, maintaining code quality, and suggesting best practices
7. Knowledge of and experience with data science and programming languages
8. Demonstrable ability to optimize code; strong analytical skills for effective problem solving


