
About Teradata
Required Skill Set:
- Data Model & Mapping
- MS SQL Database
- Analytics SQL Query
- Genesys Cloud Reporting & Analytics API
- Snowflake (good to have)
- Cloud Exposure – AWS or Azure
Technical Experience –
· 5–8 years of experience, preferably at a technology or financial firm
· Strong understanding of data analysis & reporting tools.
· Experience with data mining & machine learning techniques.
· Excellent communication & presentation skills
· Must have at least 2–3 years of experience in data modeling, analysis, and mapping
· Must have hands-on experience in database tools & technologies
· Must have exposure to Genesys cloud, WFM, GIM, Genesys Analytics API
· Good to have: experience with or exposure to Salesforce, AWS or Azure, and Genesys Cloud
· Ability to work independently & as part of a team
· Strong attention to detail and accuracy.
Work Scope –
- Data model similar to the GIM database, based on Genesys Cloud data.
- API to column data mapping.
- Data model for business analytics
- Database artifacts
- Scripting – Python
- Autosys, TWS job setup.
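The API-to-column mapping in the work scope above could be captured in a small Python routine. A minimal sketch, assuming hypothetical Genesys Cloud field names and GIM-style column names (neither reflects the actual Genesys or GIM schemas):

```python
# Hypothetical sketch: flattening Genesys Cloud Analytics API fields into
# GIM-style database columns. All field and column names are illustrative.
API_TO_COLUMN = {
    "conversationId": "CALL_ID",
    "conversationStart": "START_TS",
    "originatingDirection": "CALL_DIRECTION",
    "participants.purpose": "PARTY_ROLE",
}

def map_record(api_record: dict) -> dict:
    """Flatten one API record into a row keyed by target column names."""
    row = {}
    for api_field, column in API_TO_COLUMN.items():
        value = api_record
        for part in api_field.split("."):  # walk dotted paths
            if isinstance(value, dict):
                value = value.get(part)
            else:
                value = None
                break
        row[column] = value
    return row

record = {
    "conversationId": "abc-123",
    "conversationStart": "2024-01-01T00:00:00Z",
    "originatingDirection": "inbound",
    "participants": {"purpose": "agent"},
}
print(map_record(record))
```

A mapping table like this keeps the API-to-column documentation and the loading script in one place, so schema changes touch a single dictionary.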
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
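The retraining pipelines mentioned above (the "CT" in CI/CD/CT) usually hinge on a drift-detection gate. A minimal sketch of one such decision step, using an illustrative mean-shift metric and threshold rather than any prescribed method:

```python
# Hedged sketch of a retraining trigger: compare live feature statistics
# against the training baseline and flag retraining when relative drift
# exceeds a threshold. Metric and threshold are illustrative choices.
from statistics import mean

def should_retrain(baseline: list, live: list, threshold: float = 0.2) -> bool:
    """Return True when the live mean drifts beyond `threshold`
    (expressed as a fraction of the baseline mean)."""
    base_mean = mean(baseline)
    drift = abs(mean(live) - base_mean) / abs(base_mean)
    return drift > threshold

print(should_retrain([10, 11, 9, 10], [10, 10, 11, 9]))   # minor drift
print(should_retrain([10, 11, 9, 10], [15, 16, 14, 15]))  # mean shifted ~50%
```

In a real MLOps setup this check would run on a schedule (or on data arrival) and kick off the training pipeline, with the resulting model versioned before promotion.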
Required Skills:
- Strong expertise in ML algorithms, linear regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
Desired Competencies:
- Expertise in Azure Data Factory V2
- Expertise in other Azure components such as Data Lake Store, SQL Database, and Databricks
- Must have working knowledge of Spark programming
- Good exposure to data projects dealing with data design and source-to-target documentation, including defining transformation rules
- Strong knowledge of the CI/CD process
- Experience in building Power BI reports
- Understanding of the different components: pipelines, activities, datasets & linked services
- Exposure to dynamic configuration of pipelines using datasets and linked services
- Experience in designing, developing, and deploying pipelines to higher environments
- Good knowledge of file formats for flexible usage and file location objects (SFTP, FTP, local, HDFS, ADLS, Blob, Amazon S3, etc.)
- Strong knowledge of SQL queries
- Must have worked in full life-cycle development from functional design to deployment
- Should have working knowledge of Git and SVN
- Good experience in establishing connections with heterogeneous sources such as Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, and various databases
- Should have working knowledge of the different resources available in Azure, such as Storage Account, Synapse, Azure SQL Server, Azure Databricks, and Azure Purview
- Any experience related to metadata management, data modelling, and related tools (Erwin, ER Studio, or others) would be preferred
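The source-to-target documentation with transformation rules mentioned above can be expressed directly in code. A small illustrative sketch, with hypothetical column names and rules that are examples only, not from any specific project:

```python
# Illustrative source-to-target mapping: each target column names its
# source column and the transformation rule applied to it.
RULES = {
    "CUST_NAME": ("customer_name", str.strip),
    "CUST_COUNTRY": ("country_code", str.upper),
    "IS_ACTIVE": ("active_flag", lambda v: v == "Y"),
}

def transform(source_row: dict) -> dict:
    """Apply each rule: pick the source column, run its transformation."""
    return {
        target: rule(source_row[source])
        for target, (source, rule) in RULES.items()
    }

src = {"customer_name": " Ada Lovelace ", "country_code": "gb", "active_flag": "Y"}
print(transform(src))
# {'CUST_NAME': 'Ada Lovelace', 'CUST_COUNTRY': 'GB', 'IS_ACTIVE': True}
```

Keeping the rules in one declarative table mirrors the source-to-target document, so reviewers can diff the mapping against the spec.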
Preferred Qualifications:
- Bachelor's degree in Computer Science or Technology
- Proven success in contributing to a team-oriented environment
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
Qualifications
BE/B.Tech
KEY RESPONSIBILITIES :
You will join a team designing and building a data warehouse covering both relational and dimensional models, developing reports, data marts, and other extracts, and delivering these via SSIS, SSRS, SSAS, and Power BI. The role is seen as vital to delivering a single version of the truth on the Client's data and to delivering MI & BI that will enable both operational and strategic decision making. You will be able to take responsibility for projects over the entire software lifecycle and work with minimal supervision. This includes technical analysis, design, development, and test support, as well as managing delivery to production. The initial project being resourced is the development and implementation of a Data Warehouse and associated MI/BI functions.
Principal Activities:
1. Interpret written business requirements documents.
2. Specify (High Level Design and Tech Spec), code, and write automated unit tests for new aspects of the MI/BI Service.
3. Write clear and concise supporting documentation for deliverable items.
4. Become a member of the skilled development team, willing to contribute, share experiences, and learn as appropriate.
5. Review and contribute to requirements documentation.
6. Provide third-line support for internally developed software.
7. Create and maintain continuous deployment pipelines.
8. Help maintain Development Team standards and principles.
9. Contribute and share learning and experiences with the greater Development team.
10. Work within the company's approved processes, including design and service transition.
11. Collaborate with other teams and departments across the firm.
12. Be willing to travel to other offices when required.
Location – Bangalore

We are seeking a Production Support Engineer to join our team.
Responsibilities:
- Be the first line of defense for production and test environment issues.
- Work collaboratively with the team to identify, manage, and resolve ongoing incidents.
- Troubleshoot and connect with appropriate teams to effectively triage issues impacting test and production environments.
- Understand system architecture, upstream, and downstream dependencies to enable effective participation in triage and restoration activities.
- Perform systems monitoring of applications within the IRS domain after service restoration and post patching, maintenance, and upgrades.
- Create necessary service tickets and ensure tickets are routed to the appropriate technical teams.
- Provide weekend support for various activities including patching, release deployments, security updates, and 3rd party updates.
- Keep up with info alerts, patching alerts, and delivery partners' activities.
- Update stakeholders to plan for upcoming maintenance as well as alert them about service issues and restoration.
- Manage and communicate about upcoming maintenance in the test environment on a daily basis.
- Liaise with various stakeholders to gain approval for alert communications, including confirmation before an all-clear communication.
- Work closely with testing and development teams to prepare for infrastructure updates and release readiness.
- Submit Application Redirects tickets for planned maintenance after gaining approval from management.
- Participate in analysis and improvement of system performance.
- Host daily operational standup.
- Provide additional support to existing production support procedures and process improvements.
- Provide regular status reports to management on application status and other metrics.
- Collaborate with management to improve and customize reports related to production support.
- Plan and manage support for incident management tools and processes.
Requirements:
- Bachelor's Degree in computer science, engineering, or related field.
- AWS Cloud certification.
- 3+ years of relevant IT work experience with cloud experience.
- Knowledge of Java and microservice development and deployments.
- Understanding of the business processes behind applications.
- Strong analytical, problem-solving, negotiation, task and project management, and organizational skills.
- Strong oral and written communication skills, including process documentation.
- Proficiency in Microsoft Office applications (Word, PowerPoint, Excel, and Project).
- Proficiency in knowledge of computer systems, databases, and SharePoint.
- Knowledge of Splunk and AppDynamics.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/gQWFK?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
(Kindly note this is not a development role. Experience and/or interest in Production Support/Operations is mandatory.)
C2H position (long contract) with strong potential for full-time conversion at a brilliant company.
- Deep hands-on experience in designing & developing Python based applications
- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores such as Elasticsearch (and the broader Elastic Stack: Logstash, Kibana)
- Development experience with modern JavaScript based front end frameworks, especially Vue.js
- Experience in test automation and TDD
- Experience testing interactive applications with unit testing frameworks for the various technology stacks
- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
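The client-side API integration and TDD requirements above can be combined in a small pattern: make the HTTP transport injectable so the client is unit-testable without a live server. A hedged sketch; the endpoint, payload shape, and `ApiClient` name are all hypothetical:

```python
# Minimal sketch of a client-side API integration with an injectable
# transport, so the client can be exercised in tests without a network.
# Endpoint and payload shapes are illustrative, not a real API.
import json
from typing import Callable

class ApiClient:
    def __init__(self, base_url: str, transport: Callable[[str, bytes], bytes]):
        self.base_url = base_url.rstrip("/")
        self.transport = transport  # (url, request body) -> response bytes

    def create_item(self, payload: dict) -> dict:
        url = f"{self.base_url}/items"
        body = json.dumps(payload).encode()
        return json.loads(self.transport(url, body))

# A fake transport stands in for requests/urllib during unit tests.
def fake_transport(url: str, body: bytes) -> bytes:
    received = json.loads(body)
    return json.dumps({"id": 1, **received}).encode()

client = ApiClient("https://api.example.com", fake_transport)
result = client.create_item({"name": "widget"})
print(result)  # {'id': 1, 'name': 'widget'}
```

In production the same client would be constructed with a real transport (e.g. one wrapping an HTTP library), leaving the integration logic untouched.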
Role: Senior Software Developer
Experience: 3-6 years of experience.
Work Location: Bangalore
Job responsibility: The Software Developer contributes to creating the customer's solution during the build phase of the software development life cycle. The Software Engineer is responsible for performing the detailed design of application and technical architecture components and classes in accordance with the specification provided by the Solution Architect, for coding software components, for the early testing phases (component testing), and for system testing support.
Requirement:
Must Have skills: Java/J2EE, SQL, web services, Spring Boot, Elasticsearch
Job description
- Hands-on experience in Java, Python, and scripting
- Design, implement, deploy, and support web-based applications and web services using server technology stacks that include Java and MySQL
- Experience in Java/J2EE using web frameworks (Play or Spring Boot), REST API development, and ORM (JPA, Hibernate, Ebean, etc.)
- In depth knowledge of Design patterns and Data structures
- Databases - MySQL, Oracle, SQL
- Solid understanding of concepts like Web Services, SOA, REST APIs, Message Queue, Caching, Distributed/Scalable Architecture
- Secure application development best practices, such as OWASP
- Document and maintain all design documents and involve in review process.
- Experienced problem solving and debugging skills.
- Good verbal and written communication and interpersonal skills
- Experience in Agile methodology is added advantage
- Knowledge of cloud platforms, e.g., Azure
- Working knowledge of Unix
- Experience in Elasticsearch
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to other AWS regions.
• Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
• Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Keep up to date with upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile :
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands on Experience in Docker is a big plus.
• Experience working in an Agile, fast paced, DevOps environment.
• Strong Knowledge in DB such as MongoDB / MySQL / DynamoDB / Redis / Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git
• Strong experience with Linux-based infrastructure and Linux administration
• Experience with installing and configuring application servers such as WebLogic, JBoss and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools such as ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and to train others on technical and procedural topics.









