11+ Information architecture Jobs in Hyderabad | Information architecture Job openings in Hyderabad
Job Description
We are looking for a user experience (UX) designer who can understand our business requirements and technical limitations, conceive and conduct user research, interviews, and surveys, and translate the findings into sitemaps, user flows, customer journey maps, wireframes, mockups, and prototypes.
Responsibilities
- Translate concepts into user flows, wireframes, mockups, and prototypes that lead to intuitive user experiences.
- Facilitate the product vision by researching, conceiving, sketching, prototyping and user-testing experiences for digital products.
- Design and deliver wireframes, user stories, user journeys, and mockups optimized for a wide range of devices and interfaces.
- Identify design problems and devise elegant solutions.
- Take a user-centred design approach and rapidly test and iterate your designs.
- Collaborate with other team members and stakeholders.
- Ask smart questions, take risks and champion new ideas.
Requirements
- Three or more years (3+) of UX design experience. Preference will be given to candidates who have experience designing complex solutions for complete digital environments.
- Expertise in standard UX software for sketching, wireframing, user research, and prototyping (e.g., Sketch, Photoshop); experience with OmniGraffle, Axure RP, InVision, or UXPin is an added advantage.
- Ability to understand detailed requirements and design complete user experiences that meet the product’s needs and vision.
- Ability to iterate designs and solutions efficiently and intelligently.
- Ability to clearly and effectively communicate design processes, ideas, and solutions to teams and clients.
- A clear understanding of the importance of user-centred design and design thinking.
- Ability to work effectively in a team setting including synthesizing abstract ideas into concrete design implications.
- Be excited about collaborating and communicating closely with teams and other stakeholders via a distributed model, to regularly deliver design solutions for approval.
- Be passionate about resolving user pain points through great design.
- Be open to receiving feedback and constructive criticism.
Store Manager – Apparel
The Store Manager – Apparel is responsible for overseeing all store operations, ensuring excellent customer service, driving sales performance, managing inventory, and leading a high-performing team to achieve business goals. The ideal candidate will combine strong leadership skills with retail expertise in the fashion/apparel industry.
Key Responsibilities
1. Store Operations
- Oversee daily store operations to ensure efficiency and compliance with company policies.
- Ensure the store meets visual merchandising standards and maintains an attractive, organized presentation.
- Manage cash handling, banking, loss prevention, and audit procedures.
- Ensure compliance with health, safety, and security policies.
2. Sales & Business Performance
- Drive sales to achieve or exceed monthly and annual targets.
- Monitor key performance indicators (KPIs) such as conversion rate, average transaction value (ATV), and customer retention.
- Analyze sales trends and optimize strategies accordingly.
- Implement promotions, campaigns, and pricing strategies.
3. Customer Experience
- Deliver exceptional customer service and resolve customer complaints or issues promptly.
- Train staff to provide product knowledge, styling advice, and personalized service.
- Foster a welcoming and engaging shopping environment.
4. Team Leadership & Development
- Recruit, onboard, and develop store staff.
- Coach, motivate, and lead the team to enhance performance and productivity.
- Conduct performance reviews, set goals, and manage staffing schedules.
- Encourage teamwork and accountability.
5. Inventory & Stock Management
- Ensure accurate stock levels and timely replenishment.
- Supervise receiving, stock processing, and inventory audits.
- Minimize shrinkage and manage stock integrity.
6. Reporting & Administration
- Prepare and submit reports on sales, stock, payroll, and store performance.
- Manage budgets and operational expenses.
- Coordinate with merchandising, HR, and regional teams.
Qualifications & Skills
Education:
- Bachelor’s degree in Business Administration, Retail Management, Fashion, or related field (preferred)
Experience:
- Minimum 3–5 years of retail experience, with at least 2 years in a supervisory or managerial role in apparel/fashion retail.
We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.
You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.
This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.
Key Responsibilities
Architecture & System Design
- Design low-latency, real-time voice agent architectures for local/on-prem deployment
- Define scalable architectures for ASR → LLM → TTS pipelines
- Optimize systems for GPU utilization, concurrency, and throughput
- Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)
Voice & Conversational AI
- Design and integrate:
- Automatic Speech Recognition (ASR)
- Natural Language Understanding / LLMs
- Dialogue management & conversation state
- Text-to-Speech (TTS)
- Build streaming voice pipelines with sub-second response times
- Enable multi-turn, interruptible, natural conversations
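The ASR → LLM → TTS responsibilities above can be sketched as a minimal streaming pipeline. This is a hedged illustration only: the stage functions (`asr_stream`, `llm_reply`, `tts`) are hypothetical stand-ins for real engines, and the interruption flag shows where barge-in handling would hook in.

```python
from dataclasses import dataclass, field
from typing import Iterator, List

# Stand-in stages; in production these would wrap real ASR/LLM/TTS engines.
def asr_stream(audio_chunks: Iterator[str]) -> Iterator[str]:
    """Emit a partial transcript as each audio chunk arrives."""
    words = []
    for chunk in audio_chunks:
        words.append(chunk)
        yield " ".join(words)

def llm_reply(transcript: str) -> Iterator[str]:
    """Stream response tokens instead of waiting for the full answer."""
    for token in f"echo: {transcript}".split():
        yield token

def tts(token: str) -> bytes:
    """Synthesize one token; a real TTS engine would return audio frames."""
    return token.encode("utf-8")

@dataclass
class VoiceAgent:
    interrupted: bool = False          # set by a barge-in detector in a real system
    spoken: List[bytes] = field(default_factory=list)

    def handle_turn(self, audio_chunks: Iterator[str]) -> str:
        transcript = ""
        for transcript in asr_stream(audio_chunks):  # keep the final hypothesis
            pass
        for token in llm_reply(transcript):
            if self.interrupted:       # stop speaking mid-response on interruption
                break
            self.spoken.append(tts(token))
        return transcript

agent = VoiceAgent()
final = agent.handle_turn(iter(["book", "a", "table"]))
print(final)                   # "book a table"
print(b" ".join(agent.spoken)) # b"echo: book a table"
```

In a production system each stage would run concurrently over a streaming transport, which is what keeps end-to-end latency sub-second.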
Model & Inference Engineering
- Deploy and optimize local LLMs and speech models (quantization, batching, caching)
- Select and fine-tune open-source models for voice use cases
- Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar
Infrastructure & Production
- Design GPU-based inference clusters (bare metal or Kubernetes)
- Implement autoscaling, load balancing, and GPU scheduling
- Establish monitoring, logging, and performance metrics for voice agents
- Ensure security, privacy, and data isolation for local deployments
Leadership & Collaboration
- Set architectural standards and best practices
- Mentor ML and platform engineers
- Collaborate with product, infra, and applied research teams
- Drive decisions from prototype → production → scale
Required Qualifications
Technical Skills
- 7+ years in software / ML systems engineering
- 3+ years designing production AI systems
- Strong experience with real-time voice or conversational AI systems
- Deep understanding of LLMs, ASR, and TTS pipelines
- Hands-on experience with GPU inference optimization
- Strong Python and/or C++ background
- Experience with Linux, Docker, Kubernetes
AI & ML Expertise
- Experience deploying open-source LLMs locally
- Knowledge of model optimization:
- Quantization
- Batching
- Streaming inference
- Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)
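As a rough illustration of the quantization item above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization in plain Python. Real toolchains (TensorRT, vLLM, etc.) implement far more sophisticated schemes; this only shows the core scale-and-round idea.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map [-max|w|, max|w|] to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [x * scale for x in q]

w = [0.52, -1.27, 0.08, 1.27]
q, s = quantize_int8(w)
print(q)  # integers in the int8 range
w_hat = dequantize(q, s)
print(max(abs(a - b) for a, b in zip(w, w_hat)))  # small reconstruction error
```

The payoff is 4x less memory than float32 and faster integer kernels, at the cost of the reconstruction error shown above.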
Systems & Scaling
- Experience with high-QPS, low-latency systems
- Knowledge of distributed systems and microservices
- Understanding of edge or on-prem AI deployments
Preferred Qualifications
- Experience building AI voice agents or call automation systems
- Background in speech processing or audio ML
- Experience with telephony, WebRTC, SIP, or streaming audio
- Familiarity with Triton Inference Server / vLLM
- Prior experience as Tech Lead or Principal Engineer
What We Offer
- Opportunity to architect state-of-the-art AI voice systems
- Work on real-world, high-scale production deployments
- Competitive compensation and equity (if applicable)
- High ownership and technical influence
- Collaboration with top-tier AI and infrastructure talent
Supercharge Your Career at Technoidentity!
Are you ready to tackle challenges that will push your boundaries and accelerate your career growth? At Technoidentity, we're a Data+AI product engineering company that has been building cutting-edge solutions in the FinTech domain for over 13 years. We're growing faster than ever and expanding globally. Now's the perfect time to join our team of tech innovators and leave your mark.
Principal Technical Lead - Golang
Location: Hyderabad
We're looking for a Principal Technical Lead with 6+ years of experience in backend or distributed systems engineering to take a hands-on leadership role in supporting our growth in the India region. Based in Hyderabad, you'll work directly with engineering teams at leading companies, helping them design, build, and scale applications using strong Go and distributed computing expertise. In this role, you'll be a key technical voice, enabling customers to adopt Go and distributed computing platforms effectively while guiding their architecture decisions, resolving technical challenges, and helping them get the most value out of our technology. If you're passionate about distributed systems, love engaging with engineering teams, and want to help shape how modern applications are built, this is your opportunity.
What Will You Be Doing?
• Work closely with customer engineering teams to design scalable, fault-tolerant systems using Go and distributed computing platforms.
• Lead architecture discussions, whiteboarding sessions, and technical deep-dives with customers.
• Identify key use cases and guide teams through proofs of concept, prototypes, and production rollouts.
• Educate stakeholders on the benefits, trade-offs, and best practices of building with Go and distributed computing platforms.
• Help customers compare Go and distributed computing approaches with alternative technologies and make informed decisions.
• Partner with Account Executives to develop strategic technical engagement plans.
• Build strong relationships with technical stakeholders and decision-makers in customer organizations.
• Provide feedback to our product and engineering teams based on customer needs and use cases.
What Makes You the Perfect Fit?
• Must be based in Hyderabad and legally eligible to work in India without sponsorship.
• 6+ years of experience in software engineering with a focus on backend, cloud, or distributed systems.
• Excellent communication skills, capable of breaking down complex concepts in clear, technical language.
• Proficiency in Go and at least one of the following languages: Java, TypeScript, Python, C#, or PHP.
• Strong understanding of distributed systems fundamentals (consistency, availability, reliability, etc.).
• Experience designing and building applications in a production cloud environment (AWS, Azure, or GCP).
• Familiarity with CI/CD, monitoring, performance tuning, and operational best practices.
• Strong collaboration skills with the ability to partner cross-functionally and influence technical direction.
• Experience with distributed computing platforms is a plus.
Who we are:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions
Kanerika has won several awards over the years, including:
CMMI Level 3 Appraised in 2024.
Best Place to Work 2022 & 2023 by Great Place to Work®.
Top 10 Most Recommended RPA Start-Ups in 2022 by RPA today.
NASSCOM Emerge 50 Award in 2014.
Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture.
Recognized for ISO 27701, 27001, SOC2, and GDPR compliances.
Featured as Top Data Analytics Services Provider by GoodFirms.
Working for us
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees. Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Locations
We are located in Austin (USA), Singapore, Hyderabad, Indore and Ahmedabad (India).
Data Governance Architect
Required Qualifications:
- 15+ years in data governance and management.
- Expertise in Microsoft Purview, Informatica, and related platforms.
- Experience leading end-to-end governance initiatives.
- Strong understanding of metadata, lineage, policy management, and compliance regulations.
- Hands-on skills in Azure Data Factory, REST APIs, PowerShell, and governance architecture.
- Familiar with Agile methodologies and stakeholder communication.
What You Will Do:
As a Data Governance Architect at Kanerika, you will play a pivotal role in shaping and executing the enterprise data governance strategy. Your responsibilities include:
1. Strategy, Framework, and Governance Operating Model
- Develop and maintain enterprise-wide data governance strategies, standards, and policies.
- Align governance practices with business goals like regulatory compliance and analytics readiness.
- Define roles and responsibilities within the governance operating model.
- Drive governance maturity assessments and lead change management initiatives.
2. Stakeholder Alignment & Organizational Enablement
- Collaborate across IT, legal, business, and compliance teams to align governance priorities.
- Define stewardship models and create enablement, training, and communication programs.
- Conduct onboarding sessions and workshops to promote governance awareness.
3. Architecture Design for Data Governance Platforms
- Design scalable and modular data governance architecture.
- Evaluate tools like Microsoft Purview, Collibra, Alation, BigID, Informatica.
- Ensure integration with metadata, privacy, quality, and policy systems.
4. Microsoft Purview Solution Architecture
- Lead end-to-end implementation and management of Microsoft Purview.
- Configure RBAC, collections, metadata scanning, business glossary, and classification rules.
- Implement sensitivity labels, insider risk controls, retention, data map, and audit dashboards.
5. Metadata, Lineage & Glossary Management
- Architect metadata repositories and ingestion workflows.
- Ensure end-to-end lineage (ADF → Synapse → Power BI).
- Define governance over business glossary and approval workflows.
6. Data Classification, Access & Policy Management
- Define and enforce rules for data classification, access, retention, and sharing.
- Align with GDPR, HIPAA, CCPA, SOX regulations.
- Use Microsoft Purview and MIP for policy enforcement automation.
7. Data Quality Governance
- Define KPIs, validation rules, and remediation workflows for enterprise data quality.
- Design scalable quality frameworks integrated into data pipelines.
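A data-quality rule engine of the kind described in section 7 can be sketched minimally as follows. The rule names and record shape here are illustrative assumptions, not Kanerika's actual framework; the failure counts it returns are the raw material for the KPIs and remediation workflows mentioned above.

```python
from typing import Callable, Dict, List, Tuple

Rule = Tuple[str, Callable[[dict], bool]]  # (rule name, predicate that must hold)

# Illustrative rules; a real framework would load these from a governed catalog.
RULES: List[Rule] = [
    ("non_null_id",  lambda r: r.get("id") is not None),
    ("valid_email",  lambda r: "@" in (r.get("email") or "")),
    ("amount_range", lambda r: 0 <= r.get("amount", -1) <= 1_000_000),
]

def validate(records: List[dict]) -> Dict[str, int]:
    """Return failure counts per rule; feeds KPI dashboards and remediation queues."""
    failures = {name: 0 for name, _ in RULES}
    for rec in records:
        for name, check in RULES:
            if not check(rec):
                failures[name] += 1
    return failures

batch = [
    {"id": 1, "email": "a@x.com", "amount": 10},
    {"id": None, "email": "bad", "amount": -5},
]
print(validate(batch))  # {'non_null_id': 1, 'valid_email': 1, 'amount_range': 1}
```

Embedding such checks directly in the pipeline, rather than auditing after the fact, is what makes a quality framework "integrated" in the sense of the bullet above.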
8. Compliance, Risk, and Audit Oversight
- Identify risks and define standards for compliance reporting and audits.
- Configure usage analytics, alerts, and dashboards for policy enforcement.
9. Automation & Integration
- Automate governance processes using PowerShell, Azure Functions, Logic Apps, REST APIs.
- Integrate governance tools with Azure Monitor, Synapse Link, Power BI, and third-party platforms.
We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.
Responsibilities
- Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python.
- Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
- Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium or similar frameworks.
- Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
- Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
- Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
- Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
- Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
- Implement monitoring, logging, and alerting for critical data pipelines.
- Follow best practices for data security, compliance, and cost optimization in cloud environments.
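The CDC responsibilities above revolve around Debezium-style change events. As a hedged sketch, here is how a consumer might apply Debezium's standard envelope (`op`, `before`, `after`) to an in-memory sink; the dict sink is a stand-in for a real database or data-lake writer, and the events are hand-written for illustration.

```python
import json

def apply_change(sink: dict, event_json: str) -> None:
    """Apply one Debezium-style change event to a keyed sink table.

    op codes follow Debezium's convention: c=create, u=update, d=delete, r=snapshot read.
    """
    event = json.loads(event_json)
    op, before, after = event["op"], event.get("before"), event.get("after")
    if op in ("c", "r", "u"):
        sink[after["id"]] = after      # upsert the new row image
    elif op == "d":
        sink.pop(before["id"], None)   # remove the deleted row

table = {}
apply_change(table, '{"op": "c", "after": {"id": 1, "name": "alice"}}')
apply_change(table, '{"op": "u", "before": {"id": 1, "name": "alice"}, "after": {"id": 1, "name": "alicia"}}')
apply_change(table, '{"op": "d", "before": {"id": 1, "name": "alicia"}}')
print(table)  # {}
```

In a real pipeline the events arrive from Kafka topics populated by Kafka Connect/Debezium, and ordering per key is what makes this replay produce a correct materialized view.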
Required Skills & Experience
- Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
- Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
- Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
- CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
- AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
- ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
- Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
- Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
- Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
- Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
- Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience with large-scale data lake / lakehouse architectures.
- Knowledge of data warehousing concepts and query optimization.
- Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
- Exposure to ML/AI data pipelines is a plus.
Tools & Technologies (must-have exposure)
- Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
- Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
- Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
- Programming & Scripting: Python, SQL, Bash
- Orchestration: Airflow / Step Functions
- Version Control & CI/CD: Git, Jenkins/CodePipeline
- Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
About the Organization:
WeMakeScholars is an organization funded and supported by the Ministry of IT, Government of India, under the 'Digital India Campaign'. We offer international education finance, via scholarships and education loans, to study-abroad aspirants. We are currently a 120-member team. Last financial year, we disbursed 2,200 Cr in education loans to 8,300 students across India.
We are looking for a PHP Developer who is keen to work in a startup environment, especially in a product-based company.
Job Overview:
We are hiring developers (from 0 to 10 years of experience) with relevant knowledge of and experience with the development and deployment of large-scale, cloud-native enterprise systems on PHP, REST APIs, Node.js, Flutter, AWS, and MySQL.
As a full-stack engineer, you will be working to create scalable, maintainable, bug-resistant, state-of-the-art web applications.
About Tech Department:
- Our internal tech team is responsible for end-to-end in-house tech support which includes and is not limited to design, development & deployment.
- We are continuously developing and improvising our products to enhance customer and team experience.
- As we are in the Fintech space, our team has to ensure the database security of the application.
Job Summary:
- Participate in full lifecycle development
- Work closely with product managers and UI/UX designers to analyze requirements
- Review peer code changes
- Make recommendations for design and implementation improvements
- Integrate the front-end UI with back-end APIs.
- Design and implement APIs for web and mobile (both Android & iOS) applications.
- Work with the Product team in prioritizing development activities for weekly sprints
Experience & Skills Required
- Prior experience not mandatory
- Ability to quickly learn PHP, MySQL, jQuery, Ajax, and Git
- Familiarity with HTML, CSS, and JS
- Strong team player with an open mindset to learn new technologies, and knowledge of the common SDLC
- A passion for solving problems and providing workable solutions
Preference will be given to those with
- Internship experience
- Personal projects
- Experience in PHP, MySQL
What are the Key Responsibilities:
- Responsibilities include writing and testing code, debugging programs, and integrating applications with third-party web services.
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Improve functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Create customized applications for smaller tasks to enhance website capability based on business needs
- Ensure web pages are functional across different browser types; conduct tests to verify user functionality
- Verify compliance with accessibility standards
- Assist in resolving moderately complex production support problems
What are we looking for:
- 3+ years of work experience as a Python Developer
- Expertise in at least one popular Python framework, such as Django
- Knowledge of NoSQL databases (Elasticsearch, MongoDB)
- Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
- Familiarity with Apache Kafka will give you an edge over others
- Good understanding of the operating system and networking concepts
- Good analytical and troubleshooting skills
- Graduation/Post Graduation in Computer Science / IT / Software Engineering
- Decent verbal and written communication skills to communicate with customers, support personnel, and management
Job Description:
- Should have work experience using a GDS (Amadeus) for ticketing.
- Able to issue, reissue, refund, and cancel tickets.
- Should have experience communicating with clients/customers through email, calls, and chat.
- Should have good communication and presentation skills.
- Should be able to improvise from time to time as per business requirements.
We have exciting Perks and Benefits which will be informed to the shortlisted candidate after the interview.
Job Title: Chief Engineer: Deep Learning Compiler Expert
You will collaborate with experts in machine learning, algorithms, and software to lead our effort to deploy machine learning models onto the Samsung Mobile AI platform.
In this position, you will contribute to, develop, and enhance our compiler infrastructure for high performance, using open-source technologies such as MLIR, LLVM, TVM, and IREE.
Necessary Skills / Attributes:
- 6 to 15 years of experience in the field of compiler design and graph mapping.
- 2+ years hands-on experience with MLIR and/or LLVM.
- Experience with multiple toolchains, compilers, and Instruction Set Architectures.
- Strong knowledge of resource management, scheduling, code generation, and compute graph optimization.
- Strong expertise in writing production-quality C++ code to modern standards (C++17 or newer), following test-driven development principles.
- Comfortable and experienced in software development life cycle - coding, debugging, optimization, testing, and continuous integration.
- Familiarity with parallelization techniques for ML acceleration.
- Experience working on and contributing to an active compiler toolchain codebase, such as LLVM, MLIR, or Glow.
- Experience in deep learning algorithms and techniques, e.g., convolutional neural networks, recurrent networks, etc.
- Experience developing in a mainstream machine-learning framework, e.g., PyTorch, TensorFlow, or Caffe.
- Experience operating in a fast-moving environment where the workloads evolve at a rapid pace.
- Understanding of the interplay of hardware and software architectures on future algorithms, programming models and applications.
- Experience developing innovative architectures to extend the state of the art in DL performance and efficiency.
- Experience with Hardware and Software Co-design.
M.S. or higher degree in CS/CE/EE or equivalent, with industry or open-source experience.
Work Profile:
- Design, implement and test compiler features and capabilities related to infrastructure and compiler passes.
- Ingest CNN graphs in PyTorch/TF/TFLite/ONNX format and map them to hardware implementations, model data flows, create resource-utilization cost-benefit analyses, and estimate silicon performance.
- Develop graph compiler optimizations (operator fusion, layout optimization, etc) that are customized to each of the different ML accelerators in the system.
- Integrate open-source and vendor compiler technology into Samsung ML internal compiler infrastructure.
- Collaborate with Samsung ML acceleration platform engineers to guide the direction of inferencing and provide requirements and feature requests for hardware vendors.
- Closely follow industry and academic developments in the ML compiler domain and provide performance guidelines and standard methodologies for other ML engineers.
- Create and optimize compiler backends to leverage the full hardware potential, applying novel optimization approaches.
- Evaluate code performance, debug, diagnose and drive resolution of compiler and cross-disciplinary system issues.
- Contribute to the development of machine-learning libraries, intermediate representations, export formats and analysis tools.
- Communicate and collaborate effectively with cross-functional hardware and software engineering teams.
- Champion engineering and operational excellence, establishing metrics and processes for regular assessment and improvement.
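Operator fusion, mentioned in the graph-compiler optimizations above, can be illustrated with a toy pass over a linear op list. A real MLIR/TVM pass pattern-matches on SSA graphs with dataflow analysis; this sketch (with made-up op and fused-kernel names) only shows the pattern-replacement idea.

```python
# Toy fusion pass: collapse adjacent fusable pairs into one fused kernel,
# the way a graph compiler reduces kernel launches and memory round-trips.
FUSABLE = {
    ("conv2d", "relu"): "conv2d_relu",
    ("matmul", "add"): "matmul_bias",
}

def fuse(ops):
    out, i = [], 0
    while i < len(ops):
        pair = tuple(ops[i:i + 2])
        if pair in FUSABLE:
            out.append(FUSABLE[pair])  # replace the pair with the fused kernel
            i += 2
        else:
            out.append(ops[i])
            i += 1
    return out

graph = ["conv2d", "relu", "maxpool", "matmul", "add", "softmax"]
print(fuse(graph))  # ['conv2d_relu', 'maxpool', 'matmul_bias', 'softmax']
```

On an accelerator, each fused pair saves a round-trip of the intermediate tensor through memory, which is why fusion is among the highest-impact graph optimizations.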
An ideal candidate must possess excellent logical and analytical skills. You will be working in a team on diverse projects. The candidate must be able to deal smoothly and confidently with clients and personnel.
Key roles and Responsibilities:
⦁ Able to design and build efficient, testable, and reliable code.
⦁ Should be a team player, sharing ideas with the team for continuous improvement of the development process.
⦁ Good knowledge of Spring Boot, Spring MVC, J2EE, and SQL queries.
⦁ Stay updated on new tools, libraries, and best practices.
⦁ Adaptable and self-motivated; must be willing to learn new things.
⦁ Sound knowledge of HTML, CSS, and JavaScript.
Basic Requirements:
⦁ Bachelor's degree in Computer Science Engineering / IT or a related discipline with a good academic record.
⦁ Excellent communication and interpersonal skills.
⦁ Knowledge of the SDLC flow, from requirement analysis to the deployment phase.
⦁ Should be able to design, develop, and deploy applications.
⦁ Able to identify bugs and devise solutions to address and resolve issues.






