11+ UCCA Jobs in India
- This role includes both product and project management. It involves data collection, filtering, refining, and sizing of large volumes of incident data.
- Preferred domain: Telecom
- Good to have: Voice, Unified Communications, Contact Center, Messaging, and Application-to-Person (A2P) experience (e.g., integration with WhatsApp)
- Work closely with the Engineering Team, the Customer Business Team, and other cross-functional teams to realize an RPA implementation roadmap, from requirement gathering and ideation through solution design and implementation
- Create PDDs (Process Design Documents) and review SDDs (Solution Design Documents); handle POC development and solutioning, sizing, business case, and pricing activities
- Work closely with the Marketing Team on proposal preparation and business case development
Job Summary:
Technical Support Associates
We are looking for technically skilled and customer-oriented SME Voice – Technical Support Associates to provide voice-based support to enterprise clients. The role involves real-time troubleshooting of complex issues across servers, networks, cloud platforms (Azure), databases, and more. Strong communication and problem-solving skills are essential.
Key Responsibilities:
- Provide technical voice support to B2B (enterprise) customers.
- Troubleshoot and resolve issues related to the following (a minimal reachability-check sketch appears after this list):
- SQL, DNS, VPN, Server Support (Windows/Linux)
- Networking (TCP/IP, routing, firewalls)
- Cloud Services – especially Microsoft Azure
- Application and system-level issues
- Assist with technical configurations and product usage.
- Accurately document cases and escalate unresolved issues.
- Ensure timely resolution while meeting SLAs and quality standards.
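As a hedged illustration of the DNS and network troubleshooting items above (the host names and ports are hypothetical, not part of this role), a first-pass reachability check in Python might look like this:

```python
import socket

# Hypothetical endpoints an enterprise customer might report as unreachable.
CHECKS = [
    ("intranet.example.com", 443),   # HTTPS front end
    ("db.example.com", 1433),        # SQL Server default port
]

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> str:
    """Resolve the host via DNS, then attempt a TCP connection to the port."""
    try:
        ip = socket.gethostbyname(host)           # DNS resolution step
    except socket.gaierror as exc:
        return f"{host}: DNS lookup failed ({exc})"
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return f"{host}:{port} reachable at {ip}"
    except OSError as exc:                        # refused, timed out, filtered, ...
        return f"{host}:{port} resolved to {ip} but TCP connect failed ({exc})"

if __name__ == "__main__":
    for host, port in CHECKS:
        print(check_endpoint(host, port))
```

This only separates "DNS problem" from "network/firewall problem"; real cases would continue with VPN, server, or application-level checks.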
Required Skills & Qualifications:
- 2.5 to 5 years in technical support (voice-based, B2B preferred)
- Proficiency in:
- SQL, DNS, VPN, Server Support
- Networking, Microsoft Azure
- Basic understanding of coding/scripting
- Strong troubleshooting and communication skills
- Ability to work in a 24x7 rotational shift environment
What you'll do:
- Mandatory: hands-on experience setting up, optimizing, and securing analytical distributed data stores such as ClickHouse, Druid, or similar distributed database systems. Intermediate DBA skills required.
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and AWS CDN.
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks (see the sketch after this list).
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
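A minimal sketch of the kind of Python automation script mentioned above, assuming the official `kubernetes` client library and a reachable kubeconfig; the health criterion is illustrative, not taken from the role description:

```python
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    """Print pods that are not Running or Succeeded, across all namespaces."""
    config.load_kube_config()          # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```

In practice a script like this would feed an alerting channel rather than stdout, but the structure (load config, query the API, filter, report) is the same.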
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics (see the sketch after this list).
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
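As a hedged example of the ClickHouse familiarity noted above (the host name and the specific query are assumptions for illustration), a quick per-table disk-usage check with the `clickhouse-driver` package might look like:

```python
from clickhouse_driver import Client

# Hypothetical connection details; in practice these come from configuration or secrets.
client = Client(host="clickhouse.internal")

# Rough per-table disk usage, a common first stop when sizing or optimizing a cluster.
rows = client.execute(
    """
    SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size
    FROM system.parts
    WHERE active
    GROUP BY table
    ORDER BY sum(bytes_on_disk) DESC
    LIMIT 10
    """
)
for table, size in rows:
    print(f"{table}: {size}")
```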
Who you are:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, AWS CDN.
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
Designation: Java Developer
Experience: 5+ years
Job location: Chennai/Bangalore (Hybrid)
Must-have skills:
- 6 to 8 years of total experience as a Java Developer; Java and Raptor experience is mandatory
- Unit testing and functional testing skills are good to have
- Immediate joiners are preferred
Intern's day-to-day responsibilities include:
- Carry out quality checks on catalogue images as per image guidelines, review product catalogues against category guidelines, and ensure that delivered data is 100% accurate and complete, with a positive impact on customer experience.
- Run frequent checks on existing processes to help the Catalogue team achieve more consistent, high-quality results, and own new projects aimed at high individual and team productivity.
- Populate 100% accurate content, covering both specific and general product information, from various sources (brand websites, e-commerce portals, product info pages, sellers, consumers, companies, subject matter experts) through various channels (verbal and written communication, on-ground activities, and online search techniques).
- Threat and vulnerability analysis.
- Investigating, documenting, and reporting on any information security (InfoSec) issues as well as emerging trends.
- Analysis and response to previously unknown hardware and software vulnerabilities.
- Preparing disaster recovery plans.
SOC analysts are considered the last line of defense and they usually work as part of a large security team, working alongside security managers and cybersecurity engineers. Typically, SOC analysts report to the company’s chief information security officer (CISO).
SOC analysts need to be detail oriented because they are responsible for monitoring many aspects simultaneously. They need to watch the protected network and respond to threats and events. The level of responsibility typically depends on the size of the organization.
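As a loosely hedged illustration of the monitoring-and-response duty described above (the log path, pattern, and threshold are hypothetical and vary by environment), a SOC analyst might triage repeated SSH login failures with a short Python script:

```python
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"         # hypothetical path; differs across distributions
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
THRESHOLD = 5                          # flag sources with 5 or more failures

def suspicious_sources(path: str = AUTH_LOG) -> Counter:
    """Count failed SSH logins per source IP address."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(2)] += 1
    return counts

if __name__ == "__main__":
    for ip, hits in suspicious_sources().items():
        if hits >= THRESHOLD:
            print(f"{ip}: {hits} failed logins -- consider escalation")
```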
Exposure to development and implementation practices in a modern systems environment, together with experience working in a project team, particularly with reference to industry methodologies such as Agile and continuous delivery.
- At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements (see the sketch after this list)
- Experience using DevOps methodology and Infrastructure as Code
- Automation / CI/CD tools – Bitbucket Pipelines, Jenkins
- Infrastructure as code – Terraform, CloudFormation, etc.
- Strong experience deploying and managing infrastructure with Terraform
- Automated provisioning and configuration management – Ansible, Chef, Puppet
- Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
- Improve CI/CD processes, support software builds and CI/CD of the development departments
- Develop, maintain, and optimize automated deployment code for development, test, staging and production environments
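A minimal sketch of the AWS security and compliance checks referenced above, assuming `boto3` and suitable IAM read permissions; the specific check (flagging S3 buckets with no public-access block) is an illustrative example, not a stated requirement:

```python
import boto3
from botocore.exceptions import ClientError

def buckets_without_public_access_block() -> list[str]:
    """Return S3 buckets that have no bucket-level PublicAccessBlock configuration."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"{name}: no PublicAccessBlock configuration")
```

The same pattern (enumerate resources, test a policy, report drift) extends to Security Groups, IAM policies, and the other services listed above, or can be expressed declaratively in Terraform.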
- Set up, manage, automate and deploy AI models in development and production infrastructure.
- Orchestrate life cycle management of AI Models
- Create APIs and help business customers put the results of AI models into operation (see the sketch after this list)
- Develop MVP machine learning models and prototype applications, applying known AI models, and verify the problem/solution fit
- Validate the AI models
- Make models performant (in time and space) based on business needs
- Perform statistical analysis and fine-tuning using test results
- Train and retrain systems when necessary
- Extend existing ML libraries and frameworks
- Processing, cleansing, and verifying the integrity of data used for analysis
- Ensuring that algorithms generate accurate user recommendations/insights/outputs
- Keep abreast of the latest AI tools relevant to our business domain
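A hedged sketch of the "create APIs for model results" responsibility above, assuming FastAPI and a model serialized with joblib; the file name, feature shape, and endpoint are illustrative assumptions rather than part of the role:

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Model scoring API (illustrative)")
model = joblib.load("model.joblib")     # hypothetical artifact produced by training

class Features(BaseModel):
    values: list[float]                 # one flat feature vector per request

@app.post("/predict")
def predict(features: Features) -> dict:
    """Score a single feature vector with the loaded model."""
    x = np.asarray(features.values).reshape(1, -1)
    return {"prediction": model.predict(x).tolist()}

# Run locally with: uvicorn serve:app --reload   (file assumed to be serve.py)
```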
- Bachelor's or Master's degree in Computer Science, Statistics, or a related field
- A Master's degree in data analytics or similar will be advantageous.
- 3 - 5 years of relevant experience in deploying AI models to production
- Understanding of data structures, data modeling, and software architecture
- Good knowledge of math, probability, statistics, and algorithms
- Ability to write robust code in Python/ R
- Proficiency in using query languages, such as SQL
- Familiarity with machine learning frameworks such as PyTorch and TensorFlow, and libraries such as scikit-learn
- Experience with well-known machine learning models (SVM, clustering techniques, forecasting models, Random Forest, etc.); see the sketch after this list
- Knowledge of CI/CD for building and hosting the solutions
- We don’t expect you to be an expert or an AI researcher, but you must be able to take existing models and best practices and adapt them to our environment.
- Adherence to compliance procedures in accordance with regulatory standards, requirements, and policies.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with an innate ability to engage with data architects, analysts, and scientists.
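Finally, a minimal scikit-learn sketch of the "well-known models" item above, using a toy dataset bundled with the library so it runs as-is; the dataset and hyperparameters are illustrative, not part of the role:

```python
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy binary-classification dataset shipped with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A well-known baseline model; hyperparameters here are illustrative defaults.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")

# Persist the trained model so it could be served, e.g. by the API sketch above.
joblib.dump(clf, "model.joblib")
```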





