
Provide 24x7x365 NOC support for voice-related issues
· Experience with SMSC, SMPP, HTTP, HTTPS, SIGTRAN, and SS7
· Knowledge of and relevant experience with VPN and messaging nodes (SMSC, Bulk SMS, SMSR, SMS HUB)
· Manage day-to-day operations of the SMPP/SS7 SMS NOC: customer/vendor/route/rate configuration management, trace capturing, and troubleshooting
· Interconnect new clients and perform interconnection testing
· Monitor all system alarms, alerts, and performance metrics, and maintain the system health checklist for the various nodes
· Monitoring: disk space, log files, dump/log purging, application uptime
· Capacity management and reporting on any potential capacity breach
· Answer customer emails and calls, providing timely, high-quality service
· Use a variety of tools (Ethereal/Wireshark, ping, traceroute, browser, etc.) to quickly verify reported events
· Head-to-head testing/configuration with clients
· Management reporting as per requirements
· Maintain a good understanding of all NOC processes and implement them appropriately
Skills
● Telecom industry experience will be an added advantage.
● Understanding of all selection methods and techniques
● Good communicator
● Well-organized
--

Experience: 1+ years
We are looking for a Flutter Developer with 1+ years of experience in building and maintaining Android & iOS applications.
Responsibilities:
- Develop and maintain mobile apps using Flutter & Dart
- Work with device features (storage, media, contacts, permissions)
- Integrate AdMob and In-App Purchases
- Publish apps to Play Store & App Store
- Fix crashes, optimize performance, and scale apps long-term
Must Have:
- 1+ years Flutter experience
- Live apps on Play Store / App Store
- Knowledge of App Store & Play Store guidelines
- Experience with Firebase, crash fixing & performance optimization
--

Frontend developer with knowledge and experience in ReactJS, jQuery, HTML, and CSS.
Good analytical thinking
Understand client requirements
Good knowledge of testing APIs for integration.
Experience in Redux and React
--

About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work with a fast-paced startup mindset, managing all aspects of data extraction, transfer, and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable executing ETL (Extract, Transform, and Load) processes, including data ingestion and the cleaning and curation of data into a data warehouse, database, or data platform (see the sketch after this list).
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with DevOps and the senior architect to design scalable system and model architectures for real-time and batch services.
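As a rough illustration of the extract-transform-load flow described above, here is a minimal sketch in Python. The CSV source, column names, and warehouse connection string are all illustrative assumptions, not part of the role description.

import pandas as pd
from sqlalchemy import create_engine

# Extract: read raw order events from a (hypothetical) CSV export.
raw = pd.read_csv("orders_export.csv", parse_dates=["created_at"])

# Transform: basic cleaning and curation before loading.
clean = (
    raw.dropna(subset=["order_id", "customer_id"])  # drop incomplete rows
       .drop_duplicates(subset=["order_id"])        # de-duplicate on the key
       .assign(order_date=lambda df: df["created_at"].dt.date)
)

# Load: append into a warehouse staging table (DSN is illustrative;
# Redshift speaks the Postgres wire protocol on port 5439).
engine = create_engine("postgresql+psycopg2://user:pass@warehouse:5439/analytics")
clean.to_sql("stg_orders", engine, if_exists="append", index=False)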
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed in the concepts of data warehousing, data modelling, and/or data analysis.
➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow (a minimal DAG sketch follows this list).
➢ Strong Python and SQL coding skills.
➢ Strong experience with distributed systems like Spark.
➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
➢ Solid hands-on experience with data extraction techniques such as CDC or time/batch-based extraction, and with the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
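To make the Airflow point concrete, here is a minimal DAG sketch. The DAG name, schedule, and task bodies are illustrative placeholders; the operator import path is for Airflow 2.x.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull the day's data from the source system.
    pass

def load():
    # Placeholder: load the curated data into the warehouse.
    pass

with DAG(
    dag_id="daily_orders_etl",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",           # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load          # extract must finish before load starts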
Note: Experience at product-based or e-commerce companies is an added advantage.
--
Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time
Job Description:
We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and Data Warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with Data Lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable.
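As a rough sketch of the Bronze/Silver/Gold layering mentioned above, the promotion from raw (bronze) to cleaned (silver) to aggregated (gold) data might be expressed as warehouse SQL driven from Python. All schema, table, and connection names here are illustrative assumptions.

import sqlalchemy

# Illustrative DSN; Redshift is reachable over the Postgres protocol.
engine = sqlalchemy.create_engine(
    "postgresql+psycopg2://user:pass@cluster:5439/analytics")

with engine.begin() as conn:
    # Silver: a cleaned, de-duplicated view of the raw (bronze) CDC feed.
    conn.execute(sqlalchemy.text("""
        CREATE TABLE silver_orders AS
        SELECT DISTINCT order_id, customer_id, amount, updated_at
        FROM bronze_orders_cdc
        WHERE order_id IS NOT NULL
    """))
    # Gold: a business-level aggregate built from the silver layer.
    conn.execute(sqlalchemy.text("""
        CREATE TABLE gold_daily_revenue AS
        SELECT DATE(updated_at) AS day, SUM(amount) AS revenue
        FROM silver_orders
        GROUP BY DATE(updated_at)
    """))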
Key Responsibilities:
1. Data Pipeline Development & Maintenance
- Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose.
- Ensure seamless data replication and transformation across multiple systems.
- Implement and optimize CDC-based data pipelines from various source systems.
2. Data Layering & Warehouse Management
- Implement Bronze, Silver, and Gold layer architectures to optimize data workflows.
- Design and manage data pipelines for structured and unstructured data.
- Ensure data integrity and quality within Redshift and other analytical data stores.
3. Database Management & SQL Development
- Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift.
- Design and implement data models that support business intelligence and analytics use cases.
4. Data Lakes & Storage Optimization
- Work with AWS S3-based Data Lakes to store and manage large-scale datasets.
- Optimize data ingestion and retrieval using Apache Parquet (see the Parquet sketch after this list).
5. Data Integration & Automation
- Integrate diverse data sources into a centralized analytics platform.
- Automate workflows to improve efficiency and reduce manual effort.
- Leverage Python for scripting, automation, and data manipulation where necessary.
6. Performance Optimization & Monitoring
- Monitor data pipelines for failures and implement recovery strategies.
- Optimize data flows for better performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve ETL and data replication issues proactively.
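As context for the S3 and Parquet items above, here is a minimal Python sketch of writing and selectively reading a Parquet dataset. The bucket and paths are hypothetical; pandas delegates to pyarrow (and to s3fs for s3:// URLs) under the hood.

import pandas as pd

# Toy "silver" dataset; in practice this would come from an upstream layer.
df = pd.DataFrame({"order_id": [1, 2], "amount": [120.5, 89.9]})

# Write columnar Parquet with compression (path is illustrative).
df.to_parquet("s3://analytics-lake/silver/orders.parquet",
              engine="pyarrow", compression="snappy", index=False)

# Columnar storage lets readers fetch only the columns they need.
amounts = pd.read_parquet("s3://analytics-lake/silver/orders.parquet",
                          columns=["amount"])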
Technical Expertise Required:
- 3 to 6 years of experience in Data Engineering, ETL Development, or related roles.
- Hands-on experience with Qlik Replicate & Qlik Compose for data integration.
- Strong SQL expertise, with experience in writing and optimizing queries for Redshift.
- Experience working with Bronze, Silver, and Gold layer architectures.
- Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
- Experience working with AWS S3 Data Lakes.
- Experience working with Apache Parquet for data storage optimization.
- Basic understanding of Python for automation and data processing.
- Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced, agile environment.
Preferred Qualifications:
- Experience in performance tuning and cost optimization in Redshift.
- Familiarity with big data technologies such as Spark or Hadoop.
- Understanding of data governance and security best practices.
- Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
--

Responsibilities:
- Provide technical leadership for critical MuleSoft integrations, primarily with the contact-center solution.
- Provide MuleSoft technical expertise and leadership when evaluating and designing integration solutions, ensuring all impacted components and subsystems are properly addressed during builds and deployments.
- Collaborate cross-functionally with teammates to implement integration solutions.
- Troubleshoot MuleSoft/API technical issues as needed.
Qualifications
- Bachelor's Degree required. In lieu of a degree, a comparable combination of education and experience may be considered.
- 3+ years of experience in building scalable, highly available, distributed solutions and services
- 1+ years of experience in middleware technologies: Enterprise Service Bus (ESB), preferably MuleSoft CloudHub, plus orchestration, routing, and transformation
- 3+ years of experience working with Java
- Experience in RESTful API architectures, specifications, and implementations (a minimal client sketch follows this list)
- Working knowledge of progressive development processes like Scrum, XP, Kanban, TDD, BDD, and continuous delivery
- A conceptual understanding of Google Cloud Platform is a major plus
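To ground the RESTful API item above, here is a minimal JSON-over-HTTP client sketch in Python using the requests library; the endpoint and payload shape are hypothetical, but an API fronted by MuleSoft would be called the same way.

import requests

# Hypothetical REST endpoint returning a JSON customer record.
resp = requests.get(
    "https://api.example.com/v1/customers/42",
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()   # fail loudly on 4xx/5xx
customer = resp.json()    # parse the JSON body
print(customer.get("name"))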
--

The candidate must have hands-on experience in a core HR department.
Must be ready to work US shifts, Monday to Friday, 7 PM to 4 AM.
IT, staffing, or healthcare-staffing experience is a plus, but all backgrounds are considered.
--

MongoDB
Node.js

● Must have worked on Java, with hands-on experience in frameworks (like Spring) and database layers (like iBatis, Hibernate, etc.)
● Experience in web application and RESTful web service development.
● Build the front end of applications with appealing visual design, and integrate the front-end UI with the constructed API.
● Hands-on UI experience in HTML, CSS, JavaScript, Bootstrap, Angular 4/5/6, React, XML, jQuery, Ionic 3/4, and Node.js.
● Hands-on experience with project life-cycle activities on development and maintenance projects, including writing JUnit tests, unit testing, code reviews, etc.
● Lead the development and lifetime maintenance of software products as required to enhance the product line; also manage the continuous-improvement process within the software product's lifecycle.
● Develop planned tasks and participate in the entire application lifecycle, focusing on coding, troubleshooting, and debugging.
● Work closely with clients on issues related to design and development.
● Collaborate with Front-end developers to integrate user-facing elements with
server-side logic.
● Knowledge of application server/container configuration management and application deployment (Tomcat, JBoss, etc.).
● Experience with web applications, OOP, design patterns, interfaces, serialization, and Git version control.
● Ability to lead a team in a diverse, multi-stakeholder environment.
● Must think logically and be a self-motivated problem solver.
● Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.