Requirements
- 6-10 years of experience in technical customer support or network operations
- IP Networking basics: TCP/IP (ARP, IP, ICMP, TCP, UDP), subnetting, IP packet flow, OSI layers
- Routing technologies: OSPF, ISIS, BGP, MPLS (L2 & L3 VPN), RSVP, LDP, Multicast protocols (IGMP, PIM), Multicast VPN (MVPN)
- BNG protocols: PPPoE, DHCP, IPoE, L2TP
- Forwarding: Hierarchical QoS, uRPF, Firewalls, ACLs
- Switching and Data Center technologies: VLAN/Trunking, STP, RSTP, VSTP, VXLAN/EVPN, IP-Fabric as an added advantage
- Strong automation skills and experience with scripting languages and frameworks such as Python and Robot Framework
- Experience with traffic generators and network protocol analysis tools such as IXIA and Spirent
- Experience with open-source tools and relevant certifications are an added advantage
- Knowledge of Linux, ONL infrastructure, and containers is an advantage
Responsibilities
- Proactively work with customers to enable them to maximize usage of RtBrick products with the least possible effort
- Understand customer use cases and help with qualification of RtBrick products and features
- Triage and resolve any issues customers face with products
- Work with engineering to resolve new defects
- Help engineering by collecting relevant information from live setups and occasionally reproducing issues in the lab
- Understand in-house regression coverage and identify gaps in test coverage relative to customer deployments
- Write and automate test cases related to customer usage and defect scenarios
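As a rough illustration of the subnetting basics and Python test automation listed above, a check like the following might appear in an automated network test. This is a minimal sketch using only the standard library; the function name and addresses are illustrative, not taken from any actual test suite.

```python
import ipaddress

def host_in_subnet(host: str, subnet: str) -> bool:
    """Return True if the host address falls inside the given subnet."""
    return ipaddress.ip_address(host) in ipaddress.ip_network(subnet, strict=False)

# Illustrative checks a subnet-aware test case might make
print(host_in_subnet("10.0.1.25", "10.0.1.0/24"))  # True
print(host_in_subnet("10.0.2.25", "10.0.1.0/24"))  # False
```

The same function could be wrapped as a Robot Framework keyword via its Python library interface.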
Job Description:
We are looking for an experienced SQL Developer to become a valued member of our dynamic team. In the role of SQL Developer, you will be tasked with creating top-notch database solutions, fine-tuning SQL databases, and providing support for our applications and systems. Your proficiency in SQL database design, development, and optimization will be instrumental in delivering efficient and dependable solutions to fulfil our business requirements.
Responsibilities:
● Create high-quality database solutions that align with the organization's requirements and standards.
● Design, manage, and fine-tune SQL databases, queries, and procedures to achieve optimal performance and scalability.
● Collaborate on the development of DBT pipelines to facilitate data transformation and modelling within our data warehouse.
● Evaluate and interpret ongoing business report requirements, gaining a clear understanding of the data necessary for insightful reporting.
● Conduct research to gather the essential data for constructing relevant and valuable reporting materials for stakeholders.
● Analyse existing SQL queries to identify areas for performance enhancements, implementing optimizations for greater efficiency.
● Propose new queries to extract meaningful insights from the data and enhance reporting capabilities.
● Develop procedures and scripts to ensure smooth data migration between systems, safeguarding data integrity.
● Deliver timely management reports on a scheduled basis to support decision-making processes.
● Investigate exceptions related to asset movements to maintain accurate and dependable data records.
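The query-tuning responsibility described above often comes down to replacing a full table scan with an index lookup. A minimal sketch, using SQLite as a stand-in for the production database (the table and index names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

# Before indexing: the customer lookup requires scanning the whole table
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan)  # detail column typically reports a SCAN of orders

# Adding an index lets the same query use a B-tree search instead
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan)  # detail column now reports a SEARCH using idx_orders_customer
```

The same pattern (inspect the plan, add or adjust an index, re-check the plan) applies to most relational databases, though the plan syntax differs.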
Requirements:
● A minimum of 3 years of hands-on experience in SQL development and administration, showcasing a strong proficiency in database management.
● A solid grasp of SQL database design, development, and optimization techniques.
● A Bachelor's degree in Computer Science, Information Technology, or a related field.
● An excellent understanding of DBT (Data Build Tool) and its practical application in data transformation and modelling.
● Proficiency in either Python or JavaScript, as these are commonly utilized for data-related tasks.
● Familiarity with NoSQL databases and their practical application in specific scenarios.
● Demonstrated commitment and pride in your work, with a focus on contributing to the company's overall success.
● Strong problem-solving skills and the ability to collaborate effectively within a team environment.
● Excellent interpersonal and communication skills that facilitate productive collaboration with colleagues and stakeholders.
● Familiarity with Agile development methodologies and tools that promote efficient project management and teamwork.
Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in a DevOps role, preferably at a SaaS or software company.
- Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash, Ruby).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).
- Extensive experience with NGINX and similar web servers.
- Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Ability to work on-call as needed and respond to emergencies in a timely manner.
- Experience with high-transaction e-commerce platforms.
Preferred Qualifications:
- Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
- Experience in a high-availability, 24x7x365 environment.
- Strong collaboration, communication, and interpersonal skills.
- Ability to work independently and as part of a team.
Responsibilities:
- Collaborating with data scientists and machine learning engineers to understand their requirements and design scalable, reliable, and efficient machine learning platform solutions.
- Building and maintaining the applications and infrastructure to support end-to-end machine learning workflows, including inference and continual training.
- Developing systems for the definition, deployment, and operation of the different phases of the machine learning and data life cycles.
- Working within Kubernetes to orchestrate and manage containers, ensuring high availability and fault tolerance of applications.
- Documenting the platform's best practices, guidelines, and standard operating procedures and contributing to knowledge sharing within the team.
Requirements:
- 3+ years of hands-on experience in developing and managing machine learning or data platforms
- Proficiency in programming languages commonly used in machine learning and data applications such as Python, Rust, Bash, Go
- Experience with containerization technologies, such as Docker, and container orchestration platforms like Kubernetes.
- Familiarity with CI/CD pipelines for automated model training and deployment. Basic understanding of DevOps principles and practices.
- Knowledge of data storage solutions and database technologies commonly used in machine learning and data workflows.
Location: Bangalore
Experience: 3 to 9 years
- Excellent C++ programming skills in an embedded environment
- Strong knowledge of design patterns in C++
- Development experience with MPEG-DASH, HLS, and Smooth Streaming
- Excellent development skills in multimedia frameworks such as GStreamer
- Development experience with multimedia container formats (AVI, TS, MP4, WMV, RM, FLV, MKV, and PS) and audio/video codecs
- Strong knowledge of the DRM/TEE/security domain: W3C EME, CDM, Common Encryption, DTCP, HDCP 2, crypto specs
- Strong understanding of Linux/RTOS and system programming
- Excellent analytical and problem-solving skills
Plus Points
- Knowledge of open-source integration
- Cross-compiling for ARM architecture, profiling tools
- Knowledge of tools: Git, Gerrit, GCOV, LCOV
- Familiarity with agile development
Tridat Technologies is hiring a "Senior PostgreSQL Developer" for a tech organization specializing in the banking domain in Navi Mumbai.
JOB QUALIFICATIONS & PROFESSIONAL SKILLS:
• Bachelor's degree
What You Will Do:
- Ensure top quality for our PostgreSQL offerings
- Take an active role in the development lifecycle and enforce appropriate quality gates at the required stages.
- Analyze and continuously improve the test automation frameworks and infrastructure
- Monitor the coverage and efficacy of the automated test suites and take proactive actions to ensure satisfactory results
- Research and recommend new tools, strategies, or processes to improve testing and delivery capabilities.
- Prepare test plans for upcoming releases and ensure appropriate visibility for delivery status and test results.
Your Experience:
- Knowledge and experience working with SQL Databases
- Working with multiple automation tools and frameworks
- Solid debugging skills (analyze logs, spot patterns, etc.)
- Hands-on experience with Ansible, Jenkins, Python, and Github
- Familiarity with the Linux ecosystem
What Will Make You Stand Out:
- Exposure to database-related technologies
- Experience with compiling/building/packaging
- Worked with container or virtualization environments
- Experience working with PostgreSQL
- Working experience in the open-source world or contributions to open-source projects
- Experience with Molecule
- Great attitude and capability to easily communicate and work well with people from distributed teams.
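The debugging skills mentioned above ("analyze logs, spot patterns") can start as simply as tallying log lines by severity to see what dominates. A hypothetical Python sketch, assuming a "DATE TIME LEVEL message" log format (the sample lines are invented for illustration):

```python
import re
from collections import Counter

LOG_LINES = [
    "2024-05-01 10:00:01 ERROR connection to replica timed out",
    "2024-05-01 10:00:02 INFO checkpoint complete",
    "2024-05-01 10:00:05 ERROR connection to replica timed out",
    "2024-05-01 10:00:09 WARNING long-running query detected",
]

def count_by_level(lines):
    """Tally log lines by severity, assuming a 'DATE TIME LEVEL message' layout."""
    pattern = re.compile(r"^\S+ \S+ (\w+) ")
    return Counter(m.group(1) for line in lines if (m := pattern.match(line)))

print(count_by_level(LOG_LINES))
```

Repeated high counts for one message family (here, the replica timeout) are usually the pattern worth investigating first.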
EXPERIENCE: 7+ years
Key role interactions
• Will be expected to work from office
Location: Rabale, Navi Mumbai
Working Days: Monday to Friday
Employment Mode: Contract to hire (Full time opportunity)
Joining Period: Immediate to max 15 days
Thank You & Regards,
Shraddha Kamble
HR Recruiter
Job Description
Technical Support Engineer – Elastic Stack
ABOUT US
Established in 2009, Ashnik is a leading open source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open source software and are known for their thought leadership.
RESPONSIBILITIES
· Be the first point of contact for support queries
· Be responsible for solving customer queries and tickets in a timely manner.
· Communicate with customers and internal teams regarding reported problems and their resolution status in a timely and effective manner.
· Perform log monitoring and event monitoring, and resolve tickets within the defined SLA.
· Apply updates and patches to keep the software up-to-date in line with organizational policies
· Provide support for installation and configuration.
· Monitor and identify areas of performance improvement
· Identify and write scripts for automating support tasks.
· Communicate effectively internally and with product support team to reproduce, resolve support cases and document them.
ESSENTIAL SKILLS
· Hands-on experience and skills with the Linux operating system
· Knowledge of the ELK stack or relevant technologies such as Solr, Splunk, etc.
· Good understanding of index mapping and index management, such as archiving, reindexing, sharding, etc.
· Good understanding of analytical data structures and experience deploying them in the form of reports and dashboards
· Experience with NoSQL or RDBMS technology is desirable
· Python/Node.js or relevant data-processing programming experience is preferred.
· Knowledge of designing and developing data pipelines using ETL tooling such as the Elastic Stack.
· Knowledge of real-time data collection from various data sources.
· Experience in deploying scalable Elastic clusters is desirable
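As one concrete sketch of the data-pipeline skills listed above: documents are commonly loaded into Elasticsearch via its bulk API, which expects newline-delimited JSON with an action line preceding each document. The index name and records below are illustrative, and this only renders the payload; it does not contact a cluster.

```python
import json

def to_bulk_ndjson(records, index_name):
    """Render records as Elasticsearch bulk-API NDJSON: an action line, then the document."""
    lines = []
    for doc in records:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Two invented log documents destined for a hypothetical "logs-demo" index
records = [{"host": "web-1", "status": 500}, {"host": "web-2", "status": 200}]
payload = to_bulk_ndjson(records, "logs-demo")
print(payload)
```

A payload like this would typically be POSTed to the cluster's `_bulk` endpoint with the `application/x-ndjson` content type.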
QUALIFICATION AND EXPERIENCE
· 2-4 years of experience in a technical support role.
· 2+ years of experience working across multicultural and geographically distributed teams
· Experience in troubleshooting, maintaining, and supporting production setups
· Engineering or equivalent degree
· Ability to interact effectively with customers for problem resolution.
· Sense of urgency and ownership to get problems solved in a timely manner
· Attention to detail.
· Ability to work on multiple tickets/support cases effectively and to manage time-critical tasks.
Location: Bangalore
Experience: Minimum 2 yrs
Package: up to 8 LPA
- Working with banks in a 24x7 environment, identifying application problems, and advising on solutions.
- Analyzing application/OS logs to spot common trends and underlying problems.
- Analyzing underlying database using SQL queries (MS SQL /Oracle or any other relational database).
- Deploying, testing, and reporting bugs/issues in new modules and support clients in testing.
Skills Required:
- Good analytical and problem-solving skills.
- Knowledge of Linux commands and database queries.
- Knowledge of application servers, e.g., Apache Tomcat and JBoss.
- Knowledge of shell scripting
- Knowledge of digital banking products such as IMPS and UPI
Job Title: Sr. Developer/ Developer – Robotics & AI Business Unit
Business Unit: Robotics & AI
Reporting To: Head – Robotics & AI Business Unit / Delivery Head - Robotics & AI Business Unit
Supervises: Junior developers
Skill Sets Required:
- Strong expertise and experience with Automation Anywhere: around 3-6 years of overall experience, of which at least 2 years of proven experience working on live automation projects using Automation Anywhere and/or UiPath.
- Experience with the A2019 version (mandatory); knowledge of newer tools like AARI will be preferred. Strong working knowledge of A2019 AARI, IQ Bot, and Discovery Bot.
- Experience with other RPA platforms such as UiPath, Microsoft Power Automate, Softomotive, etc. will be an added preference, as will working knowledge of UiPath Studio X, AI Fabric, etc.
- Expertise and experience with technologies like AI, NLP, OCR, and machine learning as applied to business process automation will be an added preference, as will usage of Microsoft Azure AI, AWS Lex, or Google AI.
- Experience and expertise in coding languages like Python and Java/.NET. Working knowledge of Python is desirable.
- Working knowledge of RESTful APIs, microservices, and bot integration frameworks.
- Experience with and a good understanding of the software development lifecycle (requirements gathering, development, testing, and change management).
- Experience with process discovery, design, analysis, and implementation (knowledge of common methods and frameworks like ITIL, Six Sigma, etc.)
- Knowledge and usage of platforms and technologies like Django, spaCy, BERT, Git, Jira, Lucid, and PostgreSQL
Job Purpose:
- To contribute to the Delivery unit of the RPA and AI business as a Senior Developer in accordance with the vision, mission, objective and goals of the RPA and AI Business Unit and the Organization.
- To be recognised as a high-performance team member of RPA and AI Business Unit.
- To excel in Project and Program Delivery for customers and create a comprehensive Project Delivery eco-system with the best practices and methodologies in line with RPA and AI.
- Develop, maintain and improve project delivery efficiency to optimize productivity & resource utilisation and maximize profitability through delivery excellence and customer delight.
- To assist the pre-sales and product/platform building plans of the RPA and AI business.
- To create a positive brand image of the Company in the minds of customers and partners.
Key Job Responsibilities:
- Delivering RPA and AI projects per scope, on time, and within budget, meeting all KPIs and objectives.
- Delivering customer projects: oversee the design and implementation of RPA & AI solutions using the selected RPA & AI platform(s). This will involve managing a team of architects and coders, and remaining "hands-on" to some degree to ensure smooth delivery of the project and program.
- As part of Project Sprints, identification of process improvement and regular reviews to identify opportunities where automation can drive benefits and efficiencies for the business
- Working with the project teams and business/customer SMEs and IT teams, analyse existing business processes in detail in order to assess feasibility and to redesign those processes for an RPA/AI supported solution
- Ensuring reporting metrics and governance structures are put in place to measure the performance and outcomes of the RPA/AI solutions delivered to customers and internal stakeholders
- Documenting best practices and methodologies to enhance and improve the quality of delivery of RPA and AI projects, and creating reusable delivery assets.
- Effective Communication with the customer and the relevant stakeholders to ensure smooth delivery of the projects and programs.
- As part of RPA/AI delivery, take responsibility for the ongoing support of the solutions and services provided, and for their quality of service and performance
- Monitor, support and control service delivery ensuring that the procedures put in place are effective and properly implemented
- Play a critical part in business expansion and growth of the business by “Delivery Driven sales” – change requests, identifying & building new use cases for RPA & AI implementation, etc.
- To assist in pre-sales aspects (customer use cases/business analysis, POCs, demos, ROI creation, proposals, SOWs, etc.), working with sales and pre-sales teams for customers and alliance partners presentations/discussions, conducting events like webinars, etc.
- To contribute to the product/platform & reusable assets building, planning-to-realisation lifecycle of Netlabs, in discussions with the senior leadership team
- To plan & realise competency building of the team of junior developers, based on the BU plans & priorities, in discussions with the senior leadership team
- Documenting project information and co-ordinate with account management and marketing teams to generate case studies
- Contributing thought-leadership articles and content for whitepapers, blogs, POVs
Team Responsibilities:
- Mentoring & retaining a high-performance team of junior developers
- Assigning team responsibilities/work allocations, maintaining job descriptions, monitoring competency building, tasks and quality
- Developing performance standards and communicating the Key Result Areas (KRAs)
- Managing performance by periodic reviews, appraisals and motivating people through rewards, incentives and recognition policies in line with company’s HR policies
- General administrative supervision of all team members
Looking for a fresher who is willing to learn and stay current with future trends.
- 2 years of hands-on Linux system administration experience (RHEL/CentOS).
- 2 years of experience in AWS cloud administration
- 2 years of relevant work experience in a high-volume and/or critical production service environment.
- Minimum 1 year of experience with Windows Server operating system administration
- Minimum 2 years of experience with cloud service providers (AWS)
- Minimum 2 years of experience in enterprise environments
- Bachelor's degree or equivalent experience preferred
- Any Amazon Web Services (AWS) certification preferred.
- Understanding of supporting enterprise applications such as SAP systems and databases on AWS EC2 (e.g., HANA, Db2, Oracle), including cluster management, HA management, etc.
- Implement security policies and controls (e.g. CIS, malware, anti-virus, secret management) using AWS services
- Strong ability to document standards, methods and produce drawings of systems and applications.
- Strong scripting skills in any scripting language (e.g., Python (Lambda), Bash shell)
- Administration of Linux / Windows servers
- Deep understanding of AWS IT infrastructure architecture, deployment automation, SAP architecture, integration of on-prem and hybrid infrastructure, etc.
- Participate in design discussions and new ways of working (automation, operational management), etc.
- Must have knowledge of AWS, Azure, and Git