11+ RAN Jobs in Bangalore (Bengaluru)



Mandatory Skills
- Proficient in automating test cases using Python / C++ (see the sketch after this list)
- Should have worked on RAN or Core Network for LTE/4G, 5G, or 3G
- Should have done Protocol Conformance Testing / RF Conformance Testing / Carrier Acceptance Testing
- Should be able to design test cases.
- Should have worked on protocols / network elements such as NAS / S1AP / RRC / PHY / MAC / GCF / MME / SGW / PGW / PCRF / eNodeB / UTRAN / RACH
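The following is a minimal, hypothetical sketch of what automated test-case development for this role might look like in Python. The TestEquipment class and its attach_ue method are illustrative placeholders standing in for a real tester / UE-simulator control API, not an actual vendor library.

```python
# Hypothetical sketch: an automated LTE attach conformance-style check.
# TestEquipment is a stand-in for the lab's tester / UE-simulator interface.

class TestEquipment:
    """Placeholder for a tester control interface (not a real API)."""

    def attach_ue(self, plmn: str) -> dict:
        # A real harness would drive the tester over its remote-control API
        # and return the observed NAS/RRC state of the UE.
        return {"nas_state": "EMM-REGISTERED", "rrc_state": "RRC_CONNECTED"}


def test_lte_attach_procedure():
    """The UE should reach EMM-REGISTERED / RRC_CONNECTED after attach."""
    te = TestEquipment()
    result = te.attach_ue(plmn="00101")
    assert result["nas_state"] == "EMM-REGISTERED"
    assert result["rrc_state"] == "RRC_CONNECTED"


if __name__ == "__main__":
    test_lte_attach_procedure()
    print("LTE attach test passed")
```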
Good to Have
- Test automation framework & test suite development using Python or platform specific development expertise
- Experience in 5G will be an added advantage
- Interop testing and validation with chipset vendors / OEMs / Service Providers
What kind of profile will not be suitable
- Only RF Optimization
- Only Field Testing
- Only Log Analysis
- Network Monitoring & configuration
Job Description
Cateina Technologies is looking for an API Specialist with the following skill set; a short XML/XPath/JSON sketch in Python follows the Technical Skills list.
Technical Skills
- IBM DataPower Gateway
- IBM API Connect
- Microservices
- OpenAPI Specification
- API Security
- API Lifecycle Management
- REST
- JSON
- XML
- XML Schema
- XPath
- XSLT
- XQuery
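As a rough illustration of the XML / XPath / JSON skills above, here is a small, self-contained Python sketch (standard library only); the payload and field names are invented for the example.

```python
# Parse an XML document, query it with simple XPath expressions, and
# re-shape the result as the JSON a REST API might return.
import json
import xml.etree.ElementTree as ET

xml_payload = """
<order id="42">
  <customer>Acme Corp</customer>
  <items>
    <item sku="A1" qty="2"/>
    <item sku="B7" qty="1"/>
  </items>
</order>
"""

root = ET.fromstring(xml_payload)
skus = [item.get("sku") for item in root.findall("./items/item")]

response = {
    "orderId": root.get("id"),
    "customer": root.findtext("customer"),
    "skus": skus,
}
print(json.dumps(response, indent=2))
```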
Required Competencies
- Development and implementation of complex Internet and Intranet applications on multiple platforms.
- Recommend architectural improvements, design solutions, and integration solutions.
- Design network architecture and extranet integration.
- Experience in designing and developing APIs.
- Experience in installation and configuration of DataPower, API Connect, and the Advanced Developer Portal for standalone and cluster environments.
- Experience implementing services such as MPGW, WAF, Web Service Proxy, and Web XML Firewall in DataPower.
- Experience in configuring the API Connect Cloud with DataPower.
- Configuration and Customization of Developer Portal.
- Backup and restore of API Connect configuration data, APIs, and Products.
- Integration with an external user registry.
- Experience designing LoopBack applications.
- Implement user-defined policies, built-in policies, security definitions, gateway scripts, and error handling for APIs.
- Experience in integrating internal and external applications using various protocols and message formats – REST, SOAP, JSON, XML.
Skills
- Enthusiastic, Creative and flexible
- Organized, with an ability to prioritize time-sensitive assignments
- Capable of working both independently and in a team environment
- Professional with a strong work ethic
- Strong communication skills, both written and verbal
- Any Degree
Who can apply
Candidates who:
- have the relevant skills and interests
- are willing to relocate to Mumbai
Office Location
Cateina Technologies
Vikhroli (West),
Mumbai, Maharashtra 400083
Responsibilities:
- Develop customized solutions on the Salesforce platform using Apex, Visualforce, Lightning Web Components (LWC), and other Salesforce technologies.
- Design and implement complex business logic, data models, and workflows to support business requirements.
- Collaborate with stakeholders to gather requirements, analyze business processes, and recommend best practices for Salesforce implementation.
- Customize and configure Salesforce features including objects, fields, page layouts, validation rules, and process automation.
- Integrate Salesforce with external systems using APIs, middleware tools, and custom integration solutions (see the sketch after this list).
- Perform data migration, data manipulation, and data quality management tasks as needed.
- Conduct code reviews, troubleshoot issues, and optimize performance to ensure the stability and scalability of the Salesforce platform.
- Stay updated with the latest Salesforce releases, features, and best practices to continuously improve the platform.
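As a rough sketch of the API integration work described above, the snippet below creates a record through the Salesforce REST API from an external Python client. It assumes an OAuth access token and instance URL have already been obtained; the instance URL, token, and record fields are placeholders.

```python
# Hedged sketch: create an Account via the Salesforce REST API using requests.
import requests

INSTANCE_URL = "https://your-instance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "REPLACE_ME"                                # placeholder
API_VERSION = "v58.0"


def create_account(name: str) -> str:
    """Create an Account record and return its new Id."""
    url = f"{INSTANCE_URL}/services/data/{API_VERSION}/sobjects/Account"
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"Name": name},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    print(create_account("Example Customer"))
```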
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Minimum of four years of hands-on experience in Salesforce development.
- Proficiency in Apex programming language, Visualforce, Lightning Web Components (LWC), and other Salesforce development tools.
- Experience with Salesforce configuration including objects, fields, workflows, validation rules, and process automation.
- Strong understanding of Salesforce data model, data management, and data integration techniques.
- Experience with Salesforce APIs, REST/SOAP web services, and integration tools like MuleSoft.
- Salesforce certifications such as Salesforce Certified Platform Developer I (PD1) or equivalent are preferred.
- Excellent problem-solving skills, attention to detail, and ability to work independently as well as part of a team.
- Strong communication skills with the ability to effectively collaborate with stakeholders at all levels.
Responsibilities
- Software Engineering
- Design and develop highly scalable, available, reliable, secure and fault-tolerant systems with minimal guidance for a market leader in the logistics industry
- Partner with team members on functional and non-functional requirements, spread the design philosophy and goals, and improve code quality across the team
- Research new technologies and tools that enable building the next generation of our services
- Provide technology leadership to the team and foster engineering excellence
- Product Delivery
- Partner with product managers to define and execute on the feature roadmap
- Translate business requirements into scalable and extensible design
- Coordinate with various cross functional teams on planning and execution
- Maintain automated build / test / deployment environments
Qualifications
- Software Engineering
- Should have at least 2 years of hands-on experience in designing, developing, testing and deploying large-scale applications in Java, Ruby, Kotlin, Python, Node or Go
- Deep knowledge of at least one of the programming languages they have experience in
- Proficient in OOP and design patterns; experience with functional programming would be a plus
- Data modelling experience in relational databases
- Ability to design and implement low-latency RESTful services (see the sketch after these lists)
- Product Delivery
- Ability to scope, review and refine user stories for technical completeness and to alleviate dependency risks
- Well versed in working with agile methodologies, including phases such as design, development, code review, testing and release management
- Experience working in a CI/CD environment, with hands-on experience with Git or a similar source code management tool
- Product Maintenance
- Experience troubleshooting server performance issues such as memory tuning, GC tuning, resource leaks, etc.
- Continuously refactor applications to ensure high quality design.
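For illustration, here is a minimal sketch of the kind of low-latency RESTful service mentioned in the qualifications, written in Python/Flask with an in-memory store standing in for a real database; the routes and payload shapes are assumptions, and a production service would add validation, persistence, authentication and metrics.

```python
# Minimal REST service sketch (Flask) with an in-memory store.
from flask import Flask, jsonify, request

app = Flask(__name__)
SHIPMENTS = {}  # stand-in for a real datastore


@app.post("/shipments")
def create_shipment():
    payload = request.get_json(force=True)
    shipment_id = str(len(SHIPMENTS) + 1)
    SHIPMENTS[shipment_id] = {"id": shipment_id, **payload}
    return jsonify(SHIPMENTS[shipment_id]), 201


@app.get("/shipments/<shipment_id>")
def get_shipment(shipment_id):
    shipment = SHIPMENTS.get(shipment_id)
    if shipment is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(shipment)


if __name__ == "__main__":
    app.run(port=8080)
```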

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file (see the connection sketch after this list).
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in Snowflake data load using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning will be an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements & functional specifications.
- Good to have: dbt, Fivetran, and AWS knowledge.
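The following is a minimal sketch of connecting to Snowflake from Python and loading a local CSV into a table with the snowflake-connector-python package; the account, credentials, file path and table name are placeholders.

```python
# Connect to Snowflake, stage a local CSV, COPY it into a table, and verify.
import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER",            # placeholder
    password="YOUR_PASSWORD",    # placeholder
    account="YOUR_ACCOUNT",      # placeholder, e.g. xy12345.eu-west-1
    warehouse="ANALYTICS_WH",
    database="ANALYTICS_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Upload the file to the table's internal stage, then load it.
    cur.execute("PUT file:///tmp/sales.csv @%SALES AUTO_COMPRESS=TRUE")
    cur.execute("COPY INTO SALES FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)")
    cur.execute("SELECT COUNT(*) FROM SALES")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```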
• Hands-on experience in object-oriented programming
• Hands-on experience in Java, Spring Boot (min 2 years), Kafka, Cassandra, or MongoDB
• Experience developing/enhancing applications connecting to different databases: Oracle / MySQL / Cassandra / MongoDB
• Strong knowledge of common data structures and algorithms and when to use them
• Experience in XPath, XML, REST, JSON, or Protobuf
• Experience with software version control (such as Git)
• Experience working in an agile environment such as Scrum
Preferable:
• Experience in public cloud PaaS (AWS, GCP, Azure)
• Real-time stream data handling (Kafka, Kinesis)
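As a rough illustration of real-time stream handling, here is a small Python consumer using the kafka-python package; the topic, broker address and downstream logic are placeholders (in the Java/Spring Boot stack above, the equivalent would typically be a Kafka listener).

```python
# Consume JSON messages from a Kafka topic and hand them to business logic.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # placeholder topic
    bootstrap_servers="localhost:9092",    # placeholder broker
    group_id="order-processors",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # Business logic would go here (e.g. write to Cassandra / MongoDB).
    print(f"partition={message.partition} offset={message.offset} order={order}")
```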
- Gathering project requirements from customers and supporting their requests.
- Creating project estimates and scoping the solution based on clients’ requirements.
- Delivery of key project milestones in line with the project plan / budget.
- Establishing individual project plans and working with the team in prioritizing production schedules.
- Communication of milestones with the team and to clients via scheduled work-in-progress meetings
- Designing and documenting product requirements.
- Possess good analytical skills - detail-oriented
- Be familiar with Microsoft applications and working knowledge of MS Excel
- Knowledge of MIS Reports & Dashboards
- Maintaining strong customer relationships with a positive, can-do attitude
About Hop:
We are a London, UK based FinTech startup with a subsidiary in India. Hop is working towards building the next-generation digital banking platform for seamless and economical currency exchange, with technology at the crux of it. In a technology-driven era, many financial services platforms still fall short on customer experience and are cumbersome to use. Hop aims at building a ‘state of the art’, tech-centric, customer-focused solution.
moneyHOP is India’s first cross-border neo-bank providing millennials the ability to ‘Send’ & ‘Spend’ conveniently and economically across the globe using HOPRemit (An online remittance portal) and HOP app + Card (A multi-currency bank account).
This position is a crucially important position in the firm and the person hired will have the liberty to drive the product and provide direction in line with business needs.
Website: https://moneyhop.co/
About Individual
Looking for an enthusiastic individual who is passionate about technology and has worked with either a start-up or a blue-chip firm in the past.
The candidate needs to be a multi-tasker and a highly self-motivated self-starter, with the ability to work in a high-stress environment. He/she should be tech savvy and willing to embrace new technology comfortably.
Ideally, the candidate should have experience working with the technology stack of a scalable, high-growth mobile application.
General Skills
- 3-4 years of experience in DevOps.
- Bachelor's degree in Computer Science, Information Science, or equivalent practical experience.
- Exposure to Behaviour Driven Development and experience in programming and testing.
- Excellent verbal and written communication skills.
- Good time management and organizational skills.
- Dependability
- Accountability and Ownership
- Right attitude and growth mindset
- Trustworthiness
- Ability to embrace new technologies
- Ability to get work done
- Should have excellent analytical and troubleshooting skills.
Technical Skills
- Work with developer teams with a focus on automating build and deployment using tools such as Jenkins.
- Implement CI/CD in projects (GitLabCI preferred).
- Enable software build and deploy.
- Provisioning, day-to-day operations, and automation using tools such as Ansible and Bash.
- Plan, write, and maintain infrastructure as code using Terraform.
- Monitoring and ITSM automation (incident creation from alerts) using licensed and open-source tools.
- Manage credentials for AWS cloud servers, GitHub repos, Atlassian Cloud services, Jenkins, OpenVPN, and the developers' environments.
- Building environments for unit tests, integration tests, system tests, and acceptance tests using Jenkins.
- Create and spin off resource instances.
- Experience implementing CI/CD.
- Experience with infrastructure automation solutions (Ansible, Chef, Puppet, etc.).
- Experience with AWS.
- Should have expert Linux and Network administration skills to troubleshoot and trace symptoms back to the root cause.
- Knowledge of application clustering / load balancing concepts and technologies.
- Demonstrated ability to think strategically about developing solution strategies, and deliver results.
- Good understanding of cloud-native application design, and of cloud application design patterns and practices in AWS.
Day-to-Day requirements
- Work with the developer team to enhance the existing CI/CD pipeline.
- Adopt industry best practices to set up a UAT and prod environment for scalability.
- Manage the AWS resources, including IAM users, access control, billing, etc. (see the sketch after this list).
- Work with the test automation engineer to establish a CI/CD pipeline.
- Make replication of environments easy to implement.
- Enable efficient software deployment.
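As a small illustration of the IAM-management duties above, the sketch below audits IAM users and their access keys with boto3; it assumes AWS credentials are available from the environment or an AWS profile, and it is read-only.

```python
# List each IAM user and the status of their access keys (read-only audit).
import boto3

iam = boto3.client("iam")


def audit_access_keys():
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                print(user["UserName"], key["AccessKeyId"], key["Status"])


if __name__ == "__main__":
    audit_access_keys()
```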
Roles & Responsibilities
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling
- Experience with data centre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux, from kernel mechanisms through user-space management
- Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports and dashboards.
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime across globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production.
- Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
- Strong understanding of, and hands-on experience with:
- The Apache Spark framework, specifically Spark Core and Spark Streaming (see the sketch after this list)
- Orchestration platforms: Mesos and Kubernetes
- Data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
- Core presentation technologies: Kibana and Grafana
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
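As a rough sketch of the Spark-plus-Kafka stack above, the PySpark job below reads a Kafka topic with Structured Streaming and prints per-key counts to the console; the broker and topic are placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
# Structured Streaming: count events per key from a Kafka topic.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-stream-sketch")
         .getOrCreate())

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder
          .option("subscribe", "events")                        # placeholder topic
          .load())

counts = (events
          .select(col("key").cast("string"))
          .groupBy("key")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()
```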
Certification
- Red Hat Certified Architect certificate or equivalent required
- CCNA certificate required
- 3-5 years of experience running open-source big data platforms
Responsibilities:
- Strong Technical Chops: You should know how to build highly scalable, robust, and fault-tolerant services that support our unique rate-of-growth requirements.
- You should be on top of the latest architectural trends.
- Fast Learners: We are looking for folks who thrive on new technologies and don't believe in one-size-fits-all solutions.
- You should be able to adapt easily to meet the needs of our massive growth and rapidly evolving business environment.
- You understand requirements beyond the written word.
- Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Requirements:
- Strong knowledge of MySQL, NoSQL, PostgreSQL, ElasticSearch.
- Experience in Java and web technologies.
- Experience in any one scripting language like Python.
- Hands-on experience with systems that are asynchronous, RESTful and demand concurrency (see the sketch after this list).
- Knowledge of best software engineering practices for all stages of the software development life cycle, including coding standards, code reviews, testing and deployment.
- Good experience with and exposure to cloud-native architecture, development and deployment on public clouds (AWS, Google Cloud, etc.)
- Responsible for Linux server installation, maintenance, monitoring, data backup and recovery, security and administration
- Understanding of clusters, distributed architecture, container environments
- Experience in networking, including Linux, software-defined networking, network virtualization, open protocols, application acceleration and load balancing, DNS, virtual private networks
- Knowledge of common middleware such as MySQL, Apache, etc.
- Responsible for managing network storage
- Disaster recovery and incident response planning
- Configuring/monitoring firewalls, routers, switches and other network devices
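For illustration of asynchronous, concurrent REST calls, here is a minimal Python sketch using asyncio and aiohttp; the URLs are placeholders, and the point is issuing many non-blocking HTTP requests concurrently rather than sequentially.

```python
# Fetch several JSON endpoints concurrently with asyncio + aiohttp.
import asyncio

import aiohttp


async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        resp.raise_for_status()
        return await resp.json()


async def main(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))


if __name__ == "__main__":
    urls = [
        "https://api.example.com/items/1",  # placeholder endpoints
        "https://api.example.com/items/2",
    ]
    print(asyncio.run(main(urls)))
```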
Responsibilities and Duties
- Support the globally distributed cloud development teams by maintaining the cloud infrastructure labs hosted in a hybrid cloud environment
- Contribute towards optimization of the performance and cost of running the labs