11+ IP PBX Jobs in Bangalore (Bengaluru)
Apply to 11+ IP PBX Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest IP PBX Job opportunities across top companies like Google, Amazon & Adobe.
Job Description
Senior Cloud Infrastructure Engineer for Data Platform
Experience: 5-9 years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid (3 days WFO)
The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.
Key Responsibilities:
Cloud Infrastructure Design & Management
Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.
Optimize cloud costs and ensure high availability and disaster recovery for critical systems.
Databricks Platform Management
Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.
Automate cluster management, job scheduling, and monitoring within Databricks (see the sketch after these responsibilities).
Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
CI/CD Pipeline Development
Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.
Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
Monitoring & Incident Management
Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.
Security & Compliance
Enforce security best practices, including identity and access management (IAM), encryption, and network security.
Ensure compliance with organizational and regulatory standards for data protection and cloud operations.
Collaboration & Documentation
Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.
Maintain comprehensive documentation for infrastructure, processes, and configurations.
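For illustration, here is a minimal sketch of the kind of cluster and job automation these responsibilities describe, assuming the Databricks Jobs REST API 2.1 called via plain `requests`; the workspace host, token, notebook path, runtime version, and node type are placeholders, not values from this posting.

```python
import os
import requests

# Placeholders: set these to your workspace URL and a personal access token.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-1234567890.7.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def create_nightly_job(notebook_path: str) -> int:
    """Create a scheduled Databricks job on a small autoscaling job cluster."""
    payload = {
        "name": "nightly-etl",
        "tasks": [{
            "task_key": "etl",
            "notebook_task": {"notebook_path": notebook_path},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",   # assumption: a current LTS runtime
                "node_type_id": "Standard_DS3_v2",     # assumption: an Azure node type
                "autoscale": {"min_workers": 1, "max_workers": 4},
            },
        }],
        # Run at 02:00 UTC every day.
        "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
    }
    resp = requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()["job_id"]
```

In practice a definition like this would usually live in Terraform or an Azure DevOps pipeline rather than an ad-hoc script, so the job configuration stays versioned alongside the rest of the infrastructure.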
Required Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Must Have Experience:
6+ years of experience in DevOps or Cloud Engineering roles.
Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.
Hands-on experience with Databricks for data engineering and analytics.
Technical Skills:
Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.
Strong scripting skills in Python or Bash.
Experience with containerization and orchestration tools like Docker and Kubernetes.
Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Profile: Big Data Engineer (System Design)
Experience: 5+ years
Location: Bangalore
Work Mode: Hybrid
About the Role
We're looking for an experienced Big Data Engineer with system design expertise to architect and build scalable data pipelines and optimize big data solutions.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using Python, Hive, and Spark
- Architect scalable big data solutions with strong system design principles
- Build and optimize workflows using Apache Airflow (a minimal DAG sketch follows this list)
- Implement data modeling, integration, and warehousing solutions
- Collaborate with cross-functional teams to deliver data solutions
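To make the Airflow item concrete, here is a minimal sketch of a daily ETL DAG, assuming Airflow 2.x; the task bodies and DAG id are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical: pull raw events from Hive or an upstream source.
    ...

def transform_and_load():
    # Hypothetical: clean the extract with Spark and load it into the warehouse.
    ...

with DAG(
    dag_id="daily_events_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)
    extract_task >> load_task           # run extract before transform/load
```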
Must-Have Skills
- 5+ years as a Data Engineer with Python, Hive, and Spark
- Strong hands-on experience with Java
- Advanced SQL and Hadoop experience
- Expertise in Apache Airflow
- Strong understanding of data modeling, integration, and warehousing
- Experience with relational databases (PostgreSQL, MySQL)
- System design knowledge
- Excellent problem-solving and communication skills
Good to Have
- Docker and containerization experience
- Knowledge of Apache Beam, Apache Flink, or similar frameworks
- Cloud platform experience.
Location: Hybrid/Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
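To make the embed-retrieve-generate loop concrete, here is a minimal FastAPI sketch; it uses the OpenAI embeddings and chat APIs with a naive in-memory cosine search standing in for MongoDB Atlas Vector Search, and every name in it (routes, models, store) is illustrative rather than taken from this posting.

```python
import numpy as np
from fastapi import FastAPI
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy in-memory store; production would use an Atlas $vectorSearch index instead.
DOCS: list[tuple[str, np.ndarray]] = []

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

@app.post("/ingest")
def ingest(chunk: str):
    """Chunked text goes in; its embedding is upserted into the store."""
    DOCS.append((chunk, embed(chunk)))
    return {"count": len(DOCS)}

@app.post("/chat")
def chat(question: str):
    """Retrieve the top-3 chunks by cosine similarity and answer with that context."""
    q = embed(question)
    top = sorted(DOCS, key=lambda d: -cosine(q, d[1]))[:3]
    context = "\n".join(text for text, _ in top)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return {"answer": completion.choices[0].message.content}
```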
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face); a sketch follows this list.
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
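As a reference point for the fine-tuning item above, here is a minimal parameter-efficient fine-tuning sketch with Hugging Face `transformers` and `peft`; the base checkpoint and `target_modules` are assumptions that vary by model architecture.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumption: any causal-LM checkpoint you have access to
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of the full weights,
# which is what makes cheap per-user adapters feasible.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-specific
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```

The resulting adapter can be saved and loaded independently of the base model, which is what makes hosting one adapter per user practical.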
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.
About Company:
Nomiso is a product and services engineering company. We are a team of Software Engineers, Architects, Managers, and Cloud Experts with expertise in Technology and Delivery Management.
Our mission is to empower and enhance the lives of our customers through efficient solutions to their complex business problems.
At Nomiso we encourage an entrepreneurial spirit: to learn, grow, and improve. A great workplace thrives on ideas and opportunities; that is part of our DNA. We are in pursuit of colleagues who share similar passions, are nimble, and thrive when challenged. We offer a positive, stimulating, and fun environment with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged.
We invite you to push your boundaries and join us in fulfilling your career aspirations!
What You Can Expect from Us:
Here at NomiSo, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do, at all levels of the company. Let’s make your career great!
Responsibilities:
- Create, deploy, monitor, and maintain high-performance, scalable microservices in production.
- Design and develop RESTful services, perform data analysis, and troubleshoot and resolve complex issues.
- Take end-to-end ownership of complex technical projects from planning through execution.
- Build, optimize, and manage the ad solution platform at the enterprise level.
- Perform code reviews and manage technical debt.
- Handle release deployments and production issues.
Required Skills:
- Overall 8+ years of experience in application development using Java, including creating and deploying microservices with the Spring Boot framework.
- Strong experience with Maven.
- Good experience in unit (JUnit) and integration testing.
- Experience in microservices is a must.
- Experience in designing and developing REST-based services and microservices.
- Experience with delivering projects in an agile environment using Scrum methodologies.
- Good communication skills (written and verbal).
- Excellent analytical and problem-solving skills.
- Experience with at least one of MongoDB, MariaDB, RabbitMQ, PostgreSQL, or other NoSQL servers.
- Experience in AWS and CI/CD.
Good to have:
- Experience using container management tools such as Kubernetes, Docker, and Rancher.
We are an IT recruitment service provider based out of Gurgaon. We are partners with various established names like Wipro, Aon, Infinite Computer Solutions, IBM, TCS, Fiserv, Accenture, and many more.
Please find the job description for this position below.
JD for Salesforce Tech Lead
Experience level: 7+ years
Location: Pune, Bangalore, Hyderabad, Chennai, Jaipur
Notice period: Immediate to 15 days
Skills:
Salesforce development, LWC, REST API, lead experience
GrabbnGo is the first in the world to provide a multi-order delivery-at-the-gate service in airports, and we are working on being the best! Now imagine the technology behind it: handling multiple orders, next-gen tracking, indoor mapping, predicting security queues, and coordinating everything to serve the customer on time, so they board the flight with a smile. If you are geared to that excitement, it's time to interview with us.
Your Competency Profile
- GraphQL API experience (see the sketch after this list)
- At least 3+ years of experience developing and deploying websites and applications
- Any of Git, JS, RabbitMQ, Celery, Nginx, Google API, DevOps, or GCP experience is an additional strength
- In-depth knowledge of Python and Django (3+ years)
- SQL databases, NoSQL databases, Elasticsearch
- Expertise in creating and maintaining RESTful API services (3+ years)
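For the GraphQL item above, here is a minimal sketch with the `graphene` library; the order schema and resolver data are hypothetical (a Django project would typically expose this through graphene-django and the ORM).

```python
import graphene

class OrderType(graphene.ObjectType):
    id = graphene.ID()
    gate = graphene.String()
    status = graphene.String()

class Query(graphene.ObjectType):
    order = graphene.Field(OrderType, id=graphene.ID(required=True))

    def resolve_order(root, info, id):
        # Hypothetical lookup; a real resolver would query the Django ORM.
        return OrderType(id=id, gate="A12", status="out_for_delivery")

schema = graphene.Schema(query=Query)
result = schema.execute('{ order(id: "42") { gate status } }')
print(result.data)  # {'order': {'gate': 'A12', 'status': 'out_for_delivery'}}
```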
Role: Project Manager
Experience: 6+ years
Education Qualification: Graduate/Postgraduate
Responsibilities:
• Manage technical aspects of projects, including planning, execution, and delivery
• Translate product strategy into detailed requirements for prototype construction and final product development
• Create Functional and Technical specification documents, translate application storyboards and use cases into functional applications
• Track project progress, identify and mitigate risks, and ensure project deliverables are completed on time and within budget
• Deliver new Bodhee features using Agile delivery programs
• Ensure project quality and adherence to industry standards and best practices
• Act as the single point of contact between the team and Bodhee stakeholders
• Identify and implement process improvements to increase efficiency and reduce costs
• Promote teamwork; motivate, mentor, and develop subordinates
• Manage and take ownership of the product, including defining scope and developing requirements for product launch
Requirements
• Overall 6+ years of experience, including 3+ years as a project manager on development projects, along with hands-on technical experience
• Proven experience as a technical project manager, with a track record of delivering projects on time and within budget
• Experience with agile software development methodologies.
• Strong experience with Java and UI design and development
• Strong experience leading development teams utilizing Java and UI technologies.
• Working experience with Data Analytics products is a plus
• Strong understanding of RDBMS and working experience with SQL
• Experience with developing microservices-based applications
• Excellent problem-solving and analytical skills
• Excellent communication skills
Rapyuta Robotics is seeking talented and ambitious individuals with a can-do attitude to help revolutionize robotics. We’re creating a whole new generation of multi-agent aerial and ground-based mobile robotic platforms with access to an inexhaustible supply of data and processing capabilities: the cloud. Our units will be capable of working autonomously and collaboratively, learning from their collective experiences and continuously improving upon themselves.
Your tasks will include, but are not limited to, the following:
- Software Quality Assurance Testing - including verification of functionality and validation of requirements
- Developing test harnesses, framework, and general troubleshooting
- Develop data-driven test automation pipelines
- Design and author test cases based on the functional specs
- Analyse and debug the test data to identify the root cause of failures
- Review product requirements, engineering specs to develop automation test plans and strategy
- Develop testing frameworks, testing tools, API tests, integration tests, performance tests, stress tests, functional tests, and end-to-end automation test suites (a minimal sketch follows this list)
- Work with the development team to support testing of web front-end and back-end services
- Perform code analysis and look for ways to improve test coverage
- Lead quality production releases and be the point person for investigating any related issues
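To give a flavour of the automation work above, here is a minimal pytest + Selenium sketch; the URL and locator are placeholders, not part of any actual product under test.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    # Headless Chrome keeps the test runnable in CI.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_login_page_shows_submit_button(driver):
    driver.get("https://example.com/login")  # placeholder URL
    button = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
    assert button.is_displayed()
```

In a data-driven suite the URL and expected elements would come from parameterized fixtures rather than literals, so the same test covers many pages.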
Requirements
Minimum qualifications:
- B.S. degree in Computer Science, similar technical field of study, or equivalent practical experience with an outstanding track record.
- At least 5 years of experience with automation testing
- Mastery of one or more of the following programming languages including but not limited to Java, C/C++, Python
- Must have experience in test automation, agile testing, continuous integration, functional testing, and API testing
- Familiarity with testing tools such as Selenium, Cucumber, etc.
- Experience with testing frameworks like TestNG, JUnit or something similar
- Experience in relational databases (MySQL etc.)
- Experienced in design and implementation of test scripts, test data & UI testing of web services
- Experienced in CI/CD development process and methodology
- Has excellent verbal and written English communication skills
- High degree of initiative and proven analytical problem-solving skills
Preferred qualifications:
- Start-up mindset
- Contributions to open-source projects
- Fundamental understanding and experience with one or more Agile methodologies
Location: Bangalore, India
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains the scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as click-stream and transactional data (see the sketch after this list)
- Create and maintain optimal data pipeline architecture and ETL processes
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Develop data pipelines and infrastructure to support real-time decisions
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
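As one concrete shape for the pipeline work above, here is a minimal PySpark sketch that ingests click-stream JSON from S3 and writes a cleaned, partitioned Parquet data set; the bucket names and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-etl").getOrCreate()

# Hypothetical input path and schema.
raw = spark.read.json("s3a://raw-bucket/clickstream/2024-01-01/")

cleaned = (
    raw.filter(F.col("user_id").isNotNull())          # drop anonymous noise
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])                  # ingestion can replay events
)

# Partitioned Parquet keeps downstream loads (e.g. Redshift COPY) cheap.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://curated-bucket/clickstream/"))
```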
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with large-scale data
- Proficiency in writing and debugging complex SQL queries
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technology
Data Pipelining
- Strong command of building and optimizing data pipelines, architectures, and data sets
- Strong command of relational SQL and NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing: RabbitMQ, etc.
Software Development & Debugging
- Strong experience in object-oriented programming and object-function scripting languages: Python, Java, C++, Scala, etc.
- Strong hold on data structures and algorithms
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience at payments, fraud, lending, or advertising companies dealing with large-scale data