11+ RIP Jobs in Bangalore (Bengaluru) | RIP Job openings in Bangalore (Bengaluru)
Experience: 2 to 8 Years
Technical skills:
> Experience with L2/L3 networking / datacom protocols.
> Experience in Python scripting is a must.
> Experience in QA automation testing.
Note: This is a direct, permanent role with us, with a hybrid work model.
Please acknowledge the mail with the below details and also share your updated CV:
Total Experience :
Python Automation Testing Exp :
Networking Experience:
Current CTC:
Expected CTC:
Notice Period:
Fine with the hybrid model (3 days working from the office): Yes / No
Fine with the location (Madiwala Happiest Minds office): Yes / No
As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do:
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
Must-Haves :
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB, etc.).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have :
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
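As a flavor of the event-driven patterns listed above (Pub/Sub, MQTT), here is a minimal in-memory publish/subscribe sketch in Python. Topic names and payloads are illustrative only; a production system would use a managed broker like Google Pub/Sub or an MQTT server rather than this toy bus.

```python
from collections import defaultdict
from typing import Callable

class PubSub:
    """Minimal in-memory publish/subscribe bus (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # Register a handler to be called for every message on `topic`.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> int:
        # Deliver the message to all subscribers; return how many were notified.
        handlers = self._subscribers.get(topic, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

bus = PubSub()
received = []
bus.subscribe("sensor.readings", received.append)  # hypothetical topic name
bus.publish("sensor.readings", {"device": "cam-01", "temp_c": 41.5})
```

The decoupling shown here (publishers never reference subscribers directly) is the core property that makes event-driven systems easy to extend.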
What We Offer :
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.
· Participate in analysis, design, and new development of Appian based applications
· Build applications: interfaces, process flows, expressions, data types, sites, integrations, etc.
· Proficient with SQL queries and with accessing data present in DB tables and views
· Experience in analysis, designing process models, Records, Reports, SAIL, forms, gateways, smart services, integration services, and web services
· Experience working with different Appian object types, query rules, constant rules, and expression rules
Primary Responsibilities:
· Responsible for systems analysis for a designated set of applications
· Work closely with the BA, System Architect, and Delivery Manager. Own the accurate translation of business requirements into high-level design and system requirements specifications.
· Ensure sign-off of the SRS and High-Level Design Specification
· Assist the PM in estimating the effort to deliver the solution based on the SRS and the timelines.
· Liaise with Infra teams to produce an infrastructure solution design and requirements when the proposed solution involves infrastructure components
· Provide further clarity and detail on feasible options proposed by the BA and help select the right option in consultation with the Design Authority.
· Work closely with the Application Development team (Tech Delivery Lead) and Testing teams (Test Manager and Test Engineer) to ensure that the low-level design, test plans, and test cases are aligned with the approved SRS.
· Participate in progress review meetings, and review and sign off deliverables produced by the technical delivery and testing teams.
Qualifications
- B.Sc. (Computer Science) or B.E.
· Minimum 5 years of experience in the Insurance domain
· At least 4 years of experience implementing BPM solutions using Appian 19.x or higher
· Over 5 years implementing IT solutions using BPM or integration technologies
· Experience with Scrum/Agile methodologies on enterprise-level application development projects
· Good understanding of database concepts and strong working knowledge of any one of the major databases, e.g., Oracle, SQL Server, MySQL
Additional information
Skills Required
· Appian BPM application development and System Analysis
· 8-10 years of proven software System Analysis and design experience
· Ability to work on large and complex projects.
· Strong technical knowledge of existing Insurance/F&A applications
· Excellent documentation, communication, and presentation skills
· Ability to understand business requirements and analyze and translate them into system requirements
Role: Sr. Data Scientist
Exp: 4 -8 Years
CTC: up to 28 LPA
Technical Skills:
o Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
o Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
o Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
o Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
o Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
o Exposure to natural language processing (NLP) techniques is a plus.
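The preprocessing and augmentation skills above can be sketched minimally in Python with NumPy. This is a toy example, not the role's actual pipeline: real workflows would typically use torchvision or tf.image, and the array shapes here are assumptions.

```python
import numpy as np

def augment_batch(images: np.ndarray, flip: bool = True) -> np.ndarray:
    """Normalize uint8 images to [0, 1] and optionally add horizontal flips.

    `images` has shape (N, H, W, C); when flip=True, the batch is doubled
    by appending mirrored copies -- a common cheap augmentation.
    """
    x = images.astype(np.float32) / 255.0          # scale pixels to [0, 1]
    if flip:
        flipped = x[:, :, ::-1, :]                  # reverse the width axis
        x = np.concatenate([x, flipped], axis=0)    # original + mirrored
    return x

batch = np.zeros((4, 32, 32, 3), dtype=np.uint8)    # dummy 32x32 RGB images
out = augment_batch(batch)
```

Doubling the batch with flips, as shown, is a standard way to expose a CNN to pose variation without collecting more data.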
Cloud & Infrastructure:
o Strong expertise in the Azure cloud ecosystem.
o Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.
If interested, kindly share your updated resume at 82008 31681.
- Designing, developing, testing, and deploying mobile applications using the Flutter framework and Dart language.
- Developing user interface components and ensuring cross-platform functionality on both Android and iOS.
- Communicating and collaborating with cross-functional teams to define, design, and ship new features.
- Maintaining and improving code quality, organization, and automation.
- Staying updated with the latest Flutter and Dart developments and best practices.
Data Engineer- Senior
Cubera is a data company revolutionizing big data analytics and AdTech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves toward Web3.
What are you going to do?
Design & Develop high performance and scalable solutions that meet the needs of our customers.
Work closely with Product Management, Architects, and cross-functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in the big data stack.
Use your engineering experience and technical skills to drive the features and mentor the engineers.
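The pipeline work described above can be sketched, in miniature, as a pure-Python batch job: filter malformed records, then aggregate. This stands in for what a Spark or Flink job would do at scale; the field names are made up for illustration.

```python
from collections import Counter
from typing import Iterable

def run_pipeline(events: Iterable[dict]) -> dict:
    """Toy batch pipeline: drop malformed events, then count events per user.

    A real implementation would run as a Spark/Flink/Beam job; the shape
    (filter -> aggregate) is what carries over.
    """
    cleaned = (e for e in events if "user_id" in e and "event" in e)
    counts = Counter(e["user_id"] for e in cleaned)
    return dict(counts)

events = [
    {"user_id": "u1", "event": "click"},
    {"user_id": "u1", "event": "view"},
    {"user_id": "u2", "event": "click"},
    {"bad": "record"},  # malformed: dropped by the filter stage
]
result = run_pipeline(events)
```

Keeping the filter as a generator means records stream through without materializing an intermediate list, which is the same idea distributed engines apply across a cluster.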
What are we looking for (Competencies):
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall, 5 to 8 years of programming experience in Java or Python, including object-oriented design.
Data handling frameworks: Should have working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data Infrastructure: Should have experience building, deploying, and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.
Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Benefits:
Competitive Salary Packages and benefits.
A collaborative, lively, and upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
We are looking for a Senior Software Engineer with expertise and experience in designing and automating testing frameworks for web applications. The role involves continuous collaboration with the development team and partners.
Responsibilities:
Write test cases for the application
Automate the test cases using Selenium
Create integration tests to ensure the quality of code
Build the testing framework to automate Web API testing using core Java
Build the framework using Selenium to automate UI testing
Address and improve any technical issues
Maintain a strong commitment to quality and delivery
Collaborate well with engineers and specialists to design and create advanced, elegant, and efficient systems
Implement continuous integration and automate the regression test suite
Write code that is cross-platform and cross-device compatible
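One common way to structure the UI-automation framework described above is the Page Object pattern. The sketch below uses Python with the Selenium driver replaced by a mock so it runs without a browser; the element IDs and page are invented for illustration, and a real suite would pass in `selenium.webdriver.Chrome()` and use `By.ID` locators.

```python
import unittest
from unittest.mock import MagicMock

class LoginPage:
    """Page Object: wraps raw driver calls behind a readable API."""

    def __init__(self, driver):
        self.driver = driver

    def login(self, user: str, password: str) -> None:
        # Locator strings mimic Selenium's (by, value) style; IDs are made up.
        self.driver.find_element("id", "user").send_keys(user)
        self.driver.find_element("id", "pass").send_keys(password)
        self.driver.find_element("id", "submit").click()

class LoginTest(unittest.TestCase):
    def test_login_clicks_submit(self):
        driver = MagicMock()  # stand-in for a real WebDriver instance
        LoginPage(driver).login("alice", "secret")
        driver.find_element.assert_any_call("id", "submit")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(LoginTest)
)
```

Because tests talk to `LoginPage` rather than raw locators, a UI change touches one class instead of every test, which is the main payoff of the pattern.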
DeepSource is working on building tools that help developers ship good code. There are over 40 million developers in the world, and all of them write and review code in some form. The Language Engineering team works on the source code analyzers, covering both programming languages and configuration-as-code systems. As a member of the Language Engineering team, you will work on building the best, most comprehensive Ruby analyzer in the world. You will add new rules and Autofixes for finding more issues with code and automatically fixing them. You will be involved with the community to understand the problems with static analysis tools in the Ruby ecosystem.
As a member of the Language Engineering team, you will:
- Identify bad code practices in Ruby and write new analyzers to detect them.
- Improve the coverage of automatically fixable issues.
- Ensure fewer false positives are reported by the analyzer.
- Work on the internal tools that support analysis runtimes.
- Contribute to open-source static analysis tools.
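The analyzer-writing work above boils down to walking a syntax tree and flagging known bad patterns. As a small analogue, here is a Python sketch using the standard `ast` module (standing in for the Ruby parsers the team actually uses) that detects one classic issue, mutable default arguments:

```python
import ast

def find_mutable_default_args(source: str) -> list:
    """Flag functions whose default argument is a mutable literal.

    Walks the AST and reports (function_name, line_number) pairs --
    the same detect-by-pattern shape a lint rule for any language takes.
    """
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                # List/Dict/Set literals are shared across calls: a bug magnet.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    issues.append((node.name, default.lineno))
    return issues

code = "def add(item, bucket=[]):\n    bucket.append(item)\n    return bucket\n"
issues = find_mutable_default_args(code)
```

An Autofix for this rule would rewrite the default to `None` and add an in-body initialization, which is why analyzers keep the precise node locations around.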
We’re looking for someone who has:
-
Strong foundational knowledge in Computer Science.
-
At least 3 years of professional software development experience in Ruby.
-
Understanding of the nuances of execution of the source code (AST, data flow graphs, etc).
-
Familiarity with Ruby best practices followed in the industry.
-
Native experience with Linux/Unix environment.
-
A focus on delivering high-quality code through strong testing practices.
We offer competitive compensation with meaningful stock options, a generous vacation policy, and a workstation of your choice, to name a few of the perks.
- Assisting developers to write efficient code.
- Tuning and debugging customer installations.
- Maintaining internal development and test databases.
- Working closely with leading experts in Supply Chain management to support various E2open teams.
- Providing on-call support in a 24x7 environment.
The position requires night shift work to support the US time zone.
Required Experience/Skills:
- Bachelor's/Master's/PhD degree in Engineering, Computer Science, Mathematics, or other Science with a consistent academic record (Preferably more than 70%)
- Oracle-certified DBA with 3+ years of DBA experience
- Oracle 12c and 19c experience is a must.
- Experience in Performance Tuning and Query Optimization is a must.
- Expertise and development experience in PL/SQL and SQL
- Experience in Oracle installation, upgrade, backup, and recovery methods.
- Experience in Unix/Linux shell scripting.
- Track record of delivering quality work on time and ability to expand own expertise.
- Self-motivated, detail-oriented, and team-oriented
- Solid verbal and written communication skills
- Enjoys a dynamic, result-oriented work culture.
- Excellent problem-solving and troubleshooting skills.
- Experience considered a plus:
- MySQL and PostgreSQL
- Knowledge of server, storage, and networking technology.
- Supply Chain background
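The performance-tuning and query-optimization work named in the requirements often starts with reading query plans. The sketch below uses Python's built-in sqlite3 as a stand-in for Oracle (where you would use `EXPLAIN PLAN` and `DBMS_XPLAN` instead); the table and index names are invented, but the idea of verifying that a predicate hits an index carries over.

```python
import sqlite3

# In-memory database; Oracle would use EXPLAIN PLAN / DBMS_XPLAN instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
conn.executemany(
    "INSERT INTO orders (customer_id) VALUES (?)",
    [(i % 10,) for i in range(100)],
)

# Ask the planner how it will execute the lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (3,)
).fetchall()
plan_text = " ".join(str(row) for row in plan)
```

If the plan text mentions the index, the equality predicate is being satisfied by an index search rather than a full table scan, which is exactly what a tuning pass checks for.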