11+ RCP Jobs in Hyderabad | RCP Job openings in Hyderabad
- Knowledge of Model to Code Generation
- Ability to work independently, with minimal training and direct guidance
- Ability to respond to customer inquiries quickly
- Ability to quickly modify/set up routes
- Familiarity with Rhapsody secure transmission protocols (e.g. Secure File Transfer (SFT) and Simple Object Access Protocol (SOAP) routes)
- Prior experience with protocols like OSLC, SOAP, and REST APIs
- Ability to identify and resolve exceptions in electronic data exchange between EMR data submitters and data recipients
- Knowledge of HL7/XML/FHIR/EDI standards
- Strong in building JUnit tests during development
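The HL7 requirement above is concrete enough to sketch: HL7 v2 messages are pipe-delimited segments, so a minimal field extractor (the sample segment and field positions are illustrative only, not from any real feed) can look like this:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of pulling fields out of a pipe-delimited HL7 v2 segment.
public class Hl7SegmentReader {
    // HL7 v2 fields are separated by '|'; split(..., -1) keeps trailing empty fields.
    public static List<String> fields(String segment) {
        return Arrays.asList(segment.split("\\|", -1));
    }

    public static void main(String[] args) {
        String pid = "PID|1||12345^^^HOSP||DOE^JOHN||19700101|M";
        List<String> f = fields(pid);
        System.out.println("Segment type: " + f.get(0));       // PID
        System.out.println("Patient name field: " + f.get(5)); // DOE^JOHN
    }
}
```

Real interface work would also handle component separators (`^`), escape sequences, and repetition, but the split above is the core of routing on segment and field values.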
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.
Role
Backend engineers at AssetSprout work on our products. These include software for Certified Financial Planners and their clients, as well as internal admin tools. Engineers work with the CTO, frontend engineers, and other backend engineers to deliver on the company's vision.
Responsibilities
- Develop and own product features end to end in a scalable, secure and maintainable way. The buck stops with you on whatever you own.
- Provide technical solutions through design, architecture and implementation. Wear multiple hats in delivering greenfield projects from concept to production.
- Establish and advocate coding styles and best practices, and bring your experience to scaling the product from MVP to production.
- Iterate fast. Display maturity in prioritizing velocity while balancing quality. As a startup, we make or break on how fast we deliver.
- Teach and mentor other backend engineers. Focus on providing technical expertise and solutions regardless of how long one has been working professionally.
Requirements
- We are language and framework agnostic as long as you can pick up new technologies.
- Expert-level coding skills in at least one programming language, preferably Java or Kotlin. Experience in Python, C++, Scala, etc. is welcome.
- Experience developing web applications and services using Spring Boot. Experience with Akka, Play, Flask, or Django is welcome.
- Write automated tests with any of the frameworks. We measure success on how well your code is unit tested and integration tested.
- Advanced-level understanding of RDBMS systems, preferably Postgres. Working knowledge of non-relational databases such as DynamoDB or Cassandra is helpful.
- Able to use CI/CD tools such as CircleCI, GitLab, Jenkins etc. and create workflows and pipelines to release to production every other day.
- Expert level understanding of RESTful APIs, pagination, networking concepts around HTTP, thread pools, and other server-side concepts.
- Solid experience with AWS services. Some directly relevant services are Lambda, EC2, S3, DynamoDB, RDS, EventBridge, SQS, ElastiCache (Redis), load balancers, etc.
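As a sketch of the server-side concepts the requirements name (RESTful pagination in particular), here is the offset/limit arithmetic a paginated endpoint typically performs; the endpoint shape and parameter names are hypothetical, not from the posting:

```java
// Illustrative offset/limit pagination math for a REST endpoint.
public class Pagination {
    // Translate a 1-based page number and page size into a row offset.
    public static int offset(int page, int pageSize) {
        if (page < 1 || pageSize < 1) {
            throw new IllegalArgumentException("page and pageSize must be >= 1");
        }
        return (page - 1) * pageSize;
    }

    // Total pages needed to cover totalItems rows (ceiling division).
    public static int totalPages(int totalItems, int pageSize) {
        return (totalItems + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        // e.g. GET /items?page=3&size=20 -> rows 40..59
        System.out.println(offset(3, 20));      // 40
        System.out.println(totalPages(95, 20)); // 5
    }
}
```

Production APIs often prefer cursor-based pagination for large or frequently changing datasets, since deep offsets force the database to scan and discard rows.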
Good-to-haves
- Early or mid-stage startup experience
- Eager to work in a flat organization with no corporate politics
- Positive energy with a get-it-done attitude.
- Worked in a remote environment and high trust and high responsibility role
- Working knowledge of build systems like Gradle, Maven, Bazel, Webpack, etc. We use Gradle.

at Altimetrik
Java with cloud
- Core Java, Spring Boot, Microservices
- DB2 or any RDBMS database application development
- Linux OS, shell scripting, batch processing
- Troubleshooting large-scale applications
- Experience in automation and unit test frameworks is a must
- AWS Cloud experience desirable
- Agile development experience
- Complete development cycle (Dev, QA, UAT, Staging)
- Good oral and written communication skills
Basic Qualifications:
- 2+ years of non-internship professional software development experience
- Programming experience with at least one modern language such as Java, C++, or C# including object-oriented design
- 1+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems.
Additional Job requirements
- BS/MS/PhD in Computer Science, Math, or a related field, or equivalent experience.
- 2+ years of relevant professional software development experience: designing, developing, and delivering software written in Java, C#, or C++, web development.
- Knowledgeable in object-oriented design patterns.
- Experience building highly scalable and distributed systems
Preferred Qualifications
- Experience with service-oriented architecture and application/services development
- Passion for performance debugging and benchmarking
- Ability to clearly and concisely communicate with technical and non-technical stakeholders across all levels of the organization
An ideal candidate must possess excellent logical and analytical skills. You will work both in a team and on diverse projects, and must be able to deal smoothly and confidently with clients and personnel.
Key roles and Responsibilities:
- Able to design and build efficient, testable, and reliable code.
- Should be a team player, sharing ideas with the team for continuous improvement of the development process.
- Good knowledge of Spring Boot, Spring MVC, J2EE, and SQL queries.
- Stay updated on new tools, libraries, and best practices.
- Adaptable and self-motivated; must be willing to learn new things.
- Good knowledge of HTML, CSS, and JavaScript.
Basic Requirements:
- Bachelor's degree in Computer Science, Engineering/IT, or a related discipline with a good academic record.
- Excellent communication and interpersonal skills.
- Knowledge of the SDLC flow from requirement analysis to the deployment phase.
- Should be able to design, develop, and deploy applications.
- Able to identify bugs and devise solutions to resolve them.

Client of Kavine Infoservices @ HYD
Position : Lead / Developer - Guidewire
Experience : 7 – 12 Yrs
Client : a Digital Transformation Company
Location : Hyderabad, IND
Key Skills : Java/J2EE, Gosu, GW plug-ins, Jenkins/Mavens/TFS, Guidewire BillingCenter Functional & Integration, GX model, BillingCenter data model, P&C Insurance domain, SQL, JMS, Message Queue
Skills Required :
- Hands-on experience with Document Integration in PolicyCenter and BillingCenter.
- Hands-on experience in Rating Management (rate tables, routines, Product Model, etc.)
- Hands-on experience with BillingCenter components like Invoice, Disbursement, Delinquency, Payment Plan, Billing Plan, etc.
- Hands-on experience in Configuration and Entity customization.
- Should be ready to relocate to Hyderabad.
- Pre-Covid WFH; post-Covid, need to work from the office.
- Should be ready to work in US shifts (EST timings).
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S. in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems, and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
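To illustrate the multithreaded programming model the requirements call out, here is a minimal fan-out/fan-in sketch using a fixed thread pool; the workload (summing chunks of an array) is illustrative only, not Dremio's actual query engine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Fan-out: split the input into chunks and submit one task per worker.
// Fan-in: collect partial results from the Futures and combine them.
public class ParallelSum {
    public static long sum(long[] data, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            int chunk = (data.length + workers - 1) / workers; // ceiling division
            List<Future<Long>> parts = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int lo = w * chunk;
                final int hi = Math.min(data.length, lo + chunk);
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get(); // blocks until done
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data, 4)); // 500500
    }
}
```

The same fan-out/fan-in shape underlies distributed query execution, with network shuffles and failure handling replacing in-process Futures.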



