
Job Title: Technical Program Manager
Location: Hyderabad
Type: Full-time
Experience: 5+ years
Company: RaptorX.ai
About RaptorX
RaptorX is a next-gen AI platform for real-time financial crime detection, founded by leaders from Microsoft, Zscaler, and Palo Alto Networks.
If you're excited by AI, security, graphs, and building systems that matter—join us.
What You’ll Do
As a Technical Program Manager (TPM) at RaptorX, you will:
Own and drive complex, cross-functional AI and fraud detection initiatives from concept to deployment.
Work closely with product managers, engineers, data scientists, and external stakeholders to define scope, success metrics, and timelines.
Create detailed execution plans and ensure on-time delivery while maintaining high quality.
Identify risks early, resolve blockers, and communicate progress clearly across teams and leadership.
Champion best practices in documentation, sprint planning, release tracking, and cross-team alignment.
Act as the glue between technical execution and strategic vision—keeping us agile and accountable as we scale.
What You Bring
5+ years of technical program/project management experience in fast-paced product or AI/ML environments.
Strong understanding of AI/ML pipelines, data infrastructure, APIs, or security/fraud detection systems.
Experience working with technical teams—especially engineering and data science.
A bias for action, ownership, and continuous improvement.
Excellent communication and stakeholder management skills.
Bonus: Exposure to graph databases, fraud/risk, LLMs, or fintech.
Why RaptorX?
Work on cutting-edge fraud detection problems with real-world impact.
Collaborate with a mission-driven, founder-led team that values autonomy, creativity, and speed.
Shape the future of a fast-growing startup backed by deep domain expertise.

About RaptorX.ai
RaptorX.ai is an innovative fraud prevention platform founded in 2023 that leverages advanced AI and machine learning to protect businesses against digital threats. Backed by prominent investors such as PeakXV Spark, the company specializes in transaction security and identity verification solutions. Through its cutting-edge technology, which combines supervised and unsupervised ML with LLMs, RaptorX is transforming how businesses approach fraud prevention and digital trust.
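By way of illustration only, here is a minimal sketch of what the unsupervised half of such a pipeline could look like, scoring synthetic transaction features with scikit-learn's IsolationForest. RaptorX's actual models, features, and thresholds are not public, so every name and number below is invented.

```python
# Illustrative anomaly-scoring sketch; not RaptorX's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic features: [transaction amount, transactions per hour]
normal = rng.normal(loc=[50.0, 2.0], scale=[20.0, 1.0], size=(1000, 2))
suspicious = rng.normal(loc=[900.0, 30.0], scale=[50.0, 5.0], size=(10, 2))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower score = more anomalous
labels = model.predict(X)            # -1 = flagged as anomaly, 1 = normal
print(f"flagged {int((labels == -1).sum())} of {len(X)} transactions")
```

In a real deployment such an unsupervised score would be one signal among many, combined with supervised models and LLM-based enrichment as the description above suggests.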
RaptorX.ai is a B2B fraud prevention platform that uses advanced AI and machine learning to protect businesses against transaction and identity fraud. It offers comprehensive solutions for:
- Transaction fortification
- User profiling
- Fraud prevention
- Dispute resolution
- Regulatory compliance
Similar jobs
Job Summary:
We are looking for a skilled and motivated Backend Engineer with 2 to 4 years of professional experience to join our dynamic engineering team. You will play a key role in designing, building, and maintaining the backend systems that power our products. You’ll work closely with cross-functional teams to deliver scalable, secure, and high-performance solutions that align with business and user needs.
This role is ideal for engineers ready to take ownership of systems, contribute to architectural decisions, and solve complex backend challenges.
Website: https://www.thealteroffice.com/about
Key Responsibilities:
- Design, build, and maintain robust backend systems and APIs that are scalable and maintainable (see the sketch after this list).
- Collaborate with product, frontend, and DevOps teams to deliver seamless, end-to-end solutions.
- Model and manage data using SQL (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis), incorporating caching where needed.
- Implement and manage authentication, authorization, and data security practices.
- Write clean, well-documented, and well-tested code following best practices.
- Work with cloud platforms (AWS, GCP, or Azure) to deploy, monitor, and scale services effectively.
- Use tools like Docker (and optionally Kubernetes) for containerization and orchestration of backend services.
- Maintain and improve CI/CD pipelines for faster and safer deployments.
- Monitor and debug production issues, using observability tools (e.g., Prometheus, Grafana, ELK) for root cause analysis.
- Participate in code reviews, contribute to improving development standards, and provide support to less experienced engineers.
- Work with event-driven or microservices-based architecture, and optionally use technologies like GraphQL, WebSockets, or message brokers such as Kafka or RabbitMQ when suitable for the solution.
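As a rough, hypothetical sketch of the API work described in this list (Python and Flask are chosen only because Python appears among the acceptable languages below; the endpoint, token check, and data are invented for illustration):

```python
# Minimal Flask API sketch: one authenticated resource endpoint plus an
# unauthenticated health check. A real service would use a proper
# datastore and OAuth2/JWT instead of a static token.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_TOKEN = "change-me"  # hypothetical; load from a secrets manager in practice
ORDERS = {"42": {"id": "42", "status": "shipped"}}  # stand-in for a database

@app.get("/health")
def health():
    # Liveness probe left unauthenticated so orchestrators can reach it.
    return jsonify(status="ok")

@app.get("/orders/<order_id>")
def get_order(order_id: str):
    # Naive bearer-token check, for illustration only.
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8000)
```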
Requirements:
- 2 to 4 years of professional experience as a Backend Engineer or similar role.
- Proficiency in at least one backend programming language (e.g., Python, Java, Go, Ruby, etc.).
- Strong understanding of RESTful API design, asynchronous programming, and scalable architecture patterns (see the asyncio sketch after this list).
- Solid experience with both relational and NoSQL databases, including designing and optimizing data models.
- Familiarity with Docker, Git, and modern CI/CD workflows.
- Hands-on experience with cloud infrastructure and deployment processes (AWS, GCP, or Azure).
- Exposure to monitoring, logging, and performance profiling practices in production environments.
- A good understanding of security best practices in backend systems.
- Strong problem-solving, debugging, and communication skills.
- Comfortable working in a fast-paced, agile environment with evolving priorities.
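To make the asynchronous-programming requirement above concrete, here is a minimal sketch using Python's asyncio; the fetch targets and delays are placeholders for real I/O such as HTTP requests or database queries:

```python
# Minimal asyncio sketch: three simulated I/O calls run concurrently.
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for real I/O (an HTTP request or a database query).
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> None:
    # gather() runs the coroutines concurrently, so total wall time is
    # roughly the slowest call (~0.3 s) rather than the sum (~0.6 s).
    results = await asyncio.gather(
        fetch("users", 0.3),
        fetch("orders", 0.2),
        fetch("inventory", 0.1),
    )
    print(results)

asyncio.run(main())
```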
- Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, and in implementing enterprise-level big data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL and DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including the NameNode, DataNode, ResourceManager, NodeManager, and JobHistory Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installing, configuring, and managing big data workloads and the underlying infrastructure of Hadoop clusters.
- Hands-on experience in coding MapReduce/YARN programs in Java, Scala, and Python for analyzing big data.
- Exposure to the Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark with Scala on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark jobs in Python, using DataFrames and the Spark SQL API for faster processing; imported data from different sources into HDFS using Sqoop, performed transformations using Hive and MapReduce, and loaded the results into HDFS.
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib for predictive intelligence and customer segmentation, including its use within Spark Streaming applications.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently via various compression mechanisms.
- Working on data pipelines for ingestion and aggregation of different events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
- Hands-on experience in using Sqoop to import data into HDFS from RDBMSs and vice versa.
- In-depth understanding of Oozie for scheduling Hive/Sqoop/HBase jobs.
- Hands-on expertise in real-time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala, and Python.
- Extensive experience working with ETL tools such as SSIS and Informatica, and with reporting tools such as SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 and S3, including automating cluster provisioning and extension in the AWS cloud.
- Extensively worked on Spark with Python on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka, and Flume (see the streaming sketch after this list).
- Knowledge of installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them in Python.
- In-depth understanding of Hadoop architecture and its components, including HDFS, the MapReduce programming paradigm, High Availability, and the YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry-specific KPIs, using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing, and Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing much of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and software methodologies such as Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) for computational tasks and Simple Storage Service (S3) for storage.
- Good working experience importing data using Sqoop and SFTP from sources such as RDBMSs, Teradata, mainframes, Oracle, and Netezza into HDFS, and performing transformations on it using Hive, Pig, and Spark.
- Extensive experience in text analytics, developing statistical machine learning solutions to various business problems, and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience in Hadoop big data technology, working with MapReduce, Pig, and Hive as analysis tools and Sqoop and Flume as data import/export tools.
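As a minimal sketch of the Spark-plus-Kafka streaming work this profile describes (assuming PySpark Structured Streaming; the broker address, topic name, and event schema are invented, and the spark-sql-kafka connector package must be supplied at submit time):

```python
# Minimal PySpark Structured Streaming sketch: read JSON events from a
# Kafka topic, parse them, and print running counts to the console.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Hypothetical event schema, for illustration.
schema = StructType().add("event_type", StringType()).add("amount", DoubleType())

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
)

# Kafka delivers values as bytes; cast to string and parse the JSON payload.
events = raw.select(
    from_json(col("value").cast("string"), schema).alias("e")
).select("e.*")

counts = events.groupBy("event_type").count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```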
We are hiring a Technical Business Analyst who wants to advance their knowledge and experience in privacy & security. The ideal candidate will have interest and experience in data privacy and application & data security. We will provide training resources to ramp up on privacy & security, but candidates with a technical background and an interest in the field are preferred.
As the successful candidate, you will be required to:
- Gather and document needs and uses of data from internal and external stakeholders, including privacy requirements based on laws and regulations, and document security best practices and requirements.
- Research, analyze, and document business requirements. Prepare high-quality analysis such as summaries of security best practices and privacy laws, business requirements specifications, functional and technical specifications, and traceability matrices.
- Create user stories based on functional requirements gathered from business teams.
- Work with developers and support teams to resolve defects and environment issues.
- Create data and process workflows.
- Communicate clearly and apply good problem-solving skills throughout.
- Engage and work closely with the development team and stakeholders, including attending scrum meetings.
To be eligible for this role, you should possess the following:
- Bachelor’s or Engineering Degree in Computer Science or a related field.
- 4+ years of relevant experience as a Business Analyst.
- 2+ years of experience using JIRA and Confluence.
- Should have good communication and presentation skills, with the ability to adapt to a range of audiences.
- Must be self-driven and self-motivated, with good soft skills.
- Must have good software engineering skills and thorough knowledge of the SDLC and STLC.
- Excellent organizational and time-management skills.
- Quick learner with attention to detail.
- Strong analytical and problem-solving skills and the ability to work within a collaborative team environment; able to multi-task and manage multiple projects.
- Open to new challenges and learning, and flexible to work and support the team as per business needs.
- Understand and analyze the business, functional, and technical requirements of the project.
- Knowledge of bug-tracking tools such as Bugzilla or Jira.
Mandatory Skills:-
• Total experience required: 1 year in Java development.
• Experience with any one of these databases: MySQL, MongoDB, Oracle, or PostgreSQL.
• Experience in the Spring Framework, Hibernate, Spring Boot, and REST web services.
• Should have experience designing and understanding DB schemas as per business requirements.
• Should have experience with software design patterns.
• Proficient understanding of code versioning tools, such as Git.
Roles and Responsibilities:
• The role covers mostly Java development using Spring MVC and/or Spring Boot.
• Analyse product requirements and design to develop efficient, reusable, reliable, and scalable software with quality conformance.
• Collaborate with the team on architecture, design, code, and configuration reviews.
Desired Candidate Profile
• Good communication skills
• Good troubleshooting skills, analytical and logical skills
• Excellent team spirit and collaborative teamwork
• Good to have: experience in Hibernate ORM, designing microservices, and experience with Swagger and Postman.
- Assisting and coordinating with the operations team
- Supporting administrative staff
- Conducting marketing research
- Handling documentation and reporting for the operations department
- Preparing reports on competitor product analysis
- Ensuring everything is working smoothly
- Using technology to keep the company up to date behind the scenes
- Responsible for handling complaints efficiently through to completion and ensuring customer satisfaction.
- Responsible for providing useful administrative support to other members of the customer care team.
- Responsible for accurately documenting all calls regarding participant inquiries in the call tracking system.
- Responsible for monitoring call tracking for responses from the administrative team so that return calls are made in a timely fashion.
- Follow up with participants within 24 hours of the initial phone call, even if only to touch base and let the participant know the inquiry is still being researched.
- Answer participant questions, and question participants to obtain a full understanding of what information is being requested.
- Develop new client partnerships with the company's key accounts and develop strategic partnerships with large companies across industries.
- Nurture and develop existing clients and generate incremental revenues within these accounts by selling additional products and services.
- Proactively identify & solve customer business problems by providing subject matter expertise and by using relevant product and services lines to create solutions.
- Key point of contact for large accounts.
- Ability to maintain senior level client relationships.
- Will be required to implement the company's aggressive growth plans in the identified territory. The primary focus will be on new business while ensuring existing relationships are maintained.
- High adherence to the internal CRM, including sales forecast estimates.
- Liaising with the operations/products team for smooth delivery of the end product and ensuring the service expectations of customers are met.
- Should be comfortable working in flexible time zones (primarily the US time zone).
Requirements:
- Minimum of 8-12 years of experience in Key Accounts Management.
- Strategic thinking and analytical skills.
- Excellent written and oral communication and presentation skills.
- Good negotiation skills to achieve desired results/meet customer needs.
Job Description
SDE-II (FE)
Responsibilities
- Gather functional requirements from product management/UX teams and translate requirements into technical specifications to build robust, scalable, supportable solutions
- Serve as technical lead throughout the full development lifecycle, end-to-end, from scoping, planning, conception, design, implementation and testing, to documentation, delivery and maintenance.
- Provide design reviews for other engineers, including feedback on architecture and design issues, as well as integration and performance.
- Manage resources on multiple technical projects and ensure schedules, milestones, and priorities are compatible with technology and business goals.
- Develop new user-facing features
- Build reusable code and libraries for future use
- Ensure the technical feasibility of UI/UX designs
- Optimize application for maximum speed and scalability
- Ensure that all user input is validated before being submitted to the back-end
- Collaborate with other team members and stakeholders
Requirements
- Proficient understanding of web markup, including HTML5, CSS3
- Basic understanding of CSS pre-processors, such as LESS and Sass
- Proficient understanding of client-side scripting and JavaScript frameworks, including jQuery
- Good understanding of AngularJS, ReactJS
- Good understanding of asynchronous request handling, partial page updates, and AJAX
- Basic knowledge of image authoring tools, to be able to crop, resize, or perform small adjustments on an image. Familiarity with tools such as GIMP or Photoshop is a plus.
- Proficient understanding of cross-browser compatibility issues and ways to work around them.
- Proficient understanding of git
- Good understanding of SEO principles and the ability to ensure the application adheres to them.
- BTech/BE or equivalent.
- 3-5 years of experience