
The shift hours for this role are as follows: 1pm-10pm, 2pm-11pm, or 3pm-12am IST.
About the role:
We are hiring a Python Lead to provide technical and operational leadership across automation, backend systems, and LLM-driven intelligence for healthcare RCM platforms.
Key Responsibilities:
- Lead development of Python-based automation and backend systems
- Provide technical guidance on UI automation, APIs, and event-driven architectures
- Own telemetry, observability, and root cause analysis processes
- Drive learning systems and continuous improvement loops across production workflows
- Guide LLM usage, RAG pipelines, and governance practices
- Own escalations, systemic RCA, and long-term corrective actions
- Ensure delivery against SLAs and operational KPIs
- Maintain PHI-safe and HIPAA-compliant systems
- Review architecture, code quality, and framework reuse
- Lead CI/CD and Docker-based deployment practices
- Mentor engineers and build strong system owners
Required Skills & Experience:
- Strong experience with event-driven and distributed systems, including retries, idempotency, and fault tolerance (a brief sketch follows this list)
- Deep understanding of hybrid automation strategies combining UI automation and APIs
- Experience designing and enforcing observability standards including logging, metrics, tracing, and failure classification
- Proven experience owning production incident management, escalations, and systemic root cause analysis
- Experience building and governing learning systems and feedback loops for continuous platform improvement
- Strong understanding of RCM rule engines and knowledge systems, including eligibility rules, prior authorization rules, and denial handling
- Experience working with regulatory intelligence systems, including regulatory intake, interpretation, and audit readiness
- Ability to review and guide LLM-powered workflows such as document understanding, rule extraction, and semantic search in RCM contexts
- Hands-on experience with RAG pipelines, prompt templates, and payer/workflow isolation strategies
- Strong focus on validation, auditability, and traceability of automation and LLM-driven decisions
- Experience enforcing PHI-safe design practices and HIPAA compliance across teams
- Strong understanding of CI/CD pipelines, containerization, and release discipline
- Ability to mentor senior engineers and build strong system owners
- Demonstrated systems-thinking mindset with automation-first and ownership-driven approach
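For illustration of the retry and idempotency pattern named in the first requirement above, here is a minimal plain-Python sketch; the in-memory dedupe set and process_claim() are hypothetical stand-ins for a durable store and real claim-processing logic, not part of this role's actual codebase.

import random
import time

_processed_ids = set()   # stand-in for a durable dedupe store (e.g. Redis or a DB table)

def process_claim(event: dict) -> None:
    # Hypothetical downstream side effect; real work would go here.
    print(f"processing claim {event['claim_id']}")

def handle_event(event: dict, max_attempts: int = 3) -> bool:
    """Process an event at most once, retrying transient failures with backoff."""
    event_id = event["event_id"]
    if event_id in _processed_ids:          # idempotency: duplicate deliveries are no-ops
        return True
    for attempt in range(1, max_attempts + 1):
        try:
            process_claim(event)
            _processed_ids.add(event_id)    # mark success only after the work completes
            return True
        except Exception:
            if attempt == max_attempts:
                return False                # caller would route this to a dead-letter queue
            time.sleep(2 ** attempt + random.random())  # exponential backoff with jitter

if __name__ == "__main__":
    evt = {"event_id": "evt-1", "claim_id": "CLM-42"}
    handle_event(evt)
    handle_event(evt)   # second delivery of the same event is safely ignored

In production the dedupe record would live in a durable store keyed by event ID, so retries across workers remain idempotent.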

About Jorie AI
Jorie Healthcare Partners is a pioneering HealthTech and FinTech company dedicated to transforming healthcare revenue cycle management through advanced AI and robotic process automation. With over four decades of experience developing, operating, and modernizing healthcare systems, the company has processed billions in claims and transactions with unmatched speed and accuracy.
Its AI-powered platform streamlines billing workflows, reduces costs, minimizes denials, and accelerates cash flow—empowering healthcare providers to reinvest more into patient care. Guided by a collaborative culture symbolized by their rallying cry “Go JT” (Go Jorie Team), Jorie blends cutting-edge technology with strategic consulting to deliver measurable financial outcomes and strengthen operational resilience.


The requirements are as follows:
1) Familiar with the Django REST Framework (a minimal example follows this list).
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we do ask a lot of questions on this in our interviews :) ).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with using Git, writing unit test cases for all code written, and CI/CD concepts will be a plus as well.
7) Familiar with basic design patterns like MVC.
8) Grasp on basic data structures.
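As a minimal illustration of the Django REST Framework familiarity asked for in item 1, a sketch assuming an existing Django project with djangorestframework installed and 'rest_framework' in INSTALLED_APPS; TaskSerializer and TaskListView are illustrative names only.

from rest_framework import serializers
from rest_framework.views import APIView
from rest_framework.response import Response

class TaskSerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    title = serializers.CharField(max_length=200)
    done = serializers.BooleanField(default=False)

class TaskListView(APIView):
    """GET /tasks/ returns a hard-coded list, standing in for a real queryset."""
    def get(self, request):
        tasks = [{"id": 1, "title": "Review PR", "done": False}]
        return Response(TaskSerializer(tasks, many=True).data)

# The project's urls.py would expose it with:
#   path("tasks/", TaskListView.as_view())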
You can contact me on 9316120132.
About OptimizeGEO
OptimizeGEO.ai is our flagship product that helps brands stay visible and discoverable in AI-powered answers. Unlike traditional SEO that optimizes for keywords and rankings, OptimizeGEO operationalizes GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).
Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), the company is backed by Micron Ventures, Better Capital, FalconX, and leading angels including Randi Zuckerberg, Vani Kola, and Harsh Jain.
Role Overview
We are hiring a Senior Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting. This role partners closely with our GEO/AEO domain experts and data teams to translate framework gap analysis, share-of-voice, entity/knowledge-graph coverage, and trust signals into scalable backend systems and APIs.
You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.
Key Responsibilities
- Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
- Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms (a brief sketch follows this list).
- Data scraping using various tools, and volume estimation to support accurate consumer insights for brands.
- Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
- Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling.
- Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
- Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
- Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
- Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
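To illustrate what a versioned, testable metric from the second responsibility above might look like, a minimal sketch; the signal names and weights are assumptions made for the example, not OptimizeGEO's actual scoring.

from dataclasses import dataclass

METRIC_VERSION = "visibility_score/v0.1"   # version tag keeps results reproducible across releases

@dataclass(frozen=True)
class BrandSignals:
    mention_rate: float   # share of sampled AI answers that mention the brand (0-1)
    citation_rate: float  # share of answers that cite a brand-owned source (0-1)
    sentiment: float      # mean sentiment of those mentions (0-1)

def visibility_score(s: BrandSignals) -> dict:
    # Weighted blend of the three signals, scaled to 0-100; weights are illustrative.
    score = 100 * (0.5 * s.mention_rate + 0.3 * s.citation_rate + 0.2 * s.sentiment)
    return {"version": METRIC_VERSION, "score": round(score, 2)}

def test_visibility_score_is_bounded():
    full = BrandSignals(mention_rate=1.0, citation_rate=1.0, sentiment=1.0)
    assert visibility_score(full)["score"] == 100.0

if __name__ == "__main__":
    test_visibility_score_is_bounded()
    print(visibility_score(BrandSignals(0.4, 0.2, 0.7)))   # {'version': ..., 'score': 40.0}

Pinning a version string to the function is one simple way to keep historical scores comparable when the weights change.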
Required Qualifications (Must Have)
- Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
- Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
- Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
- Background in digital PR/reputation signals and local/international SEO considerations.
- Comfort working with analysts to co-define KPIs and build executive-level reporting.
Expected Qualifications
- 4+ years of experience building backend systems in production (startups or high-growth product teams preferred).
- Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
- Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
- Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
- Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.
Tooling & Stack (Illustrative)
- Runtime: Python/TypeScript/Go
- Data: Postgres/BigQuery + object storage (S3/GCS)
- Pipelines: Airflow/Prefect, Kafka/PubSub
- Infra: AWS/GCP, Docker, Kubernetes, Terraform
- Observability: OpenTelemetry, Prometheus/Grafana, ELK/Cloud Logging
- Collab: GitHub, Linear/Jira, Notion, Looker/Metabase
Working Model
- Hybrid-remote within India with limited periodic in-person collaboration
- Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.
Equal Opportunity
OptimizeGEO is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)
Position Summary
We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.
Key Responsibilities
- Design, implement, and maintain backend services with MongoDB and scalable data models.
- Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features.
- Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
- Implement and maintain AWS Lambda serverless functions for scalable backend processes.
- Use AWS EventBridge to schedule cron jobs and manage event-driven workflows (a brief sketch follows this list).
- Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
- Build workflows for manual and automated data pulls from external APIs.
- Optimize backend systems for performance, scalability, and reliability at high data volumes.
- Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
- Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
- Contribute to architecture planning, documentation, and system design reviews.
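As a rough illustration of the Lambda/EventBridge/S3 pattern described in the responsibilities above: a minimal handler for an EventBridge-scheduled data pull. The bucket name, source URL, and payload shape are placeholders, not the platform's actual integrations.

import json
import urllib.request
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-raw-data"                      # placeholder bucket name
SOURCE_URL = "https://api.example.com/v1/posts"  # placeholder external API

def handler(event, context):
    # Scheduled EventBridge events carry a "time" field; fall back to now for manual invocations.
    run_time = event.get("time") or datetime.now(timezone.utc).isoformat()
    with urllib.request.urlopen(SOURCE_URL) as resp:
        payload = json.loads(resp.read())
    key = f"pulls/{run_time}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload).encode("utf-8"))
    return {"written": key, "records": len(payload)}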
Required Skills/Qualifications
- Strong expertise with MongoDB (including Atlas) and schema design.
- Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.); a brief sketch follows this list.
- Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
- Proficiency in Node.js, Python, or other backend languages for API development.
- Deep understanding of AWS services:
  - Lambda for serverless functions.
  - S3 for structured/unstructured data storage.
  - EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
- Strong understanding of REST and GraphQL API design.
- Experience with data optimization, caching, and large-scale API performance.
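A brief sketch of the embedding plus MongoDB Atlas Vector Search flow referenced above, assuming the openai and pymongo client libraries; the connection string, index name, and field path are illustrative placeholders.

from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()                       # reads OPENAI_API_KEY from the environment
mongo = MongoClient("mongodb+srv://...")       # Atlas connection string (placeholder)
posts = mongo["social"]["posts"]               # assumed database/collection names

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def semantic_search(query: str, k: int = 5):
    pipeline = [
        {
            "$vectorSearch": {
                "index": "posts_vector_index",   # assumed Atlas vector index name
                "path": "embedding",             # assumed field holding stored vectors
                "queryVector": embed(query),
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"_id": 0, "caption": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(posts.aggregate(pipeline))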
Preferred Skills/Experience
- Experience with real-time data pipelines (Kafka, Kinesis, or similar).
- Familiarity with CI/CD pipelines and automated deployments on AWS.
- Knowledge of serverless architecture best practices.
- Background in SaaS platform development or data analytics systems.
We are looking for a skilled Backend Developer with strong experience in building, scaling, and optimizing server-side systems. The ideal candidate is proficient in Node.js, FastAPI (Python), and database design, with hands-on experience in cloud infrastructure on AWS or GCP.
Responsibilities:
Design, develop, and maintain scalable backend services and APIs using Node.js and FastAPI (a minimal sketch follows this list)
Build robust data models and optimize performance for SQL and NoSQL databases
Architect and deploy backend services on GCP/AWS, leveraging managed cloud services.
Implement clean, modular, and testable code with proper CI/CD and observability (logs, metrics, alerts)
Ensure system reliability, security, and high availability for production environments
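For the FastAPI side of the first responsibility above, a minimal service sketch; the Item model and routes are illustrative only, assuming FastAPI and Pydantic are installed.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-backend")

class Item(BaseModel):
    name: str
    price: float

_items: list[Item] = []      # in-memory stand-in for a real datastore

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/items", response_model=Item)
def create_item(item: Item) -> Item:
    _items.append(item)
    return item

# Run locally with: uvicorn main:app --reload   (assuming this file is main.py)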
Requirements:
2–5 years of backend development experience
Strong proficiency in Node.js, FastAPI, REST APIs, and microservice architecture
Solid understanding of PostgreSQL/MySQL, MongoDB/Redis or similar NoSQL systems
Hands-on experience with AWS or GCP, Docker, and modern DevOps workflows
Experience with caching, queueing, authentication, and API performance optimization
Good to Have:
Experience with event-driven architecture, WebSockets, or serverless functions
Familiarity with Kubernetes or Terraform
Job Location: Gurugram, Haryana, India
Industry: Artificial Intelligence
Requirements
- Senior SD4 engineer/Principal Engineer/Architect who has led or been part of platform teams inside a large or mid-sized tech company
- Deep expertise working with any programming language to write maintainable, scalable, unit-tested code. Python experience is a bonus, but not required
- Good understanding of REST APIs and the web in general and ability to build a feature from scratch & drive it to completion
- Experience with Kubernetes, Prometheus, Grafana, ELK stack, Load Balancing
- Experience in building a platform team on AWS for Product based companies/start-ups
What You’ll Do
- Build the core technology platform, including automated provisioning of infrastructure, deployment, and monitoring of services
- Build a central services directory within an organization that shows the cost and health of all services and makes the management of microservices extremely easy
- This involves working with core infrastructure and understanding it in great detail, including Kubernetes, service mesh, and service security
- Mentor/coach engineers to facilitate their development and provide technical leadership
Our growing software technology business is looking for a Mid Level Full Stack Developer to join our Development Team. We're looking for a talented, team-oriented, highly motivated, smart individual with a passion for software engineering, a strong desire to learn, and an interest in providing mentorship to peers. We desire self-starting developers with strong experience developing sophisticated web applications leveraging the latest technologies. The successful candidate for this role must be an outstanding problem-solver with great database and software architecture skills.
KEY REQUIRED SKILLS
• Node.js
• Git
• MySQL
POSITION DESCRIPTION
We have an immediate need for a highly motivated Mid Level Node Software Developer to provide software development expertise and hands-on implementation using the latest open source server-side JavaScript technologies on the Node platform and other related open source products.
REQUIRED
• Strong software development experience with Node.js in addition to detailed understanding of user interface frameworks, back-end software architecture interactions and node module capabilities.
• One or more years' experience with one or more JavaScript frameworks/technologies such as Express.js, Angular.js, React.js, MobX or Flux.js.
• Strong development experience using Node.js.
• Strong communication and collaborative skills
• Portfolio of application(s)
DESIRED SKILLS
• Experience with HTML5, CSS3.
• Experience with source code versioning and Pull Requests with Git repositories.
• Standards & Protocols knowledge including JSON.
• Complex programming, program debugging, automated program testing, data analysis, problem analysis and resolution of issues within open source applications.
• Experience in other languages such as .NET (including VB and C#) is a plus
• Operating System and Infrastructure experience with Ubuntu Linux and Windows Server.
EDUCATION
• Bachelor’s degree from an accredited college in a related discipline, with minimum 2-3 years of relevant professional experience.
• Exact skill match may allow flexibility with education and experience requirements
• Certifications are a plus.
⦁ 2+ years of working experience as a software engineer.
⦁ Engineer capable of designing solutions, writing code, and handling deployment
⦁ Demonstrated skills, knowledge, and expertise with Python and frameworks such as Django, plus the related ecosystem (PyCharm, Jupyter)
⦁ Good knowledge of Python.
⦁ Creative problem-solving ability and sound judgment; ability to use your own initiative and take responsibility for decisions.
Candidate must have
⦁ Good knowledge of Python
⦁ Django, Flask, FastAPI
⦁ Candidates must have knowledge of product development.
⦁ Candidates must have a technical degree.
Skills
⦁ Developing back-end components.
⦁ Design and develop web applications in Python and Django.
⦁ Good knowledge of Django, Flask, FastAPI
⦁ Must be proficient in writing Python code
⦁ Must have worked on back-end development
⦁ Integrate Python application with other internal applications.
⦁ Strong spoken and written communication skills in English
● You’ve been building scalable backend solutions for web applications.
● You have experience with any of these backend programming languages: Python, NodeJS, or Java.
● You write understandable, very high-quality, testable code with an eye toward maintainability.
● You are a strong communicator. Explaining complex technical concepts to designers, support, and other engineers is no problem for you.
● You possess strong computer science fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval.
● You have completed a bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.
We are looking for a Python Developer to join our engineering team and help us
Python Developer responsibilities include writing and testing code, debugging programs
Responsibilities:
Requirements: