11+ Graph Databases Jobs in Bangalore (Bengaluru) | Graph Databases Job openings in Bangalore (Bengaluru)
Role and Responsibilities
The candidate in this role will be responsible for enabling a single view of data from multiple sources.
- Work on creating data pipelines to graph database from data lake
- Design graph database
- Write Graph Database queries for front end team to use for visualization
- Enable machine learning algorithms on graph databases
- Guide and enable junior team members
Qualifications and Education Requirements
B.Tech with 2-7 years of experience
Preferred Skills
Must Have
- Hands-on exposure to graph databases such as Neo4j and JanusGraph
- Hands-on exposure to programming and scripting languages such as Python and PySpark
- Knowledge of working on cloud platforms such as GCP and AWS
- Knowledge of graph query languages such as Cypher and Gremlin
- Knowledge of and experience with machine learning
Good to Have
- Knowledge of working on Hadoop environment
- Knowledge of graph algorithms
- Ability to work to tight deadlines
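For context on the "graph algorithms" item above, a breadth-first traversal is the kind of building block such roles involve. A minimal sketch in plain Python follows; the toy graph and node names are illustrative assumptions, not from any real dataset:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.

    Returns the list of nodes on a shortest path, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# A toy "who is connected to what" graph; names are illustrative only.
graph = {
    "alice": ["bob", "order_1"],
    "bob": ["order_2"],
    "order_1": ["product_x"],
    "order_2": ["product_x"],
}
print(shortest_path(graph, "alice", "product_x"))  # ['alice', 'order_1', 'product_x']
```

In a graph database like Neo4j the same traversal would be expressed declaratively in Cypher rather than hand-coded, but the underlying idea is the same.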
Founding Engineer (Bangalore)
The problem:
Business enterprises overpay vendors - on every batch of invoices, every month - because the data that would catch it lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts - without a human having to investigate each one.
What you will own
Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:
- A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
- An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
- A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
- ERP connectors, GST validation logic, and a write-back layer that closes the loop
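The graph-model bullet above can be illustrated with a very reduced sketch: invoices linked to purchase orders, and a traversal that flags amount mismatches. All names, amounts, the tolerance, and the in-memory structure are illustrative assumptions, not the actual data model:

```python
# Each invoice edge links to a purchase order and carries the billed amount.
EDGES = {
    "inv_001": {"po": "po_100", "amount": 1050.0},
    "inv_002": {"po": "po_101", "amount": 500.0},
}
PURCHASE_ORDERS = {"po_100": 1000.0, "po_101": 500.0}

def flag_discrepancies(edges, pos, tolerance=0.01):
    """Return invoice IDs whose billed amount deviates from the linked
    purchase order by more than the given relative tolerance."""
    flagged = []
    for inv, link in edges.items():
        po_amount = pos[link["po"]]
        if abs(link["amount"] - po_amount) > tolerance * po_amount:
            flagged.append(inv)
    return flagged

print(flag_discrepancies(EDGES, PURCHASE_ORDERS))  # ['inv_001'] - off by 5%
```

In the real system the relationships would live in a graph store and the traversal would span more sources than a single PO, but the shape of the check is the same.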
What we need
- Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
- Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
- LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
- Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
- Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
- You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history
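The "retry logic" called out in the LLM bullet above can be sketched generically. This is a hedged, minimal example in plain Python; the function names, delays, and toy failure are illustrative, and a production LLM client would additionally catch provider-specific errors, validate the structured output, and track cost:

```python
import time
import random

def call_with_retries(fn, max_attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)

# Toy flaky function: fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return {"status": "ok"}

print(call_with_retries(flaky))  # {'status': 'ok'} on the third attempt
```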
Good to have, not mandatory
- Built an agentic pipeline with multiple stages
- Any fintech, P2P domain experience - even tangential
- Worked at a startup with under 20 people
- A GitHub, blog, or writeup that shows how you think about a hard technical problem
What you get
- The hardest engineering problem you will have worked on. This is not CRUD with an LLM bolted on
- Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
- Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
- No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why
Honest about stage: We do not have production-ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.
The founders
One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.
Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
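The Lambda item in the responsibilities above boils down to a plain Python function that takes an event and a context and returns a response. A hedged sketch follows; the event field names and response shape are illustrative assumptions modeled on an API-Gateway-style payload:

```python
import json

def handler(event, context=None):
    """Minimal request handler of the kind deployed behind AWS Lambda."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name")
    if not name:
        # Reject malformed requests with a 4xx instead of raising
        return {"statusCode": 400, "body": json.dumps({"error": "name required"})}
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}

# Invoke locally the same way the runtime would, with a fake event.
print(handler({"body": json.dumps({"name": "Deqode"})}))
```

Keeping the handler a pure function like this makes it easy to unit-test without any AWS infrastructure, which is one reason the pattern is popular.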
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Role overview
- Overall 5 to 7 years of experience. Node.js experience is a must.
- At least 3+ years of experience, or a couple of large-scale products delivered, on microservices.
- Strong design skills in microservices and AWS platform infrastructure.
- Excellent programming skills in Python, Node.js, and Java.
- Hands-on development of REST APIs.
- Good understanding of nuances of distributed systems, scalability, and availability.
What would you do here
- Work as a backend developer building cloud web applications
- Be part of the team working on various types of web applications related to mortgage finance
- Solve the real-world problem of designing, implementing, and helping develop a new enterprise-class product from the ground up
- Apply your expertise in AWS cloud infrastructure and microservices architecture around the AWS service stack (Lambda, SQS, SNS, MySQL databases) along with Docker and containerized solutions/applications
- Work with relational and NoSQL databases and scalable designs
- Solve challenging problems by developing elegant, maintainable code
- Deliver rapid iterations of software based on user feedback and metrics
- Help the team make key decisions on our product and technology direction
- Actively contribute to the adoption of frameworks, standards, and new technologies
Golang Developer
Experience: Minimum 4 years
Job Location: Delhi/Pune/Bangalore-Hybrid
Job Description
- Must: Minimum 2+ years of experience with the Golang programming language, its paradigms, constructs, and idioms.
- Knowledge of common goroutine and channel patterns.
- Experience with the full suite of Go frameworks and tools.
- Preferred: Minimum 2+ years of experience in AWS.
- Must: Cloud environment (e.g. AWS, VMware, etc.).
- Must: Working knowledge of MySQL.
- Popular Go web frameworks.
- Familiarity with code versioning tools such as GitHub / GitLab.
Key Skills
- Golang, JavaScript, MySQL, PostgreSQL.
- Responsibilities: The job requires Golang experience with a MySQL database.
- Ability to work with minimal supervision. Troubleshoot, test, and maintain the core product software and databases to ensure strong optimization and functionality.
You will break down business opportunities and problems into software solutions. You will work closely with the CTO to help product and marketing stakeholders distill the product vision and roadmap into a technology vision. You will be responsible for the evolution of already bleeding-edge, highly distributed back-end systems, and will need to work with front-end architects to come up with the best user experience for the gaming platform. You will be required to make decisions quickly and work under strict timelines. You will lead technology strategy through analysis of market trends and product requirements. You will help set coding guidelines and bring in the most modern tools to keep engineering processes efficient.
Requirements:
● B.E/MS in Computer Science or equivalent.
● 10+ years or more of progressive software technology experience with at least 3 years in an architectural role.
● Completely hands-on with technology and architecture. Start-up experience is a huge plus.
● Ability to quickly prototype and demonstrate technology value adds and educate + drive adoption within the extended technology team
● Excellent and robust understanding of scalable product system architecture(s), platforms and core technologies
● Demonstrated problem-solving and leadership skills to pursue correct engineering process in adverse conditions. Ability to embrace and demonstrate leadership beyond ownership
● Work with engineering leadership to set up and manage processes.
● Track record of thought leadership and out-of-the-box thinking in the technology arena.
● Ability to work efficiently in an entrepreneurial and start-up environment
● A Java/Spring/Akka, Javascript or Golang rockstar.
● Deep and hands-on knowledge of some of these technologies - MySQL, NodeJS, message brokers such as Kafka/RabbitMQ, NoSQL datastores such as Mongo, Cassandra, Arango, distributed caches such as Redis/memcached, container technology such as Docker and Kubernetes, etc.
● Great proficiency in distributed systems design, with an ability to make the right trade-offs for creating future-proof solutions.
● Building quick PoCs and full-fledged solutions with various AWS managed services would be a big plus.
Title : .Net Developer with Cloud
Locations: Hyderabad, Chennai, Bangalore, Pune and new Delhi (Remote).
Job Type: Full Time
.Net Job Description:
Required experience in the skills below:
- Experience with MS Azure: App Service, Functions, Cosmos DB and Active Directory
- Deep understanding of C#, .NET Core, ASP.NET Web API 2, MVC
- Experience with MS SQL Server
- Strong understanding of object-oriented programming
- Experience working in an Agile environment
- Strong understanding of code versioning tools such as Git or Subversion
- Usage of automated build and/or unit testing and continuous integration systems
- Excellent communication, presentation, influencing, and reasoning skills
- Capable of building relationships with colleagues and key individuals
- Must have the capability to learn new technologies
- 3+ years of overall experience
- Understanding of AWS - EC2, S3, RDS, etc.
- Extensive experience building and refactoring Java applications
- Good work experience with message queues - Kafka, RabbitMQ, etc.
- Understanding and experience building high-performance, scalable algorithms
- Understanding of Agile or Kanban / Lean software development methodologies
- Experience using modern build tools such as Maven, Jenkins, Github, etc. a plus
- Be hands-on, willing to dig in and crank out code
- Be a learner, able to explore new areas, learn new things, and quickly apply them to solve new problems.
- Be a spark, bring energy, passion and creativity to work every day.
Good to have
- NoSQL experience (DynamoDB, MongoDB, Cassandra, etc.)
- Datadog or similar monitoring tool
- Docker exposure
Why you should join us
- You will join the mission to create positive impact on millions of people's lives
- You get to work on the latest technologies in a culture which encourages experimentation
- You get to work with super humans
- You get to work in an accelerated learning environment
What you will do
- You will provide deep technical expertise to your team in building future ready systems.
- You will collaborate in a highly cross functional team, providing engineering perspective to non technical members of the team
- You will help develop a robust roadmap for ensuring engineering operational excellence
- You will establish clean and optimised coding standards that are well documented
- You will author efficient, reliable and performant systems
- You will design systems, frameworks and libraries that work at scale, are easy to maintain and provide a great developer experience
- You will be agile and curious about customer problems and business objectives
- You will actively mentor and participate in knowledge sharing forums
- You will work in an exciting startup environment where you can be ambitious and try new things :)
You should apply if
- You have a strong foundation in Computer Science concepts and programming fundamentals
- You have been working on backend web technologies for 8+ years
- You have built and maintained reliable systems that operate at high scale
- You’re experienced in building and running cloud-native platforms & distributed systems
- Extensive experience in any web stack (we use Typescript / AWS / DynamoDB / PostgreSQL)
- You understand the hustle of a startup and are good with handling ambiguity
- You are data driven, have customer empathy and enjoy building delightful applications
- You are curious, a quick learner and someone who loves to experiment
- You insist on highest standards of quality, maintainability and performance
- You work well in a team to enhance your impact
Job Responsibilities
- Design and build cloud-based products and services with massive scale and reliability. Write clean and modular code, primarily in Python, to create multi-tenant microservices that handle terabyte-scale data per month with SLAs on end-to-end latency and tenant fairness
- Build a CI/CD-based software development model with end-to-end ownership of code delivery - starting from design/architecture, coding, automated functional/integration testing, and operating/monitoring the service in production
- Use relevant technologies and cloud services like Kafka, Redis, Mongo, RDS, Spark Streaming, Redshift, and Airflow to build highly performant and scalable distributed systems
- Design and develop data schemas and access layers to optimally store and retrieve data
- Stay up to date with the latest developments in cloud computing and incorporate relevant learnings into both product features and product architecture
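The "tenant fairness" requirement above has a simple core idea: instead of one global queue, where a large tenant can starve everyone else, keep a queue per tenant and drain them round-robin. A hedged sketch in plain Python; tenant names and jobs are illustrative, and a real service would do this over a distributed queue rather than in memory:

```python
from collections import deque

def fair_schedule(jobs_by_tenant):
    """Interleave per-tenant job queues so every tenant makes progress."""
    queues = {t: deque(jobs) for t, jobs in jobs_by_tenant.items() if jobs}
    order = []
    while queues:
        # One pass = one job from each tenant that still has work.
        for tenant in list(queues):
            order.append((tenant, queues[tenant].popleft()))
            if not queues[tenant]:
                del queues[tenant]  # drop drained tenants from the rotation
    return order

jobs = {"tenant_a": ["a1", "a2", "a3"], "tenant_b": ["b1"]}
print(fair_schedule(jobs))
# tenant_b's single job runs second, not last, despite tenant_a's backlog
```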
Preferred Qualifications
BS/B.Tech (B.Tech preferred) in Computer Science, Computer Engineering, or Information Technology
Preferred Technical Skills:
- 2-6 years of software development experience with enterprise-grade software
- Must have experience in building scalable, high-performance cloud services
- Expert coding skills in Python and Django
- In-depth experience with AWS is mandatory
- Expertise in building scalable, event-based asynchronous systems on a microservices architecture
- Experience working with Docker and Kubernetes
- Experience with databases such as MongoDB, Redis, RDS, RDF/graph databases, and SPARQL
- Experience with messaging technologies such as Kafka, Pulsar, and SQS
- Must have expertise in building REST APIs
- Strong object-oriented design and programming experience
- Experience with cloud object stores: S3, Cloud Storage, Blobs, etc.
Desired Technical Skills:
- Open-source committer in related areas such as cloud technologies, Kubernetes, and databases
Additional Skills:
- Great written and verbal communication
- Ability to work in a geo-distributed, cross-functional group
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands on experience of working projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team





