Key Responsibilities
Performance Optimization & Query Tuning
• Tune query performance for PostgreSQL, EdgeDB, MongoDB Atlas, and Snowflake.
• Analyze execution plans, identify bottlenecks, and implement indexing & caching strategies.
• Work with engineering teams to optimize schema design for OLTP (Postgres/EdgeDB) and metadata storage (MongoDB).
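The "caching strategies" bullet above can be illustrated with the simplest form of read-path caching: memoizing a hot lookup so repeated calls skip the database. A minimal sketch (the lookup function and tenant-plan names are hypothetical, and the query is stubbed out):

```python
from functools import lru_cache

# Stub standing in for a real PostgreSQL round trip, so the
# caching behaviour is visible without a database connection.
CALLS = {"count": 0}

@lru_cache(maxsize=1024)
def get_tenant_plan(tenant_id: str) -> str:
    CALLS["count"] += 1          # each cache miss costs one "query"
    return f"plan-for-{tenant_id}"

get_tenant_plan("t1")
get_tenant_plan("t1")            # second call is served from the cache
print(CALLS["count"])            # only one "query" was executed
```

In production the same idea appears as application-side memoization, materialized views, or an external cache; the trade-off is always staleness versus load on the primary.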
Database Architecture & Scalability
• Design multi-tenant scaling strategies for PostgreSQL and MongoDB Atlas (schema, sharding, partitioning).
• Implement high-availability, replication, and clustering configurations.
• Optimize Snowflake warehouse configurations for query speed and cost control.
Operational Excellence
• Plan and execute zero-downtime database upgrades and schema migrations.
• Set up proactive monitoring, alerting, and anomaly detection across all database systems.
• Manage capacity planning for storage and compute resources across Postgres, MongoDB Atlas, and Snowflake.
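Capacity planning as described above usually reduces to projecting growth against a ceiling. A minimal sketch, assuming compound monthly growth (the figures below are illustrative, not from the posting):

```python
def months_until_full(current_gb: float, capacity_gb: float,
                      monthly_growth: float) -> int:
    """Months before storage reaches capacity at a compound
    monthly growth rate (monthly_growth must be > 0)."""
    assert monthly_growth > 0
    months = 0
    while current_gb < capacity_gb:
        current_gb *= 1 + monthly_growth
        months += 1
    return months

# Example: 400 GB used of a 1 TB volume, growing 8% per month.
print(months_until_full(400, 1000, 0.08))
```

The same projection drives both "when do we need to scale" and "what will this cost next quarter" conversations.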
Storage & Cost Optimization
• Reduce storage costs via archiving, partitioning, compression, and lifecycle policies.
• Optimize Snowflake compute and storage usage with warehouse tuning and data pruning.
• Implement tiered storage strategies for cold vs. hot data.
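A tiered-storage policy like the one above typically classifies data by access recency. A minimal sketch (the 30/180-day thresholds and tier names are illustrative assumptions, not prescribed by the role):

```python
from datetime import date

def storage_tier(last_access: date, today: date,
                 hot_days: int = 30, warm_days: int = 180) -> str:
    """Classify a record for tiered storage by recency of access."""
    age = (today - last_access).days
    if age <= hot_days:
        return "hot"     # keep on fast primary storage
    if age <= warm_days:
        return "warm"    # compressed, still in the cluster
    return "cold"        # archive / object storage

today = date(2024, 6, 1)
print(storage_tier(date(2024, 5, 20), today))   # recently touched
print(storage_tier(date(2023, 1, 1), today))    # archival candidate
```

In practice the classification feeds partition-level actions: compressing warm partitions, detaching and archiving cold ones.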
Security, Compliance & Governance
• Enforce encryption, access controls, and audit logging across all databases.
• Ensure compliance with GDPR, SOC 2, and other relevant regulations.
Collaboration & Knowledge Sharing
• Partner with backend, platform, and data engineering teams to ensure efficient database usage.
• Provide training and documentation on query best practices and schema design.
Qualifications
Required:
• 7+ years DBA experience, with deep expertise in PostgreSQL and MongoDB Atlas.
• Strong understanding of multi-tenant architectures in production.
• Experience with Snowflake query optimization, warehouse tuning, and cost management.
• Proven success in executing zero-downtime upgrades and large-scale migrations.
• Strong skills in query optimization, indexing, partitioning, and sharding.
• Proficiency in scripting (Python, Bash, SQL) for automation.
• Hands-on experience with monitoring tools (pg_stat_statements, Atlas monitoring, Snowflake Resource Monitors, Prometheus, Grafana).
Nice to Have:
• Experience with EdgeDB or graph/relational hybrid databases.
• Familiarity with Kubernetes-based database deployments (StatefulSets, Operators).
• Background in distributed caching (Redis, Memcached).

About Founding Minds
Similar jobs
Job Title : React Native Developer
Experience : 3+ Years
Location : Gurgaon
Working Days : 6 Days (Monday to Saturday)
Job Summary :
We are looking for a skilled React Native Developer with experience in converting Figma designs into high-performance mobile applications.
The ideal candidate should have some exposure to Blockchain/Web3 technologies and be capable of converting mobile applications into SDKs.
Key Responsibilities :
✅ Develop and maintain React Native applications.
✅ Convert Figma designs into pixel-perfect UI.
✅ Optimize app performance and ensure smooth user experience.
✅ Work with Blockchain/Web3 integrations (preferred).
✅ Convert mobile applications into SDKs for seamless integration.
✅ Collaborate with designers, backend developers, and blockchain engineers.
Technical Skills :
🔹 Strong proficiency in React Native, JavaScript, TypeScript.
🔹 Experience with Redux, Context API, Hooks.
🔹 Familiarity with Blockchain/Web3 (Ethereum, Solidity, Wallet integrations).
🔹 Understanding of mobile SDK development.
🔹 Knowledge of REST APIs, GraphQL, and third-party integrations.
Key skills: Python, NumPy, Pandas, SQL, ETL
Roles and Responsibilities:
- The work will involve the development of workflows triggered by events from other systems
- Design, develop, test, and deliver software solutions in the FX Derivatives group
- Analyse requirements for the solutions delivered, to ensure they address the right problem
- Develop easy-to-use documentation for the frameworks and tools built, for adoption by other teams
- Familiarity with event-driven programming in Python
- Must have unit testing and debugging skills
- Good problem solving and analytical skills
- Python packages such as NumPy, scikit-learn
- Testing and debugging applications.
- Developing back-end components.
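The event-driven workflows mentioned above — jobs triggered by events arriving from other systems — follow a publish/subscribe shape. A minimal dispatcher sketch (the event name and payload are hypothetical; real systems would sit behind a message broker):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal event-driven dispatcher: handlers register for an
    event name and run when that event is published."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("trade.booked", lambda p: seen.append(p["id"]))
bus.publish("trade.booked", {"id": "FX123"})
print(seen)
```

The same pattern scales up to Kafka consumers or webhook receivers; only the transport changes.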
Airflow developer:
Experience: 5 to 10 years, with relevant experience above 4 years.
Work location: Hyderabad (Hybrid Model)
Job description:
· Experience working with Airflow.
· Experience in SQL, Python, and Object-oriented programming.
· Experience in the data warehouse, database concepts, and ETL tools (Informatica, DataStage, Pentaho, etc.).
· Azure experience and exposure to Kubernetes.
· Experience in Azure data factory, Azure Databricks, and Snowflake.
Required Skills: Azure Databricks/Data Factory, Kubernetes/Docker, DAG development, hands-on Python coding.
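DAG development, as required above, comes down to declaring tasks and their upstream dependencies; Airflow then executes them in topological order. The scheduling idea can be sketched with the standard library alone (task names are hypothetical; a real Airflow DAG would use operators and `>>` dependencies):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ETL pipeline: each task maps to the set of tasks
# that must finish before it — the same shape Airflow expresses
# with `extract >> transform >> load >> notify`.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)   # tasks in a valid execution order
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the validation Airflow performs when it parses a DAG file.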
• Hands-on experience in task automation via scripting
• Hands-on experience implementing auto-scaling, ELBs, Lambda functions, and related cloud-native technologies
• Experience in vulnerability management and security.
• Ability to proactively and effectively communicate and influence stakeholders
• Experience in virtual, cross-functional teamwork
• Strong customer and service management focus and mindset
• Solid, technical hands-on experience administering public and private cloud systems (compute, storage, networks, security, hardware, software, etc.)
• AWS Associate, Professional or Specialist certification
Minimum of 7+ years of hands-on development experience in one or more of the following languages/frameworks: React/Angular, Node/Next.js, Python/Django
● Hands-on development and deployment experience in AWS or Azure using serverless functions or EC2
● Experience with RESTful & GraphQL APIs
● Familiarity with microservices architectures
● Familiarity with dev tools such as Github, Jira, CFT, Terraform
● DB experience with relational or NoSQL databases such as PostgreSQL, MySQL, Redshift, MongoDB, Redis
● Familiarity with cloud-based logging, monitoring, and security tools
Project Description: Build a software solution that integrates with AWS services to facilitate supply chain management for logistics companies
Experience range: 0.6 to 5 yrs
Mandatory Skills
AWS Lambda, NodeJS, AWS API Gateway
Job Description: As a full-stack developer, you should be able to develop API services using Node.js and deploy them on AWS with complete unit and integration testing.
Mandatory Technical skills:
- AWS Lambda – create Lambda functions with all the security in place.
- Proficiency in Node.js (should have developed services, along with unit and integration tests).
- SwaggerHub – define the services on SwaggerHub.
- Strong notions of security best practices (e.g. using IAM Roles, KMS, etc.).
- Serverless approaches using AWS Lambda, e.g. the Serverless Application Model (AWS SAM).
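The Lambda-behind-API-Gateway setup described above always follows the same contract: the handler receives a JSON event and returns a status code and body. A minimal sketch of that contract (shown in Python for illustration — this role itself asks for Node.js — with a hypothetical payload):

```python
import json

def handler(event: dict, context: object) -> dict:
    """Minimal API Gateway -> Lambda contract: parse the request
    body, return a statusCode/body pair. Payload fields are
    hypothetical."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

resp = handler({"body": json.dumps({"name": "dev"})}, None)
print(resp["statusCode"], resp["body"])
```

Because the handler is a plain function of its event, the unit and integration testing the posting calls for needs no AWS infrastructure at all — tests just pass event dicts and assert on the returned dict.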
 
Experience and Education 
• Bachelor’s degree in engineering or equivalent.
Work experience
• 4+ years of infrastructure and operations management experience at a global scale.
• 4+ years of experience in operations management, including monitoring, configuration management, automation, backup, and recovery.
• Broad experience in the data center, networking, storage, server, Linux, and cloud technologies.
• Broad knowledge of release engineering: build, integration, deployment, and provisioning, including familiarity with different upgrade models.
• Demonstrable experience executing, or being involved in, a complete end-to-end project lifecycle.
Skills 
• Excellent communication and teamwork skills – both oral and written.
• Skilled at collaborating effectively with both Operations and Engineering teams.
• Process and documentation oriented.
• Attention to detail. Excellent problem-solving skills.
• Ability to simplify complex situations and lead calmly through periods of crisis.
• Experience implementing and optimizing operational processes.
• Ability to lead small teams: provide technical direction, prioritize tasks to achieve goals, identify dependencies, report on progress.
Technical Skills 
• Strong fluency in Linux environments is a must.
• Good SQL skills. 
• Demonstrable scripting/programming skills (Bash, Python, Ruby, or Go) and the ability to develop custom tool integrations between multiple systems using their published APIs/CLIs.
• L3, load balancer, routing, and VPN configuration.
• Kubernetes configuration and management.
• Expertise using version control systems such as Git.
• Configuration and maintenance of database technologies such as Cassandra, MariaDB, Elasticsearch.
• Designing and configuration of open-source monitoring systems such as Nagios, Grafana, or Prometheus.
• Designing and configuration of log pipeline technologies such as ELK (Elasticsearch, Logstash, Kibana), Fluentd, Grok, rsyslog, Google Stackdriver.
• Using and writing modules for Infrastructure as Code tools such as Ansible, Terraform, Helm, Kustomize.
• Strong understanding of virtualization and containerization technologies such as VMware, Docker, and Kubernetes.
• Specific experience with Google Cloud Platform or Amazon EC2 deployments and virtual machines.