
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with at least 3 years of hands-on experience in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts (see the short sketch after this list)
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
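For illustration, the cloud object storage and Parquet experience called out above often boils down to patterns like the following minimal PyArrow sketch; the bucket, prefix, and column names are hypothetical placeholders, and AWS credentials are assumed to come from the standard environment (profile, env vars, or instance role).

```python
# Minimal sketch: scanning a partitioned Parquet dataset on S3 with PyArrow.
# The bucket/prefix and column names are hypothetical placeholders.
import pyarrow.dataset as ds

dataset = ds.dataset("s3://analytics-bucket/events/", format="parquet")

# Push the column projection and predicate down to the Parquet scan so only
# the needed columns and row groups are read from object storage.
table = dataset.to_table(
    columns=["event_id", "event_type", "event_ts"],
    filter=ds.field("event_type") == "purchase",
)
print(table.num_rows)
```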
Preferred
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 days of work from office (WFO) per week?
- The virtual interview requires your video to be on; are you okay with that?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); an illustrative connectivity sketch follows this list.
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
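For illustration, one common way BI and data science consumers reach a curated Dremio view from Python is over Arrow Flight. The sketch below is a minimal example only; the host, port, credentials, and view name are hypothetical placeholders, and it assumes a Dremio coordinator with Arrow Flight enabled on the default port 32010.

```python
# Minimal sketch: querying a curated Dremio view over Arrow Flight with PyArrow.
# Host, port, credentials, and the view name are hypothetical placeholders.
from pyarrow import flight

client = flight.FlightClient("grpc+tcp://dremio-coordinator:32010")

# Exchange basic credentials for a bearer token and attach it to call options.
token_pair = client.authenticate_basic_token("analyst_user", "analyst_password")
options = flight.FlightCallOptions(headers=[token_pair])

query = "SELECT region, SUM(revenue) AS revenue FROM curated.sales_summary GROUP BY region"
info = client.get_flight_info(flight.FlightDescriptor.for_command(query), options)

# Read the result set back as an Arrow table from the first endpoint's ticket.
reader = client.do_get(info.endpoints[0].ticket, options)
print(reader.read_all().to_pandas())
```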
Ideal Candidate
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.

Similar jobs
Role & Responsibilities:
We are seeking a Software Developer with 5–10 years of experience and strong foundations in Python, databases, and AI technologies.
The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows.
This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities:
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines (see the sketch after this list).
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
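For illustration, a minimal sketch of the kind of database-driven LLM workflow described above: pull context rows from PostgreSQL with psycopg2, fold them into a prompt, and package the request as JSON. The DSN, table name, and prompt wording are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch: build a JSON chat-completion request from rows in PostgreSQL.
# The DSN, table name, and prompt wording are hypothetical placeholders.
import json
import psycopg2

def build_summary_request(ticket_id: int) -> str:
    with psycopg2.connect("dbname=support user=app password=secret host=localhost") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT author, body FROM ticket_messages "
                "WHERE ticket_id = %s ORDER BY created_at",
                (ticket_id,),
            )
            messages = cur.fetchall()

    transcript = "\n".join(f"{author}: {body}" for author, body in messages)
    prompt = (
        "Summarize the following support conversation in three bullet points:\n\n"
        + transcript
    )
    # JSON payload in the shape many chat-completion style APIs expect.
    return json.dumps({"messages": [{"role": "user", "content": prompt}]})

print(build_summary_request(42))
```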
Role: DevOps Engineer
Experience: 2–3+ years
Location: Pune
Work Mode: Hybrid (3 days Work from office)
Mandatory Skills:
- Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
- Proficiency in scripting languages (Bash, Python, PowerShell); a short scripting sketch follows this list
- Hands-on experience with containerization (Docker) and container management
- Proven experience managing infrastructure (On-premise or AWS/VMware)
- Experience with version control systems (Git/Bitbucket/GitHub)
- Familiarity with monitoring and logging tools for system performance tracking
- Knowledge of security best practices and compliance standards
- Bachelor's degree in Computer Science, Engineering, or related field
- Willingness to support production issues during odd hours when required
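For illustration, the scripting and infrastructure work listed above might look like this minimal boto3 sketch that flags EC2 instances not in the running state; the region is a hypothetical placeholder and AWS credentials are assumed to come from the standard environment.

```python
# Minimal sketch: report EC2 instances that are not in the 'running' state.
# The region is a hypothetical placeholder; credentials come from the
# standard AWS environment (profile, env vars, or instance role).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        state = instance["State"]["Name"]
        if state != "running":
            print(f"{instance['InstanceId']} is {state}")
```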
Preferred Qualifications:
- Certifications in AWS, Docker, or VMware
- Experience with configuration management tools like Ansible
- Exposure to Agile and DevOps methodologies
- Hands-on experience with Virtual Machines and Container orchestration
Contract review and lifecycle management is no longer a niche idea. It is one of the fastest-growing sectors within legal operations automation, with a market size of $10B growing at 15% YoY. InkPaper helps corporations and law firms optimize their contract workflow and lifecycle management by providing workflow automation, process transparency, efficiency, and speed. Automation and blockchain have the power to transform legal contracts as we know them today; if you are interested in being part of that journey, keep reading!
InkPaper.AI is looking for a passionate DevOps Engineer who can drive and build next-generation AI-powered products in Legal Technology: Document Workflow Management and E-signature platforms. You will be part of the product engineering team based out of Gurugram, India, working closely with our team in Austin, USA.
If you are a highly skilled DevOps Engineer with expertise in GCP, Azure, AWS ecosystems, and Cybersecurity, and you are passionate about designing and maintaining secure cloud infrastructure, we would love to hear from you. Join our team and play a critical role in driving our success while ensuring the highest standards of security.
Responsibilities:
- Solid experience in building enterprise-level cloud solutions on one of the big-3 (AWS/Azure/GCP)
- Collaborate with development teams to automate software delivery pipelines, utilizing CI/CD tools and technologies.
- Configure and oversee cloud services, including virtual machines, containers, serverless functions, databases, and networking components, ensuring their effective management and operation.
- Implement robust monitoring, logging, and alerting solutions to ensure optimal system health and performance.
- Develop and maintain documentation for infrastructure, deployment processes, and security procedures.
- Troubleshoot and resolve infrastructure and deployment issues, ensuring system availability and reliability.
- Conduct regular security assessments, vulnerability scans, and penetration tests to identify and address potential threats.
- Implement security controls and best practices to protect systems, data, and applications in compliance with industry standards and regulations.
- Stay updated on emerging trends and technologies in DevOps, cloud, and cybersecurity. Recommend improvements to enhance system efficiency and security.
An ideal candidate would credibly demonstrate various aspects of the InkPaper Culture Code:
- We solve for the customer
- We practice good judgment
- We are action-oriented
- We value deep work over shallow work
- We reward work over words
- We value character over only skills
- We believe the best perk is amazing peers
- We favor autonomy
- We value contrarian ideas
- We strive for long-term impact
You Have:
- B.Tech in Computer Science.
- 2 to 4 years of relevant experience in DevOps.
- Proficiency in GCP, Azure, AWS ecosystems, and Cybersecurity
- Experience with: CI/CD automation, cloud service configuration, monitoring, troubleshooting, security implementation.
- Familiarity with Blockchain would be an advantage.
- Excellent verbal communication skills.
- Good problem-solving skills.
- Attention to detail
At InkPaper, we hire people who will help us change the future of legal services. Even if you do not think you check off every bullet point on this list, we still encourage you to apply! We value both current experience and future potential.
Benefits
- Hybrid environment to work from our Gurgaon Office and from the comfort of your home.
- Great compensation package!
- Tools you need on us!
- Our insurance plan offers medical, dental, vision, short- and long-term disability coverage, plus supplemental for all employees and dependents
- 15 planned leaves + 10 Casual Leaves + Company holidays as per government norms
InkPaper is committed to creating a welcoming and inclusive workplace for everyone. We value and celebrate our differences because those differences are what make our team shine. We hire great people from diverse backgrounds, not just because it is the right thing to do, but because it makes us stronger. We are an equal opportunity employer and do not discriminate against candidates based on race, ethnicity, religion, sex, gender, sexual orientation, gender identity, or disability.
Location: Gurugram or remote
Job description
The ideal candidate must have extensive experience developing microservices using Java Spring Boot. In addition, they should know the installation, configuration, platform operations, and troubleshooting of API products. They should also have experience in API design, BaaS, advanced proxies, analytics, developer portals, and creating RESTful API patterns.
Responsibility
Develop and design RESTful microservices and APIs
Be involved in the development life cycle, including requirement definition and feasibility analysis
Apply the latest software design techniques and contribute to the technical design of new solutions
Troubleshoot issues and solve problems where needed
Write object-oriented and maintainable code
Deliver quality results on time with minimal supervision
Work with multiple stakeholders involved in the project
Deliver APIs with standard documentation as per Open API standards
Skillset
Bachelor’s/Master’s degree in computer science, information technology, or engineering
3–5+ years of experience in developing APIs and microservices using Spring Boot
Expertise in Java
Hands-on experience with OOP concepts, Spring Boot, Spring 3.x, Spring dependency injection (IoC), MVC, JDBC, JMS, etc., and Hibernate or any other ORM
Hands-on experience with RESTful web services
Knowledge of RESTful API design patterns
Working knowledge of databases (SQL Server, Oracle) and SQL
Hands-on experience with NGINX (web server and reverse proxy)
Hands-on experience setting up mTLS between microservices
Hands-on experience with Docker
Hands-on experience in consuming SOAP services from microservices
Hands-on experience in application servers like Tomcat and WebLogic
Hands-on experience with a Java IDE (Eclipse, IntelliJ)
Hands-on experience with dependency and build management tools, preferably Maven
Familiarity with code versioning tools, like Git, SVN, and Mercurial
Familiarity with APIGEE API gateway
Familiarity with Node.js
Onsite - Bahrain
Our client is a 5-year-old travel and hostel space platform, a youth hostel chain where the spaces are designed for work and leisure. Their mission is to provide quality spaces for youths looking for hostel options and for other travelers across locations in India. They provide clean and hygienic dorms and private rooms along with facilities like Wi-Fi, a travel helpdesk, fully stocked kitchens, lockers, laundry services, etc.
They currently provide 800 beds in 13 locations and plan to add another 1,600 in the next few months, with the aim of reaching at least one lakh (100,000) beds in the near future. The spaces they offer are highly rated by their customers and are preferred over competitors for the location and services offered within a decent budget. Backed by some very well-known names across Asia, the company provides its team with a work culture that thrives on modern yet social and explorative values.
What you will do:
- Taking brand ownership and managing brand strategy, PR activities
- Collaborating with external stakeholders and internal teams to establish superior brand positioning and a strong value proposition to our target group, defining our corporate identity
- Strategizing and drafting all external communication material for corporate partnerships, website, public relations and media
- Translating brand strategy into the brand plan and go-to-market strategy
- Working closely with the Marketing team
- Ensuring content meets business objectives while adhering to brand guidelines
Desired Candidate Profile
What you need to have:
- Master's/Bachelor's degree in journalism, marketing, public relations, or communications
- 1-3 years of experience in content writing; experience in digital content would be a plus
- Superior writing, editing, and verbal communication skills
- Ability to craft and deliver key messages to a variety of audiences
- A reputation for being a stickler for grammar
- Ability to identify and leverage opportunities to advance the team's communications objectives in creative and memorable ways that utilize a variety of media
- Ability to manage multiple projects and engagements simultaneously
About the job
👉 TL;DR: We at Sarva Labs Inc. are looking for experienced Site Reliability Engineers to join our team. As a Protocol Developer, you will handle assets in data centers across Asia, Europe, and the Americas for the World’s First Context-Aware Peer-to-Peer Network enabling Web4.0. We are looking for someone who will take ownership of DevOps, establish proper deployment processes, work with engineering teams, and hustle through the Main Net launch.
About Us 🚀
Imagine if each user had their own chain with each transaction being settled by a dynamic group of nodes who come together and settle that interaction with near immediate finality without a volatile gas cost. That’s MOI for you, Anon.
Visit https://www.sarva.ai/ to know more about who we are as a company
Visit https://www.moi.technology/ to know more about the technology and team!
Visit https://www.moi-id.life/ , https://www.moibit.io/ , https://www.moiverse.io/ to know more
Read our developer documentation at https://apidocs.moinet.io/
What you'll do 🛠
- You will take over the ownership of DevOps, establish proper deployment processes and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium to large scale environments
- Monitor components to proactively prevent system component failure, and advise the engineering team on system characteristics that require improvement
- You will ensure the uninterrupted operation of components through proactive resource management and activities such as security/OS/Storage/application upgrades
You'd fit in 💯 if you...
- Familiar with any of these providers: AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4you, Velia, Psychz, Tier and so on
- Experience in virtualizing bare metal using OpenStack / VMware / similar is a PLUS
- Seasoned in building and managing VMs, containers, and clusters across continents
- Confident in making the best use of Docker and Kubernetes, including StatefulSet deployments, autoscaling, rolling updates, the UI dashboard, replication, persistent volumes, and ingress (a brief monitoring sketch follows this list)
- Must have experience deploying in multi-cloud environments
- Working knowledge of automation tools such as Terraform, Travis, Packer, Chef, etc.
- Working knowledge of scalability in a distributed and decentralised environment
- Familiar with Apache, Rancher, Nginx, SELinux/Ubuntu 18.04 LTS/CentOS 7 and RHEL
- Familiar with monitoring tools like PM2, Grafana, and so on
- Hands-on with ELK stack/similar for log analytics
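For illustration, the proactive monitoring described above can be sketched with the official Kubernetes Python client; this minimal example assumes a local kubeconfig and simply reports pods that are not Running or Succeeded.

```python
# Minimal sketch: flag pods that are not in a healthy phase across all namespaces.
# Assumes a kubeconfig is available locally; inside a cluster, use
# config.load_incluster_config() instead.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```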
🌱 Join Us
- Flexible work timings
- We’ll set you up with your workspace. Work out of our Villa which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)
Expected Skills:
- Knowledge and application of data structures and algorithms.
- Problem-solving attitude: when you face a problem (not necessarily technical), your natural reaction is "How can I solve it best?" rather than "How can I get out of it / avoid it / overlook it."
- Working knowledge of Java (Spring Framework) is a must.
- Working knowledge of ReactJS is a plus.
- Working knowledge of Python and MongoDB is a plus.
- The founding team has extensive experience in launching and scaling up fintech products and new business verticals. The founders have rich leadership experience across consulting, fintech, and payment companies.
- You have experience testing a range of mobile applications and defining mobile testing strategies
- Proficient in end-to-end testing for both Android and iOS applications (React Native)
- Experience in creating test automation flows (UI/API), preferably mobile
- You have a good eye for performance, usability, and pixel-perfect displays
- Experience working in product teams, more specifically on mobile and web consumer-facing products
- Experience using CI/CD tools is a plus
Verification:
- Design and development of Verification IPs and models for ADAS and AV customers.
- Understand customers’ verification pain points, translate them into technical terms, and propose new features for the product
- Work with the customers on adopting the product, training, helping them address issues
Deployment
- Deploy verification solution (composed of a product, language, and methodology) for customers
- Support installation, integration, and assimilation of new versions and/or SW packages at customer sites
- Develop application notes and methodology documents, as well as technical presentations








