

Peak Hire Solutions
About
Peak Hire Solutions is a leading recruitment firm that provides its clients with innovative IT and non-IT recruitment solutions. We pride ourselves on our creativity, quality, and professionalism. Join our team and be part of shaping the future of recruitment.
Jobs at Peak Hire Solutions
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of a healthcare division of Focus Group that specializes in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation (a minimal speech-to-text sketch follows this list)
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
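To make the speech-to-text piece concrete, here is a minimal transcription sketch using the open-source openai-whisper package; the model size and the audio file name are placeholders, and a production pipeline would add medical-domain post-processing.

```python
# Minimal ASR sketch with the open-source openai-whisper package.
# Assumes `pip install openai-whisper` and ffmpeg are available;
# "dictation.wav" is a placeholder audio file.
import whisper

model = whisper.load_model("base")           # small general-purpose checkpoint
result = model.transcribe("dictation.wav")   # dict with "text" and "segments"

print(result["text"])                        # full transcript for downstream NLP
for seg in result["segments"]:               # per-segment timestamps
    print(f"{seg['start']:.1f}-{seg['end']:.1f}s: {seg['text']}")
```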
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models (a fine-tuning sketch follows this list)
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
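As an illustration of the fine-tuning workflow, the sketch below adapts a small transformer with the Hugging Face Trainer API; the base model, the public dataset, and the hyperparameters are placeholders rather than the team's actual clinical setup.

```python
# Generic fine-tuning sketch with Hugging Face Transformers and Datasets.
# Model name, dataset, and label count are placeholders for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                    # placeholder for clinical text

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```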
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures (a toy retrieval sketch follows this list)
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
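Since LangChain, vector databases, and RAG come up throughout this role, the sketch below shows the retrieval-augmented generation pattern in a library-agnostic way; embed() and generate() are hypothetical stand-ins for a real embedding model and an LLM API, and the sample documents are invented.

```python
# Toy RAG loop. embed() and generate() are hypothetical stand-ins for a real
# embedding model and LLM API (e.g. via LangChain/OpenAI); documents are fake.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: pseudo-embedding derived from the text hash, illustration only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (OpenAI API, local LLaMA, etc.).
    return "[LLM answer grounded in]\n" + prompt

documents = [
    "Patient reports intermittent chest pain over two weeks.",
    "Metformin 500 mg prescribed twice daily for type 2 diabetes.",
    "Follow-up MRI scheduled to rule out meniscal tear.",
]
doc_vectors = np.stack([embed(d) for d in documents])    # in-memory "vector store"

def answer(question: str, k: int = 2) -> str:
    scores = doc_vectors @ embed(question)                # cosine similarity (unit vectors)
    top = [documents[i] for i in np.argsort(scores)[::-1][:k]]
    prompt = "Context:\n" + "\n".join(top) + "\n\nQuestion: " + question
    return generate(prompt)

print(answer("What medication was prescribed?"))
```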
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills

JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity (see the automation sketch after this list).
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
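A minimal Python automation sketch of the clone-and-LFS steps described above follows; the depot path, LFS patterns, and remote URL are placeholders, and a real migration would also handle credentials, incremental `git p4 sync` runs, and branch/stream mapping.

```python
# Minimal automation sketch for a Perforce -> GitHub migration step.
# Depot path, repo name, LFS patterns, and remote URL are placeholders.
import subprocess

def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

depot = "//depot/project/main"        # placeholder Perforce path
repo_dir = "project-git"

# 1. Clone history from Perforce with the git-p4 bridge shipped with Git.
run(["git", "p4", "clone", f"{depot}@all", repo_dir])

# 2. Track large binaries with Git LFS so new commits stay under GitHub's
#    100 MB file limit (files already large in history would additionally
#    need `git lfs migrate import`).
run(["git", "lfs", "install"], cwd=repo_dir)
run(["git", "lfs", "track", "*.psd", "*.bin"], cwd=repo_dir)
run(["git", "add", ".gitattributes"], cwd=repo_dir)
run(["git", "commit", "-m", "Track large binaries with Git LFS"], cwd=repo_dir)

# 3. Push to the new GitHub Enterprise remote (URL is a placeholder).
run(["git", "remote", "add", "origin",
     "git@github.example.com:org/project.git"], cwd=repo_dir)
run(["git", "push", "-u", "origin", "--all"], cwd=repo_dir)
```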
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform (a small automation sketch follows this list).
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
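For the Terraform piece, here is a small sketch of driving the `terraform` CLI from a pipeline script in Python (one of the preferred scripting languages for this role); the module directory and workspace name are placeholders, and a real GitLab CI job would also manage remote state, secrets, and plan approval.

```python
# Sketch of driving Terraform from a pipeline script (e.g. a GitLab CI job).
# Working directory and workspace name are placeholders.
import subprocess

def tf(*args, workdir="infra/azure/app"):
    cmd = ["terraform", f"-chdir={workdir}", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

tf("init", "-input=false")
tf("workspace", "select", "dev")          # assumes the workspace already exists
tf("plan", "-input=false", "-out=tfplan")
tf("apply", "-input=false", "tfplan")
```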
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Mandatory: Azure, Terraform, GitLab CI, Splunk. Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune

JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead the company's technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company's core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50-100+ member) multi-product engineering teams.
· Must have worked on B2B Platforms; experience in Procurement Tech or Supply Chain.
· Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.
· Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ Years of Expertise in B2B eCommerce: architecture of Headless Commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master's degree.
**************
Joining time / Notice Period: Immediate to 45 days.
Location: Andheri.
5 days working (hybrid: 3 days in office, 2 days from home)

JOB DETAILS:
* Job Title: Head - Visual Communication (Consumer Electronics)
* Industry: ECommerce and Electronics Industry
* Salary: Best in Industry
* Experience: 10-15 years
* Location: Gurugram
Role & Responsibilities
- Head and manage work intake and the overall design project assignment process.
- Interpreting abstract business concepts and turning them into creative ideas.
- Head and direct the team, providing key ideas, methods, and brand positioning.
- Developing strategic design plans with projected timelines.
- Pitching ideas and the creative vision and communicating the project outline to the design team.
- Choosing the design elements for different projects.
- Overseeing the design projects, from start to finish, and monitoring the team members.
- Analyzing market research to create more effective designs.
Ideal Candidate
- Strong Creative Director or Design Lead profiles
- Must have a minimum of 10+ years of experience in Visual/Graphic Design, branding, and marketing campaigns
- Must have strong experience in brand campaigns for consumer electronics / durable products (such as smartphones, smartwatches, and other consumer electronics) or automobile brands; review the clients/brands the candidate has worked for
- Must be a Design focused profile, not copywriting focused
- Must be managing a Creative team currently (Lead or Above in Current role)
- (Portfolio) - Very Strong portfolio of Visual Design / branding works for Physical Consumer Products (Candidate should demonstrate strong portfolio evidence of creative direction, with competency in ideation, visualization, and Design)
- Candidates with international exposure and experience on global brands are highly preferred.
Review Criteria:
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Role & Responsibilities:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns (a minimal query sketch follows this list).
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
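As a rough illustration of programmatic access to the curated layer, the sketch below submits SQL to Dremio over its REST API; the host, credentials, endpoint paths (/apiv2/login, /api/v3/sql, /api/v3/job), and job-state names are assumptions based on Dremio's documented v3 API and may differ by version, so treat this as a sketch rather than the project's actual integration.

```python
# Rough sketch: submit SQL to Dremio over its REST API and fetch the results.
# Host, credentials, endpoint paths, and job-state names are assumptions;
# production access would more likely use JDBC or Arrow Flight plus proper
# secret handling.
import time
import requests

BASE = "https://dremio.example.com"            # placeholder coordinator URL

# Log in and build the token header Dremio expects.
login = requests.post(f"{BASE}/apiv2/login",
                      json={"userName": "analyst", "password": "example-only"})
headers = {"Authorization": "_dremio" + login.json()["token"]}

# Submit a query against a curated semantic-layer view (placeholder path).
sql = "SELECT region, SUM(amount) AS total FROM curated.sales.orders GROUP BY region"
job_id = requests.post(f"{BASE}/api/v3/sql", headers=headers,
                       json={"sql": sql}).json()["id"]

# Poll until the job finishes, then read the first page of results.
while True:
    state = requests.get(f"{BASE}/api/v3/job/{job_id}", headers=headers).json()["jobState"]
    if state in ("COMPLETED", "FAILED", "CANCELED"):
        break
    time.sleep(1)

rows = requests.get(f"{BASE}/api/v3/job/{job_id}/results", headers=headers).json()["rows"]
print(rows[:5])
```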
Ideal Candidate:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.

JOB DETAILS:
* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5-8 years
* Location: Hyderabad
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
- Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
- Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
Big Data Processing
- Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka (a minimal streaming sketch follows this list)
- Ensure fault-tolerant, scalable, and high-performance data processing systems.
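To ground the Spark/Kafka requirement, here is a minimal PySpark structured-streaming sketch that reads a Kafka topic (for example on AWS MSK) and lands it as Parquet on S3; the broker list, topic, and S3 paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Minimal structured-streaming sketch: Kafka topic -> Parquet on S3.
# Brokers, topic, and paths are placeholders; assumes the spark-sql-kafka
# connector is available. A real job would add schema parsing, watermarks,
# and dead-letter handling.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "orders")
          .option("startingOffsets", "latest")
          .load()
          .select(col("key").cast("string"),
                  col("value").cast("string"),
                  col("timestamp")))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://data-lake/raw/orders/")
         .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
         .outputMode("append")
         .start())

query.awaitTermination()
```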
Cloud Infrastructure Development
- Build and manage scalable, cloud-native data infrastructure on AWS.
- Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.
Real-Time & Batch Data Integration
- Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
- Ensure consistency, data quality, and a unified view across multiple data sources and formats.
Data Analysis & Insights
- Partner with business teams and data scientists to understand data requirements.
- Perform in-depth data analysis to identify trends, patterns, and anomalies.
- Deliver high-quality datasets and present actionable insights to stakeholders.
CI/CD & Automation
- Implement and maintain CI/CD pipelines using Jenkins or similar tools.
- Automate testing, deployment, and monitoring to ensure smooth production releases.
Data Security & Compliance
- Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
- Implement data governance practices ensuring data integrity, security, and traceability.
Troubleshooting & Performance Tuning
- Identify and resolve performance bottlenecks in data pipelines.
- Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.
Collaboration & Cross-Functional Work
- Work closely with engineers, data scientists, product managers, and business stakeholders.
- Participate in agile ceremonies, sprint planning, and architectural discussions.
Skills & Qualifications
Mandatory (Must-Have) Skills
- AWS Expertise
- Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
- Strong understanding of cloud-native data architectures.
- Big Data Technologies
- Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
- Experience with Apache Spark and Apache Kafka in production environments.
- Data Frameworks
- Strong knowledge of Spark DataFrames and Datasets.
- ETL Pipeline Development
- Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
- Database Modeling & Data Warehousing
- Expertise in designing scalable data models for OLAP and OLTP systems.
- Data Analysis & Insights
- Ability to perform complex data analysis and extract actionable business insights.
- Strong analytical and problem-solving skills with a data-driven mindset.
- CI/CD & Automation
- Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
- Familiarity with automated testing and deployment workflows.
Good-to-Have (Preferred) Skills
- Knowledge of Java for data processing applications.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
- Familiarity with data governance frameworks and compliance tooling.
- Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Exposure to cost optimization strategies for large-scale cloud data platforms.
Skills: big data, scala spark, apache spark, ETL pipeline development
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Note: If a candidate is a short-notice joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer.
F2F Interview: 14th Feb 2026
3 days in office, Hybrid model.

Role & Responsibilities:
As a Product Designer (Growth), you'll be the design force behind our growth initiatives—obsessing over conversion funnels, user onboarding, engagement mechanics, and monetization touchpoints. You'll combine deep UX expertise with a growth-hacking mindset, leveraging data and experimentation to design experiences that drive measurable business impact. We're looking for a metrics-driven, hypothesis-oriented designer who thrives on turning insights into scalable growth levers.
What You’ll Do:
- Drive growth-focused design initiatives across acquisition, activation, retention, and revenue optimization
- Design and optimize conversion funnels from first touch to long-term engagement, focusing on key growth metrics (CAC, LTV, retention rates, conversion rates)
- Lead experimentation through design—create A/B tests, landing page variants, onboarding flows, and feature experiments that drive statistically significant improvements (a toy significance check follows this list)
- Own critical growth touchpoints including signup flows, profile completion, subscription conversions, re-engagement campaigns, and referral mechanisms
- Translate growth insights into design solutions—turn user behavior data, cohort analyses, and business requirements into high-impact user experiences
- Design scalable growth systems—create templates, frameworks, and design patterns that enable rapid testing and iteration across growth initiatives
- Collaborate with Growth PMs, Data Analysts, and Engineers to identify growth opportunities, prioritize experiments, and measure design impact on key business metrics
- Optimize post-launch performance through continuous testing, user feedback integration, and data-driven design iterations
- Establish growth design culture by championing experimentation, sharing learnings, and mentoring team members on growth-focused design thinking
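Because several of these items hinge on statistically significant results, here is a toy two-proportion z-test for an A/B experiment on signup conversion; the counts are invented, and real analysis would also account for test power, peeking/sequential corrections, and guardrail metrics.

```python
# Toy two-proportion z-test for an A/B experiment on signup conversion.
# Counts are made-up illustrations only.
from math import erf, sqrt

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

p_a, p_b, z, p = conversion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"control {p_a:.2%} vs variant {p_b:.2%}, z={z:.2f}, p={p:.3f}")
```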
Growth-Specific Focus Areas:
- User acquisition and conversion optimization
- Onboarding and activation experience design
- Subscription and monetization flow optimization
- Retention and re-engagement campaign design
- Referral and viral growth mechanism design
- Personalization and recommendation interface design
Ideal Candidate
- Masters/Bachelor's from premier institutions (IIT, BITS, NID)
- 5 -8 years of full-stack product design experience with proven growth impact
- Strong portfolio demonstrating conversion optimization and growth-focused design solutions
- Experience with growth experimentation, A/B testing, and data-informed design decisions
- Proficiency in Figma, prototyping tools, analytics platforms, and growth measurement tools
- Understanding of subscription business models, pricing psychology, and monetization strategies
- Experience designing for both web and mobile platforms at scale
- Strong analytical mindset with ability to translate business metrics into design solutions
- Knowledge of user acquisition channels and performance marketing touchpoints
Bonus Traits:
- Experience in subscription-based or marketplace products
- Background in conversion rate optimization (CRO)
- Understanding of international market expansion and localization
- Experience with AI-powered personalization and recommendation systems








