
50+ Windows Azure Jobs in India

Apply to 50+ Windows Azure Jobs on CutShort.io. Find your next job, effortlessly. Browse Windows Azure Jobs and apply today!

Remote only
0 - 0 yrs
₹1.8L - ₹2.4L / yr
ASP.NET
ASP.NET MVC
Microsoft SQL Server
Microsoft Visual C#
JavaScript
+6 more

Job Title: Technology Intern

Location: Remote (India)

Shift Timings:

  • 5:00 PM – 2:00 AM
  • 6:00 PM – 3:00 AM

Compensation: Stipend


Job Summary

ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for final-year students (2026 pass-outs) who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.


Eligibility & Qualifications

  • Education:
      • B.Tech (Computer Science) / M.Tech (Computer Science)
      • Tier 1 colleges preferred
      • Final semester students (2026 pass-outs) or recent interns
  • Experience Level: Fresher
  • Communication: Excellent English communication skills (verbal & written)


Skills Required

1. Technical Skills (Must Have)

  • Experience with .NET Core (.NET 6 / 7 / 8)
  • Strong knowledge of C#, including:
      • Object-Oriented Programming (OOP) concepts
      • async/await
      • LINQ
  • ASP.NET Core (Web API / MVC)

2. Database Skills

  • SQL Server (preferred):
      • Writing complex SQL queries, joins, and subqueries
      • Stored Procedures, Functions, and Indexes
      • Database design and performance tuning
  • Entity Framework Core:
      • Migrations and transaction handling

3. Frontend Skills (Required)

  • JavaScript (ES5 / ES6+)
  • jQuery
  • DOM manipulation
  • AJAX calls
  • Event handling
  • HTML5 & CSS3
  • Client-side form validation

4. Security & Performance

  • Data validation and exception handling
  • Caching concepts (In-memory / Redis – good to have)

5. Tools & Environment

  • Visual Studio / VS Code
  • Git (GitHub / Azure DevOps)
  • Basic knowledge of server deployment

6. Good to Have (Optional)

  • Azure or AWS deployment experience
  • CI/CD pipelines
  • Docker
  • Experience with data handling

Additional Requirements (Work-from-Home Setup)

This role supports remote work. Candidates must meet the following minimum infrastructure requirements:

  • Laptop/Desktop: Windows-based system
  • Operating System: Windows
  • Screen Size: Minimum 14 inches
  • Screen Resolution: Full HD (1920 × 1080)
  • Processor: Intel i5 or higher
  • RAM: Minimum 8 GB (Mandatory)
  • Software: AnyDesk
  • Internet Speed: 100 Mbps or higher


About ARDEM


ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. For over 20 years, ARDEM has successfully delivered high-quality outsourcing and automation services to clients across the USA and Canada.

We are growing rapidly and continuously innovating to become a better service provider for our customers. Our mission is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company in the industry.

aXtrLabs
Posted by aXtrLabs Careers
Coimbatore
1 - 3 yrs
₹3L - ₹4.8L / yr
Microsoft Windows Azure
Cloud Computing
Policies and procedures
IaaS
Platform as a Service (PaaS)
+10 more

Role Overview

The Azure Presales Engineer is responsible for engaging with customers to understand their business and technical requirements and translating them into well-architected Microsoft Azure solutions. This role plays a key part in cloud transformation initiatives by supporting presales activities, building solution proposals, responding to RFPs, and ensuring a smooth transition from presales to delivery.

Key Responsibilities

  • Participate in customer discovery sessions to gather technical and business requirements
  • Design Azure cloud architectures across IaaS, PaaS, and hybrid environments following best practices
  • Prepare technical solution proposals, architectures, BOMs, and presales documentation
  • Support RFP and RFQ responses with detailed technical inputs and cost estimations
  • Deliver Azure solution demonstrations, workshops, and technical presentations to customers
  • Collaborate closely with sales and delivery teams to ensure accurate solution design and handover
  • Stay updated with Azure services, licensing models, pricing, and new feature releases
  • Work with Microsoft account teams for co-selling opportunities, funding programs, and alignment
  • Contribute to reusable presales assets, templates, and solution accelerators

Required Qualifications

  • 2–3+ years of experience in Azure cloud engineering or presales roles
  • Strong hands-on understanding of Azure core services including compute, storage, networking, security, IAM, monitoring, backup, and disaster recovery
  • Experience in preparing technical proposals, SOWs, and solution designs
  • Strong communication, presentation, and customer-facing skills
  • Ability to translate business needs into effective cloud solutions
  • Experience working with or for a Microsoft Partner is a strong plus

Preferred Certifications

  • AZ-104, AZ-305, AZ-900, AZ-700, AZ-500 (any relevant Azure certifications)


Kanerika Software
Posted by Ariba Khan
Hyderabad, Indore, Ahmedabad
5 - 8 yrs
Upto ₹22L / yr (Varies)
Windows Azure
Amazon Web Services (AWS)
Terraform
Docker
Kubernetes

About Kanerika:

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.

We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


Awards and Recognitions:

Kanerika has won several awards over the years, including:

1. Best Place to Work 2023 by Great Place to Work®

2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today

3. NASSCOM Emerge 50 Award in 2014

4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture

5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.


Working for us:

Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.


Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.


About the role:

As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, you will be responsible for designing, implementing, and maintaining scalable, automated, and cloud-native infrastructure solutions.


Key Requirements:

  • 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
  • Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Solid experience with Kubernetes and containerized applications.
  • Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
  • Scripting/programming skills in Python, Shell, or Go for automation.
  • Hands-on experience with monitoring, logging, and incident management.
  • Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning). 
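
To give a concrete flavour of the scripting-driven automation this role calls for, here is a minimal, hypothetical Python sketch of a health-check script of the kind that could back monitoring or a CI/CD gate. The endpoint names and thresholds are illustrative assumptions, not part of this posting.

```python
"""Minimal health-check automation sketch (illustrative only).

Assumes the `requests` library is installed; the endpoint list is hypothetical.
"""
import sys
import requests

# Hypothetical service endpoints to poll; replace with real ones.
ENDPOINTS = {
    "api": "https://api.example.com/healthz",
    "worker": "https://worker.example.com/healthz",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        resp = requests.get(url, timeout=timeout)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    print(f"{name}: {'OK' if ok else 'FAIL'} ({url})")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    # A non-zero exit code lets cron or a CI/CD job flag the failure.
    sys.exit(0 if all(results) else 1)
```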

Employee Benefits:

1. Culture:

  1. Open Door Policy: Encourages open communication and accessibility to management.
  2. Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  3. Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  4. Employee Referral Bonus: Rewards employees for referring qualified candidates.
  5. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  1. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  2. Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  1. GMC and Term Insurance: Offers medical coverage and financial protection.
  2. Health Insurance: Provides coverage for medical expenses.
  3. Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  1. Company-sponsored family events: Creates opportunities for employees and their families to bond.
  2. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  3. Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  1. Company-sponsored outings: Organizes recreational activities for employees.
  2. Gratuity: Provides a monetary benefit as a token of appreciation.
  3. Provident Fund: Helps employees save for retirement.
  4. Generous PTO: Offers more than the industry standard for paid time off.
  5. Paid sick days: Allows employees to take paid time off when they are unwell.
  6. Paid holidays: Gives employees paid time off for designated holidays.
  7. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  1. L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  2. Mentorship Program: Offers guidance and support from experienced professionals.
  3. Job Training: Provides training to enhance job-related skills.
  4. Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
  5. Promote from Within: Encourages internal growth and advancement opportunities.


Cymetrix Software
Posted by Netra Shettigar
Remote only
4 - 9 yrs
₹10L - ₹16L / yr
Jasmine (JavaScript Testing Framework)
NodeJS (Node.js)
Google Cloud Platform (GCP)
Windows Azure
CI/CD

Role: Software Development (Senior and Associate)

Experience Level: 4 to 9 Years

Work location: Remote


What you’ll do:

We are seeking a Mid-Level Node.js Developer to join our development team as an individual contributor. You will design, develop, and maintain scalable microservices for diverse client projects, working on enterprise applications that require high performance, reliability, and seamless deployment in containerized environments.


Key Responsibilities:

● Develop and maintain scalable Node.js microservices for diverse client projects

● Implement robust REST APIs with proper error handling and validation

● Write comprehensive unit and integration tests ensuring high code quality

● Design portable, efficient solutions deployable across different client environments

● Collaborate with cross-functional teams and client stakeholders

● Optimize application performance for high-concurrency scenarios

● Implement security best practices for enterprise applications

● Participate in code reviews and maintain coding standards

● Support deployment and troubleshooting in client environments


Must have skills:

Core Technical Expertise:

● Node.js: 4+ years of production experience with Node.js (ES6+, Async/Await, Promises, Event Loop understanding)

● Frameworks: Strong hands-on experience with Express.js, Fastify, or NestJS

● REST API Development: Proven experience designing and implementing RESTful web services, middleware implementation

● JavaScript/TypeScript: Proficient in modern JavaScript (ES6+) and TypeScript for type-safe development

● Testing: Experience with testing frameworks (Jest, Mocha, Chai), unit testing, integration testing, mocking

Microservices & Deployment:

● Containerization: Hands-on Docker experience for packaging and deploying Node.js applications

● Microservices Architecture: Understanding of service decomposition, inter-service communication, event-driven architecture

● Abstraction & Portability: Environment-agnostic design, configuration management (dotenv, config modules)

● Build Tools: NPM/Yarn for dependency management, understanding of package.json


Good to have skills:

Advanced Technical:

● Advanced Frameworks: NestJS, Koa.js, Hapi.js

● Orchestration: Kubernetes, Docker

● Cloud Platforms: Alibaba, Azure, or GCP services and deployment

● Message Brokers: Apache Kafka, RabbitMQ for asynchronous communication

● Databases: Both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra)

● API Gateway: Express Gateway, Kong API Gateway

Development & Operations:

● CI/CD pipelines (Jenkins, GitLab CI/CD)

● Monitoring & Observability (Winston, Morgan, Prometheus, New Relic)

● GraphQL with Apollo Server or similar

● Security best practices (Helmet.js, authentication, authorization)

Client-Facing Experience:

● Experience working in service-based organizations

● Adaptability to different domain requirements

● Understanding of various industry standards and compliance requirements


Why Join Quantiphi?

● Be part of an award-winning Google Cloud partner recognized for innovation and impact.

● Work on cutting-edge GCP-based data engineering and AI projects.

● Collaborate with a global team of data scientists, engineers, and AI experts.

● Access continuous learning, certifications, and leadership development opportunities.

Navi Mumbai
6 - 10 yrs
₹12L - ₹18L / yr
DevOps
Microsoft SQL Server
Windows Azure

About Us:

Teknobuilt is an innovative construction technology company accelerating a Digital and AI platform that supports all aspects of program management and execution, covering workflow automation, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, the UK, and S. Korea, and we are at the frontiers of solving key challenges in the built environment and digital health, safety and quality.

Teknobuilt's vision is helping the world build better: safely, smartly and sustainably. We are on a mission to modernize construction by bringing a Digitally Integrated Project Execution System - PACE - and expert services to midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in real-time project execution, health and safety, quality and field management for greater visibility and cost savings. PACE enables digital workflows, remote working, and AI-based analytics to bring speed, flow and surety to project delivery. Our platform has received recognition globally for innovation and we are experiencing a period of significant growth for our solutions.


Job description:

IT Infrastructure & System Administration:

Manage Windows/Linux servers, desktop systems, Active Directory, DNS, DHCP, and virtual environments (VMware/Hyper-V). Monitor system performance and implement improvements for efficiency and availability.

Oversee patch management, backups, disaster recovery, and security configurations. Ensure IT compliance, conduct audits, and maintain detailed documentation

DevOps & Cloud Operations:

Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, or similar tools. Manage container orchestration using Kubernetes and deploy infrastructure using Terraform. Administer and optimize AWS cloud infrastructure. Automate deployment, monitoring, and alerting solutions for production environments.

Security, Maintenance & Support:

Define and enforce IT and DevOps security policies and procedures. Perform root cause analysis (RCA) for system failures and outages. Provide Tier 2/3 support and resolve complex system and production issues.

Collaboration & Communication:

Coordinate IT projects (e.g., upgrades, migrations, cloud implementations). Collaborate with engineering and product teams for release cycles and production deployments.

Maintain clear communication with internal stakeholders and provide regular reporting.


Qualification:

8+ years of experience in IT systems administration and/or DevOps roles

Minimum of 8-10 years of experience as a Windows Administrator or in a similar role.

Strong knowledge of Windows Server (2016/2019/2022) and Windows operating systems.

Experience with Active Directory, Group Policy, DNS, DHCP, and other Windows-based services.

Familiarity with virtualization technologies (e.g., VMware, Hyper-V).

Proficiency in scripting languages (e.g., PowerShell).

Strong understanding of networking principles and protocols.

Relevant certifications (e.g., MCSA, MCSE) are a plus.


Salary Range: Competitive

Employment Type: Full Time

Location: Mumbai / Navi Mumbai

Qualification: Any graduate or master’s degree in science, engineering or technology

Kochi (Cochin), Trivandrum (Thiruvananthapuram)
8 - 15 yrs
₹16L - ₹19L / yr
MS SQL Server
JavaScript
jQuery
API
.NET
+3 more

Job Role: Senior .NET Developer

Experience: 8+ years

Notice Period: Immediate

Location: Trivandrum / Kochi


Job Description

Candidates should have 8+ years of experience in the IT industry with strong .NET / .NET Core / Azure Cloud Services / Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud.

Working hours: 8 hours, with 4 hours of overlap with the EST time zone (12 PM - 9 PM). This overlap is mandatory as meetings happen during these hours.

Responsibilities


☑ Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery


☑ Integrate and support third-party APIs and external services

☑ Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack

☑ Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)

☑ Participate in Agile/Scrum ceremonies and manage tasks using Jira

☑ Understand technical priorities, architectural dependencies, risks, and implementation challenges

☑ Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability.

Primary Skills

8+ years of hands-on development experience with:

☑ C#, .NET Core 6/8+, Entity Framework / EF Core

☑ JavaScript, jQuery, REST APIs

☑ Expertise in MS SQL Server, including:

☑ Complex SQL queries, Stored Procedures, Views, Functions, Packages, Cursors, Tables, and Object Types

☑ Skilled in unit testing with XUnit, MSTest

☑ Strong in software design patterns, system architecture, and scalable solution design

☑ Ability to lead and inspire teams through clear communication, technical mentorship, and ownership

☑ Strong problem-solving and debugging capabilities

☑ Ability to write reusable, testable, and efficient code

☑ Develop and maintain frameworks and shared libraries to support large-scale applications

☑ Excellent technical documentation, communication, and leadership skills

☑ Microservices and Service-Oriented Architecture (SOA)

☑ Experience in API Integrations


2+ years of hands-on experience with Azure Cloud Services, including:

☑ Azure Functions

☑ Azure Durable Functions

☑ Azure Service Bus, Event Grid, Storage Queues

☑ Blob Storage, Azure Key Vault, SQL Azure

☑ Application Insights, Azure Monitoring.

Secondary Skills

☑ Familiarity with AngularJS, ReactJS, and other front-end frameworks

☑ Experience with Azure API Management (APIM)

☑ Knowledge of Azure Containerization and Orchestration (e.g., AKS/Kubernetes)

☑ Experience with Azure Data Factory (ADF) and Logic Apps

☑ Exposure to Application Support and operational monitoring

☑ Azure DevOps - CI/CD pipelines (Classic / YAML).

Certifications Required (If Any)

☑ Microsoft Certified: Azure Fundamentals

☑ Microsoft Certified: Azure Developer Associate

☑ Other relevant certifications in Azure, .NET, or Cloud technologies.

ARDEM Incorporated
Posted by Alka Yadav
Remote only
0 - 0 yrs
₹1L - ₹1.5L / yr
IT infrastructure
Disaster recovery
IT operations
Amazon Web Services (AWS)
Windows Azure
+1 more

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


Key Responsibilities:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
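
As a rough illustration of the AWS-facing tasks listed above (EC2, S3, IAM), here is a minimal Python sketch using boto3 to inventory resources. It assumes boto3 is installed and AWS credentials are already configured; the region is an illustrative assumption, not part of the posting.

```python
"""Minimal AWS inventory sketch (illustrative, not part of the posting).

Assumes boto3 is installed and AWS credentials are already configured.
"""
import boto3

def list_ec2_instances(region: str = "ap-south-1") -> None:
    """Print instance IDs and states in the given region (region is illustrative)."""
    ec2 = boto3.client("ec2", region_name=region)
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])

def list_s3_buckets() -> None:
    """Print the names of all S3 buckets in the account."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])

if __name__ == "__main__":
    list_ec2_instances()
    list_s3_buckets()
```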


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with 100 Mbps speed


Oddr Inc
Posted by Deepika Madgunki
Remote only
2.5 - 5 yrs
₹5L - ₹20L / yr
.NET
Windows Azure
Microsoft Windows Azure
C#
Object Oriented Programming (OOPs)
+1 more

Job Overview

As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.


Responsibilities

  • Forward-Looking Product Development:
      o Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.
      o Proactively consider and address security, performance, and scalability requirements during development.
  • Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
  • DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
  • Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
  • Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
  • Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
  • Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.

Qualifications

  • Technical Skills:
      o Strong experience with .NET Core for backend development and RESTful API design.
      o Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
      o Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
      o Strong knowledge of product security best practices and experience implementing secure coding practices.
      o Familiarity with QA processes and automated testing tools is a plus.
      o Ability to support team members in solving technical challenges and sharing knowledge effectively.

Preferred Qualifications

  • 3+ years of experience in software development, with a strong focus on .NET Core.
  • Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
  • Strong problem-solving skills and ability to work in a fast-paced, startup environment.

What We Offer

  • Opportunity to lead and grow within a dynamic and ambitious team.
  • Challenging projects that focus on innovation and cutting-edge technology.
  • Collaborative work environment with a focus on learning, mentorship, and growth.
  • Competitive compensation, benefits, and stock options.

If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and you’re ready to make an impact, we’d love to meet you!


JobTwine
Posted by Ariba Khan
Bengaluru (Bangalore)
4 - 5 yrs
Upto ₹25L / yr (Varies)
Amazon Web Services (AWS)
Windows Azure
Docker
Kubernetes
Shell Scripting
+2 more

About JobTwine

JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI Interviews, Human Decisions, Zero Compromises: we leverage AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.


Role Overview

We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment.


Key Skills & Requirements

  • 4–5 years of hands-on DevOps experience
  • Experience in product-based companies and startups
  • Strong expertise in CI/CD pipelines
  • Hands-on experience with AWS / GCP / Azure
  • Experience with Docker & Kubernetes
  • Strong knowledge of Linux and Shell scripting
  • Infrastructure as Code: Terraform / CloudFormation
  • Monitoring & logging: Prometheus, Grafana, ELK stack
  • Experience in scalability, reliability and automation


What You Will Do

  • Work closely with Sandip, CTO of JobTwine on Gen AI DevOps initiatives
  • Build, optimize, and scale infrastructure supporting AI-driven products
  • Ensure high availability, security and performance of production systems
  • Collaborate with engineering teams to improve deployment and release processes


Why Join JobTwine ?

  • Direct exposure to leadership and real product decision-making
  • Steep learning curve with high ownership and accountability
  • Opportunity to build and scale a core B2B SaaS product
MIC Global
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
10yrs+
Upto ₹50L / yr (Varies)
SQL
Python
PowerBI
Stakeholder management
Data Analytics
+4 more

About Us

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.

We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


About the Team 

As a Lead Data Specialist at MIC Global, you will play a key role in transforming data into actionable insights that inform strategic and operational decisions. You will work closely with Product, Engineering, and Business teams to analyze trends, build dashboards, and ensure that data pipelines and reporting structures are accurate, automated, and scalable.

This is a hands-on, analytical, and technically focused role ideal for someone experienced in data analytics and engineering practices. You will use SQL, Python, and modern BI tools to interpret large datasets, support pricing models, and help shape the data-driven culture across MIC Global


Key Roles and Responsibilities 

Data Analytics & Insights

  • Analyze complex datasets to identify trends, patterns, and insights that support business and product decisions.
  • Partner with Product, Operations, and Finance teams to generate actionable intelligence on customer behavior, product performance, and risk modeling.
  • Contribute to the development of pricing models, ensuring accuracy and commercial relevance.
  • Deliver clear, concise data stories and visualizations that drive executive and operational understanding.
  • Develop analytical toolkits for underwriting, pricing and claims 

Data Engineering & Pipeline Management

  • Design, implement, and maintain reliable data pipelines and ETL workflows.
  • Write clean, efficient scripts in Python for data cleaning, transformation, and automation.
  • Ensure data quality, integrity, and accessibility across multiple systems and environments.
  • Work with Azure data services to store, process, and manage large datasets efficiently.
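
As a rough illustration of the Python-based cleaning and transformation work described above, here is a minimal pandas sketch. The file name, column names, and rules are hypothetical assumptions, not taken from the posting.

```python
"""Minimal data-cleaning sketch (illustrative; file and column names are hypothetical)."""
import pandas as pd

def clean_policies(path: str) -> pd.DataFrame:
    """Load raw policy records, normalise types, and drop unusable rows."""
    df = pd.read_csv(path)
    # Standardise column names to snake_case.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Parse dates and numeric fields; coerce bad values to NaN/NaT.
    df["start_date"] = pd.to_datetime(df["start_date"], errors="coerce")
    df["premium"] = pd.to_numeric(df["premium"], errors="coerce")
    # Drop rows missing the fields every downstream report needs.
    df = df.dropna(subset=["policy_id", "start_date", "premium"])
    return df.drop_duplicates(subset=["policy_id"])

if __name__ == "__main__":
    cleaned = clean_policies("raw_policies.csv")  # hypothetical input file
    cleaned.to_parquet("policies_clean.parquet", index=False)
```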

Business Intelligence & Reporting

  • Develop, maintain, and optimize dashboards and reports using Power BI (or similar tools).
  • Automate data refreshes and streamline reporting processes for cross-functional teams.
  • Track and communicate key business metrics, providing proactive recommendations.

Collaboration & Innovation

  • Collaborate with engineers, product managers, and business leads to align analytical outputs with company goals.
  • Support the adoption of modern data tools and agentic AI frameworks to improve insight generation and automation.
  • Continuously identify opportunities to enhance data-driven decision-making across the organization.

Ideal Candidate Profile

  • 10+ years of relevant experience in data analysis or business intelligence, ideally within product-based SaaS, fintech, or insurance environments.
  • Proven expertise in SQL for data querying, manipulation, and optimization.
  • Hands-on experience with Python for data analytics, automation, and scripting.
  • Strong proficiency in Power BI, Tableau, or equivalent BI tools.
  • Experience working in Azure or other cloud-based data ecosystems.
  • Solid understanding of data modeling, ETL processes, and data governance.
  • Ability to translate business questions into technical analysis and communicate findings effectively.

Preferred Attributes

  • Experience in insurance or fintech environments, especially operations, and claims analytics.
  • Exposure to agentic AI and modern data stack tools (e.g., dbt, Snowflake, Databricks).
  • Strong attention to detail, analytical curiosity, and business acumen.
  • Collaborative mindset with a passion for driving measurable impact through data.

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development
MIC Global
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Best in industry
Python
SQL
ETL
DBA
Windows Azure
+1 more

About Us

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.

We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


About the Team 

We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.


Key Roles and Responsibilities 

Database Administration & Optimization

  • Support infrastructure team with production database issues and troubleshooting
  • Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries
  • Optimize stored procedures, functions, and views for better performance
  • Perform query tuning, index optimization, and execution plan analysis
  • Design and develop complex stored procedures, functions, and views
  • Support the reporting team with complex SQL queries and database design

Data Engineering & Pipelines

  • Design and build ETL/ELT pipelines using Azure Data Factory and Python
  • Implement data quality checks and validation rules before data enters pipelines
  • Develop data integration solutions to connect various data sources and systems
  • Create automated data validation, quality monitoring, and alerting mechanisms
  • Develop Python scripts for data processing, transformation, and automation
  • Build and maintain data models to support reporting and analytics requirements
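
To illustrate the kind of pre-pipeline data-quality check described above, here is a minimal Python sketch. The rule set, thresholds, and column names are hypothetical assumptions, not from the posting.

```python
"""Minimal data-quality validation sketch (rules and column names are illustrative)."""
import pandas as pd

# Hypothetical columns every incoming batch must contain.
REQUIRED_COLUMNS = ["claim_id", "policy_id", "claim_amount"]

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty list = pass)."""
    failures = []
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            failures.append(f"missing required column: {col}")
    if "claim_amount" in df.columns and (df["claim_amount"] < 0).any():
        failures.append("negative values found in claim_amount")
    if "claim_id" in df.columns and df["claim_id"].duplicated().any():
        failures.append("duplicate claim_id values found")
    return failures

if __name__ == "__main__":
    batch = pd.read_csv("incoming_claims.csv")  # hypothetical input file
    problems = validate(batch)
    if problems:
        raise ValueError("data-quality check failed: " + "; ".join(problems))
    print("batch passed validation")
```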

Support & Collaboration

  • Help data analytics team build visualizations and dashboards by providing data models and queries
  • Support reporting team with data extraction, transformation, and complex reporting queries
  • Collaborate with development teams to support application database requirements
  • Provide technical guidance and best practices for database design and query optimization

Azure & Cloud

  • Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML
  • Implement cloud-based data solutions following Azure best practices
  • Support cloud database migrations and optimizations
  • Work with Agentic AI concepts and tools to build intelligent data solutions

Ideal Candidate Profile

Essential

  • 5-8 years of experience in data engineering and database administration
  • Strong expertise in MS SQL Server (2016+) administration and development
  • Proficient in writing complex SQL queries, stored procedures, functions, and views
  • Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
  • Strong Python scripting skills for data processing and automation
  • Experience with ETL/ELT design and implementation
  • Knowledge of database performance tuning, query optimization, and indexing strategies
  • Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
  • Understanding of data modeling and dimensional design concepts
  • Knowledge of Agile methodology and experience working in Agile teams
  • Strong problem-solving and analytical skills
  • Understanding of Agentic AI concepts and tools
  • Excellent communication skills and ability to work with cross-functional teams

Desirable

  • Knowledge of insurance or financial services domain
  • Experience with Azure ML and machine learning pipelines
  • Experience with Azure DevOps and CI/CD pipelines
  • Familiarity with data visualization tools (Power BI, Tableau)
  • Experience with NoSQL databases (Cosmos DB, MongoDB)
  • Knowledge of Spark, Databricks, or other big data technologies
  • Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
  • Experience with version control systems (Git, Azure Repos)

Tech Stack

  • MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development
Global digital transformation solutions provider.

Agency job via Peak Hire Solutions by Dhara Thakkar
Hyderabad
6 - 8 yrs
₹16L - ₹22L / yr
C#
.NET
ASP.NET
SQL
SQL Server
+17 more

JOB DETAILS:

Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering

Industry: Global digital transformation solutions provider

Work Mode: Hybrid

Salary: Best in Industry

Experience: 6-8 years

Location: Hyderabad 


Job Description:

• Experience in Microsoft Web development technologies such as Web API and SOAP/XML

• C#/.NET/.NET Core and ASP.NET web application experience; cloud-based development experience in AWS or Azure

• Knowledge of cloud architecture and technologies

• Support/Incident management experience in a 24/7 environment

• SQL Server and SSIS experience

• DevOps experience of Github and Jenkins CI/CD pipelines or similar

• Windows Server 2016/2019+ and SQL Server 2019+ experience

• Experience of the full software development lifecycle

• You will write clean, scalable code, with a view towards design patterns and security best practices

• Understanding of Agile methodologies and working within the SCRUM framework; AWS knowledge


Must-Haves

C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)

.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)

Mandatory skills: .NET Core with Azure or AWS experience

Notice period - 0 to 15 days only

Location: Hyderabad

Virtual Drive - 17th Jan

Borderless Access
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13yrs+
₹32L - ₹35L / yr
Python
Java
NodeJS (Node.js)
Spring Boot
JavaScript
+14 more

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of specialized engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members across teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms AWS, Azure, or GCP (preferred is Azure).
  • Knowledge of containerization technologies Docker and Kubernetes.


Kanerika Software
Posted by Ariba Khan
Hyderabad, Ahmedabad, Indore
3 - 5 yrs
Upto ₹15L / yr (Varies)
ETL
Databricks
Windows Azure
Cloud Computing
Microsoft Office
+2 more

About Kanerika:

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.

We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


Awards and Recognitions:

Kanerika has won several awards over the years, including:

1. Best Place to Work 2023 by Great Place to Work®

2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today

3. NASSCOM Emerge 50 Award in 2014

4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture

5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.


Working for us:

Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.


Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.


Role Responsibilities: 

The following are high-level responsibilities you will take on, but they are not limited to:

  • Design, develop, and implement modern data pipelines, data models, and ETL/ELT processes.
  • Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
  • Enable business analytics and self-service reporting through Power BI and other visualization tools.
  • Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
  • Implement and enforce best practices for data governance, data quality, and security.
  • Mentor and guide junior data engineers; establish coding and design standards.
  • Evaluate emerging technologies and tools to continuously improve the data ecosystem.


Required Qualifications:

  • Bachelor's/Master's degree in Computer Science, Information Technology, Engineering, or a related field.
  • 3-5 years of experience in data engineering or data platform development
  • Strong hands-on experience in one or more of the following:
      • Microsoft Fabric (Data Factory, Lakehouse, Data Warehouse)
      • Databricks (Spark, Delta Lake, PySpark, MLflow)
      • Snowflake (Data Warehousing, Snowpipe, Performance Optimization)
      • Power BI (Data Modeling, DAX, Report Development)
  • Proficiency in SQL and programming languages like Python or Scala.
  • Experience with Azure, AWS, or GCP cloud data services.
  • Solid understanding of data modeling, data governance, security, and CI/CD practices.
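
As a small illustration of the Databricks/PySpark skills listed above, here is a minimal PySpark aggregation sketch. The paths, table layout, and column names are hypothetical assumptions, not part of the posting.

```python
"""Minimal PySpark transformation sketch (illustrative; paths and columns are hypothetical)."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

# Read raw events from a Parquet landing zone (path is hypothetical).
orders = spark.read.parquet("/mnt/landing/orders")

# Aggregate revenue per customer per day, a typical warehouse-style rollup.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Write the result partitioned by date for downstream BI models.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_sales")
```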


Preferred Qualifications:

  • Familiarity with data modeling techniques and practices for Power BI.
  • Knowledge of Azure Databricks or other data processing frameworks.
  • Knowledge of Microsoft Fabric or other Cloud Platforms.


What we need?

  • B.Tech in Computer Science or equivalent.


Why join us?

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience in content marketing with exposure to real-world projects.
  • Opportunity to learn from experienced professionals and enhance your marketing skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive stipend and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


Employee Benefits:

1. Culture:

  • Open Door Policy: Encourages open communication and accessibility to management.
  • Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  • Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  • Employee Referral Bonus: Rewards employees for referring qualified candidates.
  • Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  • Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  • Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  • GMC and Term Insurance: Offers medical coverage and financial protection.
  • Health Insurance: Provides coverage for medical expenses.
  • Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  • Company-sponsored family events: Creates opportunities for employees and their families to bond.
  • Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  • Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  • Company-sponsored outings: Organizes recreational activities for employees.
  • Gratuity: Provides a monetary benefit as a token of appreciation.
  • Provident Fund: Helps employees save for retirement.
  • Generous PTO: Offers more than the industry standard for paid time off.
  • Paid sick days: Allows employees to take paid time off when they are unwell.
  • Paid holidays: Gives employees paid time off for designated holidays.
  • Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  • L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  • Mentorship Program: Offers guidance and support from experienced professionals.
  • Job Training: Provides training to enhance job-related skills.
  • Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
  • Promote from Within: Encourages internal growth and advancement opportunities.
Zenius IT Services Pvt Ltd
Chennai
5 - 10 yrs
₹12L - ₹25L / yr
MySQL
SQL Server
Troubleshooting
Performance tuning
Amazon Web Services (AWS)
+2 more

Key Responsibilities

  • Administer and optimize PostgreSQL databases on AWS RDS
  • Monitor database performance, health, and alerts using CloudWatch
  • Manage backups, restores, upgrades, and high availability
  • Support CDC pipelines using Debezium with Kafka & Zookeeper
  • Troubleshoot database and replication/streaming issues
  • Ensure database security, access control, and compliance
  • Work with developers and DevOps teams for production support
  • Use tools like DBeaver for database management and analysis
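
To illustrate the kind of RDS monitoring task described above, here is a minimal Python sketch that pulls an RDS metric from CloudWatch with boto3. The instance identifier and the 80% threshold are hypothetical assumptions, not part of the posting.

```python
"""Minimal RDS monitoring sketch (illustrative; the instance identifier is hypothetical)."""
from datetime import datetime, timedelta, timezone
import boto3

def recent_cpu_utilization(db_instance_id: str, hours: int = 1) -> list[float]:
    """Fetch average CPUUtilization datapoints for an RDS instance from CloudWatch."""
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=300,            # 5-minute datapoints
        Statistics=["Average"],
    )
    return [point["Average"] for point in stats["Datapoints"]]

if __name__ == "__main__":
    values = recent_cpu_utilization("prod-postgres-1")  # hypothetical instance ID
    if values and max(values) > 80:
        print("WARNING: CPU above 80% in the last hour")
```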


Required Skills

  • Strong experience with PostgreSQL DBA activities
  • Hands-on experience with AWS RDS
  • Knowledge of Debezium, Kafka, and Zookeeper
  • Monitoring using AWS CloudWatch
  • Proficiency in DBeaver and SQL
  • Experience supporting production environments
Zolvit (formerly Vakilsearch)
Posted by Lakshmi J
Chennai
2 - 4 yrs
₹10L - ₹16L / yr
DevOps
Linux administration
Unix administration
Shell Scripting
CI/CD
+5 more

We are looking for a passionate DevOps Engineer who can support deployment and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and should be able to clearly articulate how they work. Knowledge of shell scripting & security aspects is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications like Postgres, ELK, NodeJS, NextJS & Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; passion to produce change matters.



Responsibilities and Accountabilities:

  • As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product which are hosted in cloud services & on-prem facility
  • Design, build tools and framework that support deploying and managing our platform & Exploring new tools, technologies, and processes to improve speed, efficiency, and scalability
  • Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore of different Env 
  • Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
  • Resolve incidents as escalated from Monitoring tools and Business Development Team
  • Implement and follow security guidelines, both policy and technology to protect our data
  • Identify root cause for issues and develop long-term solutions to fix recurring issues and Document it
  • Strong in performing production operation activities even at night times if required
  • Ability to automate [Scripts] recurring tasks to increase velocity and quality
  • Ability to manage and deliver multiple project phases at the same time

I Qualification(s): 

  • Experience in working with Linux Server, DevOps tools, and Orchestration tools 
  • Linux, AWS, GCP, Azure, CompTIA+, and any other certification are a value-add 

II Experience Required in DevOps Aspects:

  • Length of Experience: Minimum 1-4 years of experience
  • Nature of Experience: 
  • Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
  • Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
  • Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
  • Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
  • Experience with Containerization and orchestration tools like Docker, and Kubernetes [ Docker swarm is a value add ]
  • Good Scripting skills in at least one interpreted language - Shell/bash scripting or Ruby/Python/Perl
  • Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
  • Good at Version Control & source code management systems like GitHub, GIT
  • Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
  • Experience in Web Server Nginx, and Apache
  • Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
  • Knowledge in Puma, Unicorn, Gunicorn & Yarn
  • Hands-on VMWare ESXi/Xencenter deployments is a value add
  • Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
  • Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
  • Code Quality like SonarQube is a value-add
  • Test Automation like Selenium, JMeter, and JUnit is a value-add
  • Experience in Heroku and OpenStack is a value-add 
  • Experience in Identifying Inbound and Outbound Threats and resolving it
  • Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages  
  • Documenting the Security fix for future use
  • Establish cross-team collaboration with security built into the software development lifecycle 
  • Forensics and Root Cause Analysis skills are mandatory 
  • Weekly Sanity Checks of the on-prem and off-prem environment 
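
For the monitoring and alerting tools listed above, a common pattern is exposing custom metrics that Prometheus can scrape and Grafana can chart. Below is a minimal sketch, assuming the prometheus_client package is installed; the port, metric name, and 30-second refresh interval are illustrative:

```python
"""Tiny Prometheus exporter: publishes disk usage as a gauge for scraping."""
import shutil
import time

from prometheus_client import Gauge, start_http_server

DISK_USED_PERCENT = Gauge(
    "node_custom_disk_used_percent",
    "Percentage of disk space used",
    ["mount"],
)

if __name__ == "__main__":
    start_http_server(9101)          # metrics served at http://localhost:9101/metrics
    while True:
        usage = shutil.disk_usage("/")
        DISK_USED_PERCENT.labels(mount="/").set(usage.used / usage.total * 100)
        time.sleep(30)               # refresh between scrapes
```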

 

III Skill Set & Personality Traits required:

  • An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
  • Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers


IV Age Group: 21 – 36 Years


V Cost to the Company: As per industry standards


Read more
Heaven Designs

at Heaven Designs

1 product
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
2yrs+
Upto ₹12L / yr (Varies)
skill iconPython
skill iconDjango
RESTful APIs
DevOps
CI/CD
+8 more

Backend Engineer (Python / Django + DevOps)


Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)


About SurgePV

SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.

Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.

As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.


Role Overview

We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.

This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.


Key Responsibilities

  • Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows (a minimal API sketch follows this list).
  • Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
  • Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
  • Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
  • Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
  • Implement caching strategies and performance optimizations where required.
  • Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
  • Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
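
As referenced in the first responsibility above, a backend service of this kind is commonly exposed through a Django REST Framework serializer and viewset. The following is a minimal sketch, assuming a Django project with DRF installed; the SolarDesign model and its fields are hypothetical and used only for illustration:

```python
# models.py / serializers.py / views.py condensed into one illustrative sketch
from django.db import models
from rest_framework import routers, serializers, viewsets


class SolarDesign(models.Model):
    name = models.CharField(max_length=120)
    dc_capacity_kw = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)


class SolarDesignSerializer(serializers.ModelSerializer):
    class Meta:
        model = SolarDesign
        fields = ["id", "name", "dc_capacity_kw", "created_at"]


class SolarDesignViewSet(viewsets.ModelViewSet):
    """CRUD endpoints for designs, newest first."""
    queryset = SolarDesign.objects.order_by("-created_at")
    serializer_class = SolarDesignSerializer


router = routers.DefaultRouter()
router.register(r"designs", SolarDesignViewSet)
# In urls.py: path("api/", include(router.urls))
```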

Required Skills & Qualifications (Must-Have)

  • 2–5 years of experience as a Backend Engineer.
  • Strong proficiency in Python and Django / Django REST Framework.
  • Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
  • Proven experience designing and maintaining REST APIs in production environments.
  • Hands-on DevOps experience, including:
  • Docker and containerized services
  • CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
  • Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
  • Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization (see the indexing example after this list).
  • Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
  • Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
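
The indexing and query-optimization point above is easiest to see by comparing query plans before and after adding a composite index. The sketch below uses the stdlib sqlite3 driver purely so it runs anywhere; the same reasoning applies to PostgreSQL with EXPLAIN ANALYZE, and the table and columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE designs (id INTEGER PRIMARY KEY, owner_id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO designs (owner_id, status) VALUES (?, ?)",
    [(i % 500, "draft" if i % 3 else "approved") for i in range(10_000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows end with a human-readable detail column.
    return " | ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM designs WHERE owner_id = 42 AND status = 'approved'"
print("before index:", plan(query))   # expect a full table scan
conn.execute("CREATE INDEX idx_designs_owner_status ON designs (owner_id, status)")
print("after index: ", plan(query))   # expect an index search
```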

Good-to-Have Skills

  • Experience working in early-stage startups or building 0→1 products.
  • Familiarity with Kubernetes or other container orchestration tools.
  • Experience with Infrastructure as Code (Terraform, Pulumi).
  • Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
  • Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.

What We Offer

  • Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
  • Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
  • A mission-driven, fast-growing product focused on sustainability and clean energy.
Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Remote only
4 - 6 yrs
₹4.5L - ₹15L / yr
skill icon.NET
ASP.NET
skill iconC#
SQL
Microservices
+3 more

Hi Connections! 👋 Welcome to 2026! 🎉

Starting the new year with an exciting opportunity!

Deqode IS HIRING! 💻


Hiring: .Net Developer

⭐ Experience: 4+ Years

⭐ Work Mode: Remote

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


🔧 Role Overview

We are looking for passionate .NET Developers to design, develop, and maintain scalable microservices for enterprise-grade applications. You’ll work closely with cross-functional teams and clients on high-performance, cloud-native solutions.


🛠️ Key Responsibilities

✅Build and maintain scalable .NET microservices

✅Develop secure, high-quality RESTful Web APIs

✅Write unit and integration tests to ensure code quality

✅Optimize performance and implement caching strategies


💫 Must-Have Skills

✅ 4+ years of experience with .NET Core / .NET 5+ & C#

✅Strong hands-on experience with ASP.NET Core Web API & EF Core

✅REST API development & middleware implementation

✅Solid understanding of SOLID principles & design patterns

✅Unit testing experience (xUnit, NUnit, MSTest, Moq)


Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
10 - 15 yrs
₹30L - ₹40L / yr
Microsoft Office
Microsoft Office Live
Windows Azure
Microsoft Windows Azure
SQL Azure
+15 more

Review Criteria:

Mandatory:

  • Strong IT Infrastructure Lead Profile
  • Must have 10+ years of hands-on experience in global IT Infrastructure management, including administration of Azure Entra ID, Office 365 Suite (Outlook, SharePoint, OneDrive), Azure Exchange, Microsoft Teams, Intune, and Windows Autopilot
  • Must have strong expertise in Azure/Office 365 compliance and governance, including audit readiness, data governance policies, and global regulatory frameworks (e.g., GDPR, HIPAA)
  • Must have solid experience managing IT operations end-to-end: user onboarding/offboarding, identity & access management, SAML/SSO integrations, and enterprise-wide provisioning/deprovisioning
  • Must have strong knowledge and hands-on experience with FortiGate Firewalls, FortiGate WiFi, VPN, routing, subnetting, and overall network administration
  • Must have proven capability in endpoint and device management: ManageEngine Endpoint Central, Assets Explorer, Antivirus Endpoint Security, JAMF (macOS), and multi-OS troubleshooting (Windows, Linux, Mac)
  • Must have strong Jira/Confluence administration experience for global teams, including configuration, access control, and workflow governance
  • Must have experience supporting, patching, updating, and troubleshooting multi-OS environments (Windows, Linux, macOS) with strong focus on security hardening and vulnerability fixes
  • Must have strong hands-on experience in shell scripting / bash / PowerShell for automation, system tasks, and operational efficiency (a small automation sketch follows this list)
  • Must have experience in configuration and troubleshooting of Cisco/Polycom audio-video solutions and collaboration tools
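
Much of the provisioning/deprovisioning and scripting work described above can be automated against Microsoft Graph. The sketch below is a hedged illustration of one offboarding step: disabling a leaver's account via a PATCH call. The access token (normally obtained through an MSAL client-credentials flow) and the user principal name are placeholders, not values from this posting:

```python
"""Illustrative offboarding step: disable a user account via Microsoft Graph."""
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token from an MSAL client-credentials flow>"   # placeholder
USER_PRINCIPAL_NAME = "leaver@example.com"                      # hypothetical UPN

request = urllib.request.Request(
    f"{GRAPH_BASE}/users/{USER_PRINCIPAL_NAME}",
    data=json.dumps({"accountEnabled": False}).encode("utf-8"),
    method="PATCH",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(request) as response:
    # Microsoft Graph returns 204 No Content on a successful update.
    print("HTTP status:", response.status)
```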


Preferred:

  • Experience with Highspot, HubSpot, Gong, or similar platforms for basic administration
  • Strong background in cybersecurity frameworks, risk management, IT governance, incident response, and GRC practices
  • Bachelor’s or master’s degree in information technology, Computer Science, or related field
  • Candidates from NCR/Noida preferred


Role & Responsibilities:

The incumbent will be responsible for managing and enhancing the company’s IT infrastructure, cybersecurity, and IT operations globally. This role will require a strategic leader with a hands-on approach to overseeing infrastructure design, network security, data privacy, and compliance. The IT Head will drive initiatives to maintain a secure, efficient, and scalable technology environment that aligns with company’s business goals.


Key Responsibilities-

IT Infrastructure Management:

  • Lead the design, implementation, and management of the IT infrastructure across company’s global offices.
  • Oversee IT systems, network architecture, hardware, and software procurement, and ensure optimal performance and uptime.
  • Plan and execute IT modernization and digital transformation initiatives to support business growth.


Cybersecurity and Risk Management:

  • Establish and maintain robust cybersecurity policies, frameworks, and controls to protect the company’s data, systems, and intellectual property.
  • Monitor, detect, and respond to cybersecurity threats, vulnerabilities, and breaches.
  • Implement secure access controls, multi-factor authentication, and endpoint security measures to safeguard global IT environments.


Compliance and Data Privacy:

  • Ensure compliance with global data privacy regulations, such as GDPR, HIPAA, and other applicable data protection laws.
  • Support internal and external audits, ensuring adherence to regulatory and industry standards.


IT Governance and Strategy:

  • Develop and execute the IT strategy in alignment with company’s business objectives.
  • Create and enforce IT policies, procedures, and best practices for global operations.
  • Prepare and manage the IT budget, ensuring cost-effective solutions for infrastructure and security investments.


Vendor Management and Contract Negotiations:

  • Build and manage relationships with technology vendors, service providers, and consultants.
  • Negotiate contracts to achieve favorable pricing and terms for the company.


Team Leadership and Development:

  • Lead, mentor, and develop a high-performing IT team across multiple geographies.
  • Foster a culture of innovation, collaboration, and continuous learning.


Ideal Candidate:

  • Bachelor’s or master’s degree in information technology, Computer Science, or a related field.
  • 10+ years of progressive experience in IT infrastructure, security, and operations, with at least 7 years in a senior leadership role.
  • Strong experience in managing global IT environments, distributed teams, and multi-office setups.
  • Administer and manage Azure Entra ID, Office 365 suite (Outlook, SharePoint, OneDrive), Azure Exchange, Microsoft Teams, Microsoft Intune, Windows Autopilot, and related services.
  • Configure and manage SAML/Azure SSO integrations across enterprise applications.
  • Ensure Office 365 compliance management, including audit readiness and data governance policies.
  • Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
  • Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
  • Administer Jira, FortiGate firewalls and Wi-Fi, FortiGate EMS, antivirus solutions, and endpoint management systems.
  • Provide network administration: routing, subnetting, VPNs, and firewall configurations.
  • Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
  • Manage JAMF, ManageEngine Endpoint Central, and Assets Explorer for device and asset management.
  • Provide configuration and basic administration knowledge for Highspot, HubSpot, and Gong platforms.
  • Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
  • Provide remote support for end-users, ensuring quick resolution of technical issues.
  • Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
  • Collaborate with internal teams and external vendors to resolve issues and optimize systems.
  • Working Knowledge of data privacy regulations (GDPR, HIPAA) and experience driving regulatory compliance.
  • Strong project management, problem-solving, and stakeholder management skills.
  • Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
  • Ability to influence cross-functional teams and present technical information to non-technical stakeholders.
  • Good Experience in driving GRC


Perks, Benefits and Work Culture:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
E-Commerce Industry

E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • Must have experience working in B2B SaaS product companies
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center, and others (see the posture-check sketch after this list).
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
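
A concrete example of the posture-strengthening work above is a small scheduled check that flags S3 buckets without a fully enabled Public Access Block. This is a sketch only, assuming boto3 is installed and read-only AWS credentials are available; a CSPM tool would normally cover this, but lightweight custom checks like it are common:

```python
"""Flag S3 buckets that lack a fully enabled Public Access Block."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False        # no block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"[WARN] {name}: public access block missing or incomplete")
```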


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines (a minimal gating example follows this list).
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
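
For the SAST-in-CI/CD item above, one lightweight pattern (sketched here with the open-source Bandit scanner for Python code; the `src` path and the high-severity-only policy are assumptions) is a wrapper step that parses the scanner's JSON report and fails the pipeline when findings cross a threshold:

```python
"""CI gate: run Bandit over the source tree and fail on high-severity findings."""
import json
import subprocess
import sys

scan = subprocess.run(
    ["bandit", "-r", "src", "-f", "json", "-q"],
    capture_output=True,
    text=True,
)
report = json.loads(scan.stdout or "{}")
high_findings = [
    issue for issue in report.get("results", [])
    if issue.get("issue_severity") == "HIGH"
]

for issue in high_findings:
    print(f"{issue['filename']}:{issue['line_number']}  {issue['issue_text']}")

# Non-zero exit fails the CI job; tune the severity/confidence policy per repository.
sys.exit(1 if high_findings else 0)
```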


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that company offers.

Read more
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
STOCKHOLM (Sweden), Bengaluru (Bangalore)
8yrs+
Best in industry
DevOps
Microsoft Windows Server
Microsoft IIS administration
Windows Azure
Powershell
+2 more

About Tarento:

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Scope of Work:

  • Support the migration of applications from Windows Server 2008 to Windows Server 2019 or 2022 in an IaaS environment.
  • Migrate IIS websites, Windows Services, and related application components.
  • Assist with migration considerations for SQL Server connections, instances, and basic data-related dependencies.
  • Evaluate and migrate message queues (MSMQ or equivalent technologies).
  • Document the existing environment, migration steps, and post-migration state.
  • Work closely with DevOps, development, and infrastructure teams throughout the project.


Required Skills & Experience:

  • Strong hands-on experience with IIS administration, configuration, and application migration.
  • Proven experience migrating workloads between Windows Server versions, ideally legacy to modern.
  • Knowledge of Windows Services setup, configuration, and troubleshooting.
  • Practical understanding of SQL Server (connection strings, service accounts, permissions).
  • Experience with queues (IBM MQ, MSMQ, or similar) and their migration considerations.
  • Ability to identify migration risks, compatibility constraints, and remediation options.
  • Strong troubleshooting and analytical skills.
  • Familiarity with Microsoft technologies (.Net, etc)
  • Networking and Active Directory related knowledge

Desirable / Nice-to-Have

  • Exposure to CI/CD tools, especially TeamCity and Octopus Deploy.
  • Familiarity with Azure services and related tools (Terraform, etc)
  • PowerShell scripting for automation or configuration tasks.
  • Understanding enterprise change management and documentation practices.
  • Security awareness and best practices

Soft Skills

  • Clear written and verbal communication.
  • Ability to work independently while collaborating with cross-functional teams.
  • Strong attention to detail and a structured approach to execution.
  • Troubleshooting
  • Willingness to learn.


Location & Engagement Details

We are looking for a Senior DevOps Consultant for an onsite role in Stockholm (Sundbyberg office). This opportunity is open to candidates currently based in Bengaluru who are willing to relocate to Sweden for the assignment.

The role will start with an initial 6-month onsite engagement, with the possibility of extension based on project requirements and performance.

Read more
Ekloud INC
Remote only
7 - 15 yrs
₹6L - ₹25L / yr
skill icon.NET
Fullstack Developer
skill iconReact.js
cloud platforms
Windows Azure
+11 more

Hiring: .NET Full Stack Developer with ReactJS

Designation: Team Lead

Location: Bidadi, Bengaluru (Karnataka), Hybrid mode

Relevant Experience: 7-10 years

Preferred Qualifications

• Bachelors in CSE with minimum 7-10 years of relevant experience

• Exposure to cloud platforms (Azure) and API Gateway.

• Knowledge of microservices architecture.

• Experience with unit testing frameworks (xUnit, NUnit).

Required Skills & Qualifications

• Strong hands-on experience in .NET Core, C#, .Net framework (.NET Core, .NET 5+) and API development.

• Experience with RESTful API design and development.

• Strong experience with ReactJS for front-end development.

• Expertise in SQL Server (queries, stored procedures, performance tuning).

• Experience in system integration, especially with SAP.

• Ability to manage and mentor a team effectively.

• Strong requirement gathering and client communication skills.

• Familiarity with Git, CI/CD pipelines, and Agile methodologies.

Role Overview

• Design, develop, and maintain scalable backend services using .NET technologies.

• Work on ReactJS components as well as UI integration and ensure seamless communication between front-end and back-end.

• Write clean, efficient, and well-documented code.

• Lead and mentor a team of developers and testers, ensuring adherence to best practices and timely delivery.

• Good exposure to Agile and scrum methodology

• Design and implement secure RESTful APIs using .NET Core.

• Apply best practices for authentication, authorization, and data security.

• Develop and maintain integrations with multiple systems, including SAP.

• Design and optimize SQL Server queries, stored procedures, and schemas.

• Gather requirements from clients and translate them into technical specifications.

• Implement Excel file uploaders and data processing workflows.

• Coordinate with stakeholders, manage timelines, and ensure quality deliverables.

• Troubleshoot and debug issues, ensuring smooth operation of backend systems.

Read more
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Upto ₹30L / yr (Varies)
skill iconJava
skill iconSpring Boot
Microservices
Windows Azure
RESTful APIs
+7 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Read more
AI-First Company

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); a short Parquet sketch follows this list.
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
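
To make the Parquet/object-storage integration point above concrete, here is a minimal sketch using pyarrow (assumed installed); the columns are hypothetical and the file is written locally, whereas a lakehouse deployment would typically target S3/ADLS paths that engines such as Dremio, Trino, or Spark then query directly:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical curated dataset destined for the lakehouse's object storage layer.
table = pa.table({
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.50, 89.00, 42.75],
})

pq.write_table(table, "orders.parquet", compression="snappy")

# Query engines read the columnar file directly; here we just round-trip it.
print(pq.read_table("orders.parquet"))
```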


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Bisman Gill
Posted by Bisman Gill
Remote, BLR
12yrs+
Upto ₹65L / yr (Varies)
skill iconAmazon Web Services (AWS)
DevOps
Cloud Computing
Security Information and Event Management (SIEM)
Databases
+2 more

Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals

Responsibilities:

  • Cloud Architecture & Strategy
  • Define and evolve the company’s cloud architecture, with AWS as the primary platform.
  • Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
  • Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.); a webhook-verification sketch follows this list
  • Partner with engineering and product to convert custom solutions into productised capabilities.
  • Security & Compliance Enablement
  • Act as a foundational partner in building out the company’s security and compliance functions.
  • Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
  • Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
  • Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
  • Customer & Solutions Enablement
  • Work with Solutions Engineering and customers to design and validate complex deployments.
  • Contribute to processes that productise custom implementations into scalable platform features.
  • Leadership & Influence
  • Serve as a technical thought leader across cloud, data, and security domains.
  • Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
  • Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
  • Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
  • Data Platforms & Governance
  • Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
  • Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
  • Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
  • Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
  • Developer Experience & DevOps
  • Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
  • Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
  • Embed security, compliance, and reliability standards into the development lifecycle.
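
For the enterprise-integration item above (webhooks, connectors), a recurring building block is verifying inbound webhook signatures before processing. The following is a minimal sketch, assuming Flask is installed; the route, header name, and secret handling are illustrative rather than the company's actual API:

```python
"""Webhook receiver that verifies an HMAC-SHA256 signature before accepting a payload."""
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET", "change-me").encode("utf-8")


@app.route("/webhooks/partner", methods=["POST"])
def partner_webhook():
    sent_signature = request.headers.get("X-Signature-256", "")
    expected = "sha256=" + hmac.new(
        WEBHOOK_SECRET, request.get_data(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(sent_signature, expected):
        abort(401)                      # reject unverified senders
    # Hand off to a queue/worker here instead of processing inline.
    return {"status": "accepted"}, 202


if __name__ == "__main__":
    app.run(port=8000)
```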

Requirements:

  • 12+ years of experience in cloud engineering or architecture roles.
  • Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
  • Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
  • Strong foundation in data management and governance, including lifecycle and compliance.
  • Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
  • Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
  • Strong foundation in networking, security, observability, and performance engineering.
  • Excellent communication and influencing skills, with the ability to partner across technical and business functions.

Good to Have:

  • Exposure to Azure, GCP, or other cloud environments.
  • Experience working in SaaS/PaaS at enterprise scale.
  • Background in product engineering, with experience shaping technical direction in collaboration with product teams.
  • Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).

About Albert Invent

Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.


Why Join Albert Invent

  • Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
  • The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
  • You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges, from sustainability to advanced manufacturing, while growing your career in a high-energy environment.


Read more
Oneture Technologies

at Oneture Technologies

1 recruiter
Eman Khan
Posted by Eman Khan
Mumbai
4 - 7 yrs
Upto ₹21L / yr (Varies)
Data architecture
Data modeling
ETL
ELT
Spark
+3 more

About The Role

  • As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
  • Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
  • You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
  • Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
  • Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.

Key Responsibilities

  • Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
  • Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
  • Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
  • Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
  • Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
  • Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
  • Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
  • Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
  • Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
  • Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.

Qualifications

  • Master’s (M. Tech., M.S.) in Computer Science or equivalent from reputed institutes like IIT, NIT preferred
  • Overall 6–8 years of experience with minimum 2 years of relevant experience and a strong technical background
  • Experience working in a mid-size IT services company is preferred

Preferred Certification

  • AWS Certified Data Analytics Specialty
  • AWS Solution Architect Professional
  • Azure Data Engineer + Solution Architect
  • Databricks Certified Data Engineer / ML Professional

Technical Expertise

  • Advanced knowledge of distributed architectures and data modeling practices.
  • Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
  • Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
  • Knowledge of BI tools including Power BI, Tableau, QuickSight, and open-source equivalents like Superset and Metabase is good to have.
  • Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
  • Proficient programming skills in Python, Scala, Go, or Java.
  • Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns (a small ETL sketch follows this list).
  • Experience working with multi-cloud environments, particularly AWS and Azure.
  • Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
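
As flagged in the ingestion-to-insights point above, a small PySpark job gives a feel for the ETL work involved. This is a sketch only, assuming a local Spark installation; the input path, columns, and output layout are hypothetical, and in production such a job would usually be orchestrated by Airflow or a similar scheduler:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Extract: hypothetical raw CSV extract with a header row.
orders = spark.read.option("header", True).csv("raw/orders.csv")

# Transform: cast and aggregate to a curated daily summary.
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: partitioned Parquet, ready for BI tools or a lakehouse query engine.
daily.write.mode("overwrite").partitionBy("order_date").parquet("curated/daily_orders")
spark.stop()
```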


Read more
MyYogaTeacher

at MyYogaTeacher

1 video
7 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
5yrs+
Upto ₹32L / yr (Varies)
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
Windows Azure
skill iconPostgreSQL
skill iconMongoDB
+1 more

Senior Backend Engineer

As a Senior Backend Engineer, you will play a critical role in designing and building highly scalable, reliable

backend systems that power our global Yoga & Fitness platform. You will own backend architecture, make

key technical decisions, and ensure our systems perform reliably at scale in production environments.


Responsibilities

● Design, build, and own scalable, high-performance backend systems and APIs.

● Make key architectural and technical decisions with long-term scalability and reliability in mind.

● Lead backend development from system design to production rollout.

● Own production systems including monitoring, performance tuning, and incident response.

● Handle scale, performance, and availability challenges across distributed systems.

● Ensure strong security, data protection, and compliance standards.

● Collaborate closely with product and frontend teams to deliver impactful features.

● Mentor engineers and lead knowledge-sharing sessions.

● Champion AI-assisted development and automation to improve engineering productivity.


Qualifications

● 5+ years of experience in backend software engineering with production-scale systems.

● Strong expertise in Go (Golang) and backend service development.

● Experience designing scalable, distributed system architectures.

● Strong experience with SQL and NoSQL databases and performance optimization.

● Hands-on experience with AWS, GCP, or Azure and cloud-native architectures.

● Strong understanding of scalability, performance, and reliability engineering.

● Excellent communication and technical decision-making skills.


Required Skills

● Go (Golang)

● Backend system design and architecture

● Distributed systems and scalability

● Database design and optimization

● Cloud-native development

● Production debugging and performance tuning

Preferred Skills

● Experience with AI-assisted development tools and workflows

● Knowledge of agentic workflows or MCP

● Experience scaling consumer-facing or marketplace platforms

● Strong DevOps and observability experience


About the Company

● MyYogaTeacher is a fast-growing health-tech startup focused on improving physical and mental

well-being worldwide.

● We connect highly qualified Yoga and Fitness coaches with customers globally through personalized

1-on-1 live sessions.

● 200,000+ customers, 335,000+ 5-star reviews, and 95% sessions rated 5 stars.

● Headquartered in California with operations in Bangalore.

Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
8 - 12 yrs
₹18L - ₹25L / yr
Dot Net
Windows Azure
SQL
skill iconC#
Web api
+2 more

Skills required:

  • Strong expertise in .NET Core / ASP.NET MVC
  • Candidate must have 8+ years of experience in Dot Net.
  • Candidate must have experience with Angular.
  • Hands-on experience with Entity Framework & LINQ
  • Experience with SQL Server (performance tuning, stored procedures, indexing)
  • Understanding of multi-tenancy architecture
  • Experience with Microservices / API development (REST, GraphQL)
  • Hands-on experience in Azure Services (App Services, Azure SQL, Blob Storage, Key Vault, Functions, etc.)
  • Experience in CI/CD pipelines with Azure DevOps
  • Knowledge of security best practices in cloud-based applications
  • Familiarity with Agile/Scrum methodologies
  • Flexibility to use Copilot or any other AI tool to write automated test cases and speed up code writing

Roles and Responsibilities:

- Good communication skills are a must.

- Develop features across multiple subsystems within our applications, including collaboration in requirements definition, prototyping, design, coding, testing, and deployment.

- Understand how our applications operate, are structured, and how customers use them

- Provide engineering support (when necessary) to our technical operations staff when they are building, deploying, configuring, and supporting systems for customers.

Read more
Appiness Interactive
Bengaluru (Bangalore)
8 - 14 yrs
₹14L - ₹20L / yr
DevOps
Windows Azure
Powershell
cicd
yaml
+1 more

Job Description :


We are looking for an experienced DevOps Engineer with strong expertise in Azure DevOps, CI/CD pipelines, and PowerShell scripting, who has worked extensively with .NET-based applications in a Windows environment.


Mandatory Skills

  • Strong hands-on experience with Azure DevOps
  • GIT version control
  • CI/CD pipelines (Classic & YAML)
  • Excellent experience in PowerShell scripting
  • Experience working with .NET-based applications
  • Understanding of Solutions, Project files, MSBuild
  • Experience using Visual Studio / MSBuild tools
  • Strong experience in Windows environment
  • End-to-end experience in build, release, and deployment pipelines


Good to Have Skills

  • Terraform (optional / good to have)
  • Experience with JFrog Artifactory
  • SonarQube integration knowledge


Read more
Top MNC

Top MNC

Agency job
Bengaluru (Bangalore)
7 - 12 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
MLFlow
MLOps


JD :


• Master’s degree in Computer Science, Computational Sciences, Data Science, Machine Learning, Statistics, Mathematics, or any quantitative field

• Expertise with object-oriented programming (Python, C++)

• Strong expertise in Python libraries like NumPy, Pandas, PyTorch, TensorFlow, and Scikit-learn

• Proven experience in designing and deploying ML systems on cloud platforms (AWS, GCP, or Azure).

• Hands-on experience with MLOps frameworks, model deployment pipelines, and model monitoring tools (a tracking sketch follows at the end of this listing).

• Track record of scaling machine learning solutions from prototype to production. 

• Experience building scalable ML systems in fast-paced, collaborative environments.

• Working knowledge of adversarial machine learning techniques and their mitigation

• Agile and Waterfall methodologies.

• Personally invested in continuous improvement and innovation.

• Motivated, self-directed individual that works well with minimal supervision.
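
Tying the MLOps item above to something concrete: a minimal experiment-tracking sketch with MLflow and scikit-learn (both assumed installed). With no tracking server configured, runs land in a local ./mlruns directory; the dataset and parameters are illustrative only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

params = {"C": 1.0, "max_iter": 200}

with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")   # stored as a run artifact for later deployment

print("accuracy:", accuracy)
```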

Read more
Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
skill iconJenkins
+4 more

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

1200000 - 1400000

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary : INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python (a small example follows this list)
  • Implement IaC (Terraform/CloudFormation)
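
As an example of the Bash/Python automation mentioned above, the stdlib-only sketch below reports the days remaining on TLS certificates for a list of endpoints, which pairs naturally with the VAPT and hardening duties later in this listing. The hostnames and the 30-day warning window are illustrative assumptions:

```python
"""Recurring security check: days remaining on TLS certificates."""
import socket
import ssl
import time

ENDPOINTS = ["example.com", "api.example.com"]   # illustrative hosts


def days_to_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_at - time.time()) // 86400)


if __name__ == "__main__":
    for host in ENDPOINTS:
        remaining = days_to_expiry(host)
        status = "OK" if remaining > 30 else "RENEW SOON"
        print(f"{host}: {remaining} days left [{status}]")
```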


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Read more
Inteliment Technologies

at Inteliment Technologies

2 candid answers
Ariba Khan
Posted by Ariba Khan
Pune
10 - 20 yrs
Upto ₹25L / yr (Varies)
Project Management
PowerBI
skill iconAmazon Web Services (AWS)
Windows Azure
Informatica

About the company:

Inteliment is a niche business analytics company with almost 2 decades proven track record of partnering with hundreds of fortunes 500 global companies. Inteliment operates its ISO certified development centre in Pune, India and has business operations in multiple countries through subsidiaries in Singapore, Europe and headquarter in India.


About the role:

As a Technical Project Manager, you will lead the planning, execution, and delivery of complex technical projects while ensuring alignment with business objectives and timelines. You will act as a bridge between technical teams and stakeholders, managing resources, risks, and communications to deliver high-quality solutions. This role demands strong leadership, project management expertise, and technical acumen to drive project success in a dynamic and collaborative environment.


Qualifications:

  • Education Background: Any ME / M Tech / BE / B Tech 


Key Competencies:

Technical Skills

1. Data & BI Technologies

  • Proficiency in SQL & PL/SQL for database querying and optimization.
  • Understanding of data warehousing concepts, dimensional modeling, and data lake/lakehouse architectures.
  • Experience with BI tools such as Power BI, Tableau, Qlik Sense/View.
  • Familiarity with traditional platforms like Oracle, Informatica, SAP BO, BODS, BW.

2. Cloud & Data Engineering

  • Strong knowledge of AWS (EC2, S3, Lambda, Glue, Redshift), Azure (Data Factory, Synapse, Databricks, ADLS), Snowflake (warehouse architecture, performance tuning), and Databricks (Delta Lake, Spark).
  • Experience with cloud-based ETL/ELT pipelines, data ingestion, orchestration, and workflow automation.

3. Programming

  • Hands-on experience in Python or similar scripting languages for data processing and automation.
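
A small illustration of the Python-for-data-processing point above, assuming pandas is installed; the input file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical extract from an upstream system.
df = pd.read_csv("sales_export.csv")

# Light cleaning before aggregation.
df = df.dropna(subset=["region", "amount"])
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

summary = (
    df.groupby("region", as_index=False)["amount"]
      .sum()
      .rename(columns={"amount": "total_amount"})
)

summary.to_csv("region_summary.csv", index=False)
print(summary.head())
```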

Soft Skills

  • Strong leadership and team management skills.
  • Excellent verbal and written communication for stakeholder alignment.
  • Structured problem-solving and decision-making capability.
  • Ability to manage ambiguity and handle multiple priorities.

Tools & Platforms

  • Cloud: AWS, Azure
  • Data Platforms: Snowflake, Databricks
  • BI Tools: Power BI, Tableau, Qlik
  • Data Management: Oracle, Informatica, SAP BO
  • Project Tools: JIRA, MS Project, Confluence


Key Responsibilities:

  • End-to-End Project Management: Lead the team through the full project lifecycle, delivering techno-functional solutions.
  • Methodology Expertise: Apply Agile, PMP, and other frameworks to ensure effective project execution and resource management.
  • Technology Integration: Oversee technology integration and ensure alignment with business goals.
  • Stakeholder & Conflict Management: Manage relationships with customers, partners, and vendors, addressing expectations and conflicts proactively.
  • Technical Guidance: Provide expertise in software design, architecture, and ensure project feasibility.
  • Change Management: Analyse new requirements/change requests, ensuring alignment with project goals.
  • Effort & Cost Estimation: Estimate project efforts and costs and identify potential risks early.
  • Risk Mitigation: Proactively identify risks and develop mitigation strategies, escalating issues in advance.
  • Hands-On Contribution: Participate in coding, code reviews, testing, and documentation as needed.
  • Project Planning & Monitoring: Develop detailed project plans, track progress, and monitor task dependencies.
  • Scope Management: Manage project scope, deliverables, and exclusions, ensuring technical feasibility.
  • Effective Communication: Communicate with stakeholders to ensure agreement on scope, timelines, and objectives.
  • Reporting: Provide status and RAG reports, proactively addressing risks and issues.
  • Change Control: Manage changes in project scope, schedule, and costs using appropriate verification techniques.
  • Performance Measurement: Measure project performance with tools and techniques to ensure progress.
  • Operational Process Management: Oversee operational tasks like timesheet approvals, leave, appraisals, and invoicing.
Read more
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Best in industry
skill iconJava
skill iconSpring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g., Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and the customer business domain
  • Technology Knowledge: Demonstrates working knowledge of more than one technology area related to their own area of work (e.g., Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack
  • Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
  • Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g., SOA, N-Tier, EDA), and perspectives (e.g., TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
  • Design Patterns, Tools, and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
  • Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
  • Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific timesheets, capacity planning tools, etc.)
  • Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
  • Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g., the TCP estimation model) and company-specific estimation templates
  • Working knowledge of industry knowledge management tools (such as portals and wikis) and of company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
  • Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
  • Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for (non-functional) requirements, analysis of functional and non-functional requirements, analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards), techniques (business analysis, process mapping, etc.), and requirements management tools (e.g., MS Excel), plus basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs
  • Solution Structuring: Demonstrates working knowledge of service offerings and products


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background and end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems, integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have experience in GCP and retail domain.

 

Skills: Devops, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹20L / yr
Microsoft Dynamics
C#
Office 365
Git
Microsoft Dynamics CRM
+13 more

Job Description

We are seeking a skilled Microsoft Dynamics 365 Developer with 4–7 years of hands-on experience in designing, customizing, and developing solutions within the Dynamics 365 ecosystem. The ideal candidate should have strong technical expertise, solid understanding of CRM concepts, and experience integrating Dynamics 365 with external systems.

 

Key Responsibilities

  • Design, develop, and customize solutions within Microsoft Dynamics 365 CE.
  • Work on entity schema, relationships, form customizations, and business logic components.
  • Develop custom plugins, workflow activities, and automation.
  • Build and enhance integrations using APIs, Postman, and related tools.
  • Implement and maintain security models across roles, privileges, and access levels.
  • Troubleshoot issues, optimize performance, and support deployments.
  • Collaborate with cross-functional teams and communicate effectively with stakeholders.
  • Participate in version control practices using GIT.

 

Must-Have Skills

Core Dynamics 365 Skills

  • Dynamics Concepts (Schema, Relationships, Form Customization): Advanced
  • Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)
  • Actions & Custom Workflows: Intermediate
  • Security Model: Intermediate
  • Integrations: Intermediate (API handling, Postman, error handling, authorization & authentication, DLL merging)

 

Coding & Versioning

  • C# Coding Skills: Intermediate (Able to write logic using if-else, switch, loops, error handling)
  • GIT: Basic

 

Communication

  • Communication Skills: Intermediate (Ability to clearly explain technical concepts and work with business users)

 

Good-to-Have Skills (Any 3 or More)

Azure & Monitoring

  • Azure Functions: Basic (development, debugging, deployment)
  • Azure Application Insights: Intermediate (querying logs, pushing logs)

 

Reporting & Data

  • Power BI: Basic (building basic reports)
  • Data Migration: Basic (data import with lookups, awareness of migration tools)

 

Power Platform

  • Canvas Apps: Basic (building basic apps using Power Automate connector)
  • Power Automate: Intermediate (flows & automation)
  • PCF (PowerApps Component Framework): Basic

 

Skills: Microsoft Dynamics, Javascript, Plugins


Must-Haves

Microsoft Dynamics 365 (4-7 years), Plugin Development (Advanced), C# (Intermediate), Integrations (Intermediate), GIT (Basic)

Core Dynamics 365 Skills

Dynamics Concepts (Schema, Relationships, Form Customization): Advanced

Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)

Actions & Custom Workflows: Intermediate

Security Model: Intermediate

Integrations: Intermediate

(API handling, Postman, error handling, authorization & authentication, DLL merging)

Coding & Versioning

C# Coding Skills: Intermediate

(Able to write logic using if-else, switch, loops, error handling)

GIT: Basic


Notice period - Immediate to 15 days

Locations: Bangalore only

(Ability to clearly explain technical concepts and work with business users)


Nice to Haves

(Any 3 or More)

Azure & Monitoring

Azure Functions: Basic (development, debugging, deployment)

Azure Application Insights: Intermediate (querying logs, pushing logs)

Reporting & Data

Power BI: Basic (building basic reports)

Data Migration: Basic

(data import with lookups, awareness of migration tools)

Power Platform

Canvas Apps: Basic (building basic apps using Power Automate connector)

Power Automate: Intermediate (flows & automation)

PCF (PowerApps Component Framework): Basic

Technology, Information and Internet Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.
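
For context on the caching and distributed-systems fundamentals listed above, here is a minimal, illustrative Python sketch of an in-process TTL cache decorator; it is a teaching aid only (function names and TTL values are assumptions), and a production service would typically use a shared cache such as Redis so that all instances agree.

    import functools
    import time

    def ttl_cache(ttl_seconds=30):
        """Cache a function's results in-process for ttl_seconds."""
        def decorator(func):
            store = {}  # maps args -> (expires_at, value)

            @functools.wraps(func)
            def wrapper(*args):
                now = time.monotonic()
                entry = store.get(args)
                if entry and entry[0] > now:
                    return entry[1]              # fresh cache hit
                value = func(*args)              # miss or expired: recompute
                store[args] = (now + ttl_seconds, value)
                return value
            return wrapper
        return decorator

    @ttl_cache(ttl_seconds=10)
    def get_user_profile(user_id):
        # Placeholder for an expensive database or downstream-service call.
        return {"id": user_id, "name": f"user-{user_id}"}

    print(get_user_profile(42))  # computed
    print(get_user_profile(42))  # served from the cache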

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
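
As a small illustration of the exploratory data analysis and data-quality work described above, the Python/pandas sketch below profiles a dataset and produces a simple monthly summary; the file name and column names are hypothetical placeholders, not details of this role's actual systems.

    import pandas as pd

    # Hypothetical extract; in practice this would come from SQL or a warehouse.
    df = pd.read_csv("interactions.csv", parse_dates=["created_at"])

    # Basic profiling: shape, types, missingness, and duplicates.
    print(df.shape)
    print(df.dtypes)
    print(df.isna().mean().sort_values(ascending=False).head(10))
    print("duplicate rows:", df.duplicated().sum())

    # Simple descriptive analytics: volume and average handle time per month and channel.
    summary = (
        df.assign(month=df["created_at"].dt.to_period("M"))
          .groupby(["month", "channel"])
          .agg(interactions=("interaction_id", "count"),
               avg_handle_minutes=("handle_minutes", "mean"))
          .reset_index()
    )
    print(summary.head())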

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, Sql


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

BlogVault

3 candid answers
1 recruiter
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹35L / yr (Varies)
Ruby
NodeJS (Node.js)
Go Programming (Golang)
React.js
Angular (2+)
+3 more

We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.


What You’ll Do

  • Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
  • Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
  • Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
  • Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
  • Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
  • Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.


What Makes You a Strong Fit

  • You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
  • You think in systems and second-order effects—not just in ticket-by-ticket outputs.
  • You prefer well-reasoned defaults over overengineering.
  • You take ownership—not just of code, but of the outcomes it enables.
  • You work cleanly, write clear code, and make life easier for those who come after you.
  • You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.


Bonus if You Have Experience With

  • Building tools or workflows that accelerate other developers.
  • Working with AI coding tools and integrating them meaningfully into your workflow.
  • Building for SaaS products, especially those with large user bases or self-serve motions.
  • Working in small, fast-moving product teams with a high bar for ownership.


Why Join Us

  • A small team that values craftsmanship, curiosity, and momentum.
  • A product-driven culture where engineering decisions are informed by customer outcomes.
  • A chance to work on multiple zero-to-one opportunities with strong PMF.
  • No vanity perks—just meaningful work with people who care.
Virtana

2 candid answers
Krutika Devadiga
Posted by Krutika Devadiga
Pune
8 - 13 yrs
Best in industry
Java
Kubernetes
Amazon Web Services (AWS)
Spring Boot
Go Programming (Golang)
+13 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in an AI enabled Data center as well as Cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java, Python is crucial.



Work Location: Pune


Job Type: Hybrid

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
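
To give a flavour of the streaming aggregation problems this role deals with, here is a deliberately simplified Python sketch that keeps a rolling one-minute window of metric samples; a real system would use a streaming framework (Kafka, Spark, Flink) rather than an in-process deque, and all names and values are illustrative.

    import time
    from collections import deque

    class RollingWindow:
        """Keep (timestamp, value) samples from the last `window_seconds` seconds."""

        def __init__(self, window_seconds=60):
            self.window_seconds = window_seconds
            self.samples = deque()

        def add(self, value, now=None):
            now = time.time() if now is None else now
            self.samples.append((now, value))
            cutoff = now - self.window_seconds
            while self.samples and self.samples[0][0] < cutoff:
                self.samples.popleft()           # evict samples outside the window

        def stats(self):
            values = [v for _, v in self.samples]
            if not values:
                return {"count": 0, "avg": None, "max": None}
            return {"count": len(values),
                    "avg": sum(values) / len(values),
                    "max": max(values)}

    latency_ms = RollingWindow(window_seconds=60)
    for sample in (12.0, 15.5, 9.8, 40.2):       # e.g. storage I/O latencies in ms
        latency_ms.add(sample)
    print(latency_ms.stats())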


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

AI company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience do you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
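
Since the responsibilities above include coordinating with pipeline tooling such as Airflow, here is a minimal, hedged Python sketch (assuming Airflow 2.x) of a DAG that sequences ingestion, curation, and a reflection-refresh step; the task bodies are placeholders, and refresh_reflections is a hypothetical helper rather than a Dremio API.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest_raw():
        # Placeholder: land source extracts as Parquet in object storage (S3/ADLS/GCS).
        print("ingesting raw files into the lake")

    def curate():
        # Placeholder: build curated/semantic tables, e.g. via Spark or DBT.
        print("building the curated layer")

    def refresh_reflections():
        # Placeholder: trigger a Dremio reflection refresh, e.g. via its REST API.
        print("refreshing reflections")

    with DAG(
        dag_id="lakehouse_daily_build",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest_raw", python_callable=ingest_raw)
        curate_task = PythonOperator(task_id="curate", python_callable=curate)
        refresh = PythonOperator(task_id="refresh_reflections",
                                 python_callable=refresh_reflections)

        ingest >> curate_task >> refresh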


Ideal Candidate

  • Bachelor’s or master’s in computer science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Tecblic Private Limited
Ahmedabad
5 - 6 yrs
₹5L - ₹15L / yr
Windows Azure
Python
SQL
Data Warehouse (DWH)
Data modeling
+5 more

Job Description: Data Engineer

Location: Ahmedabad

Experience: 5 to 6 years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.


Responsibilities

● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modelling solutions and data validation mechanisms
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team




Skills Required:

● Minimum 4 years of experience in data engineering or a related field
● Proficient in designing and optimizing data pipelines and data modeling
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities




Qualifications


  • Bachelor's degree in Computer Science, Data Science, or a computer-related field
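
For illustration of the pipeline design and data-validation responsibilities above, here is a minimal PySpark sketch of a batch step that reads raw data, applies basic checks, and writes a clean partition; the paths and column names are hypothetical assumptions, not part of this role's actual platform.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily_clean").getOrCreate()

    # Hypothetical raw landing zone; swap in the real path and format for your platform.
    raw = spark.read.parquet("/lake/raw/orders/")

    # Basic validation: de-duplicate, require a key, and reject negative amounts.
    clean = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_id").isNotNull())
           .filter(F.col("amount") >= 0)
           .withColumn("load_date", F.current_date())
    )

    # Simple data-quality signal that could feed monitoring or alerting.
    rejected = raw.count() - clean.count()
    print(f"rows dropped by de-duplication and validation: {rejected}")

    clean.write.mode("overwrite").partitionBy("load_date").parquet("/lake/curated/orders/")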


Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Remote only
5 - 7 yrs
₹10L - ₹25L / yr
Windows Azure
Data engineering
SQL
CI/CD
databricks

Role: Senior Data Engineer (Azure)

Experience: 5+ Years

Location: Anywhere in India

Work Mode: Remote

Notice Period - Immediate joiners or Serving notice period

Key Responsibilities:

  • Data processing on Azure using ADF, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Services, and Data Pipelines
  • Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.)
  • Designing and implementing scalable data models and migration strategies
  • Working on distributed big data batch or streaming pipelines (Kafka or similar)
  • Developing data integration & transformation solutions for structured and unstructured data
  • Collaborating with cross-functional teams for performance tuning and optimization
  • Monitoring data workflows and ensuring compliance with governance and quality standards
  • Driving continuous improvement through automation and DevOps practices

Mandatory Skills & Experience:

  • 5–10 years of experience as a Data Engineer
  • Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory
  • Experience in Data Modelling, Data Migration, and Data Warehousing
  • Good understanding of database structure principles and schema design
  • Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms
  • Experience with DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) — good to have
  • Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub)
  • Familiarity with visualization tools like Power BI or Tableau
  • Strong analytical, problem-solving, and debugging skills
  • Self-motivated, detail-oriented, and capable of managing priorities effectively
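
To illustrate the streaming side of this stack, below is a hedged PySpark Structured Streaming sketch that reads from a Kafka-compatible endpoint (Azure Event Hubs exposes one) and appends raw events to a Delta table; the broker, topic, and paths are placeholders, and the job assumes the Kafka and Delta Lake connectors are available on the cluster.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_to_delta").getOrCreate()

    # Placeholder endpoint and topic; Event Hubs' Kafka surface is <namespace>.servicebus.windows.net:9093.
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "telemetry")
             .option("startingOffsets", "latest")
             .load()
    )

    # Kafka delivers key/value as binary, so cast them and keep the broker timestamp.
    parsed = events.select(
        F.col("key").cast("string").alias("device_id"),
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_time"),
    )

    query = (
        parsed.writeStream.format("delta")
              .option("checkpointLocation", "/lake/checkpoints/telemetry")
              .outputMode("append")
              .start("/lake/bronze/telemetry")
    )
    query.awaitTermination()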


Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
4 - 8 yrs
₹10L - ₹13L / yr
SQL
databricks
PowerBI
Windows Azure
Data engineering
+9 more

Review Criteria

  • Strong Senior Data Engineer profile
  • 4+ years of hands-on Data Engineering experience
  • Must have experience owning end-to-end data architecture and complex pipelines
  • Must have advanced SQL capability (complex queries, large datasets, optimization)
  • Must have strong Databricks hands-on experience
  • Must be able to architect solutions, troubleshoot complex data issues, and work independently
  • Must have Power BI integration experience
  • The CTC structure is 80% fixed and 20% variable.


Preferred

  • Has worked on call center data and understands the nuances of data generated in call centers
  • Experience implementing data governance, quality checks, or lineage frameworks
  • Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture


Job Specific Criteria

  • CV Attachment is mandatory
  • Are you Comfortable integrating with Power BI datasets?
  • We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?


Role & Responsibilities

We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.

 

Key Responsibilities-

  • Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
  • Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
  • Architect and deliver high-performance ETL/ELT processes across cloud platforms.
  • Implement and enforce data governance standards, including data quality, lineage, and access control.
  • Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
  • Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
  • Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
  • Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
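
As a brief illustration of the governance and data-quality expectations above, here is a minimal PySpark sketch that checks nulls and duplicate keys before a curated table is exposed to Power BI; the table, columns, and checks are hypothetical, not this company's actual standards.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq_checks").getOrCreate()

    df = spark.read.table("curated.customer_payments")   # hypothetical curated table

    key_cols = ["payment_id"]
    required_cols = ["payment_id", "customer_id", "amount", "payment_date"]

    # Null counts for required columns.
    null_counts = df.select(
        [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in required_cols]
    ).collect()[0].asDict()

    # Duplicate business keys.
    duplicates = df.groupBy(key_cols).count().filter(F.col("count") > 1).count()

    print("null counts:", null_counts)
    print("duplicate keys:", duplicates)

    # Fail fast so questionable data never reaches reporting.
    assert duplicates == 0, "duplicate payment_id values found"
    assert all(v == 0 for v in null_counts.values()), "nulls found in required columns"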


Ideal Candidate

  • Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
  • Advanced SQL skills with experience handling large, complex datasets.
  • Strong expertise with Databricks for data engineering workloads.
  • Hands-on experience with major cloud platforms — AWS and Azure.
  • Deep understanding of data architecture, data modelling, and optimisation techniques.
  • Familiarity with BI and reporting environments such as Power BI.
  • Strong analytical and problem-solving abilities with a focus on data quality and governance
  • Proficiency in Python or another programming language is a plus.
Product Innovation Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Project Management
Program Management
Stakeholder management
IT program management
Software project management
+25 more

ROLES AND RESPONSIBILITIES:

Standardization and Governance:

  • Establishing and maintaining project management standards, processes, and methodologies.
  • Ensuring consistent application of project management policies and procedures.
  • Implementing and managing project governance processes.


Resource Management:

  • Facilitating the sharing of resources, tools, and methodologies across projects.
  • Planning and allocating resources effectively.
  • Managing resource capacity and forecasting future needs.


Communication and Reporting:

  • Ensuring effective communication and information flow among project teams and stakeholders.
  • Monitoring project progress and reporting on performance.
  • Communicating strategic work progress, including risks and benefits.


Project Portfolio Management:

  • Supporting strategic decision-making by aligning projects with organizational goals.
  • Selecting and prioritizing projects based on business objectives.
  • Managing project portfolios and ensuring efficient resource allocation across projects.


Process Improvement:

  • Identifying and implementing industry best practices into workflows.
  • Improving project management processes and methodologies.
  • Optimizing project delivery and resource utilization.


Training and Support:

  • Providing training and support to project managers and team members.
  • Offering project management tools, best practices, and reporting templates.


Other Responsibilities:

  • Managing documentation of project history for future reference.
  • Coaching project teams on implementing project management steps.
  • Analysing financial data and managing project costs.
  • Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
  • Advising and supporting senior management.


IDEAL CANDIDATE:

  • 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
  • Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
  • Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
  • Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
  • Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
  • Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
  • Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
  • Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
CT Nova
Apurv M
Posted by Apurv M
Remote only
3 - 15 yrs
₹25L - ₹50L / yr
.NET
Windows Azure
SQL
React.js
Microservices

Experience: 3+ years (Backend/Full-Stack)


Note: You will be the 3rd engineer on the team. If you are comfortable with Java and Spring Boot plus cloud, you will easily be able to pick up the following stack.


Key Requirements —

  • Primary Stack: Experience with .NET
  • Cloud: Solid understanding of cloud platforms (preferably Azure)
  • Frontend/DevOps: Familiarity with React and DevOps practices
  • Architecture: Strong grasp of microservices
  • Technical Skills: Basic proficiency in scripting, databases, and Git


Compensation: competitive salary, based on experience and fit

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹30L / yr
Office 365
Network Security
Microsoft Windows Azure
Microsoft Office
Windows Azure
+26 more

Review Criteria

  • Strong IT Engineer Profile
  • 4+ years of hands-on experience in Azure/Office 365 compliance and management, including policy enforcement, audit readiness, DLP, security configurations, and overall governance.
  • Must have strong experience handling user onboarding/offboarding, identity & access provisioning, MFA, SSO configurations, and lifecycle management across Windows/Mac/Linux environments.
  • Must have proven expertise in IT Inventory Management, including asset tracking, device lifecycle, CMDB updates, and hardware/software allocation with complete documentation.
  • Hands-on experience configuring and managing FortiGate Firewalls, including routing, VPN setups, policies, NAT, and overall network security.
  • Must have practical experience with FortiGate WiFi, AP configurations, SSID management, troubleshooting connectivity issues, and securing wireless environments.
  • Must have strong knowledge and hands-on experience with Antivirus Endpoint Central (or equivalent) for patching, endpoint protection, compliance, and threat remediation.
  • Must have solid understanding of Networking, including routing, switching, subnetting, DHCP, DNS, VPN, LAN/WAN troubleshooting.
  • Must have strong troubleshooting experience across Windows, Linux, and macOS environments for system issues, updates, performance, and configurations.
  • Must have expertise in Cisco/Polycom A/V solutions, including setup, configuration, video conferencing troubleshooting, and meeting room infrastructure support.
  • Must have hands-on experience in Shell Scripting / Bash / PowerShell for automation of routine IT tasks, monitoring, and system efficiencies.


Job Specific Criteria:

  • CV Attachment is mandatory
  • Q1. Please share details of experience in troubleshooting (Rate out of 10, 10 being highly experienced) A. Windows Troubleshooting B. Linux Troubleshooting C. Macbook Troubleshooting
  • Q2. Please share details of experience in below process (Rate out of 10, 10 being highly experienced) A. User Onboarding/Offboarding B. Inventory Management
  • Q3. Please share details of experience in below tools and administrations (Rate out of 10, 10 being highly experienced) A. FortiGate Firewall B. FortiGate WiFi C. Antivirus Endpoint Central D. Networking E. Cisco/Polycom A/V solutions F. Shell Scripting/Bash/PowerShell G. Azure/Office 365 compliance and management
  • Q4. Are you okay for F2F round (Noida)?
  • Q5. What's you current company?
  • Q6. Are you okay for rotational shift (10am - 7pm and 2pm to 11pm)?


Role & Responsibilities:

We are seeking an experienced IT Infrastructure/System Administrator to manage, secure, and optimize our IT environment. The ideal candidate will have expertise in enterprise-grade tools, strong troubleshooting skills, and hands-on experience configuring secure integrations, managing endpoint deployments, and ensuring compliance across platforms.

  • Administer and manage Office 365 suite (Outlook, SharePoint, OneDrive, Teams etc) and related services/configurations.
  • Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
  • Oversee IT compliance frameworks, audit processes, and IT asset inventory management, attendance systems.
  • Administer Jira, FortiGate firewalls and Wi-Fi, antivirus solutions, and endpoint management systems.
  • Provide network administration: routing, subnetting, VPNs, and firewall configurations.
  • Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
  • Manage Assets Explorer for device and asset management/inventory.
  • Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
  • Provide remote support for end-users, ensuring quick resolution of technical issues.
  • Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
  • Collaborate with internal teams and external vendors to resolve issues and optimize systems.
  • Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.


Ideal Candidate:

  • Proven hands-on experience with:
  • Office 365 administration and compliance.
  • User onboarding/offboarding processes.
  • Compliance, audit, and inventory management tools.
  • Jira administration, FortiGate firewall, Wi-Fi, and antivirus solutions.
  • Networking fundamentals: subnetting, routing, switching.
  • Patch management, updates, and vulnerability remediation across Windows, Linux, and macOS.
  • Assets Explorer/inventory management
  • Strong troubleshooting, documentation, and communication skills.

 

Preferred Skills:

  • Scripting knowledge in Bash, PowerShell for automation.
  • Experience working with Jira and Confluence.


Perks, Benefits and Work Culture:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits 
Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune, Gurugram, Bhopal, Jaipur, Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹12L / yr
Windows Azure
SQL
Data Structures
databricks

 Hiring: Azure Data Engineer

⭐ Experience: 2+ Years

📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

Passport: Mandatory & Valid

(Only immediate joiners & candidates serving notice period)


Mandatory Skills:

Azure Synapse, Azure Databricks, Azure Data Factory (ADF), SQL, Delta Lake, ADLS, ETL/ELT, PySpark.


Responsibilities:

  • Build and maintain data pipelines using ADF, Databricks, and Synapse.
  • Develop ETL/ELT workflows and optimize SQL queries.
  • Implement Delta Lake for scalable lakehouse architecture.
  • Create Synapse data models and Spark/Databricks notebooks.
  • Ensure data quality, performance, and security.
  • Collaborate with cross-functional teams on data requirements.
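
As a hedged example of the Delta Lake work listed above, the sketch below upserts a staged daily batch into a Delta table using the delta-spark Python API; the paths and join key are placeholders.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("customers_upsert").getOrCreate()

    updates = spark.read.parquet("/lake/staging/customers/")        # today's batch
    target = DeltaTable.forPath(spark, "/lake/silver/customers/")   # existing Delta table

    # Upsert: update rows whose customer_id already exists, insert the rest.
    (
        target.alias("t")
              .merge(updates.alias("s"), "t.customer_id = s.customer_id")
              .whenMatchedUpdateAll()
              .whenNotMatchedInsertAll()
              .execute()
    )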


Nice to Have:

Azure DevOps, Python, Streaming (Event Hub/Kafka), Power BI, Azure certifications (DP-203).


Upland Software

4 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Remote only
5yrs+
Upto ₹33L / yr (Varies)
.NET
SQL
Object Oriented Programming (OOPs)
Windows Azure
ASP.NET
+1 more

We are looking for an enthusiastic and dynamic individual to join Upland India as a Senior Software Engineer I (Backend) for our Panviva product. The individual will work with our global development team.


What would you do?

  • Develop, review, test, and maintain application code
  • Collaborate with other developers and product to fulfil objectives
  • Troubleshoot and diagnose issues
  • Take the lead on tasks as needed
  • Jump in and help the team deliver features when required

What are we looking for?

Experience

  • 5+ years of experience in designing and implementing application architecture
  • Back-end developer who enjoys solving problems
  • Demonstrated experience with the .NET ecosystem (.NET Framework, ASP.NET, .NET Core) and SQL Server
  • Experience in building cloud-native applications (Azure)
  • Must be skilled at writing quality, scalable, maintainable, testable code

Leadership Skills

  • Strong communication skills
  • Ability to mentor/lead junior developers


Primary Skills: The candidate must possess the following primary skills:

  • Strong Back-end developer who enjoys solving problems
  • Solid experience with .NET Core, SQL Server, and .NET design patterns: strong understanding of OOP principles, .NET-specific pattern implementations (DI, CQRS, Repository, etc.), SOLID architectural principles, unit testing tools, and debugging techniques
  • Applying patterns to improve scalability and reduce technical debt
  • Experience with refactoring legacy codebases using design patterns
  • Real-World Problem Solving
  • Ability to analyze a problem and choose the most suitable design pattern
  • Experience balancing performance, readability, and maintainability
  • Experience building modern, scalable, reliable applications on the MS Azure cloud including services such as:
  • App Services
  • Azure Service Bus/ Event Hubs
  • Azure API Management Service
  • Azure Bot Service
  • Function/Logic Apps
  • Azure Key Vault & Azure Configuration Service
  • CosmosDB, Mongo DB
  • Azure Search
  • Azure Cognitive Services

Understanding Agile Methodology and Tool Familiarity

  • Solid understanding of Agile development processes, including sprint planning, daily stand-ups, retrospectives, and backlog grooming
  • Familiarity with Agile tools such as JIRA for tracking tasks, managing workflows, and collaborating across teams
  • Experience working in cross-functional Agile teams and contributing to iterative development cycles

Secondary Skills: It would be advantageous if the candidate also has the following secondary skills:

  • Experience with front-end React/jQuery/JavaScript, HTML, and CSS frameworks
  • APM tools: experience with tools such as Grafana, NR, CloudWatch, etc.
  • Basic understanding of AI models
  • Python

About Upland

Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software and headquartered in Bangalore. We are a remote-first company. Interviews and on-boarding are conducted virtually.


Biofourmis

44 recruiters
Roopa Ramalingamurthy
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, and Loki
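
To give a concrete flavour of the routine-task automation mentioned above, here is a small, hedged boto3 sketch that reports unattached EBS volumes (a common cost and hygiene check); the region is an assumption, and the script only reports, it does not delete anything.

    import boto3

    def list_unattached_volumes(region_name="us-east-1"):
        """Return the ID and size of every EBS volume not attached to an instance."""
        ec2 = boto3.client("ec2", region_name=region_name)
        paginator = ec2.get_paginator("describe_volumes")
        unattached = []
        # A volume in the 'available' state exists but is not attached to anything.
        filters = [{"Name": "status", "Values": ["available"]}]
        for page in paginator.paginate(Filters=filters):
            for vol in page["Volumes"]:
                unattached.append({"VolumeId": vol["VolumeId"], "SizeGiB": vol["Size"]})
        return unattached

    if __name__ == "__main__":
        for vol in list_unattached_volumes():
            print(f"{vol['VolumeId']}: {vol['SizeGiB']} GiB unattached")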


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

NA

Agency job
via eTalent Services by JaiPrakash Bharti
Remote only
3 - 8 yrs
₹5L - ₹14L / yr
Python
Machine Learning (ML)
Windows Azure
TensorFlow
MLFlow
+6 more

Role: Azure AI Tech Lead

Experience: 3.5–7 Years

Location: Remote / Noida (NCR)

Notice Period: Immediate to 15 days

 

Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana

 

JOB DESCRIPTION

As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.

 

Key Responsibilities:

  • Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
  • Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
  • Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
  • Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
  • Collaborate cross-functionally to translate business goals into innovative AI solutions.
  • Enforce governance, responsible AI practices, and performance optimization standards.
  • Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
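
As a framework-agnostic illustration of the RAG pattern referenced above, the Python sketch below embeds document chunks, retrieves the most similar ones for a query by cosine similarity, and assembles a grounded prompt; embed() is a placeholder to be backed by a real embedding model (for example an Azure OpenAI embeddings deployment), and the final LLM call is omitted.

    import numpy as np

    def embed(texts):
        # Placeholder embedding function: a real system would call an embedding
        # model and return one vector per input text. Random vectors are used
        # here only so the sketch runs end to end.
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(texts), 384))

    def top_k(query, chunks, chunk_vectors, k=2):
        q = embed([query])[0]
        # Cosine similarity between the query vector and every chunk vector.
        sims = chunk_vectors @ q / (
            np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q) + 1e-9
        )
        best = np.argsort(sims)[::-1][:k]
        return [chunks[i] for i in best]

    chunks = [
        "Refunds are processed within 5 business days.",
        "Premium support is available 24/7 for enterprise plans.",
        "Passwords must be rotated every 90 days.",
    ]
    chunk_vectors = embed(chunks)

    question = "How long do refunds take?"
    context = "\n".join(top_k(question, chunks, chunk_vectors))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # This prompt would then be sent to the chat/completions model.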

 

Qualifications:

  • Bachelor’s or Master’s in Computer Science or related field.
  • 3.5–7 years of experience delivering end-to-end AI/ML solutions.
  • Strong expertise in Azure AI ecosystem and production-grade model deployment.
  • Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
  • Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.

