50+ SQL Azure Jobs in India
We are seeking a detail-oriented Database Administrator to design, implement, and maintain our database systems. You will ensure that our data remains secure, available, and optimized for performance. As a DBA, you will work closely with developers to build efficient schemas and with security teams to protect against emerging threats.
Key Responsibilities
- Performance Tuning: Monitor system health and optimize complex SQL queries, indexes, and stored procedures to ensure maximum speed and efficiency.
- Security & Compliance: Implement "Zero-Trust" security models, manage user permissions, and ensure compliance with data regulations.
- Backup & Disaster Recovery: Develop and test robust backup strategies and failover procedures to minimize downtime in case of system failure.
- Cloud & Hybrid Management: Manage databases across on-premises and cloud platforms (AWS, Azure, GCP), including instance provisioning and cost optimization.
- Automation: Utilize tools like Ansible, Terraform, or AI-driven "Copilots" to automate routine maintenance tasks and schema deployments.
- ETL & Data Migration: Oversee the Extraction, Transformation, and Loading (ETL) of data from various sources into central repositories.
Required Skills & Qualifications
Technical Skills
- DBMS Expertise: Proficiency in one or more major systems: MySQL, PostgreSQL, Microsoft SQL Server, or Oracle.
- Cloud Proficiency: Hands-on experience with cloud-native databases (e.g., Azure SQL, Amazon RDS, MongoDB Atlas).
- Scripting: Mastery of SQL and familiarity with automation languages like Python or PowerShell (see the sketch after this list).
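The performance-tuning and scripting bullets above pair naturally. As a loose, employer-agnostic illustration, the short Python sketch below uses only the standard library's sqlite3 module, with a hypothetical orders table and index, to show how adding an index changes a query plan and its timing; the same EXPLAIN-then-index workflow carries over to MySQL, PostgreSQL, SQL Server, or Oracle with their own tooling.

```python
# Minimal sketch (hypothetical table/index names): measure the effect of an
# index on a lookup query, using only the Python standard library's sqlite3.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

def timed_lookup() -> float:
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?", (42,)).fetchone()
    return time.perf_counter() - start

before = timed_lookup()
print(conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")  # the tuning step
after = timed_lookup()
print(conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())
print(f"full scan: {before:.4f}s, indexed: {after:.4f}s")
```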
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum (Thiruvananthapuram)
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka (see the sketch after this list)
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
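The posting itself targets Java with Spring or Micronaut, but the event-driven pattern referenced above is language-neutral. The sketch below is a minimal Python illustration using the third-party kafka-python package, with a hypothetical topic name and a broker assumed at localhost:9092; a Spring Kafka or Micronaut implementation would follow the same produce/consume shape.

```python
# Minimal sketch of the event-driven pattern, assuming the third-party
# kafka-python package is installed and a broker is reachable at localhost:9092
# (both assumptions; the posting itself targets Java/Spring/Micronaut).
import json
from kafka import KafkaProducer, KafkaConsumer

TOPIC = "orders.created"  # hypothetical topic name

# Producer: publish a JSON-encoded domain event.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 123, "status": "CREATED"})
producer.flush()

# Consumer: read events from the beginning of the topic and process them.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating if no new events arrive
)
for message in consumer:
    print(f"partition={message.partition} offset={message.offset} event={message.value}")
```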
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of the company, a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures (see the sketch after this list)
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
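To make the RAG reference above concrete without tying it to any specific vendor API, here is a toy Python sketch of the retrieval step only: a bag-of-words vector and cosine similarity stand in for a real embedding model, and an in-memory list stands in for a vector database such as FAISS, Pinecone, or Weaviate. The clinical-note snippets are invented for illustration; in a real pipeline the retrieved context would be passed to an LLM via LangChain or the OpenAI APIs.

```python
# Toy sketch of the retrieval step behind a RAG pipeline: bag-of-words vectors
# and cosine similarity stand in for a real embedding model and a vector
# database (FAISS, Pinecone, Weaviate). Illustrative only, not a production recipe.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: a sparse term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Index" a few hypothetical clinical-note snippets.
documents = [
    "Patient reports chest pain radiating to the left arm.",
    "Follow-up visit for type 2 diabetes medication adjustment.",
    "MRI shows no evidence of acute intracranial abnormality.",
]
index = [(doc, embed(doc)) for doc in documents]

# Retrieve the closest snippet; in a real system this context goes to the LLM.
query = "diabetes medication follow up"
query_vec = embed(query)
best_doc, score = max(((d, cosine(query_vec, v)) for d, v in index), key=lambda t: t[1])
print(f"retrieved (score={score:.2f}): {best_doc}")
```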
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100 MB size limit (see the sketch after this list).
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
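As a rough illustration of the large-file preparation step above, this standard-library Python sketch walks a checked-out workspace, flags files over GitHub's 100 MB limit, and prints suggested `git lfs track` patterns. The P4_WORKSPACE environment variable and the track-by-extension heuristic are assumptions for the example, not part of any prescribed migration tooling.

```python
# Minimal sketch (hypothetical paths): walk a checked-out Perforce workspace,
# flag files above GitHub's 100 MB hard limit, and print the `git lfs track`
# commands you would consider running before the migration. Purely illustrative.
import os
from pathlib import Path

LIMIT_BYTES = 100 * 1024 * 1024  # GitHub rejects individual files above 100 MB
workspace = Path(os.environ.get("P4_WORKSPACE", "."))  # assumed env var / default

large_files = [
    p for p in workspace.rglob("*")
    if p.is_file() and p.stat().st_size > LIMIT_BYTES
]

for path in sorted(large_files, key=lambda p: p.stat().st_size, reverse=True):
    size_mb = path.stat().st_size / (1024 * 1024)
    print(f"{size_mb:8.1f} MB  {path}")
    # Track by extension so future files of the same type also go to LFS.
    print(f"  suggested: git lfs track '*{path.suffix}'")
```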
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell (see the sketch after this list)
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
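To illustrate the Terraform-plus-scripting combination called out above, here is a small hedged Python wrapper that a GitLab CI job could run to validate and plan Terraform changes before they are applied. The infra/azure directory and module layout are hypothetical, and the script assumes the terraform CLI is on PATH; in practice the same steps are often expressed directly in .gitlab-ci.yml.

```python
# Minimal sketch (assumed paths and a hypothetical module layout): Python glue
# that a pipeline job could run to validate and plan Terraform changes before
# they are applied. Requires the terraform CLI on PATH.
import subprocess
import sys
from pathlib import Path

TF_DIR = Path("infra/azure")  # hypothetical directory holding .tf files

def run(*args: str) -> None:
    """Run a terraform subcommand and fail the pipeline step on error."""
    print(f"$ terraform {' '.join(args)}")
    result = subprocess.run(["terraform", *args], cwd=TF_DIR)
    if result.returncode != 0:
        sys.exit(result.returncode)

if __name__ == "__main__":
    run("init", "-input=false")
    run("fmt", "-check")            # enforce canonical formatting
    run("validate")                 # catch syntax/provider errors early
    run("plan", "-input=false", "-out=tfplan")  # plan artifact reviewed before apply
```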
Skills Summary
Azure, Terraform, GitLab CI, Splunk (Mandatory). Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
Job Details
- Job Title: Specialist I - Software Engineering - .Net Fullstack Lead - TVM
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-9 years
- Employment Type: Full Time
- Job Location: Trivandrum (Thiruvananthapuram)
- CTC Range: Best in Industry
Job Description
- Minimum 5+ years of experience as a senior/lead .NET developer, including the full development lifecycle and post-live support.
- Significant experience delivering software using Agile iterative delivery methodologies.
- JIRA knowledge preferred.
- Excellent ability to understand requirement/story scope and visualise the technical elements required for application solutions.
- Ability to clearly articulate complex problems and solutions in terms that others can understand.
- Extensive experience with .NET backend API development.
- Significant experience of pipeline design, build, and enhancement to support release cadence targets, including Infrastructure as Code (preferably Terraform).
- Strong understanding of HTML and CSS, including cross-browser compatibility and performance.
- Excellent knowledge of unit and integration testing techniques.
- Azure knowledge (Web/Container Apps, Azure Functions, SQL Server).
- Kubernetes/Docker knowledge.
- Knowledge of JavaScript UI frameworks, ideally Vue.
- Extensive experience with source control (preferably Git).
- Strong understanding of RESTful services (JSON) and API design.
- Broad knowledge of cloud infrastructure (PaaS, DBaaS).
- Experience of mentoring and coaching engineers operating within a co-located environment.
Skills: .Net Fullstack, Azure Cloudformation, Javascript, Angular
Must-Haves:
.Net (5+ years), Agile methodologies, RESTful API design, Azure (Web/Container Apps, Functions, SQL Server), Git source control
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
F2F Weekend Interview on 14th Feb 2026
JOB DETAILS:
* Job Title: Associate III - Azure Data Engineer
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -6 years
* Location: Trivandrum, Kochi
Job Description: Azure Data Engineer (4–6 Years Experience)
Job Type: Full-time
Locations: Kochi, Trivandrum
Must-Have Skills
Azure & Data Engineering
- Azure Data Factory (ADF)
- Azure Databricks (PySpark)
- Azure Synapse Analytics
- Azure Data Lake Storage Gen2
- Azure SQL Database
Programming & Querying
- Python (PySpark)
- SQL / Spark SQL
Data Modelling
- Star & Snowflake schema
- Dimensional modelling (see the PySpark sketch after these skill lists)
Source Systems
- SQL Server
- Oracle
- SAP
- REST APIs
- Flat files (CSV, JSON, XML)
CI/CD & Version Control
- Git
- Azure DevOps / GitHub Actions
Monitoring & Scheduling
- ADF triggers
- Databricks jobs
- Log Analytics
Security
- Managed Identity
- Azure Key Vault
- Azure RBAC / Access Control
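As a compact illustration of the PySpark and dimensional-modelling skills listed above, the sketch below derives a customer dimension and an orders fact (a minimal star schema) from a flat file. It assumes a local PySpark installation, and the orders.csv path and column names are invented for the example.

```python
# Minimal sketch, assuming a local PySpark installation and a hypothetical
# orders.csv flat file: derive a small dimension and fact table (star schema).
# Paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

orders = spark.read.csv("orders.csv", header=True, inferSchema=True)
# Expected columns (assumed): order_id, customer_id, customer_name, order_date, amount

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    orders.select("customer_id", "customer_name")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures plus a foreign key to the dimension.
fact_orders = (
    orders.join(dim_customer, on="customer_id", how="left")
    .select("order_id", "customer_sk", F.to_date("order_date").alias("order_date"), "amount")
)

fact_orders.write.mode("overwrite").parquet("out/fact_orders")
dim_customer.write.mode("overwrite").parquet("out/dim_customer")
spark.stop()
```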
Soft Skills
- Strong analytical & problem-solving skills
- Good communication and collaboration
- Ability to work in Agile/Scrum environments
- Self-driven and proactive
Good-to-Have Skills
- Power BI basics
- Delta Live Tables
- Synapse Pipelines
- Real-time processing (Event Hub / Stream Analytics)
- Infrastructure as Code (Terraform / ARM templates)
- Data governance tools like Azure Purview
- Azure Data Engineer Associate (DP-203) certification
Educational Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Skills: Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake Storage
Must-Haves
Azure Data Factory (4-6 years), Azure Databricks/PySpark (4-6 years), Azure Synapse Analytics (4-6 years), SQL/Spark SQL (4-6 years), Git/Azure DevOps (4-6 years)
Skills: Azure, Azure Data Factory, Python, PySpark, SQL, REST API, Azure DevOps
Relevant experience: 4-6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Associate III - Data Engineering
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description
Job Title:
Data Services Engineer – AWS & Snowflake
Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.
You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.
Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.
Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema); see the sketch after this list
• Optimize query performance using clustering keys, materialized views, and caching
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks).
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)
• Utilize Snowpark and Python/Java for complex transformations
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.
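As a hedged illustration of one ELT hop described above, the Python sketch below lands a file in S3 with boto3 and then loads it into Snowflake with a COPY INTO statement via the snowflake-connector-python package. The bucket, external stage, table, and credentials are all placeholders; a production pipeline would typically use Snowpipe or an orchestrator such as Airflow rather than an ad-hoc script.

```python
# Hedged sketch of one ELT hop (not a production recipe): land a file in S3
# with boto3, then load it into Snowflake with COPY INTO via the
# snowflake-connector-python package. Bucket, stage, table, and credentials
# below are all placeholders/assumptions.
import boto3
import snowflake.connector

# 1. Land the raw file in the data lake (assumed bucket and key).
s3 = boto3.client("s3")
s3.upload_file("daily_orders.csv", "my-raw-bucket", "orders/daily_orders.csv")

# 2. Load it into Snowflake through an external stage assumed to point at the bucket.
conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    conn.cursor().execute(
        """
        COPY INTO RAW.ORDERS
        FROM @RAW_STAGE/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """
    )
finally:
    conn.close()
```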
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
Skills: Aws Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse
Must-Haves
AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)
Skills: AWS, AWS Lambda, Snowflake, Data Engineering, Snowpipe, Data Integration Tools, Orchestration Frameworks
Relevant experience: 4-6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
Have you ever dreamed of being part of new product initiatives? Feel the energy and excitement of working on version 1 of a product and bringing the "Idea on Paper" to life. Do you want to work on SaaS products that could become the next Uber, Airbnb, or Flipkart? We give you the opportunity to be part of a team that will be leading the development of a SaaS product.
Our organization relies on its central engineering workforce to develop and maintain a product portfolio for several different startups. The portfolio continuously grows as we incubate more startups, which means different products will likely use different technologies, architectures, and frameworks - a fun place for smart tech lovers!
We are looking for a DotNet Software Engineer to join our engineering teams at our Hyderabad office.
What would you do?
● Building products is all about solving hard problems. It requires creativity and out-of-the-box thinking.
● We believe in freedom and ownership, so expect a large and important portion of the product to gradually become your responsibility.
● Diagnose and fix bugs and performance bottlenecks.
● Maintain code and write automated tests to ensure the product is of the highest quality.
● You will be collaborating with your teammates and clients.
Experience?
4-8 years of experience working on the .NET Core framework and Web APIs. Experience building a product, preferably SaaS.
Technical Skillset?
● In-depth knowledge of C# and the .NET language ecosystem.
● Hands-on experience with the .NET Core Web API framework, plus some experience with ReactJS/Angular, in a development-intensive individual contributor role.
● Experience working in Cloud, Agile, CI/CD, and DevOps environments. We live in the Cloud.
● Good Knowledge of SOLID Principles and OOPS Concepts
● Strong knowledge of SQL - Writing and debugging complex queries, stored procedures, and functions.
● Strong knowledge of SQL DBs like PostgreSQL or SQL Server
● Experience with Entity Framework or Dapper.
● Experience with working with Authentication Providers like IdentityServer, Auth0, or equivalent is good to have
Functional Skillset?
● Comfortable “working virtually” with teammates and customers worldwide - we do a lot of Slack, Zoom, and Google Meet.
● Good proficiency in the English language
● Demonstrated success as a problem solver, result-oriented, self-starter
● Good attention to detail
● An inclination to get things done based on clear/aggressive goal setting and improving productivity metrics
About CAW Studios
We are a Product Engineering Company of 200 geeks.
We run engineering for startups like Hoichoi.tv, Interakt, CashFlo, hBits, FastTI, Fhynix, BeTagged, FastBar, Haptik, CloudDefense, AccelData, Flipspaces, Aerchain, and Reeco.
We are obsessed with automation, DevOps, OOPS, and SOLID. We are not into one tech stack - we are into solving problems.
Website: https://www.caw.tech/
Know More: https://www.caw.tech/handbook

JOB DETAILS:
Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering
Industry: Global digital transformation solutions provider
Work Mode: Hybrid
Salary: Best in Industry
Experience: 6-8 years
Location: Hyderabad
Job Description:
• Experience in Microsoft web development technologies such as Web API and SOAP/XML
• C#/.NET, .NET Core, and ASP.NET web application experience
• Cloud-based development experience in AWS or Azure
• Knowledge of cloud architecture and technologies
• Support/Incident management experience in a 24/7 environment
• SQL Server and SSIS experience
• DevOps experience with GitHub and Jenkins CI/CD pipelines or similar
• Windows Server 2016/2019+ and SQL Server 2019+ experience
• Experience of the full software development lifecycle
• You will write clean, scalable code, with a view towards design patterns and security best practices
• Understanding of Agile methodologies, working within the Scrum framework
• AWS knowledge
Must-Haves
C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)
.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)
Mandatory skills: .NET Core with Azure or AWS experience
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Drive - 17th Jan
Review Criteria:
Mandatory:
- Strong IT Infrastructure Lead Profile
- Must have 10+ years of hands-on experience in global IT Infrastructure management, including administration of Azure Entra ID, Office 365 Suite (Outlook, SharePoint, OneDrive), Azure Exchange, Microsoft Teams, Intune, and Windows Autopilot
- Must have strong expertise in Azure/Office 365 compliance and governance, including audit readiness, data governance policies, and global regulatory frameworks (e.g., GDPR, HIPAA)
- Must have solid experience managing IT operations end-to-end: user onboarding/offboarding, identity & access management, SAML/SSO integrations, and enterprise-wide provisioning/deprovisioning
- Must have strong knowledge and hands-on experience with FortiGate Firewalls, FortiGate WiFi, VPN, routing, subnetting, and overall network administration
- Must have proven capability in endpoint and device management: ManageEngine Endpoint Central, Assets Explorer, Antivirus Endpoint Security, JAMF (macOS), and multi-OS troubleshooting (Windows, Linux, Mac)
- Must have strong Jira/Confluence administration experience for global teams, including configuration, access control, and workflow governance
- Must have experience supporting, patching, updating, and troubleshooting multi-OS environments (Windows, Linux, macOS) with strong focus on security hardening and vulnerability fixes
- Must have strong hands-on experience in shell scripting / bash / PowerShell for automation, system tasks, and operational efficiency (see the sketch after this list)
- Must have experience in configuration and troubleshooting of Cisco/Polycom audio-video solutions and collaboration tools
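To ground the scripting requirement above, here is a minimal cross-platform Python sketch (standard library only) that collects a quick health and inventory snapshot and writes it as JSON for an asset or CMDB import. The output filename is an assumption; equivalent one-liners in Bash or PowerShell are equally acceptable for this kind of routine automation.

```python
# Minimal cross-platform sketch of routine IT automation: collect a quick
# health/inventory snapshot (hostname, OS, disk usage) and write it as JSON
# for an asset/CMDB import. Standard library only; output path is an assumption.
import json
import platform
import shutil
import socket
from datetime import datetime, timezone

def snapshot() -> dict:
    total, used, _free = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "disk_total_gb": round(total / 1e9, 1),
        "disk_used_pct": round(used / total * 100, 1),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = snapshot()
    with open("inventory_snapshot.json", "w") as fh:   # assumed output location
        json.dump(record, fh, indent=2)
    print(json.dumps(record, indent=2))
```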
Preferred:
- Experience with Highspot, HubSpot, Gong, or similar platforms for basic administration
- Strong background in cybersecurity frameworks, risk management, IT governance, incident response, and GRC practices
- Bachelor’s or master’s degree in information technology, Computer Science, or related field
- Candidates from NCR/Noida preferred
Role & Responsibilities:
The incumbent will be responsible for managing and enhancing the company’s IT infrastructure, cybersecurity, and IT operations globally. This role will require a strategic leader with a hands-on approach to overseeing infrastructure design, network security, data privacy, and compliance. The IT Head will drive initiatives to maintain a secure, efficient, and scalable technology environment that aligns with company’s business goals.
Key Responsibilities-
IT Infrastructure Management:
- Lead the design, implementation, and management of the IT infrastructure across company’s global offices.
- Oversee IT systems, network architecture, hardware, and software procurement, and ensure optimal performance and uptime.
- Plan and execute IT modernization and digital transformation initiatives to support business growth.
Cybersecurity and Risk Management:
- Establish and maintain robust cybersecurity policies, frameworks, and controls to protect the company’s data, systems, and intellectual property.
- Monitor, detect, and respond to cybersecurity threats, vulnerabilities, and breaches.
- Implement secure access controls, multi-factor authentication, and endpoint security measures to safeguard global IT environments.
Compliance and Data Privacy:
- Ensure compliance with global data privacy regulations, such as GDPR, HIPAA, and other applicable data protection laws.
- Support internal and external audits, ensuring adherence to regulatory and industry standards.
IT Governance and Strategy:
- Develop and execute the IT strategy in alignment with company’s business objectives.
- Create and enforce IT policies, procedures, and best practices for global operations.
- Prepare and manage the IT budget, ensuring cost-effective solutions for infrastructure and security investments.
Vendor Management and Contract Negotiations:
- Build and manage relationships with technology vendors, service providers, and consultants.
- Negotiate contracts to achieve favorable pricing and terms for the company.
Team Leadership and Development:
- Lead, mentor, and develop a high-performing IT team across multiple geographies.
- Foster a culture of innovation, collaboration, and continuous learning.
Ideal Candidate:
- Bachelor’s or master’s degree in information technology, Computer Science, or a related field.
- 10+ years of progressive experience in IT infrastructure, security, and operations, with at least 7 years in a senior leadership role.
- Strong experience in managing global IT environments, distributed teams, and multi-office setups.
- Administer and manage Azure Entra ID, Office 365 suite (Outlook, SharePoint, OneDrive), Azure Exchange, Microsoft Teams, Microsoft Intune, Windows Autopilot, and related services.
- Configure and manage SAML/Azure SSO integrations across enterprise applications.
- Ensure Office 365 compliance management, including audit readiness and data governance policies.
- Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
- Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
- Administer Jira, FortiGate firewalls and Wi-Fi, FortiGate EMS, antivirus solutions, and endpoint management systems.
- Provide network administration: routing, subnetting, VPNs, and firewall configurations.
- Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
- Manage JAMF, ManageEngine Endpoint Central, and Assets Explorer for device and asset management.
- Provide configuration and basic administration knowledge for Highspot, HubSpot, and Gong platforms.
- Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
- Provide remote support for end-users, ensuring quick resolution of technical issues.
- Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
- Collaborate with internal teams and external vendors to resolve issues and optimize systems.
- Working Knowledge of data privacy regulations (GDPR, HIPAA) and experience driving regulatory compliance.
- Strong project management, problem-solving, and stakeholder management skills.
- Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
- Ability to influence cross-functional teams and present technical information to non-technical stakeholders.
- Good experience in driving GRC (governance, risk, and compliance).
Perks, Benefits and Work Culture:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services such as GuardDuty and Azure Security Center.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Candidates must come from B2B SaaS product companies
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes (see the sketch below).
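To make the infrastructure-as-code security point above concrete, here is a hedged sketch of a policy-as-code style check a CI job could run against `terraform show -json` output, flagging security-group ingress rules open to the world. The resource layout is simplified and assumed; mature setups usually express such policies in OPA/Rego or Sentinel rather than ad-hoc scripts.

```python
# Hedged sketch of a policy-as-code style check on `terraform show -json tfplan`
# output: flag security-group ingress rules open to the world. The JSON field
# names follow Terraform's plan format, but the resource layout here is
# simplified/assumed and the script is illustrative only.
import json
import sys

def find_open_ingress(plan: dict) -> list[str]:
    findings = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_security_group":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                findings.append(f"{rc.get('address')}: port {rule.get('from_port')} open to 0.0.0.0/0")
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:          # path to the JSON plan, passed by the pipeline
        issues = find_open_ingress(json.load(fh))
    for issue in issues:
        print(f"POLICY VIOLATION: {issue}")
    sys.exit(1 if issues else 0)           # non-zero exit fails the pipeline stage
```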
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that the company offers.
Job Description:
We are looking for an experienced Team Lead – .NET Developer with 5+ years of hands-on experience in designing, developing, and leading enterprise-level applications. The ideal candidate will have strong technical expertise along with leadership capabilities to guide and mentor the development team.
Key Responsibilities
- Lead, mentor, and manage a team of .NET developers
- Design, develop, and maintain scalable web applications using .NET technologies
- Participate in requirement analysis, system design, and architecture decisions
- Ensure coding standards, best practices, and quality benchmarks are followed
- Conduct code reviews and provide constructive feedback to team members
- Collaborate with stakeholders, product managers, and QA teams
- Troubleshoot, debug, and resolve complex technical issues
- Ensure timely delivery of projects with high quality standards
Required Skills & Qualifications (Must Have)
- 6+ years of experience in .NET development
- Strong expertise in C#, ASP.NET, MVC, Web API, .NET Core
- Mandatory experience in Angular (latest versions preferred)
- Strong knowledge of TypeScript, HTML, CSS, JavaScript
- Experience with SQL Server / relational databases
- Experience in team handling and technical leadership
- Knowledge of RESTful APIs and integrations
- Familiarity with Agile/Scrum methodologies
Preferred Skills (Good to Have)
- Experience with Azure / Cloud services
- Knowledge of Microservices architecture
- Exposure to CI/CD pipelines
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning (see the sketch after this list).
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
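As a small illustration of the columnar-format handling mentioned above, the Python sketch below writes and reads a partitioned Parquet dataset with the pyarrow library; Dremio, Presto, or Trino would then query the same files in place. The dataset path, columns, and partition choice are assumptions for the example.

```python
# Hedged sketch using the pyarrow library (assumed installed): write a small
# partitioned Parquet dataset to a local path standing in for S3/ADLS/GCS,
# then read it back with a filter, the way a lakehouse engine prunes partitions.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "region": ["us", "us", "eu", "eu"],
    "order_id": [1, 2, 3, 4],
    "amount": [10.0, 25.5, 7.25, 99.0],
})

# Partition by region; in practice root_path would be an object-store URI.
pq.write_to_dataset(table, root_path="lake/orders", partition_cols=["region"])

# Read back only the "eu" partition using a filter (partition pruning).
eu_orders = pq.read_table("lake/orders", filters=[("region", "=", "eu")])
print(eu_orders)
```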
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
About QX: QX Impact was launched with a mission to make AI accessible and affordable and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers and may even lose them; without insights, businesses won't be able to deliver differentiated products and services; and without insights, businesses can't reach the level of "Operational Excellence" that is crucial to remain competitive, meet rising customer expectations, expand into new markets, and digitalize.
Job Summary:
We are seeking an experienced and driven Technical Project Manager / Technical Delivery Manager to lead complex, high-impact data analytics and data science projects for global clients. This role demands a unique blend of project management expertise, technical depth in cloud and data technologies, and the ability to collaborate across cross-functional teams. You will be responsible for ensuring the successful delivery of data platforms, data products, and enterprise analytics solutions that drive business value.
Key Responsibilities:
Project & Delivery Management
• Lead the full project lifecycle for enterprise-scale data platforms, including requirement gathering, development, testing, deployment, and post-production support.
• Own the delivery of Data Warehousing and Data Lakehouse solutions on cloud platforms (Azure, AWS, or GCP).
• Prepare and maintain detailed project plans (Microsoft Project Plan) and align them with the Statement of Work (SOW) and client expectations.
• Utilize hybrid project methodologies (Agile + Waterfall) for managing scope, budget, and timelines.
• Monitor key project KPIs (e.g., SLA, MTTR, MTTA, MTBF) and ensure adherence using tools like ServiceNow.
Data Platform & Architecture Oversight
• Collaborate with data engineers and architects to guide the implementation of scalable Data Warehouses (e.g., Redshift, Synapse) and Data Lakehouse architectures (e.g., Databricks, Delta Lake).
• Ensure data platform solutions meet performance, security, and governance standards.
• Understand and help manage data integration pipelines, ETL/ELT processes, and BI/reporting requirements.
Client Engagement & Stakeholder Management
• Serve as the primary liaison for US/UK clients; manage regular status updates, escalation paths, and expectations across stakeholders.
• Conduct WSRs, MSRs, and QBRs with clients and internal teams to drive transparency and performance reviews.
• Facilitate team meetings, highlight risks or blockers, and ensure consistent stakeholder alignment.
Technical Leadership & Troubleshooting
• Provide hands-on support and guidance in data infrastructure troubleshooting using tools like Splunk, AppDynamics, and Azure Monitor.
• Lead incident, problem, and change management processes with data platform operations in mind.
• Identify automation opportunities and propose technical process improvements across data pipelines and workflows.
Governance, Documentation & Compliance
• Create and maintain SOPs, runbooks, implementation documents, and architecture diagrams.
• Manage project compliance related to data privacy, security, and internal/external audits.
• Initiate and track Change Requests (CRs) and look for revenue expansion opportunities with clients.
Continuous Improvement & Innovation
• Participate in and lead at least three internal process optimization or innovation initiatives annually.
• Work with engineering, analytics, and DevOps teams to improve CI/CD pipelines and data delivery workflows.
• Monitor production environments to reduce deployment issues and improve time-to-insight.
Must Have:
• 10+ years of experience in technical project delivery, with a strong focus on data analytics, BI, and cloud data platforms.
• Strong hands-on experience with SQL and data warehouse technologies like Snowflake, Synapse, Redshift, BigQuery, etc.
• Proven experience delivering Data Warehouse and Data Lakehouse solutions.
• Familiarity with tools such as Redshift, Synapse, BigQuery, Databricks, Delta Lake.
• Strong cloud knowledge with Azure, AWS, or GCP.
• Proficiency in project management tools like Microsoft Project Plan (MPP), JIRA, Confluence, and ServiceNow.
• Expertise in Agile project methodologies.
• Excellent communication skills, both verbal and written, with no MTI or grammatical errors.
• Hands-on experience working with global delivery models (onshore/offshore).
Good-to-Have:
• PMP or Scrum Master certification.
• Understanding of ITIL processes and DataOps practices.
• Experience managing end-to-end cloud data transformation projects.
• Experience in project estimation, proposal writing, and RFP handling.
Competencies:
• Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
• Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
• Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
• Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
• Optimize Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
• Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
• Work on impactful projects that make a difference across industries.
• Opportunities for professional growth and continuous learning.
• Competitive salary and benefits package.
Application Details
Ready to make an impact? Apply today and become part of the QX Impact team!
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Ideal Candidate
- Bachelor’s or master’s in computer science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
ROLES AND RESPONSIBILITIES:
We are seeking a skilled Data Engineer who can work independently on data pipeline development, troubleshooting, and optimisation tasks. The ideal candidate will have strong SQL skills, hands-on experience with Databricks, and familiarity with cloud platforms such as AWS and Azure. You will be responsible for building and maintaining reliable data workflows, supporting analytical teams, and ensuring high-quality, secure, and accessible data across the organisation.
KEY RESPONSIBILITIES:
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
- Build, optimise, and troubleshoot SQL queries, transformations, and Databricks data processes.
- Work with large datasets to deliver efficient, reliable, and high-performing data solutions.
- Collaborate closely with analysts, data scientists, and business teams to support data requirements.
- Ensure data quality, availability, and security across systems and workflows.
- Monitor pipeline performance, diagnose issues, and implement improvements.
- Contribute to documentation, standards, and best practices for data engineering processes.
IDEAL CANDIDATE:
- Proven experience as a Data Engineer or in a similar data-focused role (3+ years).
- Strong SQL skills with experience writing and optimising complex queries.
- Hands-on experience with Databricks for data engineering tasks.
- Experience with cloud platforms such as AWS and Azure.
- Understanding of ETL/ELT concepts, data modelling, and pipeline orchestration.
- Familiarity with Power BI and data integration with BI tools.
- Strong analytical and troubleshooting skills, with the ability to work independently.
- Experience working end-to-end on data engineering workflows and solutions.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.
ROLES AND RESPONSIBILITIES:
Standardization and Governance:
- Establishing and maintaining project management standards, processes, and methodologies.
- Ensuring consistent application of project management policies and procedures.
- Implementing and managing project governance processes.
Resource Management:
- Facilitating the sharing of resources, tools, and methodologies across projects.
- Planning and allocating resources effectively.
- Managing resource capacity and forecasting future needs.
Communication and Reporting:
- Ensuring effective communication and information flow among project teams and stakeholders.
- Monitoring project progress and reporting on performance.
- Communicating strategic work progress, including risks and benefits.
Project Portfolio Management:
- Supporting strategic decision-making by aligning projects with organizational goals.
- Selecting and prioritizing projects based on business objectives.
- Managing project portfolios and ensuring efficient resource allocation across projects.
Process Improvement:
- Identifying and implementing industry best practices into workflows.
- Improving project management processes and methodologies.
- Optimizing project delivery and resource utilization.
Training and Support:
- Providing training and support to project managers and team members.
- Offering project management tools, best practices, and reporting templates.
Other Responsibilities:
- Managing documentation of project history for future reference.
- Coaching project teams on implementing project management steps.
- Analysing financial data and managing project costs.
- Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
- Advising and supporting senior management.
IDEAL CANDIDATE:
- 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
- Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
- Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
- Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
- Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
- Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
- Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
- Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
Review Criteria
- Strong IT Engineer Profile
- 4+ years of hands-on experience in Azure/Office 365 compliance and management, including policy enforcement, audit readiness, DLP, security configurations, and overall governance.
- Must have strong experience handling user onboarding/offboarding, identity & access provisioning, MFA, SSO configurations, and lifecycle management across Windows/Mac/Linux environments.
- Must have proven expertise in IT Inventory Management, including asset tracking, device lifecycle, CMDB updates, and hardware/software allocation with complete documentation.
- Hands-on experience configuring and managing FortiGate Firewalls, including routing, VPN setups, policies, NAT, and overall network security.
- Must have practical experience with FortiGate WiFi, AP configurations, SSID management, troubleshooting connectivity issues, and securing wireless environments.
- Must have strong knowledge and hands-on experience with Antivirus Endpoint Central (or equivalent) for patching, endpoint protection, compliance, and threat remediation.
- Must have solid understanding of Networking, including routing, switching, subnetting, DHCP, DNS, VPN, LAN/WAN troubleshooting.
- Must have strong troubleshooting experience across Windows, Linux, and macOS environments for system issues, updates, performance, and configurations.
- Must have expertise in Cisco/Polycom A/V solutions, including setup, configuration, video conferencing troubleshooting, and meeting room infrastructure support.
- Must have hands-on experience in Shell Scripting / Bash / PowerShell for automation of routine IT tasks, monitoring, and system efficiencies.
Job Specific Criteria:
- CV Attachment is mandatory
- Q1. Please share details of experience in troubleshooting (Rate out of 10, 10 being highly experienced) A. Windows Troubleshooting B. Linux Troubleshooting C. Macbook Troubleshooting
- Q2. Please share details of experience in below process (Rate out of 10, 10 being highly experienced) A. User Onboarding/Offboarding B. Inventory Management
- Q3. Please share details of experience in below tools and administrations (Rate out of 10, 10 being highly experienced) A. FortiGate Firewall B. FortiGate WiFi C. Antivirus Endpoint Central D. Networking E. Cisco/Polycom A/V solutions F. Shell Scripting/Bash/PowerShell G. Azure/Office 365 compliance and management
- Q4. Are you okay with a face-to-face (F2F) round in Noida?
- Q5. What's your current company?
- Q6. Are you okay with rotational shifts (10 am - 7 pm and 2 pm - 11 pm)?
Role & Responsibilities:
We are seeking an experienced IT Infrastructure/System Administrator to manage, secure, and optimize our IT environment. The ideal candidate will have expertise in enterprise-grade tools, strong troubleshooting skills, and hands-on experience configuring secure integrations, managing endpoint deployments, and ensuring compliance across platforms.
- Administer and manage the Office 365 suite (Outlook, SharePoint, OneDrive, Teams, etc.) and related services/configurations.
- Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
- Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
- Administer Jira, FortiGate firewalls and Wi-Fi, antivirus solutions, and endpoint management systems.
- Provide network administration: routing, subnetting, VPNs, and firewall configurations.
- Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
- Manage Assets Explorer for device and asset management/inventory.
- Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
- Provide remote support for end-users, ensuring quick resolution of technical issues.
- Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
- Collaborate with internal teams and external vendors to resolve issues and optimize systems.
- Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
Ideal Candidate:
- Proven hands-on experience with:
- Office 365 administration and compliance.
- User onboarding/offboarding processes.
- Compliance, audit, and inventory management tools.
- Jira administration, FortiGate firewall, Wi-Fi, and antivirus solutions.
- Networking fundamentals: subnetting, routing, switching.
- Patch management, updates, and vulnerability remediation across Windows, Linux, and macOS.
- Assets Explorer/inventory management
- Strong troubleshooting, documentation, and communication skills.
Preferred Skills:
- Scripting knowledge in Bash, PowerShell for automation.
- Experience working with Jira and Confluence.
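As a hedged illustration of the scripting-for-automation skills mentioned above (the role lists Bash and PowerShell; Python is used here purely for readability), the sketch below collects a few basic endpoint health facts using only the standard library. The warning threshold is an assumption.

```python
import json
import platform
import shutil
import socket
from datetime import datetime, timezone

DISK_WARN_RATIO = 0.90  # assumed threshold: flag hosts above 90% disk usage


def collect_health_report() -> dict:
    """Gather basic host facts that a routine monitoring job might log."""
    total, used, free = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "disk_used_ratio": round(used / total, 3),
        "disk_free_gb": round(free / 1024**3, 1),
        "disk_warning": (used / total) > DISK_WARN_RATIO,
    }


if __name__ == "__main__":
    # In practice the report would be shipped to a monitoring or ticketing system.
    print(json.dumps(collect_health_report(), indent=2))
```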
Perks, Benefits and Work Culture:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits

Global digital transformation solutions provider.
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes:
Interpret the application/feature/component design to develop the same in accordance with specifications.
Code, debug, test, document, and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
Optimize efficiency, cost, and quality.
Influence and improve customer satisfaction.
Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to project schedule / timelines
Number of technical issues uncovered during the execution of the project
Number of defects in the code
Number of defects post-delivery
Number of non-compliance issues
On time completion of mandatory compliance trainings
Outputs Expected:
Code:
Code as per design
Follow coding standards, templates, and checklists
Review code for team and peers
Documentation:
Create/review templates, checklists, guidelines, and standards for design/process/development
Create/review deliverable documents: design documentation, requirements, test cases/results
Configure:
Define and govern the configuration management plan
Ensure compliance from the team
Test:
Review and create unit test cases, scenarios, and execution
Review the test plan created by the testing team
Provide clarifications to the testing team
Domain relevance:
Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions to customers
Complete relevant domain certifications
Manage Project:
Manage delivery of modules and/or manage user stories
Manage Defects:
Perform defect RCA and mitigation
Identify defect trends and take proactive measures to improve quality
Estimate:
Create and provide input for effort estimation for projects
Manage knowledge:
Consume and contribute to project-related documents, SharePoint, libraries, and client universities
Review the reusable documents created by the team
Release:
Execute and monitor the release process
Design:
Contribute to creation of design (HLD, LLD, SAD)/architecture for Applications/Features/Business Components/Data Models
Interface with Customer:
Clarify requirements and provide guidance to the development team
Present design options to customers
Conduct product demos
Manage Team:
Set FAST goals and provide feedback
Understand aspirations of team members and provide guidance, opportunities, etc.
Ensure the team is engaged in the project
Certifications:
Take relevant domain/technology certifications
Skill Examples:
Explain and communicate the design / development to the customer
Perform and evaluate test results against product specifications
Break down complex problems into logical components
Develop user interfaces and business software components
Use data models
Estimate time and effort required for developing / debugging features / components
Perform and evaluate tests in the customer or target environment
Make quick decisions on technical/project-related challenges
Manage a team, mentor, and handle people-related issues in the team
Maintain high motivation levels and positive dynamics in the team.
Interface with other teams, designers, and other parallel practices
Set goals for self and team. Provide feedback to team members
Create and articulate impactful technical presentations
Follow high level of business etiquette in emails and other business communication
Drive conference calls with customers, addressing customer questions
Proactively ask for and offer help
Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
Build confidence with customers by meeting the deliverables on time with quality.
Estimate time, effort, and resources required for developing/debugging features/components
Make appropriate utilization of software/hardware.
Strong analytical and problem-solving abilities
Knowledge Examples:
Appropriate software programs / modules
Functional and technical designing
Programming languages – proficient in multiple skill clusters
DBMS
Operating Systems and software platforms
Software Development Life Cycle
Agile – Scrum or Kanban Methods
Integrated development environment (IDE)
Rapid application development (RAD)
Modelling technology and languages
Interface definition languages (IDL)
Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
About the Role:
We are looking for a Senior Software Developer with strong experience in .NET development and Microsoft Azure to help build and scale our next-generation FinTech platforms. You will work on secure, high-availability systems that power core financial services, collaborating with cross-functional teams to deliver features that directly impact our customers. You'll play a key role in developing backend services, cloud integrations, and microservices that are performant, secure, and compliant with financial regulations.
Key Responsibilities:
- Design, develop, and maintain backend services and APIs using C# and .NET Core.
- Build and deploy cloud-native applications on Microsoft Azure, leveraging services such as App Services, Azure Functions, Key Vault, Service Bus, and Azure SQL.
- Contribute to architecture decisions and write clean, maintainable, well-tested code.
- Participate in code reviews, technical planning, and sprint ceremonies in an Agile environment.
- Collaborate with QA, DevOps, Product, and Security teams to deliver robust, secure solutions.
- Ensure applications meet high standards of security, reliability, and scalability, especially in a regulated FinTech environment.
- Support and troubleshoot production issues and contribute to continuous improvement.
Required Skills & Qualifications:
- 5–8 years of experience in software development, primarily with C# / .NET Core.
- Strong hands-on experience with Microsoft Azure, including Azure App Services, Azure Functions, Azure SQL, Key Vault, and Service Bus.
- Experience building RESTful APIs, microservices, and integrating with third-party services.
- Proficiency with Azure DevOps, Git, and CI/CD pipelines.
- Solid understanding of software design principles, object-oriented programming, and secure coding practices.
- Familiarity with Agile/Scrum development methodologies.
- Bachelor's degree in Computer Science, Engineering, or a related field.
Skills: .NET, C#, Azure
Must-Haves
.NET with Azure Developer - Required: Function Apps, Logic Apps, Event Grid, Service Bus, Durable Functions
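The must-haves above focus on Azure Function Apps and related messaging services. As a rough sketch only (shown with the Azure Functions Python v2 programming model rather than .NET, with an illustrative route), an HTTP-triggered function can be as small as:

```python
import azure.functions as func

# v2 programming model: triggers are registered on a FunctionApp object in function_app.py.
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)


@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Hypothetical query parameter with a safe default.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```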
We are seeking a highly skilled and experienced Senior Full Stack Developer with 8+ years of experience to join our dynamic team. The ideal candidate will have a strong background in both front-end and back-end development, with expertise in .NET, Angular, TypeScript, Azure, SQL Server, Agile methodologies, and design patterns. Experience with DocuSign is a plus.
Responsibilities:
- Design, develop, and maintain web applications using .NET, Angular, and TypeScript.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Implement and maintain cloud-based solutions using Azure.
- Develop and optimize SQL Server databases.
- Follow Agile methodologies to manage project tasks and deliverables.
- Apply design patterns and best practices to ensure high-quality, maintainable code.
- Troubleshoot and resolve software defects and issues.
- Mentor and guide junior developers.
Requirements:
- Bachelor's degree in computer science, Engineering, or a related field.
- Proven experience as a Full Stack Developer or similar role.
- Strong proficiency in .NET, Angular, and TypeScript.
- Experience with Azure cloud services.
- Proficient in SQL Server and database design.
- Familiarity with Agile methodologies and practices.
- Solid understanding of design patterns and software architecture principles.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Experience with DocuSign is a plus.
Job Position: Lead II - Software Engineering
Domain: Information technology (IT)
Location: India - Thiruvananthapuram
Salary: Best in Industry
Job Positions: 1
Experience: 8 - 12 Years
Skills: .NET, SQL Azure, REST API, Vue.js
Notice Period: Immediate – 30 Days
Job Summary:
We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.
Key Responsibilities:
- Design, develop, and maintain scalable and secure .NET backend APIs.
- Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
- Lead and contribute to Agile software delivery processes (Scrum, Kanban).
- Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
- Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
- Implement unit and integration testing to ensure code quality and system stability.
- Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
- Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
- Mentor and coach engineers within a co-located or distributed team environment.
- Maintain best practices in code versioning, testing, and documentation.
Mandatory Skills:
- 7+ years of .NET development experience, including API design and development
- Strong experience with Azure Cloud services, including:
- Web/Container Apps
- Azure Functions
- Azure SQL Server
- Solid understanding of Agile development methodologies (Scrum/Kanban)
- Experience in CI/CD pipeline design and implementation
- Proficient in Infrastructure as Code (IaC) – preferably Terraform
- Strong knowledge of RESTful services and JSON-based APIs
- Experience with unit and integration testing techniques
- Source control using Git
- Strong understanding of HTML, CSS, and cross-browser compatibility
Good-to-Have Skills:
- Experience with Kubernetes and Docker
- Knowledge of JavaScript UI frameworks, ideally Vue.js
- Familiarity with JIRA and Agile project tracking tools
- Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
- Experience mentoring or coaching junior developers
- Strong problem-solving and communication skills
Sr Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference.
At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Position Responsibilities :
About the Role
We are looking for a skilled and motivated Senior Software Developer to join our team responsible for developing and maintaining a robust ERP solution used by approximately 400 customers and more than 30,000 users worldwide. The system is built using C# (.NET Core), leverages SQL Server for data management, and is hosted in the Microsoft Azure cloud.
This role offers the opportunity to work on a mission-critical product, contribute to architectural decisions, and help shape the future of our cloud-native ERP platform.
Key Responsibilities
- Design, develop, and maintain features and modules within the ERP system using C# (.NET Core)
- Optimize and manage SQL Server database interactions for performance and scalability
- Collaborate with cross-functional teams, including QA, DevOps, Product Management, and Support
- Participate in code reviews, architecture discussions, and technical planning
- Contribute to the adoption and improvement of CI/CD pipelines and cloud deployment practices
- Troubleshoot and resolve complex technical issues across the stack
- Ensure code quality, maintainability, and adherence to best practices
- Stay current with emerging technologies and recommend improvements where applicable
Qualifications
- Curiosity, passion, teamwork, and initiative
- Strong experience with C# and .NET Core in enterprise application development
- Solid understanding of SQL Server, including query optimization and schema design
- Experience with Azure cloud services (App Services, Azure SQL, Storage, etc.)
- Ability to use agentic AI as a development aid, with a critical-thinking mindset
- Familiarity with agile development methodologies and DevOps practices
- Ability to work independently and collaboratively in a fast-paced environment
- Excellent problem-solving and communication skills
- Master's degree in Computer Science or equivalent; 5+ years of relevant work experience
- Experience with ERP systems or other complex business applications is a plus
What We Offer
- A chance to work on a product that directly impacts thousands of users worldwide
- A collaborative and supportive engineering culture
- Opportunities for professional growth and technical leadership
- Competitive salary and benefits package
Role & responsibilities
- Develop and maintain server-side applications using Go Lang.
- Design and implement scalable, secure, and maintainable RESTful APIs and microservices.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Optimize applications for performance, reliability, and scalability.
- Write clean, efficient, and reusable code that adheres to best practices.
Preferred candidate profile
- Minimum 5 years of working experience in Go Lang development.
- Proven experience in developing RESTful APIs and microservices.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
- Familiarity with CI/CD pipelines and DevOps practices
Role: Data Scientist (Python + R Expertise)
Exp: 8 -12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
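As a minimal sketch of the modeling work implied by the skills above, the snippet below trains and evaluates a logistic regression classifier with scikit-learn on synthetic data; a real engagement would involve domain data, feature engineering, and more rigorous validation.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

print(f"Hold-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```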
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested, kindly share your updated resume at 82008 31681.
MANDATORY:
- High-calibre Data Architect / Data Engineering Manager / Director profile
- Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
- Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang
- Must have strong experience in large data technologies, tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto etc.
- Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.
- Must have managed a team of at least 5 Data Engineers (the leadership role should be evident in the CV)
- Product companies preferred (ideally high-scale, data-heavy companies)
PREFERRED:
- Must be from Tier-1 colleges, IIT preferred.
- Candidates must have spent a minimum of 3 years in each company.
- Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company
ROLES & RESPONSIBILITIES:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.
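Because the responsibilities above revolve around orchestrated data workflows (Airflow is listed among the mandatory tools), here is a hedged sketch of a small Airflow DAG; the DAG id, schedule, and placeholder callables are assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting...")  # placeholder: pull data from a source system


def transform():
    print("transforming...")  # placeholder: clean and reshape the extracted data


with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ argument name; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```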
IDEAL CANDIDATE:
- 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
- Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.
PREFERRED QUALIFICATIONS:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.
If interested please share your resume with details :
Total Experience -
Relevant Experience in Java, Spring, Data Structures, Algorithms, SQL -
Relevant Experience in Cloud - AWS/Azure/GCP -
Current CTC -
Expected CTC -
Notice Period -
Reason for change -
About Us
We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.
Role Overview
We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.
Key Responsibilities
- Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
- Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
- Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences.
- Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
- Collaborate with product, design, and AI research teams to bring experimental features into production.
- Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
- Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
- Champion best practices in coding, testing, DevOps, and Responsible AI integration.
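A minimal sketch of the kind of Python backend service described above, using FastAPI (one of the frameworks named); the product model and in-memory catalog are illustrative assumptions, not a reference implementation.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="catalog-service")


class Product(BaseModel):
    id: int
    name: str
    price: float


# Hypothetical in-memory catalog; a real service would back this with a database.
_CATALOG = {1: Product(id=1, name="Sample Widget", price=9.99)}


@app.get("/products/{product_id}", response_model=Product)
def get_product(product_id: int) -> Product:
    product = _CATALOG.get(product_id)
    if product is None:
        raise HTTPException(status_code=404, detail="Product not found")
    return product
```

During development such a service would typically be run with an ASGI server, e.g. `uvicorn main:app --reload`.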
Required Skills & Experience
- 5+ years of professional full-stack development experience.
- Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
- Strong backend expertise in Python (Django, FastAPI, Flask).
- Experience with cloud services (Azure preferred; AWS/GCP is a plus).
- Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
- Solid understanding of databases (SQL & NoSQL), caching, and API design.
- Familiarity with frontend frameworks such as React, Angular, or Vue.
- Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
- Strong problem-solving skills, adaptability, and a product-first mindset.
Nice to Have
- Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
- Experience in scalable SaaS architectures and subscription platforms.
- Familiarity with C2PA, identity security, or compliance-driven development.
What We Offer
- Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
- A high-impact role with autonomy and visibility.
- Competitive compensation, equity opportunities, and growth into leadership roles.
- Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.
We are hiring a Senior Data Engineer with strong expertise in Databricks, Azure Data Factory, and PySpark.
Must Have:
- Databricks, ADF, PySpark
- Mastery: AWS/Azure/SAP, ELT, Data Modeling
- Skills: Data Integration & Processing, GitHub/GitHub Actions, Azure DevOps, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest
Responsibilities:
- Build and optimize scalable data pipelines
- Architect & implement ELT/data models
- Manage data ingestion, integration, and processing workflows
- Enable CI/CD with DevOps tools
- Ensure code quality & reliability
Tableau Server Administrator (10+ Yrs Exp.) 📊🔒
📍Location: Remote
🗓️ Experience: 10+ years
Mandatory Skills & Qualifications:
1. Proven expertise in Tableau architecture, clustering, scalability, and high availability.
2. Proficiency in PowerShell, Python, or Shell scripting.
3. Experience with cloud platforms (AWS, Azure, GCP) and Tableau Cloud.
4. Familiarity with database systems (SQL Server, Oracle, Snowflake).
5. Any relevant certification is a plus.
Job Title : Senior .NET Developer
Experience : 8+ Years
Location : Trivandrum / Kochi
Notice Period : Immediate
Working Hours : 12 PM – 9 PM IST (4-hour mandatory overlap with EST)
Job Summary :
We are hiring a Senior .NET Developer with strong hands-on experience in .NET Core (6/8+), C#, Azure Cloud Services, Azure DevOps, and SQL Server. This is a client-facing role for a US-based client, requiring excellent communication and coding skills, along with experience in cloud-based enterprise application development.
Mandatory Skills :
.NET Core 6/8+, C#, Entity Framework/Core, REST APIs, JavaScript, jQuery, MS SQL Server, Azure Cloud Services (Functions, Service Bus, Event Grid, Key Vault, SQL Azure), Azure DevOps (CI/CD), Unit Testing (XUnit/MSTest), Strong Communication Skills.
Key Responsibilities :
- Design, develop, and maintain scalable applications using .NET Core, C#, REST APIs, SQL Server
- Work with Azure Services: Functions, Durable Functions, Service Bus, Event Grid, Key Vault, Storage Queues, SQL Azure
- Implement and manage CI/CD pipelines using Azure DevOps
- Participate in Agile/Scrum ceremonies, collaborate with cross-functional teams
- Perform troubleshooting, debugging, and performance tuning
- Ensure high-quality code through unit testing and technical documentation
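The responsibilities above include Azure Service Bus integration. Although this role is .NET-centric, the queue send/receive pattern is the same across SDKs; the hedged Python sketch below uses the azure-servicebus package, with the connection string and queue name assumed to come from configuration.

```python
import os

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Assumed to be supplied via app configuration or Key Vault, never hard-coded.
CONNECTION_STRING = os.environ["SERVICEBUS_CONNECTION_STRING"]
QUEUE_NAME = os.environ.get("SERVICEBUS_QUEUE", "orders")


def send_order_event(payload: str) -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
            sender.send_messages(ServiceBusMessage(payload))


def drain_queue_once() -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5) as receiver:
            for message in receiver:
                print("received:", str(message))
                receiver.complete_message(message)  # remove the message from the queue
```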
Primary Skills (Must-Have) :
- .NET Core 6/8+, C#, Entity Framework / EF Core
- REST APIs, JavaScript, jQuery
- SQL Server: Stored Procedures, Views, Functions
- Azure Cloud (2+ years): Functions, Service Bus, Event Grid, Blob Storage, SQL Azure, Monitoring
- Unit Testing (XUnit, MSTest), CI/CD (Classic/YAML pipelines)
- Strong knowledge of design patterns, architecture, and microservices
- Excellent communication and leadership skills
Secondary Skills (Nice to Have) :
- AngularJS/ReactJS
- Azure APIM, ADF, Logic Apps
- Azure Kubernetes Service (AKS)
- Application support & operational monitoring
Certifications (Preferred) :
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Relevant Azure/.NET/Cloud certifications
Hybrid work mode
(Azure) EDW: Experience loading star-schema data warehouses using framework architectures, including loading Type 2 dimensions; ingesting data from various sources (structured and semi-structured); hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Storage Gen2, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation - BRD, FRD, user story creation.
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Loadbalancer, Cloud function, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
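As a small, hedged illustration of the serverless workloads mentioned in this section, the snippet below shows an HTTP-triggered Cloud Function written against the Python Functions Framework; the greeting logic is a placeholder.

```python
import functions_framework


@functions_framework.http
def hello_http(request):
    """HTTP Cloud Function entry point; `request` is a Flask request object."""
    name = request.args.get("name", "world")  # hypothetical query parameter
    return f"Hello, {name}!", 200
```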
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Position : .NET Architect (Blazor)
Experience : 7+ Years
Location : Pan India
Notice Period : Immediate Joiners / Currently Serving Notice
Key Responsibilities :
- Design, develop, and maintain enterprise-grade Blazor applications.
- Strong expertise in .NET Core and advanced C# (OOPs, LINQ, Lambda, Expressions).
- Experience with Azure PaaS services (App Services, Azure Functions, Logic Apps, Cosmos DB/SQL Azure).
- Ensure scalable and maintainable architecture across the application lifecycle.
Requirements :
- Proven experience as a .NET Architect with strong hands-on in Blazor.
- Deep understanding of modern web development and cloud-native applications on Azure.
🚀 Job Title : Python AI/ML Engineer
💼 Experience : 3+ Years
📍 Location : Gurgaon (Work from Office, 5 Days/Week)
📅 Notice Period : Immediate
Summary :
We are looking for a Python AI/ML Engineer with strong experience in developing and deploying machine learning models on Microsoft Azure.
🔧 Responsibilities :
- Build and deploy ML models using Azure ML.
- Develop scalable Python applications with cloud-first design.
- Create data pipelines using Azure Data Factory, Blob Storage & Databricks.
- Optimize performance, fix bugs, and ensure system reliability.
- Collaborate with cross-functional teams to deliver intelligent features.
✅ Requirements :
- 3+ Years of software development experience.
- Strong Python skills; experience with scikit-learn, pandas, NumPy.
- Solid knowledge of SQL and relational databases.
- Hands-on with Azure ML, Data Factory, Blob Storage.
- Familiarity with Git, REST APIs, Docker.
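To ground the Blob Storage and pipeline items above, here is a hedged sketch of one pipeline step that uploads a locally produced dataset to Azure Blob Storage using the azure-storage-blob SDK; the container, blob path, and connection settings are assumptions.

```python
import os

from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed configuration
CONTAINER = "curated"


def upload_dataset(local_path: str, blob_name: str) -> None:
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob_client = service.get_blob_client(container=CONTAINER, blob=blob_name)
    with open(local_path, "rb") as data:
        # overwrite=True keeps the step idempotent across re-runs.
        blob_client.upload_blob(data, overwrite=True)


if __name__ == "__main__":
    upload_dataset("features.parquet", "ml/features/2024-01-01/features.parquet")
```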
Azure DE
Primary Responsibilities -
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure; create data models for analytics purposes
- Utilizing Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations
- Use Azure Data Factory and Databricks to assemble large, complex data sets
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data security and compliance
- Collaborate with data engineers, and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills:
- Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
The DEs should be well versed in coding, Spark Core, and data ingestion using Azure, and they need good communication skills. They should also have core Azure DE skills and coding skills (PySpark, Python, and SQL).
Out of the 7 open demands, 5 of the Azure Data Engineers should have a minimum of 5 years of relevant Data Engineering experience.
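The responsibilities above call out data validation and cleansing; a hedged PySpark sketch of that kind of step is shown below. The storage paths, column names, and rules are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_cleansing").getOrCreate()

# Hypothetical ADLS Gen2 paths.
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/customers/")

cleansed = (
    raw
    .dropDuplicates(["customer_id"])                      # keep one row per customer
    .filter(F.col("customer_id").isNotNull())             # drop records missing the key
    .withColumn("email", F.lower(F.trim("email")))        # normalise email formatting
    .withColumn("signup_date", F.to_date("signup_date"))  # enforce a proper date type
)

cleansed.write.mode("overwrite").parquet("abfss://curated@examplelake.dfs.core.windows.net/customers/")
```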
Job Title : Software Engineer (.NET, Azure)
Location : Remote
Employment Type : [Full-time/Contract]
Experience Level : 3+ Years
Job Summary :
We are looking for a skilled Software Engineer (.NET, Azure) to develop high-quality, secure, and scalable software solutions. You will collaborate with product owners, security specialists, and engineers to deliver robust applications.
Responsibilities :
- Develop and maintain server-side software using .NET (C#), SQL, and Azure.
- Build and secure RESTful APIs.
- Deploy and manage applications on Azure.
- Ensure version control using Azure DevOps/Git.
- Write clean, maintainable, and scalable code.
- Debug and optimize application performance.
Qualifications :
- 3+ Years of server-side development experience.
- Strong proficiency in .NET, SQL, and Azure.
- Experience with RESTful APIs and DevOps/Git.
- Ability to work collaboratively and independently.
- Familiarity with Scrum methodologies.
What you'll be doing:
· Architecting and delivering mid to large scale Enterprise Applications on the Microsoft platform.
· Recommending and participating in activities related to the design, development, and maintenance of the Enterprise Application Architecture.
· Delivering projects on time with quality.
· Providing technical leadership regarding technology or projects to customers and team members.
· Sharing best practices and lessons learned, and constantly updating the technical system architecture requirements based on changing technologies and knowledge related to recent, current, and upcoming vendor products and solutions.
· Participating in technical forums and discussions within the team and with clients.
· Guiding and mentoring the team in terms of technical solutioning.
What you'll bring to the team:
· You have 15+ years of experience, with strong design and coding experience in the .NET framework using design patterns and OOP concepts.
· You have practical knowledge and a good understanding of cross-technology and functional domains, programming concepts, and a logical approach to problem solving.
· You hold rich experience in web development technologies including ASP.NET, MVC5, JavaScript, jQuery, HTML5, AJAX, XML, and CSS.
· You have experience in web application architecture and development, with hands-on expertise in delivering solutions and customizing C#, ASP.NET applications using development tools like Visual Studio 2010/2012.
About the Company-
AdPushup is an award-winning ad revenue optimization platform and Google Certified Publishing Partner (GCPP), helping hundreds of web publishers grow their revenue using cutting-edge technology, premium demand partnerships, and proven ad ops expertise.
Our team is a mix of engineers, marketers, product evangelists, and customer success specialists, united by a common goal of helping publishers succeed. We have a work culture that values expertise, ownership, and a collaborative spirit.
Job Overview- Java Backend- Lead Role :-
We are seeking a highly skilled and motivated Software Engineering Team Lead to join our dynamic team. The ideal candidate will have a strong technical background, proven leadership experience, and a passion for mentoring and developing a team of talented engineers. This role will be pivotal in driving the successful delivery of high-quality software solutions and fostering a collaborative and innovative work environment.
Exp- 5+ years
Location- New Delhi
Work Mode- Hybrid
Key Responsibilities:-
● Leadership and Mentorship: Lead, mentor, and develop a team of software engineers, fostering an environment of continuous improvement and professional growth.
● Project Management: Oversee the planning, execution, and delivery of software projects, ensuring they meet quality standards, timelines, and budget constraints.
● Technical Expertise: Provide technical guidance and expertise in software design, architecture, development, and best practices. Stay updated with the latest industry trends and technologies. Design, develop, and maintain high-quality applications, taking full, end-to-end ownership, including writing test cases, setting up monitoring, etc.
● Collaboration: Work closely with cross-functional teams to define project requirements, scope, and deliverables.
● Code Review and Quality Assurance: Conduct code reviews to ensure adherence to coding standards, best practices, and overall software quality. Implement and enforce quality assurance processes.
● Problem Solving: Identify, troubleshoot, and resolve technical challenges and bottlenecks. Provide innovative solutions to complex problems.
● Performance Management: Set clear performance expectations, provide regular feedback, and conduct performance evaluations for team members.
● Documentation: Ensure comprehensive documentation of code, processes, and project-related information.
Qualifications:-
● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
● Experience: Minimum of 5 years of experience in software development.
● Technical Skills:
○ A strong body of prior backend work, successfully delivered in production. Experience building large volume data processing pipelines will be an added bonus.
○ Expertise in Core Java.
■ In-depth knowledge of the Java concurrency framework.
■ Sound knowledge of concepts like exception handling, garbage collection, and generics.
■ Experience in writing unit test cases, using any framework.
■ Hands-on experience with lambdas and streams.
■ Experience in using build tools like Maven and Ant.
○ Good understanding and Hands on experience of any Java frameworks e.g. SpringBoot, Vert.x will be an added advantage.
○ Good understanding of security best practices.
○ Hands-on experience with Low Level and High Level Design practices and patterns.
○ Hands on experience with any of the cloud platforms such as AWS, Azure, and Google Cloud.
○ Familiarity with containerization and orchestration tools like Docker, Kubernetes and Terraform.
○ Strong understanding of database technologies, both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Couchbase).
○ Knowledge of DevOps practices and tools such as Jenkins, CI/CD.
○ Strong understanding of software development methodologies (e.g., Agile, Scrum).
● Leadership Skills: Proven ability to lead, mentor, and inspire a team of engineers. Excellent interpersonal and communication skills.
● Problem-Solving Skills: Strong analytical and problem-solving abilities. Ability to think critically and provide innovative solutions.
● Project Management: Experience in managing software projects from conception to delivery. Strong organizational and time-management skills.
● Collaboration: Ability to work effectively in a cross-functional team environment. Strong collaboration and stakeholder management skills.
● Adaptability: Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements.
Why Should You Work for AdPushup?
At AdPushup, we have
1. A culture of valuing our employees and promoting an autonomous, transparent, and ethical work environment.
2. Talented and supportive peers who value your contributions.
3. Challenging opportunities: learning happens outside the comfort-zone and that’s where our team likes to be - always pushing the boundaries and growing personally and professionally.
4. Flexibility to work from home: We believe in work & performance instead of measuring conventional benchmarks like work-hours.
5. Plenty of snacks and catered lunch.
6. Transparency: an open, honest and direct communication with co-workers and business associates.
Required Skill Set :--
- Data Model & Mapping
- MS SQL Database
- Analytics SQL Query
- Genesys Cloud Reporting & Analytics API
- Snowflake (Good to have)
- Cloud Exposure – AWS or Azure
Technical Experience –
· 5-8 years of experience, preferably at a technology or financial firm
· Strong understanding of data analysis & reporting tools.
· Experience with data mining & machine learning techniques.
· Excellent communication & presentation skills
· Must have at least 2-3 years of experience in Data Modeling/Analysis/Mapping
· Must have hands on experience in database tools & technologies
· Must have exposure to Genesys cloud, WFM, GIM, Genesys Analytics API
· Good to have experience or exposure to Salesforce, AWS or Azure, and Genesys Cloud
· Ability to work independently & as part of a team
· Strong attention to detail and accuracy.
Work Scope –
- Data model similar to the GIM database, based on Genesys Cloud data.
- API to column data mapping.
- Data Model for business for Analytics
- Database artifacts
- Scripting – Python
- Autosys, TWS job setup.
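Since the work scope above includes API-to-column data mapping and Python scripting, here is a hedged, generic sketch of such a mapping step. The field names, target columns, and sample record are entirely hypothetical and do not reflect any specific Genesys Cloud API schema.

```python
# Hypothetical mapping from API response fields to warehouse column names.
FIELD_TO_COLUMN = {
    "conversationId": "conversation_id",
    "queueName": "queue_name",
    "startTime": "start_time",
    "durationMs": "duration_ms",
}


def map_record(api_record: dict) -> dict:
    """Project an API payload onto the target table's columns, dropping unknown fields."""
    return {column: api_record.get(field) for field, column in FIELD_TO_COLUMN.items()}


if __name__ == "__main__":
    sample = {
        "conversationId": "abc-123",
        "queueName": "support",
        "startTime": "2024-01-01T10:00:00Z",
        "durationMs": 42000,
        "unmappedField": "ignored",
    }
    print(map_record(sample))
```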
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., under EU Horizon 2020), which together make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge in big data technologies such as Spark, Flink, Hadoop, Apache and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud infrastructure.
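A minimal sketch of the ETL and Python emphasis above: extract a CSV, apply light cleaning, and load the result to Parquet. The file paths and column names are assumptions; a production pipeline would add logging, validation, and scheduling.

```python
import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Hypothetical sensor export with a timestamp column.
    return pd.read_csv(path, parse_dates=["measured_at"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    return (
        df.dropna(subset=["machine_id", "temperature_c"])         # drop incomplete readings
          .drop_duplicates(subset=["machine_id", "measured_at"])  # de-duplicate repeated readings
          .assign(temperature_k=lambda d: d["temperature_c"] + 273.15)
    )


def load(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path, index=False)  # requires pyarrow or fastparquet


if __name__ == "__main__":
    load(transform(extract("raw_readings.csv")), "clean_readings.parquet")
```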
Data Engineer
Brief Posting Description:
This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. Work with all customers, both internal and external, to make sure all data related features are implemented in each solution. Will collaborate with business partners and other technical teams across the organization as required to deliver proposed solutions.
Detailed Description:
· Works with Scrum masters, product owners, and others to identify new features for digital products.
· Works with IT leadership and business partners to design features for the cloud data platform.
· Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.
· Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.
· Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.
· Explores all technical options when considering solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.
· Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.
· Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.
· Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.
· Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.
Job Requirements:
EXPERIENCE:
2 years required; 3 - 5 years preferred experience in a data engineering role.
2 years required, 3 - 5 years preferred experience in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.)
EDUCATION:
Bachelor’s degree information technology, computer science, or data related field preferred
SKILLS/REQUIREMENTS:
Expertise working with databases and SQL.
Strong working knowledge of Azure Data Factory and Databricks
Strong working knowledge of code management and continuous integrations systems (Azure DevOps or Github preferred)
Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)
Familiarity with Agile delivery methodologies
Familiarity with NoSQL databases (such as CosmosDB) preferred.
Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, PowerBI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.
Ability to multi-task and reprioritize in a dynamic environment.
Outstanding written and verbal communication skills
Working Environment:
General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc.
Physical requirements:
Work is generally sedentary in nature but may require standing and walking for up to 10% of the time.
Mental requirements:
Employee required to organize and coordinate schedules.
Employee required to analyze and interpret complex data.
Employee required to problem-solve.
Employee required to communicate with the public.
Responsibilities:
- Develop application modules independently and fix any bugs promptly.
- Perform unit testing for the development work carried out
- Act as a mentor to the junior resources and provide technical guidance.
- Troubleshoot problems and provide solutions.
- Conduct and participate in project planning & scheduling, design discussions, and provide assistance during testing.
- Remain up to date with the modern industry practices involved in designing & developing high-quality software.
- Ability to perform engineering and identify and fix bottlenecks.
Must have
- Must have at least 2 years of experience in MERN Stack development.
- Prepare technical documentation as per the requirements of the project.
- Must possess strong analytical skills to be able to break down complex problems into smaller atomic units of work.
- Good knowledge of Express.js, React and JS libraries.
- Clear understanding of JavaScript and Typescript.
- Sound understanding of MVC and design patterns.
- Excellent grasp of data structures and designing and developing ReST APIs.
- Good skills in either RDBMS (e.g. MySQL or PostgreSQL) or NoSQL (MongoDB or equivalent).
- Experience in developing responsive web applications.
- Good communication skills.
- Willingness to learn and adopt new technologies in a short period of time as required by the project.
- Sound understanding of Agile and Scrum methodologies and ability to participate in Sprint ceremonies.
Nice to have:
- Good grasp of UI / UX concepts
- Experience in using Git & VSCode.
- Knowledge of AWS, Azure, CI / CD, Gitflow, shell scripting
- Ability to build/own/maintain a comprehensive set of component libraries for a React JS UI
- Ability to design/develop for cross-browser/device compatibility
We strive to create an environment where differences are not only accepted but greatly valued; where everyone can make the most of their capabilities and potential. We promote meritocracy, competence and a sharing of ideas and opinions. We are driven by data and believe the diversity, agility, generosity, and curiosity of our people is what sets us apart as an organization and helps us thrive.
Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop high-quality software solutions in C#/.NET according to technical specifications.
- Participate in code reviews and provide constructive feedback to peers.
- Debug, troubleshoot, and resolve software defects to ensure optimal performance.
- Assist in the maintenance and enhancement of existing software applications.
- Stay up-to-date with the latest .NET technologies and industry trends.
- Document software features, technical specifications, and implementation details.
- Contribute to the continuous improvement of development processes and best practices.
- Communicate effectively with team members and stakeholders to ensure project success.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- Minimum 1 year of professional experience in software development using C# and the .NET framework.
- Solid understanding of object-oriented programming principles and design patterns.
- Proficiency in Microsoft technologies such as ASP.NET, MVC, and Entity Framework.
- Experience with front-end development technologies like HTML, CSS, JavaScript, and jQuery.
- Familiarity with relational databases (e.g., SQL Server) and SQL queries.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to work both independently and collaboratively in a fast-paced environment.
- Excellent verbal and written communication skills.
- Demonstrated willingness to learn and adapt to new technologies and methodologies.
KEY RESPONSIBILITIES
· Develop high-quality database solutions.
· Use T-SQL to develop and implement procedures and functions (see the sketch after this list).
· Review and interpret ongoing business report requirements.
· Research required data.
· Build appropriate and useful reporting deliverables.
· Analyze existing SQL queries for performance improvements.
· Suggest new queries.
· Develop procedures and scripts for data migration.
· Provide timely scheduled management reporting.
· Investigate exceptions with regard to asset movements.
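As a minimal sketch of the parameterized, procedure-driven reporting described above (the server, database, stored procedure dbo.usp_GetAssetMovements, and column names are hypothetical placeholders), such a report could be pulled from Python via pyodbc:

```python
# Illustrative only: pull a management report by calling a T-SQL stored procedure.
# Server, database, procedure, and column names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reports-sql.example.com;DATABASE=AssetDB;"
    "Trusted_Connection=yes;"
)

try:
    cursor = conn.cursor()
    # Parameterized call avoids string concatenation and SQL injection.
    cursor.execute(
        "EXEC dbo.usp_GetAssetMovements @FromDate = ?, @ToDate = ?",
        ("2024-01-01", "2024-01-31"),
    )
    for row in cursor.fetchall():
        print(row.AssetId, row.MovementType, row.MovementDate)
finally:
    conn.close()
```

Parameterized execution keeps report inputs out of the SQL text itself, which also helps when the same procedure backs scheduled management reporting.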
MUST-HAVES FOR THIS GIG
T-SQL, stored procedures, functions, triggers, XML operations, JSON support on SQL Server 2016 and above, SSIS, SSRS, CTEs, EAV data structures, integration with NoSQL (MongoDB), SQL Server indexes, bulk insert, BCP, CMD shell, memory optimization, performance tuning, query optimization, database design, table joins, SQL Server Agent jobs, backup and maintenance plans, data migration, good communication.
NICE-TO-HAVES FOR THIS GIG:
- Working knowledge of mobile development activity.
- Working knowledge of web hosting solution on IIS7.
- Experience working with an offshore-onsite development process.
- Bachelor’s Degree in Information Technology or related field desirable.
• 5 years of Database administrator experience in Microsoft technologies
• Experience with Azure SQL in a multi-region configuration
• Azure certifications (Good to have)
• 2+ years’ experience performing data migrations, upgrades/modernizations, and performance tuning on IaaS and PaaS (SQL Managed Instance and Azure SQL)
• Experience with routine maintenance, recovery, and handling failover of databases
• Knowledge of RDBMS platforms (e.g., Microsoft SQL Server) and the Azure cloud platform
• Expertise in Microsoft SQL Server on VMs, Azure SQL Managed Instance, and Azure SQL Database
• Experience in setting up and working with an Azure data warehouse.
Roles and Responsibilities:
1. Develop, enhance, document, and maintain application features in C#/ASP.NET.
2. Excellent understanding of Database concepts and strong ability to write well-tuned SQL Statements.
3. Participate in design, code and test inspections throughout product life cycle to contribute technical expertise and to identify issues.
4. Knowledge of developing desktop-based applications is also desirable.
5. Understand technical project priorities, implementation, dependencies, risks and issues.
6. Big Data knowledge is a Plus.
7. Drive design reviews while adhering to security requirements.
8. Provide direction for .Net developers and act as escalation point for questions or issues.
9. Perform technical analysis to identify and troubleshoot application code-related issues.
Key skills
1. Must have hands-on experience with ASP.NET, ASP.NET MVC, ASP.NET 4.5, and .NET Core
2. Must have good experience in C#
3. Must have good experience in building APIs
4. Must have hands-on experience with HTML5, AngularJS, CSS, CSS3, jQuery, Bootstrap, and JavaScript
5. Must have experience in Microsoft SQL Server
ROLE AND RESPONSIBILITIES
Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills as business requirements evolve. Should be familiar with extracting relevant data and cleansing and transforming it into insights that drive business value, using data analytics, data visualization, and data modeling techniques.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
Technical Bachelor’s Degree.
Non-Technical Degree holders should have 1+ years of relevant experience.
Roles and Responsibilities
• Create solution prototypes and conduct proofs of concept for new tools.
• Research and build an understanding of new tools and areas.
• Clearly articulate the pros and cons of various technologies/platforms and perform detailed analysis of business problems and technical environments to derive a solution.
• Optimise the application for maximum speed and scalability.
• Work on feature development and bug fixing.
Technical skills
• Must have knowledge of networking in Linux and the basics of computer networks in general.
• Must have intermediate/advanced knowledge of one programming language, preferably Python (see the sketch after this list).
• Must have experience writing shell scripts and configuration files.
• Should be proficient in Bash.
• Should have excellent Linux administration capabilities.
• Working experience with SCM tools; Git is preferred.
• Knowledge of build and CI/CD tools such as Jenkins, Bamboo, etc. is a plus.
• Understanding of the architecture of OpenStack/Kubernetes is a plus.
• Code contributed to the OpenStack/Kubernetes community is a plus.
• Data center network troubleshooting is a plus.
• Understanding of the NFV and SDN domains is a plus.
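As a small sketch combining the Linux networking and Python scripting skills listed above (the target host and port are hypothetical), a basic health check might look like this:

```python
# Illustrative health check: list interface addresses and test TCP reachability.
# The target host/port below are hypothetical placeholders.
import socket
import subprocess


def interface_addresses() -> str:
    """Return the output of `ip -brief addr`, a concise view of interface state."""
    return subprocess.run(
        ["ip", "-brief", "addr"], capture_output=True, text=True, check=True
    ).stdout


def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True if the host answers on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print(interface_addresses())
    print("API endpoint reachable:", tcp_reachable("10.0.0.10", 443))
```

The same pattern extends naturally to checking DNS resolution, routes, or service ports across a fleet of hosts.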
Soft skills
• Excellent verbal and written communication skills.
• Highly driven, positive attitude, team player, self-learning, self-motivated, and flexible.
• Strong customer focus; good networking and relationship management.
• Flair for creativity and innovation.
• Strategic thinking.
This is an individual contributor role and will require client interaction on the technical side.
Must-have skills: Linux, Networking, Python, Cloud
Additional skills: OpenStack, Kubernetes, Shell, Java, Development
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, PySpark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data (see the sketch after this list).
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality standards are met.
- Support CI/CD processes.
- Foster know-how development and transfer, and drive continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
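As a minimal sketch of the telemetry ETL work described above (paths, column names, and the partitioning scheme are hypothetical, and Spark is assumed as the processing engine), a PySpark job might read raw telemetry, clean it, and write a partitioned Parquet table:

```python
# Illustrative PySpark ETL: raw telemetry CSV -> cleaned, partitioned Parquet.
# Input/output paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-etl").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://raw-bucket/telemetry/*.csv")
)

cleaned = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))      # normalise timestamps
    .filter(F.col("signal_value").isNotNull())                # drop incomplete rows
    .withColumn("event_date", F.to_date("event_ts"))          # derive partition key
    .dropDuplicates(["vehicle_id", "signal_name", "event_ts"])
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://curated-bucket/telemetry/")
)

spark.stop()
```

Partitioning by event date keeps downstream time-series queries and data lifecycle/retention policies cheap to apply.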
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/NumPy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.