50+ Windows Azure Jobs in India
Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.)
- Partner with engineering and product to convert custom solutions into productised capabilities.
- Security & Compliance Enablement
- Act as a foundational partner in building out the company’s security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
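The webhook-style integrations mentioned above typically rely on signed payloads, which is also where the secure-by-design emphasis shows up in practice. A minimal sketch of HMAC signature verification, using only the Python standard library (the secret and payload are invented for illustration):

```python
import hmac
import hashlib

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature for an outbound webhook payload."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, payload: bytes, received_sig: str) -> bool:
    """Recompute the signature and compare in constant time to resist timing attacks."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, received_sig)
```

The receiving side recomputes the signature over the raw body and rejects anything that does not match, so a tampered payload or a wrong secret fails verification.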
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges from sustainability to advanced manufacturing, while growing your career in a high-energy environment.
About The Role
- As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
- Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
- You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
- Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
- Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.
Key Responsibilities
- Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
- Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
- Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
- Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
- Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
- Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
- Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
- Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
- Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
- Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.
Qualifications
- Master’s (M.Tech., M.S.) in Computer Science or equivalent from reputed institutes such as IITs or NITs preferred
- 6–8 years of overall experience, with a minimum of 2 years of relevant experience and a strong technical background
- Experience working in a mid-size IT services company is preferred
Preferred Certification
- AWS Certified Data Analytics Specialty
- AWS Solution Architect Professional
- Azure Data Engineer + Solution Architect
- Databricks Certified Data Engineer / ML Professional
Technical Expertise
- Advanced knowledge of distributed architectures and data modeling practices.
- Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
- Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
- Knowledge of BI tools including Power BI, Tableau, and QuickSight, as well as open-source equivalents like Superset and Metabase, is good to have.
- Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
- Proficient programming skills in Python, Scala, Go, or Java.
- Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns.
- Experience working with multi-cloud environments, particularly AWS and Azure.
- Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
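To make the "ingestion to insights" expectation above concrete, here is a toy batch step, cleaning records and then aggregating them, sketched in plain Python (the data shape and field names are invented; a production pipeline would do this in Spark or a warehouse):

```python
from collections import defaultdict

RAW_EVENTS = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 80.0},
    {"region": "south", "amount": None},   # bad record, dropped in cleaning
    {"region": "north", "amount": 20.0},
]

def clean(events):
    """Drop records with missing amounts (a stand-in for real validation rules)."""
    return [e for e in events if e.get("amount") is not None]

def aggregate_by_region(events):
    """Reduce cleaned events to total revenue per region."""
    totals = defaultdict(float)
    for e in events:
        totals[e["region"]] += e["amount"]
    return dict(totals)
```

The same clean-then-aggregate shape carries over directly to Spark DataFrames or SQL GROUP BY in the warehouse tools listed above.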
Skills required:
- Strong expertise in .NET Core / ASP.NET MVC
- Candidate must have 4+ years of experience in .NET.
- Candidate must have experience with Angular.
- Hands-on experience with Entity Framework & LINQ
- Experience with SQL Server (performance tuning, stored procedures, indexing)
- Understanding of multi-tenancy architecture
- Experience with Microservices / API development (REST, GraphQL)
- Hands-on experience in Azure Services (App Services, Azure SQL, Blob Storage, Key Vault, Functions, etc.)
- Experience in CI/CD pipelines with Azure DevOps
- Knowledge of security best practices in cloud-based applications
- Familiarity with Agile/Scrum methodologies
- Flexible to use Copilot or any other AI tool to write automated test cases and speed up code writing
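The multi-tenancy requirement above usually comes down to scoping every data access by tenant. A language-agnostic sketch of the idea, shown in Python for brevity (in this .NET stack it would more typically be an EF Core global query filter on a tenant_id column):

```python
class TenantScopedRepository:
    """Every query is filtered by tenant_id, so one tenant can never read another's rows."""

    def __init__(self, rows):
        # Stand-in for a SQL table that carries a tenant_id column on every row.
        self._rows = rows

    def find_all(self, tenant_id):
        """Return only the rows belonging to the given tenant."""
        return [r for r in self._rows if r["tenant_id"] == tenant_id]
```

Centralising the filter in one repository (or one global query filter) means application code cannot forget it on an individual query.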
Roles and Responsibilities:
- Good communication skills are a must.
- Develop features across multiple subsystems within our applications, including collaboration in requirements definition, prototyping, design, coding, testing, and deployment.
- Understand how our applications operate, are structured, and how customers use them
- Provide engineering support (when necessary) to our technical operations staff when they are building, deploying, configuring, and supporting systems for customers.
Job Description :
We are looking for an experienced DevOps Engineer with strong expertise in Azure DevOps, CI/CD pipelines, and PowerShell scripting, who has worked extensively with .NET-based applications in a Windows environment.
Mandatory Skills
- Strong hands-on experience with Azure DevOps
- GIT version control
- CI/CD pipelines (Classic & YAML)
- Excellent experience in PowerShell scripting
- Experience working with .NET-based applications
- Understanding of Solutions, Project files, MSBuild
- Experience using Visual Studio / MSBuild tools
- Strong experience in Windows environment
- End-to-end experience in build, release, and deployment pipelines
Good to Have Skills
- Terraform (optional / good to have)
- Experience with JFrog Artifactory
- SonarQube integration knowledge
JD :
• Master’s degree in Computer Science, Computational Sciences, Data Science, Machine Learning, Statistics, Mathematics, or any quantitative field
• Expertise with object-oriented programming (Python, C++)
• Strong expertise in Python libraries like NumPy, Pandas, PyTorch, TensorFlow, and Scikit-learn
• Proven experience in designing and deploying ML systems on cloud platforms (AWS, GCP, or Azure).
• Hands-on experience with MLOps frameworks, model deployment pipelines, and model monitoring tools.
• Track record of scaling machine learning solutions from prototype to production.
• Experience building scalable ML systems in fast-paced, collaborative environments.
• Working knowledge of adversarial machine learning techniques and their mitigation
• Agile and Waterfall methodologies.
• Personally invested in continuous improvement and innovation.
• Motivated, self-directed individual who works well with minimal supervision.
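To make the model-deployment side of this role concrete, here is the kind of inference step such a deployed system serves: a logistic score over a feature vector, in plain Python (the weights and features are invented; real systems would load trained parameters from a model registry):

```python
import math

def sigmoid(z: float) -> float:
    """Squash a raw score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, bias, features):
    """Score one example: dot(weights, features) + bias, passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)
```

With zero weights the score is exactly 0.5, which is a handy sanity check when wiring a serving endpoint before real weights are loaded.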
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - ₹14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary : INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
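One common way the VAPT and CI/CD responsibilities above meet is a release gate that fails the pipeline on serious findings. A hedged sketch in Python (the report format is invented and not tied to any particular scanner; real tools like SonarQube or dependency scanners each have their own output schema):

```python
import json

# Severities that should fail the pipeline; tune to your risk policy.
SEVERITY_GATE = {"critical", "high"}

def should_block_release(report_json: str) -> bool:
    """Return True if the scan report contains any finding at or above the gate."""
    findings = json.loads(report_json)["findings"]
    return any(f["severity"].lower() in SEVERITY_GATE for f in findings)
```

A CI step would run this over the scanner's JSON output and exit non-zero when it returns True, keeping vulnerable builds out of production.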
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
About the company:
Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates its ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.
About the role:
As a Technical Project Manager, you will lead the planning, execution, and delivery of complex technical projects while ensuring alignment with business objectives and timelines. You will act as a bridge between technical teams and stakeholders, managing resources, risks, and communications to deliver high-quality solutions. This role demands strong leadership, project management expertise, and technical acumen to drive project success in a dynamic and collaborative environment.
Qualifications:
- Education Background: Any ME / M Tech / BE / B Tech
Key Competencies:
Technical Skills
1. Data & BI Technologies-
- Proficiency in SQL & PL/SQL for database querying and optimization.
- Understanding of data warehousing concepts, dimensional modeling, and data lake/lakehouse architectures.
- Experience with BI tools such as Power BI, Tableau, Qlik Sense/View.
- Familiarity with traditional platforms like Oracle, Informatica, SAP BO, BODS, BW.
2. Cloud & Data Engineering :
- Strong knowledge of AWS (EC2, S3, Lambda, Glue, Redshift), Azure (Data Factory, Synapse, Databricks, ADLS), Snowflake (warehouse architecture, performance tuning), and Databricks (Delta Lake, Spark).
- Experience with cloud-based ETL/ELT pipelines, data ingestion, orchestration, and workflow automation.
3. Programming
- Hands-on experience in Python or similar scripting languages for data processing and automation.
Soft Skills
- Strong leadership and team management skills.
- Excellent verbal and written communication for stakeholder alignment.
- Structured problem-solving and decision-making capability.
- Ability to manage ambiguity and handle multiple priorities.
Tools & Platforms
- Cloud: AWS, Azure
- Data Platforms: Snowflake, Databricks
- BI Tools: Power BI, Tableau, Qlik
- Data Management: Oracle, Informatica, SAP BO
- Project Tools: JIRA, MS Project, Confluence
Key Responsibilities:
- End-to-End Project Management: Lead the team through the full project lifecycle, delivering techno-functional solutions.
- Methodology Expertise: Apply Agile, PMP, and other frameworks to ensure effective project execution and resource management.
- Technology Integration: Oversee technology integration and ensure alignment with business goals.
- Stakeholder & Conflict Management: Manage relationships with customers, partners, and vendors, addressing expectations and conflicts proactively.
- Technical Guidance: Provide expertise in software design, architecture, and ensure project feasibility.
- Change Management: Analyse new requirements/change requests, ensuring alignment with project goals.
- Effort & Cost Estimation: Estimate project efforts and costs and identify potential risks early.
- Risk Mitigation: Proactively identify risks and develop mitigation strategies, escalating issues in advance.
- Hands-On Contribution: Participate in coding, code reviews, testing, and documentation as needed.
- Project Planning & Monitoring: Develop detailed project plans, track progress, and monitor task dependencies.
- Scope Management: Manage project scope, deliverables, and exclusions, ensuring technical feasibility.
- Effective Communication: Communicate with stakeholders to ensure agreement on scope, timelines, and objectives.
- Reporting: Provide status and RAG reports, proactively addressing risks and issues.
- Change Control: Manage changes in project scope, schedule, and costs using appropriate verification techniques.
- Performance Measurement: Measure project performance with tools and techniques to ensure progress.
- Operational Process Management: Oversee operational tasks like timesheet approvals, leave, appraisals, and invoicing.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
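Because Kafka gives at-least-once delivery, integrating it into microservices as described above usually means making consumers idempotent. A minimal sketch of the pattern, shown in Python for brevity (the event fields are invented; the role's stack would implement this in Java):

```python
class IdempotentConsumer:
    """Apply each event at most once, keyed by its event_id (e.g. a Kafka message key)."""

    def __init__(self):
        self.processed_ids = set()   # in production this would be a durable store
        self.balance = 0

    def handle(self, event) -> bool:
        """Return True if the event was applied, False if it was a duplicate delivery."""
        if event["event_id"] in self.processed_ids:
            return False
        self.processed_ids.add(event["event_id"])
        self.balance += event["amount"]
        return True
```

Re-delivering the same event is then harmless: the second delivery is detected by its id and skipped, so the state changes exactly once.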
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment

Global digital transformation solutions provider.
Role Proficiency:
Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.
Knowledge Examples:
- Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
- Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack
- Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
- Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, and patterns (e.g. SOA, N-Tier, EDA); architecture perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
- Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
- Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
- Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific timesheets, capacity planning tools, etc.)
- Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
- Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates
- Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
- Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for non-functional requirements; analysis for functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards); techniques (business analysis, process mapping, etc.); requirements management tools (e.g. MS Excel); and basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs
- Solution Structuring: Demonstrates working knowledge of service offerings and products
Additional Comments:
Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:
• Excellent technical background and end-to-end architecture skills to design and implement scalable, maintainable, and high-performing systems, integrating front-end technologies with back-end services.
• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.
• Expertise in cloud-based applications on Azure, leveraging key Azure services.
• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.
• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.
• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.
• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.
• Excellent communication skills
• Mentor team members providing guidance on technical challenges and helping them grow their skill set.
• Good to have experience in GCP and retail domain.
Skills: DevOps, Azure, Java
Must-Haves
Java (12+ years), React, Azure, DevOps, Cloud Architecture
Strong Java architecture and design experience.
Expertise in Azure cloud services.
Hands-on experience with React and front-end integration.
Proven track record in DevOps practices (CI/CD, automation).
Notice period - 0 to 15 days only
Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum
Excellent communication and leadership skills.

Global digital transformation solutions provider.
Job Description
We are seeking a skilled Microsoft Dynamics 365 Developer with 4–7 years of hands-on experience in designing, customizing, and developing solutions within the Dynamics 365 ecosystem. The ideal candidate should have strong technical expertise, solid understanding of CRM concepts, and experience integrating Dynamics 365 with external systems.
Key Responsibilities
- Design, develop, and customize solutions within Microsoft Dynamics 365 CE.
- Work on entity schema, relationships, form customizations, and business logic components.
- Develop custom plugins, workflow activities, and automation.
- Build and enhance integrations using APIs, Postman, and related tools.
- Implement and maintain security models across roles, privileges, and access levels.
- Troubleshoot issues, optimize performance, and support deployments.
- Collaborate with cross-functional teams and communicate effectively with stakeholders.
- Participate in version control practices using GIT.
Must-Have Skills
Core Dynamics 365 Skills
- Dynamics Concepts (Schema, Relationships, Form Customization): Advanced
- Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)
- Actions & Custom Workflows: Intermediate
- Security Model: Intermediate
- Integrations: Intermediate (API handling, Postman, error handling, authorization & authentication, DLL merging)
Coding & Versioning
- C# Coding Skills: Intermediate (Able to write logic using if-else, switch, loops, error handling)
- GIT: Basic
Communication
- Communication Skills: Intermediate (Ability to clearly explain technical concepts and work with business users)
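Dataverse integrations like those listed above are in practice plain OData calls over HTTPS. A small sketch that composes a filtered, projected query URL with the Python standard library (the entity and filter are examples only; check the API version against your environment):

```python
from urllib.parse import urlencode

def build_dataverse_query(base_url: str, entity_set: str,
                          filter_expr: str, select: list) -> str:
    """Compose a Dataverse Web API (OData) query URL for a filtered, projected read."""
    params = urlencode({"$filter": filter_expr, "$select": ",".join(select)})
    return f"{base_url}/api/data/v9.2/{entity_set}?{params}"
```

The resulting URL would then be sent with a bearer token from Azure AD; building and unit-testing the query string separately keeps the integration logic easy to verify without a live environment.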
Good-to-Have Skills (Any 3 or More)
Azure & Monitoring
- Azure Functions: Basic (development, debugging, deployment)
- Azure Application Insights: Intermediate (querying logs, pushing logs)
Reporting & Data
- Power BI: Basic (building basic reports)
- Data Migration: Basic (data import with lookups, awareness of migration tools)
Power Platform
- Canvas Apps: Basic (building basic apps using Power Automate connector)
- Power Automate: Intermediate (flows & automation)
- PCF (PowerApps Component Framework): Basic
Skills: Microsoft Dynamics, Javascript, Plugins
Must-Haves
Microsoft Dynamics 365 (4-7 years), Plugin Development (Advanced), C# (Intermediate), Integrations (Intermediate), GIT (Basic)
Core Dynamics 365 Skills
Dynamics Concepts (Schema, Relationships, Form Customization): Advanced
Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)
Actions & Custom Workflows: Intermediate
Security Model: Intermediate
Integrations: Intermediate
(API handling, Postman, error handling, authorization & authentication, DLL merging)
Coding & Versioning
C# Coding Skills: Intermediate
(Able to write logic using if-else, switch, loops, error handling)
GIT: Basic
Notice period - Immediate to 15 days
Locations: Bangalore only
Nice to Haves
(Any 3 or More)
Azure & Monitoring
Azure Functions: Basic (development, debugging, deployment)
Azure Application Insights: Intermediate (querying logs, pushing logs)
Reporting & Data
Power BI: Basic (building basic reports)
Data Migration: Basic
(data import with lookups, awareness of migration tools)
Power Platform
Canvas Apps: Basic (building basic apps using Power Automate connector)
Power Automate: Intermediate (flows & automation)
PCF (PowerApps Component Framework): Basic
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a "let's figure this out" attitude.
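Since the list above treats data structures and caching as daily tools rather than interview topics, here is a neutral illustration (not tied to this company's stack): a minimal LRU cache built on Python's `OrderedDict`, the kind of building block that backs high-availability read paths.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the oldest entry once capacity is exceeded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: OrderedDict = OrderedDict()

    def get(self, key, default=None):
        if key not in self._store:
            return default
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # touching "a" makes "b" the eviction candidate
cache.put("c", 3)   # evicts "b"
```

The same access-recency idea underlies production caches such as Redis's LRU eviction policies; the sketch just makes the mechanism explicit.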
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience with logging/monitoring tools like Grafana, Prometheus, ELK, or OpenTelemetry.
Why This Company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for experienced professionals who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
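As a hedged illustration of the descriptive-analytics skills listed above, using only Python's standard library (the order counts below are invented for the example):

```python
import statistics

# Hypothetical daily order counts for one week; the last value is an outlier
orders = [120, 135, 128, 160, 155, 140, 980]

mean = statistics.mean(orders)
median = statistics.median(orders)
stdev = statistics.stdev(orders)

# Flag values far from the median; the median resists the outlier's pull
outliers = [x for x in orders if abs(x - median) > 2 * stdev]

print(f"mean={mean:.1f}, median={median}, outliers={outliers}")
```

Note how the mean (~260) is dragged upward by the single spike while the median (140) stays representative; framing a business problem often starts with exactly this kind of sanity check before any modeling.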
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.
What You’ll Do
- Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
- Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
- Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
- Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
- Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
- Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.
What Makes You a Strong Fit
- You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
- You think in systems and second-order effects—not just in ticket-by-ticket outputs.
- You prefer well-reasoned defaults over overengineering.
- You take ownership—not just of code, but of the outcomes it enables.
- You work cleanly, write clear code, and make life easier for those who come after you.
- You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.
Bonus if You Have Experience With
- Building tools or workflows that accelerate other developers.
- Working with AI coding tools and integrating them meaningfully into your workflow.
- Building for SaaS products, especially those with large user bases or self-serve motions.
- Working in small, fast-moving product teams with a high bar for ownership.
Why Join Us
- A small team that values craftsmanship, curiosity, and momentum.
- A product-driven culture where engineering decisions are informed by customer outcomes.
- A chance to work on multiple zero-to-one opportunities with strong PMF.
- No vanity perks—just meaningful work with people who care.
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled, hands-on Staff Software Engineer in backend technologies to drive the development of our next-generation monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring across AI-enabled data centers and the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems in large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Work Location: Pune
Job Type: Hybrid
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure, or GCP, and development expertise with Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.
Job Description:
Senior Full Stack Developer (Java + React)
Experience: 7+ Years
Location: Bangalore, Hyderabad, Pune, Noida
Employment Type: Full-time
Preferred: Ready to join within 15–30 days
🔍 About the Role
We are seeking a highly skilled Senior Full Stack Developer with strong backend expertise in Java and hands-on experience with React on the frontend. The ideal candidate should possess exceptional analytical skills, deep knowledge of software design principles, and the ability to build scalable, high-performance applications.
🧠 Key Responsibilities
- Design, develop, and maintain scalable backend services using Core Java & Spring frameworks.
- Build responsive and interactive UI components using ReactJS/Redux.
- Implement high-quality code using TDD/BDD practices (JUnit, JBehave/Cucumber).
- Work on RESTful API development, integration, and optimization.
- Develop and manage efficient database schemas using SQL (DB2) and MongoDB.
- Collaborate with cross-functional teams (DevOps, QA, Product) to deliver robust solutions.
- Participate in code reviews, technical discussions, and architectural decisions.
- Optimize system performance using multithreading, caching, and scalable design patterns.
🛠️ Required Skills
Backend (Strong Expertise Required)
- 7+ years of experience in Java backend development
- Deep knowledge of:
- Core Java (class loading, garbage collection, collections, streams, reflection)
- OOP, data structures, algorithms, graph data structures
- Design patterns, MVC, multithreading, recursion
- Spring, JSR-303, Logback, Apache Commons
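The multithreading fundamentals in the list above apply across languages. As a minimal sketch (shown in Python for brevity, even though this role is Java-centric), the classic producer–consumer pattern with a thread-safe queue and a sentinel value looks like:

```python
import queue
import threading

task_queue: "queue.Queue" = queue.Queue()
results = []
results_lock = threading.Lock()

def producer(n: int):
    # Enqueue n work items, then a sentinel marking the end of the stream
    for i in range(n):
        task_queue.put(i)
    task_queue.put(None)

def consumer():
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: no more work
            break
        with results_lock:
            results.append(item * item)

p = threading.Thread(target=producer, args=(5,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(sorted(results))  # prints [0, 1, 4, 9, 16]
```

In Java the same shape would typically use a `BlockingQueue` and an `ExecutorService`; the queue-plus-sentinel idea carries over directly.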
Database Skills
- Strong knowledge of Relational Databases & SQL (DB2)
- Good understanding of NoSQL (MongoDB)
Frontend Skills
- Solid experience with ReactJS/Redux
- Strong understanding of REST APIs, JSON, XML, HTTP
DevOps & Tools
- Strong knowledge of Git, Gradle, Jenkins, CI/CD pipelines
- Experience with Liquibase for schema management
- Hands-on with Unix/Linux
✨ Good to Have
- Experience with Azure, Snowflake, Databricks
- Knowledge of Camunda 7/8 (BPMN/DMN)
- Experience with TDD, BDD methodologies
- Understanding of workflow engines & cloud data stack
🎓 Education
- Bachelor’s degree in Computer Science, Engineering, or a related field.
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Ideal Candidate
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Job Description: Data Engineer
Location: Ahmedabad
Experience: 5 to 6 years
Employment Type: Full-Time
We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.
Responsibilities
● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modelling solutions and data validation mechanisms
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team
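The data-validation mechanisms mentioned in the responsibilities above can be as simple as row-level rule checks. A minimal sketch (field names and the currency whitelist are invented for illustration):

```python
def validate_row(row: dict):
    """Return a list of validation errors for one record; empty means valid."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    if row.get("currency") not in {"INR", "USD", "EUR"}:
        errors.append("unknown currency")
    return errors

rows = [
    {"id": "r1", "amount": 250.0, "currency": "INR"},
    {"id": "",   "amount": -5,    "currency": "GBP"},
]
valid = [r for r in rows if not validate_row(r)]
rejected = [r for r in rows if validate_row(r)]
```

In a real pipeline the same rules would typically live in a framework such as Great Expectations or as Spark column checks, but the contract is identical: every record either passes all rules or is routed aside with its reasons.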
Skills Required:
● Minimum 4 years of experience in data engineering or related field
● Proficient in designing and optimizing data pipelines and data modeling
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities
Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
The following are high-level responsibilities you will own; they are not exhaustive:
- Design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 5-7 years of experience in data engineering or data platform development, with at least 2–3 years in a lead or architect role.
- Strong hands-on experience in one or more of the following:
- Microsoft Fabric (Data Factory, Lakehouse, Data Warehouse)
- Databricks (Spark, Delta Lake, PySpark, MLflow)
- Snowflake (Data Warehousing, Snowpipe, Performance Optimization)
- Power BI (Data Modeling, DAX, Report Development)
- Proficiency in SQL and programming languages like Python or Scala.
- Experience with Azure, AWS, or GCP cloud data services.
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need?
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience with modern data platforms through exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your technical skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive compensation and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Role: Senior Data Engineer (Azure)
Experience: 5+ Years
Location: Anywhere in India
Work Mode: Remote
Notice Period: Immediate joiners or currently serving notice period
Key Responsibilities:
- Data processing on Azure using ADF, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Services, and Data Pipelines
- Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.)
- Designing and implementing scalable data models and migration strategies
- Working on distributed big data batch or streaming pipelines (Kafka or similar)
- Developing data integration & transformation solutions for structured and unstructured data
- Collaborating with cross-functional teams for performance tuning and optimization
- Monitoring data workflows and ensuring compliance with governance and quality standards
- Driving continuous improvement through automation and DevOps practices
Mandatory Skills & Experience:
- 5–10 years of experience as a Data Engineer
- Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory
- Experience in Data Modelling, Data Migration, and Data Warehousing
- Good understanding of database structure principles and schema design
- Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms
- Experience with DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) — good to have
- Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub)
- Familiarity with visualization tools like Power BI or Tableau
- Strong analytical, problem-solving, and debugging skills
- Self-motivated, detail-oriented, and capable of managing priorities effectively
Review Criteria
- Strong Senior Data Engineer profile
- 4+ years of hands-on Data Engineering experience
- Must have experience owning end-to-end data architecture and complex pipelines
- Must have advanced SQL capability (complex queries, large datasets, optimization)
- Must have strong Databricks hands-on experience
- Must be able to architect solutions, troubleshoot complex data issues, and work independently
- Must have Power BI integration experience
- CTC structure is 80% fixed and 20% variable
Preferred
- Experience with call-center data and an understanding of the nuances of data generated in call centers
- Experience implementing data governance, quality checks, or lineage frameworks
- Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture
Job Specific Criteria
- CV Attachment is mandatory
- Are you Comfortable integrating with Power BI datasets?
- We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?
Role & Responsibilities
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities-
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
Ideal Candidate
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance.
- Proficiency in Python or another programming language is a plus.
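On the "complex queries, large datasets, optimization" point above, a small illustration using Python's built-in `sqlite3` (a stand-in here; the role itself targets Databricks SQL) shows the single most common optimisation lever: how an index changes a query's access path from a full scan to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events (user_id, amount) VALUES (?, ?)",
    [(i % 100, i * 0.5) for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN's last column describes the chosen access path
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT SUM(amount) FROM events WHERE user_id = 42"
before = plan(query)  # full table scan: every row is examined
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = plan(query)   # index search: only matching rows are touched

print(before, after)
```

The same reasoning scales up: in Databricks the analogous tools are partitioning, Z-ordering, and Delta data skipping, but the diagnostic habit of reading the query plan before and after a change is identical.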
ROLES AND RESPONSIBILITIES:
Standardization and Governance:
- Establishing and maintaining project management standards, processes, and methodologies.
- Ensuring consistent application of project management policies and procedures.
- Implementing and managing project governance processes.
Resource Management:
- Facilitating the sharing of resources, tools, and methodologies across projects.
- Planning and allocating resources effectively.
- Managing resource capacity and forecasting future needs.
Communication and Reporting:
- Ensuring effective communication and information flow among project teams and stakeholders.
- Monitoring project progress and reporting on performance.
- Communicating strategic work progress, including risks and benefits.
Project Portfolio Management:
- Supporting strategic decision-making by aligning projects with organizational goals.
- Selecting and prioritizing projects based on business objectives.
- Managing project portfolios and ensuring efficient resource allocation across projects.
Process Improvement:
- Identifying and implementing industry best practices into workflows.
- Improving project management processes and methodologies.
- Optimizing project delivery and resource utilization.
Training and Support:
- Providing training and support to project managers and team members.
- Offering project management tools, best practices, and reporting templates.
Other Responsibilities:
- Managing documentation of project history for future reference.
- Coaching project teams on implementing project management steps.
- Analysing financial data and managing project costs.
- Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
- Advising and supporting senior management.
IDEAL CANDIDATE:
- 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
- Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
- Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
- Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
- Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
- Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
- Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
- Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
Experience: 3+ years (Backend/Full-Stack)
Note: You will be the 3rd engineer on the team. If you are comfortable with Java and Spring Boot plus cloud, then you will easily be able to pick up the following stack.
Key Requirements —
- Primary Stack: Experience with .NET
- Cloud: Solid understanding of cloud platforms (preferably Azure)
- Frontend/DevOps: Familiarity with React and DevOps practices
- Architecture: Strong grasp of microservices
- Technical Skills: Basic proficiency in scripting, databases, and Git
Compensation: competitive salary, based on experience and fit
Review Criteria
- Strong IT Engineer Profile
- 4+ years of hands-on experience in Azure/Office 365 compliance and management, including policy enforcement, audit readiness, DLP, security configurations, and overall governance.
- Must have strong experience handling user onboarding/offboarding, identity & access provisioning, MFA, SSO configurations, and lifecycle management across Windows/Mac/Linux environments.
- Must have proven expertise in IT Inventory Management, including asset tracking, device lifecycle, CMDB updates, and hardware/software allocation with complete documentation.
- Hands-on experience configuring and managing FortiGate Firewalls, including routing, VPN setups, policies, NAT, and overall network security.
- Must have practical experience with FortiGate WiFi, AP configurations, SSID management, troubleshooting connectivity issues, and securing wireless environments.
- Must have strong knowledge and hands-on experience with Antivirus Endpoint Central (or equivalent) for patching, endpoint protection, compliance, and threat remediation.
- Must have solid understanding of Networking, including routing, switching, subnetting, DHCP, DNS, VPN, LAN/WAN troubleshooting.
- Must have strong troubleshooting experience across Windows, Linux, and macOS environments for system issues, updates, performance, and configurations.
- Must have expertise in Cisco/Polycom A/V solutions, including setup, configuration, video conferencing troubleshooting, and meeting room infrastructure support.
- Must have hands-on experience in Shell Scripting / Bash / PowerShell for automation of routine IT tasks, monitoring, and system efficiencies.
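Networking checks such as subnet membership, named in the criteria above, are natural scripting targets. A sketch using only Python's `ipaddress` module (the VLAN names and address ranges are invented):

```python
# Sketch of a routine subnetting check using only the standard library.
# The VLAN names and address ranges below are invented examples.
import ipaddress

vlans = {
    "corp": ipaddress.ip_network("10.10.0.0/24"),
    "guest": ipaddress.ip_network("10.10.20.0/24"),
}

def locate(host):
    """Return the name of the VLAN whose subnet contains `host`, if any."""
    addr = ipaddress.ip_address(host)
    for name, net in vlans.items():
        if addr in net:
            return name
    return None

assert locate("10.10.20.37") == "guest"       # inside 10.10.20.0/24
assert locate("8.8.8.8") is None              # not in any managed subnet
assert vlans["corp"].num_addresses == 256     # a /24 holds 256 addresses
```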
Job Specific Criteria
- CV Attachment is mandatory
- Q1. Please share details of experience in troubleshooting (Rate out of 10, 10 being highly experienced) A. Windows Troubleshooting B. Linux Troubleshooting C. Macbook Troubleshooting
- Q2. Please share details of experience in below process (Rate out of 10, 10 being highly experienced) A. User Onboarding/Offboarding B. Inventory Management
- Q3. Please share details of experience in below tools and administrations (Rate out of 10, 10 being highly experienced) A. FortiGate Firewall B. FortiGate WiFi C. Antivirus Endpoint Central D. Networking E. Cisco/Polycom A/V solutions F. Shell Scripting/Bash/PowerShell G. Azure/Office 365 compliance and management
- Q4. Are you okay with a face-to-face round (Noida)?
- Q5. What's your current company?
- Q6. Are you okay with rotational shifts (10am to 7pm and 2pm to 11pm)?
Role & Responsibilities
We are seeking an experienced IT Infrastructure/System Administrator to manage, secure, and optimize our IT environment. The ideal candidate will have expertise in enterprise-grade tools, strong troubleshooting skills, and hands-on experience configuring secure integrations, managing endpoint deployments, and ensuring compliance across platforms.
- Administer and manage the Office 365 suite (Outlook, SharePoint, OneDrive, Teams, etc.) and related services/configurations.
- Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
- Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
- Administer Jira, FortiGate firewalls and Wi-Fi, antivirus solutions, and endpoint management systems.
- Provide network administration: routing, subnetting, VPNs, and firewall configurations.
- Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
- Manage Assets Explorer for device and asset management/inventory.
- Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
- Provide remote support for end-users, ensuring quick resolution of technical issues.
- Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
- Collaborate with internal teams and external vendors to resolve issues and optimize systems.
- Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
Ideal Candidate
- Proven hands-on experience with:
- Office 365 administration and compliance.
- User onboarding/offboarding processes.
- Compliance, audit, and inventory management tools.
- Jira administration, FortiGate firewall, Wi-Fi, and antivirus solutions.
- Networking fundamentals: subnetting, routing, switching.
- Patch management, updates, and vulnerability remediation across Windows, Linux, and macOS.
- Assets Explorer/inventory management
- Strong troubleshooting, documentation, and communication skills.
Preferred Skills:
- Scripting knowledge in Bash, PowerShell for automation.
- Experience working with Jira and Confluence.
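As one concrete example of the automation scripting listed above, an onboarding helper might derive standardized usernames. It is sketched in Python for brevity (the team may equally use Bash or PowerShell), and the "first.last" naming convention and collision rule are hypothetical:

```python
# Sketch of one small onboarding step that scripting typically automates:
# deriving a standardized username for a new hire. The "first.last" naming
# convention and the collision rule are hypothetical.
import re

def make_username(full_name, taken=()):
    base = re.sub(r"[^a-z.]", "", ".".join(full_name.lower().split()))
    candidate, n = base, 0
    while candidate in taken:          # de-duplicate: jane.doe, jane.doe1, ...
        n += 1
        candidate = f"{base}{n}"
    return candidate

assert make_username("Jane Doe") == "jane.doe"
assert make_username("Jane Doe", taken={"jane.doe"}) == "jane.doe1"
```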
Job Overview
As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.
Responsibilities
- Forward-Looking Product Development: Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product. Proactively consider and address security, performance, and scalability requirements during development.
- Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
- DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
- Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
- Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
- Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
- Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.
Qualifications
Technical Skills:
- Strong experience with .NET Core for backend development and RESTful API design.
- Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
- Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
- Strong knowledge of product security best practices and experience implementing secure coding practices.
- Familiarity with QA processes and automated testing tools is a plus.
- Ability to support team members in solving technical challenges and sharing knowledge effectively.
Preferred Qualifications
- 3+ years of experience in software development, with a strong focus on .NET Core
- Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
- Strong problem-solving skills and ability to work in a fast-paced, startup environment.
What We Offer
- Opportunity to lead and grow within a dynamic and ambitious team.
- Challenging projects that focus on innovation and cutting-edge technology.
- Collaborative work environment with a focus on learning, mentorship, and growth.
- Competitive compensation, benefits, and stock options.
If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and you’re ready to make an impact, we’d love to meet you!
Hiring: Azure Data Engineer
⭐ Experience: 2+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Bangalore
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
Passport: Mandatory & Valid
(Only immediate joiners & candidates serving notice period)
Mandatory Skills:
Azure Synapse, Azure Databricks, Azure Data Factory (ADF), SQL, Delta Lake, ADLS, ETL/ELT, PySpark.
Responsibilities:
- Build and maintain data pipelines using ADF, Databricks, and Synapse.
- Develop ETL/ELT workflows and optimize SQL queries.
- Implement Delta Lake for scalable lakehouse architecture.
- Create Synapse data models and Spark/Databricks notebooks.
- Ensure data quality, performance, and security.
- Collaborate with cross-functional teams on data requirements.
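Pipelines like the ones above typically load incrementally against a watermark rather than reprocessing everything. A plain-Python sketch of that pattern (the rows and timestamps are made up):

```python
# Sketch of watermark-based incremental loading, the pattern ADF/Databricks
# pipelines commonly use to avoid full reloads. Timestamps and rows are made up.
from datetime import datetime

source = [
    {"id": 1, "updated": datetime(2024, 1, 1)},
    {"id": 2, "updated": datetime(2024, 1, 5)},
    {"id": 3, "updated": datetime(2024, 1, 9)},
]

def incremental_load(rows, watermark):
    """Return rows newer than the last watermark, plus the advanced watermark."""
    fresh = [r for r in rows if r["updated"] > watermark]
    new_watermark = max((r["updated"] for r in fresh), default=watermark)
    return fresh, new_watermark

batch, wm = incremental_load(source, datetime(2024, 1, 3))
assert [r["id"] for r in batch] == [2, 3]     # only rows past the watermark
assert wm == datetime(2024, 1, 9)
# A second run with the advanced watermark picks up nothing new.
assert incremental_load(source, wm)[0] == []
```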
Nice to Have:
Azure DevOps, Python, Streaming (Event Hub/Kafka), Power BI, Azure certifications (DP-203).
We are looking for an enthusiastic and dynamic individual to join Upland India as a Senior Software Engineer I (Backend) for our Panviva product. The individual will work with our global development team.
What would you do?
- Develop, review, test, and maintain application code
- Collaborate with other developers and product to fulfil objectives
- Troubleshoot and diagnose issues
- Take lead on tasks as needed
- Jump in and help the team deliver features when it is required
What are we looking for?
Experience
- 5+ years of experience in designing and implementing application architecture
- Back-end developer who enjoys solving problems
- Demonstrated experience with the .NET ecosystem (.NET Framework, ASP.NET, .NET Core) & SQL server
- Experience in building cloud-native applications (Azure)
- Must be skilled at writing quality, scalable, maintainable, testable code
Leadership Skills
- Strong communication skills
- Ability to mentor/lead junior developers
Primary Skills: The candidate must possess the following primary skills:
- Strong back-end developer who enjoys solving problems
- Solid experience with .NET Core, SQL Server, and .NET design patterns: a strong understanding of OOP principles, .NET-specific pattern implementations (DI, CQRS, Repository, etc.), SOLID architectural principles, unit-testing tools, and debugging techniques
- Applying patterns to improve scalability and reduce technical debt
- Experience with refactoring legacy codebases using design patterns
- Real-World Problem Solving
- Ability to analyze a problem and choose the most suitable design pattern
- Experience balancing performance, readability, and maintainability
- Experience building modern, scalable, reliable applications on the MS Azure cloud including services such as:
- App Services
- Azure Service Bus/ Event Hubs
- Azure API Management Service
- Azure Bot Service
- Function/Logic Apps
- Azure key vault & Azure Configuration Service
- Cosmos DB, MongoDB
- Azure Search
- Azure Cognitive Services
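The Repository and DI patterns named above can be sketched in a few lines. The role's stack is .NET; Python is used here only for brevity, and the domain model and in-memory store are hypothetical:

```python
# Language-neutral sketch of the Repository + Dependency Injection patterns
# named above (the role's stack is .NET; Python is used here for brevity).
# The domain model and in-memory store are hypothetical.
from abc import ABC, abstractmethod

class UserRepository(ABC):                      # abstraction the service depends on
    @abstractmethod
    def get(self, user_id): ...
    @abstractmethod
    def add(self, user_id, name): ...

class InMemoryUserRepository(UserRepository):   # swappable implementation
    def __init__(self):
        self._rows = {}
    def get(self, user_id):
        return self._rows.get(user_id)
    def add(self, user_id, name):
        self._rows[user_id] = name

class UserService:
    def __init__(self, repo: UserRepository):   # constructor injection
        self._repo = repo
    def display_name(self, user_id):
        return self._repo.get(user_id) or "<unknown>"

repo = InMemoryUserRepository()
repo.add(7, "Asha")
service = UserService(repo)   # a SQL-backed repository could be injected instead
assert service.display_name(7) == "Asha"
```

Because the service depends only on the abstraction, tests can inject the in-memory store while production injects a database-backed one, which is the scalability/technical-debt benefit the listing alludes to.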
Understanding Agile Methodology and Tool Familiarity
- Solid understanding of Agile development processes, including sprint planning, daily stand-ups, retrospectives, and backlog grooming
- Familiarity with Agile tools such as JIRA for tracking tasks, managing workflows, and collaborating across teams
- Experience working in cross-functional Agile teams and contributing to iterative development cycles
Secondary Skills: It would be advantageous if the candidate also has the following secondary skills:
- Experience with front-end React/jQuery/JavaScript, HTML, and CSS frameworks
- APM tools: experience with tools such as Grafana, New Relic, CloudWatch, etc.
- Basic Understanding of AI models
- Python
About Upland
Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software and headquartered in Bangalore. We are a remote-first company. Interviews and on-boarding are conducted virtually.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, Loki
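Routine-task automation of the kind listed above usually needs a retry-with-exponential-backoff helper for transient failures. A sketch with an injectable sleep so it runs instantly; the flaky operation is simulated:

```python
# Sketch of the retry-with-exponential-backoff idiom that much operational
# automation relies on. The "flaky operation" and delays are illustrative,
# and sleeping is injectable so the example runs instantly.
def retry(op, attempts=5, base_delay=1.0, sleep=lambda s: None):
    delays = []
    for i in range(attempts):
        try:
            return op(), delays
        except Exception:
            if i == attempts - 1:
                raise                       # out of attempts: surface the error
            delay = base_delay * (2 ** i)   # 1s, 2s, 4s, ...
            delays.append(delay)
            sleep(delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, delays = retry(flaky)
assert result == "ok" and delays == [1.0, 2.0]
```

In practice the same shape wraps cloud API calls, deployments, and health checks; adding jitter to the delay avoids thundering-herd retries.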
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Role: Azure AI Tech Lead
Experience: 3.5 to 7 years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
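The retrieval step of the RAG systems mentioned above can be sketched with bag-of-words cosine similarity. Production systems use learned embeddings and a vector store; the corpus here is made up:

```python
# The retrieval step of a RAG system, sketched with bag-of-words cosine
# similarity. Production systems use learned embeddings and a vector store;
# the corpus here is made up.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "azure openai service hosts large language models",
    "kubernetes schedules containers across a cluster",
    "fine tuning adapts a pretrained model to a task",
]
index = [Counter(d.split()) for d in docs]      # precomputed term counts

def retrieve(query, k=1):
    q = Counter(query.split())
    ranked = sorted(range(len(docs)), key=lambda i: cosine(q, index[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

# The retrieved passage would then be injected into the LLM prompt as context.
assert retrieve("which service hosts language models") == [docs[0]]
```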
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines, to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
We are looking for an "IoT Migration Architect (Azure to AWS)" for a Contract-to-Hire role.
"IoT Migration Architect (Azure to AWS)" – Role 1
Salary: 28-33 LPA (fixed)
We have other positions in IoT as well.
- IoT Solutions Engineer - Role 2
- IoT Architect – 8+ Yrs - Role -3
Design end-to-end IoT architecture, define strategy, and integrate hardware/software/cloud components.
Skills: Cloud platforms, AWS IoT, Azure IoT, networking protocols.
Experience in large-scale IoT deployments.
Contract to Hire role.
Location: Pune/Hyderabad/Chennai/Bangalore
Work Mode: Hybrid, 2-3 days from the office per week.
Duration: Long term, with potential for full-time conversion based on performance and business needs.
Notice period: 15-25 days (not more than that).
Client Company: a leading technology consulting firm.
Payroll Company: a leading IT services & staffing company (with a presence in India, the UK, Europe, Australia, New Zealand, the US, Canada, Singapore, Indonesia, and the Middle East).
Highlights of this role:
• It’s a long-term role.
• High possibility of conversion within 6 months, or after 6 months if you perform well.
• Interview: two rounds in total (both virtual), but one face-to-face meeting is mandatory at any of these locations: Pune/Hyderabad/Bangalore/Chennai.
Points to remember:
1. You should have valid experience and relieving letters from all your past employers.
2. Must be available to join within 15 days.
3. Must be ready to work 2-3 days a week from the client office.
4. Must have a continuous PF service history for the last 4 years.
What we offer during the role:
- Competitive Salary
- Flexible working hours and hybrid work mode.
- Potential for full-time conversion, including comprehensive benefits: PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16.
How to apply:
- Please fill in the summary sheet given below.
- Please provide your UAN service history.
- Please attach a recent photo.
IoT Migration Architect (Azure to AWS) - Job Description
Job Title: IoT Migration Architect (Azure to AWS)
Experience Range: 10+ Years
Role Summary
The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.
Required Technical Skills & Qualifications
10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.
Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).
Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).
Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.
Proficiency with IoT protocols such as MQTT, AMQP, HTTPS, and securing device communication (X.509).
Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).
Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS Code Pipeline, GitHub Actions).
Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).
Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.
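MQTT, listed among the required protocols above, routes messages by topic filters in which `+` matches exactly one level and `#` matches the remainder; a migration has to map these filters between the two clouds' rule engines. A stdlib sketch of the matching rules (topic names are made up):

```python
# Sketch of MQTT topic-filter matching ('+' matches one level, '#' matches
# the remainder), the rule set device-message routing builds on. Topic names
# are made up.
def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

assert topic_matches("devices/+/telemetry", "devices/pump-7/telemetry")
assert topic_matches("devices/#", "devices/pump-7/alerts/high")
assert not topic_matches("devices/+/telemetry", "devices/pump-7/alerts")
```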
Your full name –
Contact NO –
Alternate Contact No-
Email ID –
Alternate Email ID-
Total Experience –
Experience in IoT –
Experience in AWS IoT-
Experience in Azure IoT –
Experience in Kubernetes –
Experience in Docker –
Experience in EKS-
Do you have a valid passport –
Current CTC –
Expected CTC –
What is your notice period in your current Company-
Are you currently working or not-
If not working, when did you leave your last company –
Current location –
Preferred Location –
It’s a Contract-to-Hire role; are you okay with that –
Highest Qualification –
Current Employer (Payroll Company Name)
Previous Employer (Payroll Company Name)-
2nd Previous Employer (Payroll Company Name) –
3rd Previous Employer (Payroll Company Name)-
Are you holding any Offer –
Are you Expecting any offer -
Are you open to considering a Contract-to-Hire (C2H) role –
PF Deduction is happening in Current Company –
PF Deduction happened in 2nd last Employer-
PF Deduction happened in 3rd Previous Employer –
Latest Photo –
UAN Service History -
Shantpriya Chandra
Director & Head of Recruitment.
Harel Consulting India Pvt Ltd
https://www.linkedin.com/in/shantpriya/
www.harel-consulting.com
Infrastructure Engineer – Database & Storage
Responsibilities
- Design and maintain PostgreSQL, OpenSearch, and Azure Blob/S3 clusters.
- Implement schema registry, metadata catalog, and time-versioned storage.
- Configure read replicas, backups, encryption-at-rest, and WORM (Write Once Read Many) compliance.
- Optimize query execution, indexing, and replication latency.
- Partner with DevOps on infrastructure as code and cross-region replication.
Requirements
- 6+ years of database/data-infrastructure administration.
- Mastery of indexing, partitioning, query tuning, sharding.
- Proven experience deploying cloud-native DB stacks with Terraform or Helm.
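Sharding, named in the requirements above, is often implemented with consistent hashing so that adding a node remaps only roughly 1/N of the keys. A stdlib sketch (node names and the virtual-node count are illustrative):

```python
# Sketch of consistent-hash sharding: each key maps to the first node
# clockwise on a hash ring, so adding a node reassigns only a fraction of
# keys. Node names and the virtual-node count are illustrative.
import bisect
import hashlib

def _h(s):
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets `vnodes` points on the ring for smoothness.
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._hashes = [h for h, _ in self._ring]

    def node_for(self, key):
        idx = bisect.bisect(self._hashes, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["db-a", "db-b", "db-c"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}

bigger = HashRing(["db-a", "db-b", "db-c", "db-d"])
moved = sum(1 for k in keys if bigger.node_for(k) != before[k])
# Only the keys now owned by db-d move; everything else keeps its shard.
assert 0 < moved < 500
assert all(bigger.node_for(k) == "db-d" for k in keys if bigger.node_for(k) != before[k])
```

Naive modulo sharding (`hash(key) % N`) would instead remap almost every key when N changes, which is why ring-based schemes dominate in distributed stores.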
Job Title: Infrastructure Engineer
Experience: 4.5+ years
Location: Bangalore
Employment Type: Full-Time
Joining: Immediate Joiner Preferred
💼 Job Summary
We are looking for a skilled Infrastructure Engineer to manage, maintain, and enhance our on-premise and cloud-based systems. The ideal candidate will have strong experience in server administration, virtualization, hybrid cloud environments, and infrastructure automation. This role requires hands-on expertise, strong troubleshooting ability, and the capability to collaborate with cross-functional teams.
Roles & Responsibilities
- Install, configure, and manage Windows and Linux servers.
- Maintain and administer Active Directory, DNS, DHCP, and file servers.
- Manage virtualization platforms such as VMware or Hyper-V.
- Monitor system performance, logs, and uptime to ensure high availability.
- Provide L2/L3 support, diagnose issues, and maintain detailed technical documentation.
- Deploy and manage cloud servers and resources in AWS, Azure, or Google Cloud.
- Design, build, and maintain hybrid environments (on-premises + cloud).
- Administer data storage systems and implement/test backup & disaster recovery plans.
- Handle cloud services such as cloud storage, networking, and identity (IAM, Azure AD).
- Ensure compliance with security standards like ISO, SOC, GDPR, PCI DSS.
- Integrate and manage monitoring and alerting tools.
- Support CI/CD pipelines and automation for infrastructure deployments.
- Collaborate with Developers, DevOps, and Network teams for seamless system integration.
- Troubleshoot and resolve complex infrastructure & system-level issues.
Key Skills Required
- Windows Server & Linux Administration
- VMware / Hyper-V / Virtualization technologies
- Active Directory, DNS, DHCP administration
- Knowledge of CI/CD and Infrastructure as Code
- Hands-on experience in AWS, Azure, or GCP
- Experience with cloud migration and hybrid cloud setups
- Proficiency in backup, replication, and disaster recovery tools
- Familiarity with automation tools (Terraform, Ansible, etc. preferred)
- Strong troubleshooting and documentation skills
- Understanding of networking concepts (TCP/IP, VPNs, firewalls, routing) is an added advantage
About the Job
This is a full-time role for a Lead DevOps Engineer at Spark Eighteen. We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible with preference for the Delhi NCR region.
Responsibilities
- Lead and mentor the DevOps/SRE team
- Define and drive DevOps strategy and roadmaps
- Oversee infrastructure automation and CI/CD at scale
- Collaborate with architects, developers, and QA teams to integrate DevOps practices
- Ensure security, compliance, and high availability of platforms
- Own incident response, postmortems, and root cause analysis
- Own budgeting, team hiring, and performance evaluation
Requirements
Technical Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 7+ years of professional DevOps experience with demonstrated progression.
- Strong architecture and leadership background
- Deep hands-on knowledge of infrastructure as code, CI/CD, and cloud
- Proven experience with monitoring, security, and governance
- Effective stakeholder and project management
- Experience with tools like Jenkins, ArgoCD, Terraform, Vault, ELK, etc.
- Strong understanding of business continuity and disaster recovery
Soft Skills
- Cross-functional communication excellence with ability to lead technical discussions.
- Strong mentorship capabilities for junior and mid-level team members.
- Advanced strategic thinking and ability to propose innovative solutions.
- Excellent knowledge transfer skills through documentation and training.
- Ability to understand and align technical solutions with broader business strategy.
- Proactive problem-solving approach with focus on continuous improvement.
- Strong leadership skills in guiding team performance and technical direction.
- Effective collaboration across development, QA, and business teams.
- Ability to make complex technical decisions with minimal supervision.
- Strategic approach to risk management and mitigation.
What We Offer
- Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
- Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
- Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
- Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
- Career Advancement: Clear progression pathways as you develop skills within our growing organization
- Competitive Compensation: Attractive salary packages that recognize your contributions and expertise
Our Culture
At Spark Eighteen, our culture centers on innovation, excellence, and growth. We believe in:
- Quality-First: Delivering excellence rather than just quick solutions
- True Partnership: Building relationships based on trust and mutual respect
- Communication: Prioritizing clear, effective communication across teams
- Innovation: Encouraging curiosity and creative approaches to problem-solving
- Continuous Learning: Supporting professional development at all levels
- Collaboration: Combining diverse perspectives to achieve shared goals
- Impact: Measuring success by the value we create for clients and users
Apply Here - https://tinyurl.com/t6x23p9b
What You Will be Doing:
● Develop and maintain software that is scalable, secure, and efficient
● Collaborate with Technical Architects & Business Analysts
● Architect and design software solutions that meet project requirements
● Mentor and train junior developers to improve their skills and knowledge
● Conduct code reviews ensuring the code is maintainable, readable, and efficient
● Research and evaluate new technologies to improve the processes
● Communicate effectively, particularly when documenting and explaining code and technical concepts.
Skills We Are Looking For:
● 5+ years of extensive hands-on experience with Node.js and TypeScript
● Strong understanding of RESTful API design and implementation.
● Comfortable with debugging, performance tuning, and optimizing Node.js applications.
● Strong problem-solving abilities and attention to detail.
● Experience with authentication and authorization protocols, such as OAuth, JWT and session management.
● Understanding of security best practices in backend development, including data encryption and vulnerability mitigation.
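The JWT mechanics listed above are worth understanding beyond the library level. A minimal sketch of HS256 signing and verification using only the Python standard library (the secret and claims are hypothetical; in production you would use a maintained library such as PyJWT or jsonwebtoken):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison prevents timing attacks on the signature
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return claims

token = sign_jwt({"sub": "user-123", "exp": time.time() + 3600}, b"demo-secret")
print(verify_jwt(token, b"demo-secret")["sub"])  # user-123
```

The same verify-then-check-expiry order applies regardless of library: never trust claims before the signature passes.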
Bonus Skills
● Experience with server-side frameworks such as Express.js or NestJS.
● Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud, with Google Cloud preferred) and their services for backend deployment.
● Familiarity with NoSQL databases (MongoDB preferred), and the ability to design and optimize database queries.
Why You’ll Love It Here
● Innovative Culture - We believe in pushing boundaries
● Impactful Work - You won’t just write code, you will help build the future
● Collaborative Environment - We believe that everyone has a voice that matters
● Work Life Balance - Our flexible work environment encourages you to have space to recharge
Position Name: Azure Infrastructure Engineer
Years of Experience: 8+
Mode: Remote
Key Skill Set:
Cloud Networking
Azure Networking
Hub-and-Spoke Architecture
Azure Kubernetes Service (AKS)
Terraform Automation
State File Management (Terraform)
Disaster Recovery (DR) Setup
High Availability Configuration
Network Troubleshooting (Azure)
AKS Networking
Scaling and Load Balancing (AKS)
VM Deployment (Terraform)
Service Mesh
Helm for AKS
Production-Level Infrastructure Management
Network Security Groups (NSG)
Container Management (AKS)
Detailed Job Description:
Key Responsibilities:
Operate and troubleshoot production AKS clusters
Build and deploy workloads using Docker and Helm
Automate infra provisioning with Terraform
Configure autoscaling using KEDA and HPA
Manage Istio or equivalent service mesh (ingress, routing, mTLS)
Maintain robust CI/CD pipelines (Azure DevOps/GitHub Actions)
Handle complex Azure networking (VNet, NSG, DNS, LB, peering)
Support and execute disaster recovery procedures
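On the autoscaling responsibilities above: the HPA controller's core formula is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), applied only when the ratio falls outside a tolerance band. A small Python sketch of that calculation (the metric values are hypothetical):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         tolerance: float = 0.1) -> int:
    """Replica count per the HPA algorithm: ceil(current * ratio),
    with no scaling while the ratio stays within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: leave replica count alone
    return math.ceil(current_replicas * ratio)

# 4 pods averaging 180% of a 100% CPU target -> scale out to 8
print(hpa_desired_replicas(4, 180.0, 100.0))  # 8
```

Kubernetes layers stabilization windows and min/max replica bounds on top of this, but the ratio calculation is the heart of both plain HPA and KEDA-driven scaling.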
You will be responsible for building a highly scalable, extensible, and robust application. This position reports to the Engineering Manager.
Responsibilities:
- Align Sigmoid with key client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Understand business requirements and tie them to technology solutions
- Be open to working from the client location as the project/customer demands
- Facilitate the technical aspects of delivery
- Develop and evolve highly scalable and fault-tolerant distributed components using Java technologies
- Bring excellent experience in application development and support, integration development, and quality assurance
- Provide technical leadership and manage delivery on a day-to-day basis
- Stay up to date on the latest technology to ensure the greatest ROI for customers and Sigmoid
- Remain a hands-on coder with a good understanding of enterprise-level code
- Design and implement APIs, abstractions, and integration patterns to solve challenging distributed computing problems
- Define technical requirements, perform data extraction and transformation, automate and productionize jobs, and explore new big data technologies within a parallel processing environment
- Culture
- Must be a strategic thinker with the ability to think unconventionally / out-of-the-box.
- Analytical and solution-driven orientation.
- Raw intellect, talent, and energy are critical.
- Entrepreneurial and agile: understands the demands of a private, high-growth company.
- Ability to be both a leader and a hands-on "doer".
Qualifications:
- A 3-5 year track record of relevant work experience and a degree in Computer Science or a related technical discipline are required.
- Experience developing enterprise-scale applications and capability in building frameworks, design patterns, etc. Should be able to understand and tackle technical challenges and propose comprehensive solutions.
- Experience with functional and object-oriented programming; Java (preferred) or Python is a must.
- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch.
- Development and support experience in Big Data domain
- Experience with database modelling and development, data mining and warehousing.
- Unit, Integration and User Acceptance Testing.
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment.
Preferred Qualification:
- Experience in Agile methodology.
- Proficient with SQL and its variation among popular databases.
- Experience working with large, complex data sets from a variety of sources.
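For readers newer to the MapReduce model named in the qualifications above, the essential map → shuffle → reduce flow can be sketched in plain Python with a word count (no Hadoop required; the function names are illustrative only):

```python
from collections import defaultdict

def map_phase(document: str):
    # Map: emit a (key, 1) pair for every word in the document
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key (Hadoop does this between stages)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a single result
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big pipelines", "data engineering"]
pairs = (pair for doc in docs for pair in map_phase(doc))
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"])  # 2 2
```

Real frameworks (Hadoop, PySpark) distribute the map and reduce steps across machines, but the programming model is exactly this shape.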
Your Responsibilities
1. Design, implement, and maintain cloud infrastructure on Azure, ensuring scalability, reliability, and security.
2. Set up and manage Kubernetes clusters for container orchestration, including deployment, scaling, and monitoring.
3. Develop and maintain automated deployment pipelines using GitHub Actions or similar CI/CD tools.
4. Use tools such as Helm and Ansible, alongside Kubernetes manifests, for infrastructure as code (IaC) and automated provisioning.
5. Collaborate closely with development teams to streamline workflows and improve release processes.
6. Monitor system performance, troubleshoot issues, and implement solutions to ensure optimal uptime and performance.
7. Stay up to date with industry best practices, emerging technologies, and trends in DevOps and cloud computing.
The Skills you’ll need
In order to be successful in this role, you must have the following skills and experience:
1. 4-5 years of professional experience in DevOps or related roles.
2. Strong experience with cloud platforms such as AWS or Azure.
3. Proficiency in setting up and managing Kubernetes clusters.
4. Hands-on experience with containerization technologies like Docker.
5. Demonstrated expertise in building and maintaining CI/CD pipelines using GitHub Actions or similar tools.
6. Solid understanding of infrastructure as code (IaC) principles, with experience using tools like Helm, Ansible, and Kubernetes manifests.
7. Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
8. Proven problem-solving skills and the ability to thrive in a fast-paced, dynamic environment.
Good to have skills
1. A self-starter and highly motivated individual, prepared to use their own initiative in understanding and following up on issues.
2. Takes ownership of and responsibility for problems through to resolution.
3. Keen to learn business processes and technical skills.
4. Ability to work under pressure and multi-task when necessary.
5. Hands-on experience using tools like Trello, GitLab, and Zoom.
Title – Principal Cloud Architect
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!
External Job Title :
Principal Cloud Cost Optimization Engineer
Position Responsibilities :
The Cloud Cost Optimization Engineer plays a key role in supporting the full lifecycle of cloud financial management (FinOps) at Deltek—driving visibility, accountability, and efficiency across our cloud investments. This role is responsible for managing cloud spend, forecasting, and identifying optimization opportunities that support Deltek's cloud expansion and financial performance goals.
We are seeking a candidate with hands-on experience in Cloud FinOps practices, software development capabilities, AI/automation expertise, strong analytical skills, and a passion for driving financial insights that enable smarter business decisions. The ideal candidate is a self-starter with excellent cross-team collaboration abilities and a proven track record of delivering results in a fast-paced environment.
Key Responsibilities:
- Prepare and deliver monthly reports and presentations on cloud spend performance versus plan and forecast for Finance, IT, and business leaders.
- Support the evaluation, implementation, and ongoing management of cloud consumption and financial management tools.
- Apply financial and vendor management principles to support contract optimization, cost modeling, and spend management.
- Clearly communicate technical and financial insights, presenting complex topics in a simple, actionable manner to both technical and non-technical audiences.
- Partner with engineering, product, and infrastructure teams to identify cost drivers, promote best practices for efficient cloud consumption, and implement savings opportunities.
- Lead cost optimization initiatives, including analyzing and recommending savings plans, reserved instances, and right-sizing opportunities across AWS, Azure, and OCI.
- Collaborate with the Cloud Governance team to ensure effective tagging strategies and alerting frameworks are deployed and maintained at scale.
- Support forecasting by partnering with infrastructure and engineering teams to understand demand plans and proactively manage capacity and spend.
- Build and maintain financial models and forecasting tools that provide actionable insights into current and future cloud expenditures.
- Develop and maintain automated FinOps solutions using Python, SQL, and cloud-native services (Lambda, Azure Functions) to streamline cost analysis, anomaly detection, and reporting workflows.
- Design and implement AI-powered cost optimization tools leveraging GenAI APIs (OpenAI, Claude, Bedrock) to automate spend analysis, generate natural language insights, and provide intelligent recommendations to stakeholders.
- Build custom integrations and data pipelines connecting cloud billing APIs, FinOps platforms, and internal systems to enable real-time cost visibility and automated alerting.
- Develop and sustain relationships with internal stakeholders, onboarding them to FinOps tools, processes, and continuous cost optimization practices.
- Create and maintain KPIs, scorecards, and financial dashboards to monitor cloud spend and optimization progress.
- Drive a culture of optimization by translating financial insights into actionable engineering recommendations, promoting cost-conscious architecture, and leveraging automation for resource optimization.
- Use FinOps tools and services to analyze cloud usage patterns and provide technical cost-saving recommendations to application teams.
- Develop self-service FinOps portals and chatbots using GenAI to enable teams to query cost data, receive optimization recommendations, and understand cloud spending through natural language interfaces.
- Leverage Generative AI tools to enhance FinOps automation, streamline reporting, and improve team productivity across forecasting, optimization, and anomaly detection.
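As one concrete illustration of the automated anomaly detection described above, a rolling z-score over daily spend is a common starting point. A minimal Python sketch (the spend figures are hypothetical; a real pipeline would pull them from the cloud billing APIs):

```python
import statistics

def flag_anomalies(daily_spend, window=7, threshold=3.0):
    """Flag days whose spend deviates more than `threshold` standard
    deviations from the mean of the preceding `window` days."""
    anomalies = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
        z = (daily_spend[i] - mean) / stdev
        if abs(z) > threshold:
            anomalies.append((i, daily_spend[i], round(z, 1)))
    return anomalies

# Hypothetical daily spend (USD): stable week, then a spike on the last day
spend = [100, 102, 98, 101, 99, 103, 100, 97, 500]
print(flag_anomalies(spend))
```

Production FinOps tooling typically adds seasonality handling (weekday vs. weekend patterns) on top of a baseline check like this before alerting.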
Qualifications :
- Bachelor's degree in Finance, Computer Science, Information Systems, or a related field.
- 4+ years of professional experience in Cloud FinOps, IT Financial Management, or Cloud Cost Governance within an IT organization.
- 6-8 years of overall experience in Cloud Infrastructure Management, DevOps, Software Development, or related technical roles with hands-on cloud platform expertise.
- Hands-on experience with native cloud cost management tools (e.g., AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis) and/or third-party FinOps platforms (e.g., Cloudability, CloudHealth, Apptio).
- Proven experience working within the FinOps domain in a large enterprise environment.
- Strong background in building and managing custom reports, dashboards, and financial insights.
- Deep understanding of cloud financial management practices, including chargeback/showback models, cost savings and avoidance tracking, variance analysis, and financial forecasting.
- Solid knowledge of cloud provider pricing models, billing structures, and optimization strategies.
- Practical experience with cloud optimization and governance practices such as anomaly detection, capacity planning, rightsizing, tagging strategies, and storage lifecycle policies.
- Skilled in leveraging automation to drive operational efficiency in cloud cost management processes.
- Strong analytical and data storytelling skills, with the ability to collect, interpret, and present complex financial and technical data to diverse audiences.
- Experience developing KPIs, scorecards, and metrics aligned with business goals and industry benchmarks.
- Ability to influence and drive change management initiatives that increase adoption and maturity of FinOps practices.
- Highly results-driven, detail-oriented, and goal-focused, with a passion for continuous improvement.
- Strong communicator and collaborative team player with a passion for mentoring and educating others.
- Strong proficiency in Python and SQL for data analysis, automation, and tool development, with demonstrated experience building production-grade scripts and applications.
- Hands-on development experience building automation solutions, APIs, or internal tools for cloud management or financial operations.
- Practical experience with GenAI technologies including prompt engineering, and integrating LLM APIs (OpenAI, Claude, Bedrock) into business workflows.
- Experience with Infrastructure as Code (Terraform etc.) and CI/CD pipelines for deploying FinOps automation and tooling.
- Familiarity with data visualization libraries (e.g., Power BI) and building interactive dashboards programmatically.
- Knowledge of ML/AI frameworks is a plus.
- Experience building chatbots or conversational AI interfaces for internal tooling is a plus.
- FinOps Certified Practitioner.
- AWS, Azure, or OCI cloud certifications are preferred.
Knowledge / Skills / Abilities
- Bachelor’s Degree or equivalent in Computer Science or a related numerate discipline
- Strong application development experience within a Microsoft .NET based environment
- Demonstrates good written and verbal communication skills in leading a small group of Developers and liaising with Business and IT stakeholders on agreed delivery commitments
- Demonstrates capability of proactive end-to-end ownership of product delivery including delivery to estimates, robust hosting, smooth release management, and consistency in quality to ensure overall satisfied product enhancement experience for the Product Owner and the wider, global end-user community
- Demonstrates good proficiency with the established (ageing) tech stack, while also gaining familiarity with new technologies in line with market trends; this will be key to mutual success for the candidate and the organization in the context of ongoing Business and IT transformation.
Essential skills
- Proficiency in C#, ASP.NET Webforms, MVC, WebAPI, jQuery, Angular, Entity Framework, SQL Server, XML, XSLT, JSON, .NET Core
- Familiarity with WPF, WCF, SSIS, SSRS, Azure DevOps, Cloud Technologies (Azure)
- Good analytical and communication skills
- Proficiency with Agile and Waterfall methodologies
Desirable skills
Familiarity with hosting and server infrastructure
Required Experience: 10+ years
Knowledge & Experience:
- Providing technical leadership and guidance to teams on data and analytics engineering solutions and platforms
- Strong problem-solving skills and the ability to translate business requirements into actionable data science solutions.
- Excellent communication skills, with the ability to effectively convey complex ideas to technical and non-technical stakeholders.
- Strong team player with excellent interpersonal and collaboration skills.
- Ability to manage multiple projects simultaneously and deliver high-quality results within specified timelines.
- Proven ability to work collaboratively in a global, matrixed environment and engage effectively with global stakeholders across multiple business groups.
Relevant Experience:
- 12+ years of IT experience in delivering medium-to-large data engineering, and analytics solutions
- Minimum 4 years of experience working with Azure Databricks, Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL, Power BI, SAC, and other BI, data visualization, and exploration tools
- Deep understanding of master data management & governance concepts and methodologies
- Experience in Data Modelling & Source System Analysis
- Familiarity with PySpark
- Mastery of SQL
- Experience with the Python programming language for data engineering purposes.
- Ability to conduct data profiling, cataloging, and mapping for technical design and construction of technical data flows.
- Preferred but not required:
- Microsoft Certified: Azure Data Engineer Associate
- Experience in preparing data for Data Science and Machine Learning
- Knowledge of Jupyter Notebooks or Databricks Notebooks for Python development
- Power BI Dataset Development and DAX
- Power BI Report development
- Exposure to AI services in Azure and Agentic Analytics solutions
Job Summary
We are looking for an experienced Backend Developer proficient in .NET, Node.js, and MS SQL Server to join our technical team. The candidate will be responsible for building, maintaining, and optimizing scalable backend services and APIs, ensuring system reliability, performance, and security.
Key Responsibilities
- Design, develop, and maintain backend applications and APIs using .NET (Core/ASP.NET) and Node.js.
- Develop and manage MS SQL Server databases, including schema design, stored procedures, indexing, and performance optimization.
- Integrate backend logic with various third-party systems and APIs.
- Ensure scalability, high performance, and security across backend systems.
- Write clean, maintainable, and well-documented code following best practices.
- Debug and resolve production issues, ensuring system stability and reliability.
- Collaborate with QA engineers, DevOps, and other backend developers to deliver end-to-end solutions.
- Participate in Agile development processes including sprint planning, daily stand-ups, and retrospectives.
- Stay updated with emerging backend technologies and contribute to continuous improvement.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 3–4 years of professional experience in backend development.
- Strong hands-on experience with .NET Core / ASP.NET / C#.
- Strong hands-on experience with Node.js (Express.js or NestJS preferred).
- Proficiency in MS SQL Server (T-SQL, stored procedures, performance tuning, query optimization).
- Experience developing and consuming RESTful APIs.
- Knowledge of API security standards (JWT, OAuth2, etc.).
- Familiarity with Git or other version control systems.
- Experience in Agile/Scrum development environments.
Nice to Have
- Experience with cloud platforms like Azure or AWS.
- Familiarity with ORM frameworks (Entity Framework, Sequelize).
- Exposure to CI/CD pipelines and containerization (Docker).
- Understanding of Redis or message queue systems (RabbitMQ, Kafka).
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and teamwork skills.
- High sense of responsibility and ownership of assigned projects.
- Ability to work independently under minimal supervision.
Compensation
- Competitive salary based on experience and technical expertise.
- Performance-based bonuses and career growth opportunities.
Job Type: Full-time
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
Who we are :
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
What You Will Do :
As a Data Governance Developer at Kanerika, you will be responsible for building and managing metadata, lineage, and compliance frameworks across the organization's data ecosystem.
Required Qualifications :
- 4 to 6 years of experience in data governance or data management.
- Strong experience in Microsoft Purview and Informatica governance tools.
- Proficient in tracking and visualizing data lineage across systems.
- Familiar with Azure Data Factory, Talend, dbt, and other integration tools.
- Understanding of data regulations: GDPR, CCPA, SOX, HIPAA.
- Ability to translate technical data governance concepts for business stakeholders.
Tools & Technologies :
- Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog
- Experience in Microsoft Purview areas :
1. Label creation and policy management
2. Publish/Auto-labeling
3. Data Loss Prevention & Compliance handling
4. Compliance Manager, Communication Compliance, Insider Risk Management
5. Records Management, Unified Catalog, Information Barriers
6. eDiscovery, Data Map, Lifecycle Management, Compliance Alerts, Audit
7. DSPM, Data Policy
Key Responsibilities :
- Set up and manage Microsoft Purview accounts, collections, and access controls (RBAC).
- Integrate Purview with data sources: Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.
- Schedule and monitor metadata scanning and classification jobs.
- Implement and maintain collection hierarchies aligned with data ownership.
- Design metadata ingestion workflows for technical, business, and operational metadata.
- Enrich data assets with business context: descriptions, glossary terms, tags.
- Synchronize metadata across tools using REST APIs, PowerShell, or ADF.
- Validate end-to-end lineage for datasets and reports (ADF → Synapse → Power BI).
- Resolve lineage gaps or failures using mapping corrections or scripts.
- Perform impact analysis to support downstream data consumers.
- Create custom classification rules for sensitive data (PII, PCI, PHI).
- Apply and manage Microsoft Purview sensitivity labels and policies.
- Integrate with Microsoft Information Protection (MIP) for DLP.
- Manage business glossary in collaboration with domain owners and stewards.
- Implement approval workflows and term governance.
- Conduct audits for glossary and metadata quality and consistency.
- Automate Purview operations using PowerShell, Azure Functions, Logic Apps, and REST APIs.
- Build pipelines for dynamic source registration and scanning.
- Automate tagging, lineage, and glossary term mapping.
- Enable operational insights using Power BI, Synapse Link, Azure Monitor, and governance APIs.
Designation: Senior Python Django Developer
Position: Senior Python Developer
Job Types: Full-time, Permanent
Pay: Up to ₹800,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Bhopal (Indrapuri, MP) and Bangalore (JP Nagar)
Experience: Back-end development: 4 years (Required)
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 4 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
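One pattern central to the transaction-processing and payment-gateway work described above is the idempotency key, which keeps a retried payment request from being charged twice. A framework-agnostic Python sketch (the in-memory dict stands in for a database table; all names here are hypothetical):

```python
import uuid

class PaymentProcessor:
    """Processes each payment at most once per client-supplied idempotency key."""

    def __init__(self):
        self._results = {}   # idempotency_key -> recorded result
        self.charges = []    # side effects actually performed

    def charge(self, idempotency_key: str, amount_paise: int) -> dict:
        if idempotency_key in self._results:
            # Retry of an already-processed request: replay the stored result
            return self._results[idempotency_key]
        txn = {"txn_id": str(uuid.uuid4()),
               "amount_paise": amount_paise,
               "status": "captured"}
        self.charges.append(txn)            # the non-repeatable side effect
        self._results[idempotency_key] = txn
        return txn

proc = PaymentProcessor()
first = proc.charge("order-42", 49_900)
retry = proc.charge("order-42", 49_900)  # e.g. the client timed out and retried
print(first["txn_id"] == retry["txn_id"], len(proc.charges))  # True 1
```

In production the check-and-record step must be atomic, e.g. via a unique constraint on the key column inside a database transaction; otherwise two concurrent retries can both miss the lookup and double-charge.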
To design, develop, and maintain highly scalable, secure, and efficient backend systems that power core business applications. The Senior Engineer – Backend will be responsible for architecting APIs, optimizing data flow, and ensuring system reliability and performance. This role will collaborate closely with frontend, DevOps, and product teams to deliver robust solutions that enable seamless user experiences and support organizational growth through clean, maintainable, and well-tested code.
Responsibilities:
• Design, develop, and maintain robust and scalable backend services using Node.js.
• Collaborate with front-end developers and product managers to define and implement
API specifications.
• Optimize application performance and scalability by identifying bottlenecks and
proposing solutions.
• Write clean, maintainable, and efficient code, and conduct code reviews to ensure
quality standards.
• Develop unit tests and maintain code coverage to ensure high quality.
• Document architectural solutions and system designs to ensure clarity and
maintainability.
• Troubleshoot and resolve issues in development, testing, and production environments.
• Stay up to date with emerging technologies and industry trends to continuously improve
our tech stack.
• Mentor and guide junior engineers, fostering a culture of learning and growth.
Key Skills and Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent
experience).
• 7+ years of experience in backend development with a focus on Node.js and JavaScript.
• Strong understanding of RESTful APIs and microservices architecture.
• Proficiency in database technologies (SQL and NoSQL, such as DynamoDB, MongoDB,
PostgreSQL, etc.).
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
• Knowledge of cloud platforms (AWS) and deployment best practices.
• Excellent problem-solving skills and the ability to work in a fast-paced environment.
• Strong communication and teamwork skills.
Good to have:
• Experience with front-end frameworks (e.g. Angular, React, Vue.js).
• Understanding of HTML, CSS, and JavaScript.
• Familiarity with responsive design and user experience principles.
Title: Senior VDI Consultant – Citrix/Azure/VMware
Location: Pune, Balewadi
Shift: 3:30 PM – 12:30 AM IST (Aligned to US ET support window)
Full-time role
About: Join the core team delivering 24x7 operational support for a large-scale Virtual Desktop and Applications environment supporting 90,000–100,000 users and 50,000+ VMs across Citrix, Azure, and VMware platforms.
Key Responsibilities
- Provide advanced support, monitoring, and maintenance for Citrix Virtual Apps and Desktops, Azure Virtual Desktop, and VMware environments.
- Lead efforts to enhance image management, patching, provisioning, and migration processes.
- Manage incident triage, root cause analysis, and documentation within ITSM tools.
- Create and maintain PowerShell scripts and other automation solutions for routine tasks.
- Collaborate with US and offshore teams for seamless 24x7 support operations.
- Prepare daily and weekly operational reports summarizing key metrics, incidents, and improvement opportunities.
- Participate in client review meetings to discuss performance and backlog reduction.
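The daily operational reports mentioned above typically start from a small aggregation step; the sketch below assumes a hypothetical ticket shape (severity and status fields) pulled from an ITSM export, purely for illustration.

```python
from collections import Counter
from datetime import date

def summarize_incidents(incidents):
    """Roll up one day's incident tickets into the counts a daily
    operational report needs. The ticket dict shape is an assumption."""
    by_severity = Counter(i["severity"] for i in incidents)
    open_count = sum(1 for i in incidents if i["status"] != "resolved")
    return {
        "date": date.today().isoformat(),
        "total": len(incidents),
        "open": open_count,
        "by_severity": dict(by_severity),
    }

tickets = [
    {"severity": "P1", "status": "resolved"},
    {"severity": "P2", "status": "open"},
    {"severity": "P2", "status": "resolved"},
]
report = summarize_incidents(tickets)
```

In practice the same aggregation is often done in PowerShell against ServiceNow exports; the structure (group by severity, count open vs. resolved) is the same either way.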
Required Skills & Qualifications
- 7+ years in End User Compute (EUC) operations with 5+ years on Azure, Citrix, and VMware platforms.
- Proven experience in Citrix Virtual Apps and Desktops, Azure Virtual Desktop, VMware Horizon, and Windows OS troubleshooting.
- Strong proficiency in PowerShell scripting for automation and reporting.
- Familiarity with ITIL processes (Incident, Problem, Change Management).
- Hands-on experience with ITSM tools (ServiceNow, Remedy, etc.).
- Excellent communication skills for collaboration in distributed teams.
- Ability to work independently and drive technical improvements.
Preferred Skills
- Expertise in Citrix Cloud, Citrix Studio, MCS/PVS, and advanced troubleshooting.
- Experience supporting large-scale enterprise virtual desktop environments (50k+ users).
- Exposure to automation and orchestration frameworks in VDI environments.
Reporting
- Submit daily operational summaries and weekly performance reports covering trends, incidents, and process improvements.
- Collaborate with the Domestic CVS Operations Manager (US) for review meetings and escalation handling.
Why Join
- Opportunity to be part of a flagship virtualization project from its foundation phase.
- High-visibility role with potential to grow into a team lead as the project scales to 15–20 members in 2026.
- Work with a global team managing one of the largest Citrix/Azure VDI environments.

Global digital transformation solutions provider.
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes:
Interpret the application/feature/component design to develop the same in accordance with specifications.
Code, debug, test, document, and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
Optimise efficiency, cost, and quality.
Influence and improve customer satisfaction.
Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to project schedule / timelines
Number of technical issues uncovered during the execution of the project
Number of defects in the code
Number of defects post-delivery
Number of non-compliance issues
On time completion of mandatory compliance trainings
Outputs Expected:
Code:
- Code as per design
- Follow coding standards, templates, and checklists
- Review code for team and peers
Documentation:
- Create/review templates, checklists, guidelines, and standards for design/process/development
- Create/review deliverable documents: design documentation, requirements, test cases/results
Configure:
- Define and govern configuration management plan
- Ensure compliance from the team
Test:
- Review and create unit test cases, scenarios, and execution
- Review test plan created by testing team
- Provide clarifications to the testing team
Domain relevance:
- Advise software developers on design and development of features and components with a deep understanding of the business problem being addressed for the client
- Learn more about the customer domain, identifying opportunities to provide valuable additions to customers
- Complete relevant domain certifications
Manage Project:
- Manage delivery of modules and/or manage user stories
Manage Defects:
- Perform defect RCA and mitigation
- Identify defect trends and take proactive measures to improve quality
Estimate:
- Create and provide input for effort estimation for projects
Manage Knowledge:
- Consume and contribute to project-related documents, SharePoint, libraries, and client universities
- Review the reusable documents created by the team
Release:
- Execute and monitor release process
Design:
- Contribute to creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models
Interface with Customer:
- Clarify requirements and provide guidance to development team
- Present design options to customers
- Conduct product demos
Manage Team:
- Set FAST goals and provide feedback
- Understand aspirations of team members and provide guidance, opportunities, etc.
- Ensure team is engaged in project
Certifications:
- Take relevant domain/technology certification
Skill Examples:
- Explain and communicate the design/development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces and business software components
- Use data models
- Estimate time, effort, and resources required for developing/debugging features/components
- Perform and evaluate tests in the customer or target environment
- Make quick decisions on technical/project-related challenges
- Manage a team: mentor and handle people-related issues in the team
- Maintain high motivation levels and positive dynamics in the team
- Interface with other teams, designers, and other parallel practices
- Set goals for self and team; provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers, addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks
- Build confidence with customers by meeting deliverables on time with quality
- Make appropriate utilization of software/hardware
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs/modules
- Functional and technical design
- Programming languages: proficient in multiple skill clusters
- DBMS
- Operating systems and software platforms
- Software Development Life Cycle
- Agile: Scrum or Kanban methods
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of the sub-domain where the problem is solved
Additional Comments:
About the Role:
We are looking for a Senior Software Developer with strong experience in .NET development and Microsoft Azure to help build and scale our next-generation FinTech platforms. You will work on secure, high-availability systems that power core financial services, collaborating with cross-functional teams to deliver features that directly impact our customers. You'll play a key role in developing backend services, cloud integrations, and microservices that are performant, secure, and compliant with financial regulations.
Key Responsibilities:
- Design, develop, and maintain backend services and APIs using C# and .NET Core.
- Build and deploy cloud-native applications on Microsoft Azure, leveraging services such as App Services, Azure Functions, Key Vault, Service Bus, and Azure SQL.
- Contribute to architecture decisions and write clean, maintainable, well-tested code.
- Participate in code reviews, technical planning, and sprint ceremonies in an Agile environment.
- Collaborate with QA, DevOps, Product, and Security teams to deliver robust, secure solutions.
- Ensure applications meet high standards of security, reliability, and scalability, especially in a regulated FinTech environment.
- Support and troubleshoot production issues and contribute to continuous improvement.
Required Skills & Qualifications:
- 5–8 years of experience in software development, primarily with C# / .NET Core.
- Strong hands-on experience with Microsoft Azure, including Azure App Services, Azure Functions, Azure SQL, Key Vault, and Service Bus.
- Experience building RESTful APIs, microservices, and integrating with third-party services.
- Proficiency with Azure DevOps, Git, and CI/CD pipelines.
- Solid understanding of software design principles, object-oriented programming, and secure coding practices.
- Familiarity with Agile/Scrum development methodologies.
- Bachelor's degree in Computer Science, Engineering, or a related field.
Skills: .NET, C#, Azure
Must-Haves
.NET with Azure Developer. Required: Function Apps, Logic Apps, Event Grid, Service Bus, Durable Functions
Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands on experience with AWS cloud or Google Cloud.
- Knowledge of container technology like Docker.
- Expertise in scripting languages. (Shell scripting or Python scripting)
- Working knowledge of LAMP/LEMP stack, networking and version control system like Gitlab or Github.
Job Description:
The incumbent would be responsible for:
- Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
- Server monitoring, analysis and troubleshooting.
- Deploying multi-tier architectures using microservices.
- Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
- Automating workflow with python or shell scripting.
- CI and CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, summarizing development & service issues.
- Preparing & installing solutions by determining and designing system specifications, standards & programming.
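The server-monitoring and Python-scripting responsibilities above often come down to small parsing scripts. A minimal sketch, assuming POSIX `df -P` output as the input and an illustrative 80% alert threshold:

```python
def filesystems_over_threshold(df_output, threshold_pct=80):
    """Parse POSIX `df -P` output and return (mount point, % used) pairs
    for filesystems above threshold_pct. A minimal monitoring sketch."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        used_pct = int(fields[4].rstrip("%"))  # "Capacity" column, e.g. "92%"
        if used_pct > threshold_pct:
            alerts.append((fields[5], used_pct))
    return alerts

sample = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 10000000 9200000 800000 92% /
/dev/sdb1 50000000 10000000 40000000 20% /data"""
```

In a real deployment the output would come from `subprocess.run(["df", "-P"], ...)` on each host, with alerts pushed to the team's monitoring channel; the parsing logic is the same.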
Job Position: Lead II - Software Engineering
Domain: Information technology (IT)
Location: India - Thiruvananthapuram
Salary: Best in Industry
Job Positions: 1
Experience: 8 - 12 Years
Skills: .NET, Azure SQL, REST API, Vue.js
Notice Period: Immediate – 30 Days
Job Summary:
We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.
Key Responsibilities:
- Design, develop, and maintain scalable and secure .NET backend APIs.
- Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
- Lead and contribute to Agile software delivery processes (Scrum, Kanban).
- Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
- Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
- Implement unit and integration testing to ensure code quality and system stability.
- Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
- Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
- Mentor and coach engineers within a co-located or distributed team environment.
- Maintain best practices in code versioning, testing, and documentation.
Mandatory Skills:
- 7+ years of .NET development experience, including API design and development
- Strong experience with Azure Cloud services, including:
- Web/Container Apps
- Azure Functions
- Azure SQL Server
- Solid understanding of Agile development methodologies (Scrum/Kanban)
- Experience in CI/CD pipeline design and implementation
- Proficient in Infrastructure as Code (IaC) – preferably Terraform
- Strong knowledge of RESTful services and JSON-based APIs
- Experience with unit and integration testing techniques
- Source control using Git
- Strong understanding of HTML, CSS, and cross-browser compatibility
Good-to-Have Skills:
- Experience with Kubernetes and Docker
- Knowledge of JavaScript UI frameworks, ideally Vue.js
- Familiarity with JIRA and Agile project tracking tools
- Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
- Experience mentoring or coaching junior developers
- Strong problem-solving and communication skills
Job Summary:
We are looking for technically skilled and customer-oriented SME Voice – Technical Support Associates to provide voice-based support to enterprise clients. The role involves real-time troubleshooting of complex issues across servers, networks, cloud platforms (Azure), databases, and more. Strong communication and problem-solving skills are essential.
Key Responsibilities:
- Provide technical voice support to B2B (enterprise) customers.
- Troubleshoot and resolve issues related to:
- SQL, DNS, VPN, Server Support (Windows/Linux)
- Networking (TCP/IP, routing, firewalls)
- Cloud Services – especially Microsoft Azure
- Application and system-level issues
- Assist with technical configurations and product usage.
- Accurately document cases and escalate unresolved issues.
- Ensure timely resolution while meeting SLAs and quality standards.
Required Skills & Qualifications:
- 2.5 to 5 years in technical support (voice-based, B2B preferred)
Proficiency in:
- SQL, DNS, VPN, Server Support
- Networking, Microsoft Azure
- Basic understanding of coding/scripting
- Strong troubleshooting and communication skills
- Ability to work in a 24x7 rotational shift environment
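The DNS troubleshooting named above usually begins with a resolution check. A minimal sketch using only the standard library; the function name and fallback behaviour are illustrative, not part of any vendor tooling:

```python
import socket

def resolve_with_fallback(hostname):
    """Resolve a hostname to an IPv4 address, returning None instead of
    raising -- the kind of quick first check when triaging a DNS ticket.

    A None result tells the associate to look at the resolver chain
    (client DNS settings, VPN split-tunnel, Azure Private DNS zones)
    before digging into the application layer.
    """
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None
```

On a support call this separates "the name does not resolve" from "the name resolves but the service is unreachable", which routes the ticket to DNS/VPN versus server/application teams.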