50+ Microsoft Windows Azure Jobs in India



- At least 12 years of hands-on software development experience, with a strong emphasis on the Microsoft .NET technology stack.
- 3+ years of experience in an architect role
- Proven track record in a technical architecture leadership role, designing complex, mission-critical applications
- Deep expertise in C#, .NET Core, ASP.NET MVC, and Web API development
- Strong front-end development experience, including HTML5, CSS3, JavaScript, and modern frameworks such as Angular or React
- Comprehensive knowledge of Microsoft Azure, with hands-on experience deploying and managing applications in a cloud environment
- Extensive experience with SQL Server, including advanced skills in performance tuning and database optimization
- Familiarity with CI/CD practices, automated testing frameworks, and DevOps methodologies
- Exceptional analytical, problem-solving, and communication skills
- Strategic thinker with the ability to influence and guide technical direction across teams
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field


Company name: JPMorgan (JPMC)
Job Category: Predictive Science
Location: Parcel 9, Embassy Tech Village, Outer Ring Road, Deverabeesanhalli Village, Varthur Hobli, Bengaluru
Job Schedule: Full time
JOB DESCRIPTION
JPMC is hiring top talent to join the growing Asset and Wealth Management AI team. We are executing like a startup and building next-generation technology that combines JPMC's unique data and full-service advantage to develop high-impact AI applications and platforms in the financial services industry. We are looking for a hands-on ML engineering leader and expert who is excited about the opportunity.
As a senior ML and GenAI engineer, you will play a lead role as a senior member of our global team. Your responsibilities will entail hands-on development of high-impact business solutions through data analysis, developing cutting-edge ML and LLM models, and deploying these models to production environments on AWS or Azure.
You'll combine your years of proven development expertise with a never-ending quest to create innovative technology through solid engineering practices. Your passion and experience in one or more technology domains will help solve complex business problems to serve our Private Bank clients. As a constant learner and early adopter, you’re already embracing leading-edge technologies and methodologies; your example encourages others to follow suit.
Job responsibilities
• Hands-on architecture and implementation of lighthouse ML and LLM-powered solutions
• Close partnership with peers in a geographically dispersed team and colleagues across organizational lines
• Collaborate across JPMorgan AWM’s lines of business and functions to accelerate adoption of common AI capabilities
• Design and implement highly scalable and reliable data processing pipelines and deploy model inference services.
• Deploy solutions into public cloud infrastructure
• Experiment, develop and productionize high quality machine learning models, services, and platforms to make a huge technology and business impact
Required qualifications, capabilities, and skills
• Formal training or certification on software engineering concepts and 5+ years applied experience
• MS in Computer Science, Statistics, Mathematics or Machine Learning.
• Development experience, along with hands-on Machine Learning Engineering
• Proven leadership capacity, including new AI/ML idea generation and GenAI-based solutions
• Solid Python programming skills required; experience with another high-performance language such as Go is a big plus
• Expert knowledge of one of the major cloud platforms preferred: Amazon Web Services (AWS) or Azure, along with Kubernetes.
• Experience in using LLMs (OpenAI, Claude or other models) to solve business problems, including full workflow toolset, such as tracing, evaluations and guardrails. Understanding of LLM fine-tuning and inference a plus
• Knowledge of data pipelines, both batch and real-time data processing on both SQL (such as Postgres) and NoSQL stores (such as OpenSearch and Redis)
• Expertise in application, data, and infrastructure architecture disciplines
• Deep knowledge in Data structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, Statistics.
• Excellent communication skills and ability to communicate with senior technical and business partners
Preferred qualifications, capabilities, and skills
• Expert in at least one of the following areas: Natural Language Processing, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis.
• Knowledge of machine learning frameworks: PyTorch, Keras, MXNet, scikit-learn
• Understanding of finance or wealth management businesses is an added advantage
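The required qualifications above mention using LLMs with a full workflow toolset of tracing, evaluations, and guardrails. As a minimal, hypothetical sketch of that pattern (the model call is stubbed; no specific vendor API or policy list is implied by the posting):

```python
# Hypothetical sketch: a guardrail-and-trace wrapper around an LLM call.
# `call_llm` stands in for a real client (OpenAI, Claude, etc.); the blocked
# term list and trace shape are assumptions for illustration only.

BLOCKED_TERMS = {"ssn", "account number"}  # assumed input-guardrail policy

def call_llm(prompt: str) -> str:
    """Stub for a real LLM client call."""
    return f"Echo: {prompt}"

def guarded_completion(prompt: str) -> dict:
    """Apply an input guardrail, call the model, and record a trace for later evaluation."""
    trace = {"prompt": prompt, "blocked": False, "response": None}
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        trace["blocked"] = True          # input guardrail tripped; model never called
        return trace
    response = call_llm(prompt)
    if len(response) > 2000:             # crude output guardrail: cap response length
        response = response[:2000]
    trace["response"] = response
    return trace
```

In production the trace records would feed an evaluation pipeline; here they simply capture what was asked, what was blocked, and what was returned.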
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM
J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.

We are seeking a highly skilled Fabric Data Engineer with strong expertise in the Azure ecosystem to design, build, and maintain scalable data solutions. The ideal candidate will have hands-on experience with Microsoft Fabric, Databricks, Azure Data Factory, PySpark, SQL, and other Azure services to support advanced analytics and data-driven decision-making.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Microsoft Fabric and Azure data services.
- Implement data integration, transformation, and orchestration workflows with Azure Data Factory, Databricks, and PySpark.
- Work with stakeholders to understand business requirements and translate them into robust data solutions.
- Optimize performance and ensure data quality, reliability, and security across all layers.
- Develop and maintain data models, metadata, and documentation to support analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to deliver insights-driven solutions.
- Stay updated with emerging Azure and Fabric technologies to recommend best practices and innovations.
Required Skills & Experience
- Proven experience as a Data Engineer with strong expertise in the Azure cloud ecosystem.
Hands-on experience with:
- Microsoft Fabric
- Azure Databricks
- Azure Data Factory (ADF)
- PySpark & Python
- SQL (T-SQL/PL-SQL)
- Solid understanding of data warehousing, ETL/ELT processes, and big data architectures.
- Knowledge of data governance, security, and compliance within Azure.
- Strong problem-solving, debugging, and performance tuning skills.
- Excellent communication and collaboration abilities.
Preferred Qualifications
- Microsoft Certified: Fabric Analytics Engineer Associate / Azure Data Engineer Associate.
- Experience with Power BI, Delta Lake, and Lakehouse architecture.
- Exposure to DevOps, CI/CD pipelines, and Git-based version control.
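The pipeline-design work described above typically relies on the high-water-mark pattern for incremental loads. A plain-Python sketch of that logic (no Fabric/ADF SDK; field names and dates are hypothetical):

```python
# Illustrative high-water-mark incremental load: pull only rows modified since
# the last stored watermark, then advance the watermark.

def incremental_load(source_rows, last_watermark):
    """Return rows newer than the stored watermark, plus the new watermark."""
    new_rows = [r for r in source_rows if r["modified"] > last_watermark]
    # ISO-8601 date strings compare correctly as plain strings
    new_watermark = max((r["modified"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": "2024-01-01"},
    {"id": 2, "modified": "2024-02-15"},
    {"id": 3, "modified": "2024-03-10"},
]
loaded, wm = incremental_load(rows, "2024-01-31")
```

In ADF or Fabric the watermark would be persisted (e.g., in a control table) between pipeline runs rather than held in memory.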

7+ years of experience in Python Development
Good experience in microservices and API development.
Must have exposure to large-scale data.
Good to have: Gen AI experience.
Code versioning and collaboration (Git).
Knowledge of libraries for extracting data from websites.
Knowledge of SQL and NoSQL databases
Familiarity with RESTful APIs
Familiarity with Cloud (Azure /AWS) technologies
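As a small illustration of the web-extraction skill listed above, here is a link extractor using only the standard library's `html.parser`; a real project would more likely use Requests plus Beautiful Soup, which this merely approximates:

```python
# Minimal stdlib link extractor: collect href values from <a> tags.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p><a href="/jobs">Jobs</a> <a href="/apply">Apply</a></p>')
```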
About Wissen Technology:
• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
• Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
• Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
• Wissen Technology has been certified as a Great Place to Work®.
• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE.
Website: www.wissen.com
Senior Associate Technology L1 – Java Microservices
Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Job Description
We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.
We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.
Your Impact:
• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.
• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.
Qualifications
➢ 5 to 7 Years of software development experience
➢ Strong development skills in Java JDK 1.8 or above
➢ Java fundamentals like exception handling, serialization/deserialization, and immutability concepts
➢ Good fundamental knowledge in Enums, Collections, Annotations, Generics, Auto boxing and Data Structure
➢ Database RDBMS/NoSQL (SQL, joins, indexing)
➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)
➢ Spring Core & Spring Boot, security, transactions
➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)
➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)
➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)
➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns
➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks
➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.
➢ Good communication skills and ability to work with global teams to define and deliver on projects.
➢ Sound understanding/experience in software development process, test-driven development.
➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine
➢ Experience in Microservices


Job Overview
Note: This is a deep hands-on IC role
Responsibilities
- Forward-Looking Product Development:
- Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.
- Proactively consider and address security, performance, and scalability requirements during development.
- Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
- DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
- Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
- Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
- Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
- Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.
Qualifications
- Technical Skills:
- Strong experience with .NET Core for backend development and RESTful API design.
- Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
- Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
- Strong knowledge of product security best practices and experience implementing secure coding practices.
- Familiarity with QA processes and automated testing tools is a plus.
- Ability to support team members in solving technical challenges and sharing knowledge effectively.
Preferred Qualifications
+ years of experience in software development, with a strong focus on .NET Core
- Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
- Strong problem-solving skills and ability to work in a fast-paced, startup environment.
What We Offer
- Opportunity to lead and grow within a dynamic and ambitious team.
- Challenging projects that focus on innovation and cutting-edge technology.
- Collaborative work environment with a focus on learning, mentorship, and growth.
- Competitive compensation, benefits, and stock options.
If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and React, and you’re ready to make an impact, we’d love to meet you!
We are seeking a skilled Azure Cloud Solution Engineer to design, implement, and maintain scalable cloud-based applications using Microsoft Azure. You will be responsible for integrating various Azure services to support user management, real-time communication, event scheduling, and intelligent automation. The ideal candidate will have hands-on experience with Azure services, infrastructure automation, and cloud security, and will play a key role in driving cloud adoption and optimization across the organization.
Key Responsibilities:
•Deploy and manage APIs and user/group services using Azure App Service.
•Build and maintain Azure Functions for authentication, invite tracking, membership updates, and event orchestration.
•Design and manage structured data in Azure SQL Database. Handle unstructured data using Azure Cosmos DB.
•Implement Azure Web PubSub for live messaging, group interactions, and event notifications.
•Configure Azure Notification Hubs to deliver push alerts for invites, reminders, and calendar events.
•Leverage Azure AI Services for intelligent scheduling, personalized recommendations, and time slot matching.
•Manage sensitive credentials and OAuth tokens securely using Azure Key Vault.
•Collaborate with development, security, and operations teams to ensure seamless integration and performance.
•Optimize cloud costs and resource utilization through monitoring and reporting.
•Ensure compliance with security and governance policies across Azure environments.
•Troubleshoot and resolve issues related to cloud infrastructure and services.
•Stay updated with the latest Azure features and best practices.
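The time-slot matching mentioned in the responsibilities above reduces, at its core, to interval intersection. A hedged sketch of that scheduling step (no Azure AI service is assumed; users, times, and the minimum slot length are made up):

```python
# Find common free slots for two users. Intervals are (start, end) pairs in
# minutes from midnight; min_len is the shortest acceptable meeting length.

def common_slots(free_a, free_b, min_len=30):
    """Return overlapping intervals of at least `min_len` minutes."""
    slots = []
    for a_start, a_end in free_a:
        for b_start, b_end in free_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= min_len:   # genuine overlap long enough to book
                slots.append((start, end))
    return slots

alice = [(540, 660), (780, 900)]   # 09:00-11:00, 13:00-15:00
bob   = [(600, 720), (840, 960)]   # 10:00-12:00, 14:00-16:00
```

An AI layer would then rank these candidate slots (by preference, time zone, and so on) rather than change the intersection logic itself.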

Job Role: Sr. Data Engineer
Location: Navrangpura, Ahmedabad
WORK FROM OFFICE - 5 DAYS A WEEK (UK Shift)
Job Description:
• 5+ years of core experience in Python & Data Engineering.
• Must have experience with Azure Data factory and Databricks.
• Exposure to Python algorithm libraries such as NumPy, pandas, Beautiful Soup, Selenium, pdfplumber, Requests, etc.
• Proficient in SQL programming.
• Knowledge of DevOps practices such as CI/CD, Jenkins, and Git.
• Experience working with Azure Databricks.
• Able to coordinate with teams across multiple locations and time zones
• Strong interpersonal and communication skills with an ability to lead a team and keep them motivated.
Mandatory Skills: Data Engineer – Azure Data Factory, Databricks, Python, SQL/MySQL/PostgreSQL

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!
✅ Key Details
- Work Type: Freelance / Contract
- Location: Remote
- Time Zones: IST / EST only
- Domain: Data & AI, Cloud, Big Data, Machine Learning
- Collaboration: Work with industry leaders on innovative projects
🔹 Open Roles
1. Databricks – Senior Consultant
- Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
- Experience: 6+ years
2. Databricks – ML Engineer
- Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
- Experience: 4+ years
3. Databricks – Solution Architect
- Skills: Azure, GCP, AWS, CI/CD, MLOps
- Experience: 7+ years
4. Databricks – Solution Consultant
- Skills: SQL, Spark, BigQuery, Python, Scala
- Experience: 2+ years
✅ What We Offer
- Opportunity to work with top-tier professionals and clients
- Exposure to cutting-edge technologies and real-world data challenges
- Flexible remote work environment aligned with IST / EST time zones
- Competitive compensation and growth opportunities
📌 Skills We Value
Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark |
About the Job
Cloud Engineer
Experience: 1–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
Role Overview
We’re seeking experienced Cloud Engineers with 1–5 years of professional experience to design, build, and optimize cloud-native applications and infrastructure. This is a freelance/contract opportunity where you’ll work remotely with global clients on innovative and high-impact projects.
Responsibilities
- Design and implement cloud-native applications and infrastructure
- Create Infrastructure as Code (IaC) templates for automated deployments
- Set up and optimize CI/CD pipelines for cloud applications
- Implement security best practices for cloud environments
- Design scalable and cost-effective cloud architectures
- Troubleshoot and resolve complex cloud service issues
- Create cloud migration strategies and implementation plans
- Guide clients on cloud best practices and architectural decisions
Requirements
- Strong proficiency with at least one major cloud provider (AWS, Azure, GCP)
- Experience with Infrastructure as Code tools (Terraform, CloudFormation, ARM templates)
- Knowledge of containerization and orchestration (Docker, Kubernetes)
- Understanding of cloud networking and security concepts
- Experience with CI/CD tools and methodologies
- Scripting and automation skills
- Solid understanding of high availability and disaster recovery
- Experience implementing monitoring and logging solutions
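The responsibilities above include creating IaC templates for automated deployments. As a minimal, hypothetical illustration of what such a template looks like, here is a Python helper that emits a CloudFormation-style JSON document declaring one S3 bucket (resource and bucket names are invented; real work would go through Terraform, CloudFormation, or ARM tooling):

```python
# Generate a minimal CloudFormation template as a Python dict, then render it
# to JSON as it would be submitted to the CloudFormation API.
import json

def s3_bucket_template(bucket_name: str) -> dict:
    """Build a minimal CloudFormation template declaring one S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {                      # logical resource ID (hypothetical)
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template = s3_bucket_template("example-app-artifacts")
rendered = json.dumps(template, indent=2)
```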
How to Apply
- Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
- Submit your GitHub, portfolio, or sample projects
- Take the required assessment and get qualified
- Get shortlisted & connect with the client
About TalentLo
TalentLo is a revolutionary talent platform connecting exceptional tech professionals with high-quality clients worldwide. We’re building a carefully curated pool of skilled experts to match with companies actively seeking specialized talent for impactful projects.
✨ If you’re ready to work on exciting cloud projects, collaborate with global teams, and take your career to the next level — apply today!
Key Responsibilities:
- Design, build, and enhance Salesforce applications using Apex, Lightning Web Components (LWC), Visualforce, and SOQL.
- Implement integrations with external systems using REST APIs and event-driven messaging (e.g., Kafka).
- Collaborate with architects and business analysts to translate requirements into scalable, maintainable solutions.
- Establish and follow engineering best practices, including source control (Git), code reviews, branching strategies, CI/CD pipelines, automated testing, and environment management.
- Establish and maintain Azure DevOps-based workflows (repos, pipelines, automated testing) for Salesforce engineering.
- Ensure solutions follow Salesforce security, data modeling, and performance guidelines.
- Participate in Agile ceremonies, providing technical expertise and leadership within sprints and releases.
- Optimize workflows, automations, and data processes across Sales Cloud, Service Cloud, and custom Salesforce apps.
- Provide technical mentoring and knowledge sharing when required.
- Support production environments, troubleshoot issues, and drive root-cause analysis for long-term reliability.
- Stay current on Salesforce platform updates, releases, and new features, recommending adoption where beneficial.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
- 6+ years of Salesforce development experience with strong knowledge of Apex, Lightning Web Components, and Salesforce APIs.
- Proven experience with Salesforce core clouds (Sales Cloud, Service Cloud, or equivalent).
- Strong hands-on experience with API integrations (REST/SOAP) and event-driven architectures (Kafka, Pub/Sub).
- Solid understanding of engineering practices: Git-based source control (Salesforce DX/metadata), branching strategies, CI/CD, automated testing, and deployment management.
- Familiarity with Azure DevOps repositories and pipelines.
- Strong knowledge of Salesforce data modeling, security, and sharing rules.
- Excellent problem-solving skills and ability to collaborate across teams.
Preferred Qualifications:
- Salesforce Platform Developer II certification (or equivalent advanced credentials).
- Experience with Health Cloud, Financial Services Cloud, or other industry-specific Salesforce products.
- Experience implementing logging, monitoring, and observability within Salesforce and integrated systems.
- Background in Agile/Scrum delivery with strong collaboration skills.
- Prior experience establishing or enforcing engineering standards across Salesforce teams.

Job Description:
- 4+ years of experience in a Data Engineer role,
- Experience with object-oriented/functional scripting languages: Python, Scala, Golang, Java, etc.
- Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, or Hive
- Experience with Streaming data: Spark/Kinesis/Kafka/Pubsub/Event Hub
- Experience with GCP/Azure data factory/AWS
- Strong in SQL Scripting
- Experience with ETL tools
- Knowledge of Snowflake Data Warehouse
- Knowledge of orchestration frameworks: Airflow/Luigi
- Good to have knowledge of Data Quality Management frameworks
- Good to have knowledge of Master Data Management
- Self-learning abilities are a must
- Familiarity with upcoming new technologies is a strong plus.
- Should have a bachelor's degree in big data analytics, computer engineering, or a related field
Personal Competency:
- Strong communication skills are a MUST
- Self-motivated, detail-oriented
- Strong organizational skills
- Ability to prioritize workloads and meet deadlines
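The streaming-data experience listed above (Spark/Kinesis/Kafka) usually centers on windowed aggregation. A plain-Python sketch of a tumbling-window count, the simplest such operation (event timestamps and the window size are hypothetical):

```python
# Count events per fixed (tumbling) window: each event falls into exactly one
# window, identified by its start timestamp.
from collections import Counter

def tumbling_window_counts(events, window_secs=60):
    """Count events per `window_secs`-second tumbling window; keys are window starts."""
    counts = Counter()
    for ts in events:                       # ts: event time in epoch seconds
        window_start = ts - (ts % window_secs)
        counts[window_start] += 1
    return dict(counts)

counts = tumbling_window_counts([5, 30, 61, 65, 130], window_secs=60)
```

Engines like Spark Structured Streaming add watermarking and late-data handling on top of this same bucketing idea.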
Requirements
- Design, implement, and manage CI/CD pipelines using Azure DevOps, GitHub, and Jenkins for automated deployments of applications and infrastructure changes.
- Architect and deploy solutions on Kubernetes clusters (EKS and AKS) to support containerized applications and microservices architecture.
- Collaborate with development teams to streamline code deployments, releases, and continuous integration processes across multiple environments.
- Configure and manage Azure services including Azure Synapse Analytics, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and other data services for efficient data processing and analytics workflows.
- Utilize AWS services such as Amazon EMR, Amazon Redshift, Amazon S3, Amazon Aurora, and IAM policies, alongside Azure Monitor, for data management, warehousing, and governance.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and management of cloud resources.
- Ensure high availability, performance monitoring, and disaster recovery strategies for cloud-based applications and services.
- Develop and enforce security best practices and compliance policies, including IAM policies, encryption, and access controls across Azure environments.
- Collaborate with cross-functional teams to troubleshoot production issues, conduct root cause analysis, and implement solutions to prevent recurrence.
- Stay current with industry trends, best practices, and evolving technologies in cloud computing, DevOps, and container orchestration.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field; or equivalent work experience.
- 5+ years of experience as a DevOps Engineer or similar role with hands-on expertise in AWS and Azure cloud environments.
- Strong proficiency in Azure DevOps, Git, GitHub, Jenkins, and CI/CD pipeline automation.
- Experience deploying and managing Kubernetes clusters (EKS, AKS) and container orchestration platforms.
- Deep understanding of cloud-native architectures, microservices, and serverless computing.
- Familiarity with Azure Synapse, ADF, ADLS, and AWS data services (EMR, Redshift, Glue) for data integration and analytics.
- Solid grasp of infrastructure as code (IaC) tools like Terraform, CloudFormation, or ARM templates.
- Experience with monitoring tools (e.g., Prometheus, Grafana) and logging solutions for cloud-based applications.
- Excellent troubleshooting skills and ability to resolve complex technical issues in production environments.

We are seeking a Full Stack Developer with exceptional communication skills to collaborate daily with our international clients in the US and Australia. This role requires not only technical expertise but also the ability to clearly articulate ideas, gather requirements, and maintain strong client relationships. Communication is the top priority.
The ideal candidate is passionate about technology, eager to learn and adapt to new stacks, and capable of delivering scalable, high-quality solutions across the stack.
Key Responsibilities
- Client Communication: Act as a daily point of contact for clients (US & Australia), ensuring smooth collaboration and requirement gathering.
- Backend Development:
- Design and implement REST APIs and GraphQL endpoints.
- Integrate secure authentication methods including OAuth, Passwordless, and Signature-based authentication.
- Build scalable backend services with Node.js and serverless frameworks.
- Frontend Development:
- Develop responsive, mobile-friendly UIs using React and Tailwind CSS.
- Ensure cross-browser and cross-device compatibility.
- Database Management:
- Work with RDBMS, NoSQL, MongoDB, and DynamoDB.
- Cloud & DevOps:
- Deploy applications on AWS / GCP / Azure (knowledge of at least one required).
- Work with CI/CD pipelines, monitoring, and deployment automation.
- Quality Assurance:
- Write and maintain unit tests to ensure high code quality.
- Participate in code reviews and follow best practices.
- Continuous Learning:
- Stay updated on the latest technologies and bring innovative solutions to the team.
Must-Have Skills
- Excellent communication skills (verbal & written) for daily client interaction.
- 2+ years of experience in full-stack development.
- Proficiency in Node.js and React.
- Strong knowledge of REST API and GraphQL development.
- Experience with OAuth, Passwordless, and Signature-based authentication methods.
- Database expertise with RDBMS, NoSQL, MongoDB, DynamoDB.
- Experience with Serverless Framework.
- Strong frontend skills: React, Tailwind CSS, responsive design.
Nice-to-Have Skills
- Familiarity with Python for backend or scripting.
- Cloud experience with AWS, GCP, or Azure.
- Knowledge of DevOps practices and CI/CD pipelines.
- Experience with unit testing frameworks and TDD.
Who You Are
- A confident communicator who can manage client conversations independently.
- Passionate about learning and experimenting with new technologies.
- Detail-oriented and committed to delivering high-quality software.
- A collaborative team player who thrives in dynamic environments.
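The signature-based authentication required above typically means the server recomputes an HMAC over the request body with a shared secret and compares it to the client-supplied signature. A stdlib sketch of that check (the secret and payload are made up; the posting's own stack is Node.js, so this Python version is illustrative only):

```python
# HMAC-SHA256 request signing and constant-time verification.
import hmac
import hashlib

SECRET = b"shared-secret"  # hypothetical shared key, never hard-coded in practice

def sign(payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature of a request payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Recompute and compare; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

sig = sign(b'{"user": "alice"}')
```

The same scheme works unchanged in Node.js via `crypto.createHmac` and `crypto.timingSafeEqual`.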
Job Description
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
Work location: Pune/Mumbai/Bangalore
Experience: 4-7 Years
Joining: Mid of October
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
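The DevSecOps checks described above can be sketched with a minimal pre-merge secret scanner. The patterns below are illustrative examples only; production pipelines would use a dedicated tool (e.g., gitleaks) with a far larger rule set:

```python
import re

# Illustrative patterns only; real scanners ship far larger, tuned rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+"),  # hardcoded password assignment
]


def find_secrets(diff_text: str) -> list[str]:
    """Return offending lines so the pipeline can fail the build with context."""
    hits = []
    for line in diff_text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

A CI stage would run this over the merge diff and fail the build when the returned list is non-empty.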
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
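The scripting-and-automation skills listed above often come down to making cloud API calls resilient. A minimal sketch of retry with exponential backoff, a common pattern for transient cloud errors (the decorator and parameters here are illustrative, not tied to any particular SDK):

```python
import time
from functools import wraps


def retry(attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky call with exponential backoff: delay doubles each attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the original error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator
```

Applied as `@retry(attempts=3)` on a function that calls a cloud API, transient failures are absorbed while persistent ones still propagate.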
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
Here’s why Wissen Technology stands out:
Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.
Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.
Recognitions: Great Place to Work® Certified.
Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).
Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.
Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.
For more details:
Website: www.wissen.com
Wissen Thought Leadership: https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Job Description: Java Developer
Position: Java Developer
Experience: 5 to 7 Years
Notice Period: Immediate Joiner
Key Responsibilities
- Design, develop, and maintain scalable, high-performance Java applications.
- Work with Core Java and Advanced Java concepts to build reliable backend solutions.
- Develop and deploy applications using Spring Boot framework.
- Design and implement RESTful Microservices with best practices in scalability and performance.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Manage code versions effectively using Git/GitHub.
- Ensure code quality by integrating and analyzing with SonarQube.
- Participate in code reviews, sprint planning, and daily stand-ups.
- Troubleshoot production issues and optimize system performance.
Required Skills
- Strong proficiency in Core Java (OOPs, Collections, Multithreading, Exception Handling).
- Hands-on experience in Advanced Java (JDBC, Servlets, JSP, JPA/Hibernate).
- Proven experience with Spring Boot for application development.
- Knowledge and experience in Microservices Architecture.
- Familiarity with REST APIs, JSON, and Web Services.
- Proficient in Git/GitHub for version control and collaboration.
- Experience with SonarQube for code quality and security checks.
- Good understanding of Agile/Scrum methodologies.
- Strong problem-solving and debugging skills.
Nice-to-Have
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, or similar).
- Familiarity with Docker/Kubernetes for containerized deployments.
- Basic knowledge of cloud platforms (AWS, Azure, GCP).

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
Please send your resume to talent@springer.capital


Salary (Lacs): Up to 22 LPA
Required Qualifications
• 4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role
• Hands-on experience with major cloud platforms (GCP, AWS, Azure, OCI), more than one will be a plus
• Proficient in Kubernetes administration and container technologies (Docker, containerd)
• Strong Linux fundamentals
• Scripting skills in Python and shell scripting
• Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)
• Experience in maintaining and troubleshooting production environments
• Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines
If interested, kindly share your updated resume on 82008 31681.

Business Summary
The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!
Position Responsibilities
- Diagnose and resolve complex software issues, including operational challenges, performance bottlenecks, and system faults.
- Collaborate with Global SRE, Product Delivery, Product Engineering, and Support Services teams to ensure seamless operations.
- Maintain consistent service availability by proactively monitoring system stability and performance.
- Execute daily operational tasks such as database creation, schema management, restores, application configuration, and other system administration duties.
- Lead incident response efforts, manage major incident bridges, and contribute to post-incident reviews to drive continuous improvement and prevent recurrence.
- Develop and implement automation strategies to minimize manual work and enhance operational efficiency.
- Monitor system resource utilization, errors, and alert trends to support capacity planning.
- Document internal operational processes, procedures, and policies for consistency and knowledge sharing.
- Work within a 24/7 shift schedule to provide reliable coverage.
- Participate in maintenance activities and on-call rotations as needed.
Qualifications
- Bachelor’s degree in Computer Science or a related field, or equivalent experience.
- 5+ years of experience supporting large-scale enterprise applications and systems on public cloud infrastructure (AWS preferred).
- 5+ years of hands-on experience managing and operating enterprise-grade Windows or Linux production environments.
- 3+ years of experience applying an automation-first approach using configuration management tools and scripting (e.g., Bash, Python, PowerShell).
- Familiarity with Incident Management and ITIL service operations (ServiceNow experience preferred).
- Experience with monitoring and observability tools such as AppDynamics, Splunk, SolarWinds, DPA, Nagios, NewRelic, Grafana, or Prometheus.
- Proficiency in database management, including Oracle or Microsoft SQL Server administration.
- Strong knowledge of database query optimization and performance tuning.
- Passion for leveraging technology with a proactive, self-directed learning mindset.
- Detail-oriented, results-driven, and able to communicate effectively in English.
- Strong teamwork and collaboration skills, with the ability to solve problems across departments.
Must Have -
a. Background working with Startups
b. Good knowledge of Kubernetes & Docker
c. Background working in Azure
What you’ll be doing
- Ensure that our applications and environments are stable, scalable, secure and performing as expected.
- Proactively engage and work in alignment with cross-functional colleagues to understand their requirements, contributing to and providing suitable supporting solutions.
- Develop and introduce systems to aid and facilitate rapid growth, including implementation of deployment policies, design and implementation of new procedures, configuration management, and planning for patches and capacity upgrades.
- Observability: ensure suitable levels of monitoring and alerting are in place to keep engineers aware of issues.
- Establish runbooks and procedures to keep outages to a minimum. Jump in before users notice that things are off track, then automate it for the future.
- Automate everything so that nothing is ever done manually in production.
- Identify and mitigate reliability and security risks. Make sure we are prepared for peak times, DDoS attacks, and fat fingers.
- Troubleshoot issues across the whole stack - software, applications and network.
- Manage individual project priorities, deadlines, and deliverables as part of a self-organizing team.
- Learn and unlearn every day by exchanging knowledge and new insights, conducting constructive code reviews, and participating in retrospectives.
Requirements
- 2+ years of extensive experience in Linux server administration, including patching, packaging (RPM), performance tuning, networking, user management, and security.
- 2+ years of experience implementing systems that are highly available, secure, scalable, and self-healing on the Azure cloud platform
- Strong understanding of networking, especially in cloud environments, along with a good understanding of CI/CD.
- Prior experience implementing industry-standard security best practices, including those recommended by Azure
- Proficiency with Bash and any high-level scripting language.
- Basic working knowledge of observability stacks like ELK, Prometheus, Grafana, SigNoz, etc.
- Proficiency with Infrastructure as Code and Infrastructure Testing, preferably using Pulumi/Terraform.
- Hands-on experience in building and administering VMs and Containers using tools such as Docker/Kubernetes.
- Excellent communication skills, spoken as well as written, with a demonstrated ability to articulate technical problems and projects to all stakeholders.

Key Responsibilities
- Data Architecture & Pipeline Development
- Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
- Integrate structured, semi-structured, and unstructured data from multiple sources.
- Data Storage & Management
- Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
- Ensure proper indexing, partitioning, and storage optimization for performance.
- Data Governance & Security
- Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
- Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
- Collaboration & Stakeholder Engagement
- Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
- Provide technical guidance and best practices for data integration and transformation.
- Monitoring & Optimization
- Set up monitoring and alerting for data pipelines.

Job Type : Contract
Location : Bangalore
Experience : 5+ Years
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience in configuring Public Cloud native security tooling and capabilities, with a focus on Google Cloud Organizational policies/constraints, VPC SC, IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience in Policy-as-code (Rego) and OPA platform.
- Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell or Python or Bash or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent written, verbal, and interpersonal communication skills.
- Practical experience in designing and configuring CI/CD pipelines. Practical experience in GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
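The "collect, clean, and transform" step in the ETL responsibilities above can be sketched as a small transform function. Field names and cleaning rules here are illustrative, not the firm's actual schema:

```python
def clean_records(raw: list[dict]) -> list[dict]:
    """Drop rows missing an id, normalize strings, and cast amounts to float."""
    cleaned = []
    for row in raw:
        if not row.get("id"):
            continue  # unusable without a key: drop the row
        cleaned.append({
            "id": str(row["id"]).strip(),
            "city": str(row.get("city", "")).strip().title(),
            "amount": float(row.get("amount") or 0),
        })
    return cleaned
```

In a real pipeline this transform sits between extraction (API/file reads) and loading into the warehouse, and each rule would be covered by unit tests.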
Job Title : Senior System Administrator
Experience : 7 to 12 Years
Location : Bangalore (Whitefield/Domlur) or Coimbatore
Work Mode :
- First 3 Months : Work From Office (5 Days)
- Post-Probation : Hybrid (3 Days WFO)
- Shift : Rotational (Day & Night)
- Notice Period : Immediate to 30 Days
- Salary : Up to ₹24 LPA (including 8% variable), slightly negotiable
Role Overview :
Seeking a Senior System Administrator with strong experience in server administration, virtualization, automation, and hybrid infrastructure. The role involves managing Windows environments, scripting, cloud/on-prem operations, and ensuring 24x7 system availability.
Mandatory Skills :
Windows Server, Virtualization (Citrix/VMware/Nutanix/Hyper-V), Office 365, Intune, PowerShell, Terraform/Ansible, CI/CD, Hybrid Cloud (Azure), Monitoring, Backup, NOC, DCO.
Key Responsibilities :
- Manage physical/virtual Windows servers and core services (AD, DNS, DHCP).
- Automate infrastructure using Terraform/Ansible.
- Administer Office 365, Intune, and ensure compliance.
- Support hybrid on-prem + Azure environments.
- Handle monitoring, backups, disaster recovery, and incident response.
- Collaborate on DevOps pipelines and write automation scripts (PowerShell).
Nice to Have :
MCSA/MCSE/RHCE, Azure admin experience, team leadership background
Interview Rounds :
L1 – Technical (Platform)
L2 – Technical
L3 – Techno-Managerial
L4 – HR
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources
▪ Develop ETL processes to collect, clean, and transform data from internal and external systems
▪ Support integration of data into dashboards, analytics tools, and reporting systems
▪ Collaborate with data analysts and software developers to improve data accessibility and performance
▪ Document workflows and maintain data infrastructure best practices
▪ Assist in identifying opportunities to automate repetitive data tasks
Position Responsibilities :
- Develop and maintain architectural frameworks, standards and processes for Deltek cloud platform services
- Conceptual, logical and physical designs of cloud solutions and services
- Ensure all designs follow well-architected principles to meet security, reliability, performance and cost optimization requirements
- Collaborate with internal teams for end to end service delivery and support
- Work closely with other architects and technical leaders to create and refine roadmaps for the cloud platform
- Stay up to date with emerging cloud technologies and leverage them to continuously improve service quality and supportability
- Create and maintain technical design documents and participate in peer design reviews
Qualifications :
- B.S. in Computer Science, Engineering or related experience.
- Extensive knowledge and experience with public cloud providers: AWS, Azure, OCI
- 8+ years of experience in cloud design and implementation
- Strong hands on experience with Authentication Service, DNS, SMTP, SFTP, NFS, monitoring tools and products, backup and recovery
- Solid understanding of container orchestration, serverless architecture, CI/CD concepts and technologies
- Comprehensive knowledge and understanding of web, database, networking, and security standards and technologies
- Proven ability to work cross-functionally and collaboratively
- Strong analytical and communication skills, attention to detail
- Experience with SOC, NIST, GDPR, and FedRAMP compliance standards
Job Title: Sr. Node.js Developer
Location: Ahmedabad, Gujarat
Job Type: Full Time
Department: MEAN Stack
About Simform:
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market.
Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
Role Overview:
We are looking for a Sr. Node.js Developer who not only possesses extensive backend expertise but also demonstrates proficiency in system design, cloud services, microservices architecture, and containerization. Additionally, a good understanding of the frontend tech stack, to support frontend developers, is highly valued.
Key Responsibilities:
- Develop reusable, testable, maintainable, and scalable code with a focus on unit testing.
- Implement robust security measures and data protection mechanisms across projects.
- Champion the implementation of design patterns such as Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
- Actively participate in architecture design sessions and sprint planning meetings, contributing valuable insights.
- Lead code reviews, providing insightful comments and guidance to team members.
- Mentor team members, assisting in debugging complex issues and providing optimal solutions.
Required Skills & Qualifications:
- Excellent written and verbal communication skills.
- Experience: 4+ years
- Advanced knowledge of JavaScript and TypeScript, including core concepts and best practices.
- Extensive experience in developing highly scalable services and APIs using various protocols.
- Proficiency in data modeling and optimizing database performance in both SQL and NoSQL databases.
- Hands-on experience with PostgreSQL and MongoDB, leveraging technologies like TypeORM, Sequelize, or Knex.
- Proficient in working with frameworks like NestJS, LoopBack, Express, and other TypeScript-based frameworks.
- Strong familiarity with unit testing libraries such as Jest, Mocha, and Chai.
- Expertise in code versioning using Git or Bitbucket.
- Practical experience with Docker for building and deploying microservices.
- Strong command of Linux, including familiarity with server configurations.
- Familiarity with queuing protocols and asynchronous messaging systems.
Preferred Qualification:
- Experience with frontend JavaScript concepts and frameworks such as ReactJS.
- Proficiency in designing and implementing cloud architectures, particularly on AWS services.
- Knowledge of GraphQL and its associated libraries like Apollo and Prisma.
- Hands-on experience with deployment pipelines and CI/CD processes.
- Experience with document, key/value, or other non-relational database systems like Elasticsearch, Redis, and DynamoDB.
- Ability to build AI-centric applications and work with machine learning models, Langchain, vector databases, embeddings, etc.
Why Join Us:
- Young Team, Thriving Culture
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture.
- Well-balanced learning and growth opportunities
- Free health insurance.
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks.
- Sponsorship for certifications/events and library service.
- Flexible work timing, leaves for life events, WFH, and hybrid options


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics

What you’ll do here:
• Define and evolve the architecture of the Resulticks platform, ensuring alignment with business strategy and scalability requirements.
• Lead the design of highly scalable, fault-tolerant, secure, and performant systems.
• Provide architectural oversight across application layers—UI, services, data, and integration.
• Drive modernization of the platform using cloud-native, microservices, and API-first approaches.
• Collaborate closely with product managers, developers, QA, and DevOps to ensure architectural integrity across development lifecycles.
• Identify and address architectural risks, tech debt, and scalability challenges early in the design process.
• Guide the selection and integration of third-party technologies and platforms.
• Define and enforce architectural best practices, coding standards, and technology governance.
• Contribute to roadmap planning by assessing feasibility and impact of new features or redesigns.
• Participate in code reviews and architectural discussions, mentoring developers and technical leads.
• Stay current on emerging technologies, architecture patterns, and industry best practices to maintain platform competitiveness.
• Ensure security, compliance, and data privacy are embedded in architectural decisions, especially for industries like BFSI and telecom.
What you will need to thrive:
• 15+ years of experience in software/product development with at least 5 years in a senior architecture role.
• Proven experience architecting SaaS or large-scale B2B platforms, ideally in MarTech, AdTech, or CRM domains.
• Deep expertise in cloud architecture (AWS, Azure, or GCP), containerization (Docker, Kubernetes), and serverless technologies.
• Strong command of modern backend and frontend frameworks (e.g., .NET Core, Java, Python, React).
• Excellent understanding of data architecture including SQL/NoSQL, event streaming, and analytics pipelines.
• Familiarity with CI/CD, DevSecOps, and monitoring frameworks.
• Strong understanding of security protocols, compliance standards (e.g., GDPR, ISO 27001), and authentication/authorization frameworks (OAuth, SSO, etc.).
• Effective communication and leadership skills, with experience influencing C-level and cross-functional stakeholders.
• Strong analytical and problem-solving abilities, with a strategic mindset.


Job Title : Senior .NET Developer
Experience : 8+ Years
Location : Trivandrum / Kochi
Notice Period : Immediate
Working Hours : 12 PM – 9 PM IST (4-hour mandatory overlap with EST)
Job Summary :
We are hiring a Senior .NET Developer with strong hands-on experience in .NET Core (6/8+), C#, Azure Cloud Services, Azure DevOps, and SQL Server. This is a client-facing role for a US-based client, requiring excellent communication and coding skills, along with experience in cloud-based enterprise application development.
Mandatory Skills :
.NET Core 6/8+, C#, Entity Framework/Core, REST APIs, JavaScript, jQuery, MS SQL Server, Azure Cloud Services (Functions, Service Bus, Event Grid, Key Vault, SQL Azure), Azure DevOps (CI/CD), Unit Testing (XUnit/MSTest), Strong Communication Skills.
Key Responsibilities :
- Design, develop, and maintain scalable applications using .NET Core, C#, REST APIs, SQL Server
- Work with Azure Services: Functions, Durable Functions, Service Bus, Event Grid, Key Vault, Storage Queues, SQL Azure
- Implement and manage CI/CD pipelines using Azure DevOps
- Participate in Agile/Scrum ceremonies, collaborate with cross-functional teams
- Perform troubleshooting, debugging, and performance tuning
- Ensure high-quality code through unit testing and technical documentation
Primary Skills (Must-Have) :
- .NET Core 6/8+, C#, Entity Framework / EF Core
- REST APIs, JavaScript, jQuery
- SQL Server: Stored Procedures, Views, Functions
- Azure Cloud (2+ years): Functions, Service Bus, Event Grid, Blob Storage, SQL Azure, Monitoring
- Unit Testing (XUnit, MSTest), CI/CD (Classic/YAML pipelines)
- Strong knowledge of design patterns, architecture, and microservices
- Excellent communication and leadership skills
Secondary Skills (Nice to Have) :
- AngularJS/ReactJS
- Azure APIM, ADF, Logic Apps
- Azure Kubernetes Service (AKS)
- Application support & operational monitoring
Certifications (Preferred) :
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Relevant Azure/.NET/Cloud certifications
Position : Senior IT Analyst – IAM & Office 365
Experience : 4 to 8 Years
Location : Gurgaon (Hybrid : 3 days office, 2 days WFH)
Notice Period : Immediate to 15–20 days (Buyout option available)
Communication : Excellent communication skills mandatory (client-facing role)
About the Role :
We're looking for a skilled IT Security professional with strong experience in Identity and Access Management (IAM) and Office 365 (O365/M365). The role involves securing access, managing user identities, and administering O365 services across a hybrid cloud environment.
Mandatory Skills :
O365/M365, Azure AD, IAM, Conditional Access, MFA, Defender for O365, RBAC, DLP, PowerShell, Strong Communication
Key Responsibilities :
- Manage Azure AD, Conditional Access, MFA, and RBAC policies.
- Administer O365 services – SharePoint, Teams, PowerApps.
- Configure Defender for O365, DLP policies, and compliance settings.
- Automate tasks using PowerShell, support user lifecycle processes.
- Handle access audits, incident response, and IAM/O365 tool upgrades.
- Collaborate with internal teams on security and operational enhancements.
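The RBAC administration listed above rests on a simple model: a user may perform an action if any of their assigned roles grants it. A minimal sketch (the role map below is illustrative; real policies live in Azure AD / Entra ID, not in code):

```python
# Illustrative role map only; production policies are managed in the directory.
ROLE_PERMISSIONS = {
    "global_admin": {"read", "write", "reset_password", "manage_roles"},
    "helpdesk": {"read", "reset_password"},
    "reader": {"read"},
}


def is_allowed(user_roles: set[str], action: str) -> bool:
    """A user is allowed an action if any assigned role grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)
```

Unknown roles grant nothing, which keeps the check fail-closed: an unmapped role denies by default rather than erroring or allowing.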
Good to Have :
- Microsoft Certification (e.g., MS-102)
- Experience with Microsoft Graph API and SIEM log integration

Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python.
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.
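As one illustration of the compliance concerns this role covers, a consumer-side transform often de-identifies records before they land in analytics storage. The following is a minimal pure-Python sketch only; the field names are hypothetical, and the actual Kafka consumer loop and full HIPAA safeguards are omitted:

```python
"""Sketch of a de-identification step for a healthcare event stream.

Field names (patient_id, ssn, name, diagnosis_code) are made up for
illustration; a real pipeline would consume from Kafka and apply far
stricter HIPAA/HITRUST controls than this example.
"""
import hashlib


def deidentify(record: dict) -> dict:
    """Return a copy safe for analytics: drop direct identifiers and
    replace the patient id with a stable one-way hash."""
    masked = {k: v for k, v in record.items() if k not in {"ssn", "name"}}
    masked["patient_key"] = hashlib.sha256(
        record["patient_id"].encode()
    ).hexdigest()[:16]
    del masked["patient_id"]
    return masked
```

In practice this kind of function would sit inside the Kafka consumer callback, with the de-identified record produced onto a downstream topic.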

- Work with a team to provide end-to-end solutions, including coding, unit testing, and defect fixes.
- Build scalable solutions and work with quality assurance and control teams to analyze and fix issues.
- Develop and maintain APIs and Services in Node.js/Python
- Develop and maintain web-based UIs using front-end frameworks
- Participate in code reviews, unit testing and integration testing
- Participate in the full software development lifecycle, from concept and design to implementation and support
- Ensure application performance, scalability, and security through best practices in coding, testing and deployment
- Collaborate with DevOps team for troubleshooting deployment issues
Qualification
● 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration
● Proficiency in Node.js, Typescript, React, Express framework
● In-depth knowledge of databases such as MongoDB
● Proficient in HTML5, CSS3, and responsive UI design
● Proficiency in any Python development framework is a plus
● Strong direct experience in functional and object-oriented programming using JavaScript
● Experience with cloud platforms (Azure preferred)
● Microservices architecture and containerization
● Expertise in performance monitoring, tuning, and optimization
● Understanding of DevOps practices for automated deployments
● Understanding of software design patterns and best practices
● Practical experience working in Agile developments (scrum)
● Excellent critical thinking skills and the ability to mentor junior team members
● Effectively communicate and collaborate with cross-functional teams
● Strong capability to work independently and deliver results within tight deadlines
● Strong problem-solving abilities and attention to detail


Employment type: Contract basis
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
- Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses.
- Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
- Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
- Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
- Maintain documentation and implement best practices for data architecture, governance, and security.
⚙️ Required Skills
- Programming: Proficient in PySpark, Python, SQL, and MongoDB.
- Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
- Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
- Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
- Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
- CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.
🧰 Preferred Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- Certifications in Azure/AWS are highly desirable.
- Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.
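The dimensional-modeling skill above (star/snowflake schemas) boils down to splitting raw events into dimension and fact tables. A toy in-memory sketch, with made-up columns, of what would normally be a PySpark/Databricks job:

```python
"""Toy star-schema load step: split raw order events into a customer
dimension and a fact table keyed by surrogate keys. Column names and
data are illustrative only."""


def build_star(raw_rows):
    customers = {}  # natural key -> surrogate key (the dimension)
    facts = []
    for row in raw_rows:
        sk = customers.setdefault(row["customer"], len(customers) + 1)
        facts.append({"customer_sk": sk, "amount": row["amount"]})
    dim = [{"customer_sk": sk, "customer": name} for name, sk in customers.items()]
    return dim, facts
```

The same shape scales to Spark: the dimension build becomes a distinct-plus-id assignment, and the fact build a join back on the natural key.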


Requirements:
- 7+ years in enterprise application development
- Proven track record of delivering complex digital solutions
- Advanced knowledge of React hooks, context API, and component and global state management
- Experience with atomic design, component libraries, and TypeScript
- Proficiency in React performance optimization and modern features
- Strong experience with modern .NET (6+)
- Proven track record applying Clean Architecture & Clean Code & SOLID principles & DDD patterns
- Expertise in building scalable REST APIs and microservices
- Experience with Azure Service Bus, Event Grid, and message-based architectures
- Proficiency in resources like App Insights, Function Apps, Key Vaults, and App Services
- Expertise in cloud development and deployment strategies
- Strong understanding of business needs, ability to meet them by creating cutting-edge solutions
- Brilliant communication abilities, knowing how to explain technical details and processes to a non-tech person
- Fluency in English
Nice to have:
- Test automation (Playwright/Cypress)
- Building CI/CD pipelines in Azure DevOps
- API-first development and OpenAPI specifications
- Experience with agile at scale (SAFe/Spotify)
- Proficiency with AI-powered development tools (GitHub Copilot, Cursor, etc.) to enhance productivity
- Bachelor's or Master's degree in Computer Science or a related field
Responsibilities:
- Front-end, API, and back-end development, ensuring the successful delivery of digital solutions
- Drive innovation by staying informed about emerging technologies, industry trends, and best practices
- Collaborate with the Product Manager, UX/UI Designers, and Engineering Manager to define agile technical requirements, provide technical estimation, and prioritize backlogs based on business needs and technical feasibility
- Participate in sprint planning, backlog grooming, and release planning meetings to ensure alignment between technical implementation and product roadmap
- Participate in hands-on development activities, including coding, debugging, and troubleshooting, to deliver high-quality applications
- Design, architecture, development of digital applications, ensuring adherence to best practices, coding standards, and architectural principles
- Design scalable architectures for multi-region deployment
- Implement test automation strategies and frameworks to automate testing processes and ensure the quality and reliability of applications
- Automate test cases and integrate testing into the CI/CD pipeline
- Conduct code reviews to ensure adherence to coding standards, best practices, and architectural guidelines
- Define and implement code best practices, development standards, and documentation processes to maintain code quality and readability
- Integrate digital applications with existing digital assets, backend systems, and third-party APIs, ensuring seamless data exchange and interoperability
- Collaborate with integration teams to design and implement integration solutions that meet business requirements and architectural standards
- Communicate technical concepts and solutions to non-technical stakeholders in a clear and understandable manner





Data Scientist
Job Id: QX003
About Us:
QX impact was launched with a mission to make AI accessible and affordable, and to deliver AI products/solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will struggle to understand their customers and may even lose them; without insights, they cannot deliver differentiated products and services; and without insights, they cannot achieve the level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.
Position Overview:
We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.
Key Responsibilities:
- Collaborate with stakeholders to define and translate business challenges into data science solutions.
- Conduct in-depth data analysis on structured and unstructured datasets.
- Build, validate, and deploy machine learning models to solve real-world problems.
- Develop clear visualizations and presentations to communicate insights.
- Drive end-to-end project delivery, from exploration to production.
- Contribute to team knowledge sharing and mentorship activities.
Must-Have Skills:
- 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
- Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost.
- Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
- Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
- Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
- Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
- Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
- Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
- Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
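The RFM analysis mentioned above scores customers on Recency, Frequency, and Monetary value. A toy sketch with arbitrary thresholds; a real segmentation would derive bands from data-driven quantiles:

```python
"""Toy RFM scoring. The three-band thresholds here are invented for
illustration; production segmentation typically uses quantiles computed
from the customer base."""


def rfm_score(recency_days, frequency, monetary):
    """Score each dimension 1 (worst) to 3 (best) and return the total."""
    r = 3 if recency_days <= 30 else 2 if recency_days <= 90 else 1
    f = 3 if frequency >= 10 else 2 if frequency >= 3 else 1
    m = 3 if monetary >= 1000 else 2 if monetary >= 100 else 1
    return r, f, m, r + f + m
```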
Good-to-Have Skills:
- Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
- Familiarity with big data processing tools like Apache Spark or Hadoop.
- Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
- Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
- Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
- Experience working within Agile project delivery methodologies.
Competencies:
· Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
· Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
· Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
· Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
· Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.


Key Responsibilities:
● Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies.
● Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support.
● Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript.
● Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices.
● Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance.
● Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD.
● Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE.
● Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL.
● Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance.
● Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions.
● Handle schema conversion and data transformation tasks as part of migration and modernization efforts.
Required Skills & Experience:
● 5+ years of hands-on experience with C#, .NET Core, and .NET Framework.
● Proven experience in application modernization and cloud-native development.
● Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE.
● Expertise in implementing and managing CI/CD pipelines.
● Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations.
● Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing.
● Excellent problem-solving and communication skills.
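The schema-conversion work in a SQL Server to PostgreSQL migration starts with a type mapping. A minimal sketch covering a few common types only; dedicated tools (e.g., pgloader) handle the full matrix plus data movement:

```python
"""Sketch of a schema-conversion helper for a SQL Server -> PostgreSQL
migration. Covers only a handful of common types for illustration."""

TYPE_MAP = {
    "DATETIME": "timestamp",
    "DATETIME2": "timestamp",
    "BIT": "boolean",
    "NVARCHAR": "varchar",
    "UNIQUEIDENTIFIER": "uuid",
    "MONEY": "numeric(19,4)",
    "TINYINT": "smallint",
}


def to_postgres(sqlserver_type: str) -> str:
    """Map a SQL Server column type to a PostgreSQL equivalent,
    preserving any length/precision suffix such as (50)."""
    base = sqlserver_type.upper().split("(")[0]
    suffix = sqlserver_type[len(base):]
    return TYPE_MAP.get(base, base.lower()) + suffix
```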


Job Description:
We are looking for a skilled and motivated Full Stack Developer with 2–6 years of experience in designing and developing scalable web applications using .NET Core, C#, ReactJS, and MS SQL, with exposure to Microsoft Azure. The ideal candidate should be comfortable working across the full technology stack and demonstrate strong problem-solving abilities in a fast-paced, Agile environment.
Key Responsibilities:
- Design, develop, and maintain robust full stack applications using .NET Core, C#, ReactJS, and MS SQL.
- Build and consume RESTful APIs to support scalable frontend/backend integration.
- Collaborate with product owners, architects, and other developers to deliver high-quality software solutions.
- Participate in code reviews, ensure adherence to best coding practices, and contribute to continuous improvement.
- Write unit and integration tests to maintain code quality and ensure high test coverage.
- Deploy and manage applications in Microsoft Azure and contribute to improving CI/CD pipelines.
- Actively participate in Agile ceremonies such as sprint planning, stand-ups, and retrospectives.
- Troubleshoot and resolve technical issues and performance bottlenecks.
Required Skills:
- 2–6 years of hands-on experience with .NET Core and C# development.
- Proficient in ReactJS and modern front-end technologies (HTML5, CSS3, JavaScript/TypeScript).
- Experience in building and integrating RESTful APIs.
- Solid understanding of object-oriented programming, data structures, and software design principles.
- Experience working with MS SQL Server, including writing complex queries and stored procedures.
- Familiarity with Azure services such as App Services, Azure SQL, Azure Functions, etc.
Preferred Skills:
- Exposure to DevOps practices and tools such as Azure DevOps, Git, and CI/CD pipelines.
- Basic understanding of containerization (e.g., Docker) and orchestration tools like Kubernetes (K8s).
- Prior experience working in Agile/Scrum teams.
- Good communication skills and ability to work collaboratively in a cross-functional team.
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀
Job description
🔍 Job Title: Data Engineer
📍 Location: Ahmedabad
🚀 Work Mode: On-Site Opportunity
📅 Experience: 4+ Years
🕒 Employment Type: Full-Time
⏱️ Availability : Immediate Joiner Preferred
Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.
As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.
Your Key Responsibilities
Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.
Implement data validation, transformation, and quality monitoring processes.
Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
Proactively identify bottlenecks and optimize existing workflows and processes.
Provide guidance and mentorship to junior engineers in the team.
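The data-validation responsibility above typically takes the form of a quality gate: records failing checks are routed to a dead-letter store instead of the clean output. A minimal sketch with hypothetical field names:

```python
"""Sketch of a validation/quality gate in a data pipeline. The required
fields and rules are invented for illustration; real pipelines often use
a framework such as Great Expectations instead."""

REQUIRED = ("id", "amount")


def validate_batch(rows):
    """Split rows into (clean, rejected) based on simple checks."""
    clean, rejected = [], []
    for row in rows:
        ok = (
            all(k in row for k in REQUIRED)
            and isinstance(row["amount"], (int, float))
            and row["amount"] >= 0
        )
        (clean if ok else rejected).append(row)
    return clean, rejected
```

Rejected rows would normally be written to a quarantine table with the failure reason, feeding the quality-monitoring process.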
Skills & Expertise We’re Looking For
3+ years of hands-on experience in Data Engineering or related roles.
Strong expertise in Python and data pipeline design.
Experience working with Big Data tools like Hadoop, Spark, Hive.
Proficiency with SQL, NoSQL databases, and data warehousing solutions.
Solid experience in cloud platforms - Azure
Familiar with distributed computing, data modeling, and performance tuning.
Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.
Qualifications
Bachelor’s degree in Computer Science, Data Science, or a related field.

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with expertise in:
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang,
Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of Cloud Platforms (AWS, Azure, or GCP) and DevOps practices
(CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards

Job Title : Senior Consultant (Java / NodeJS + Temporal)
Experience : 5 to 12 Years
Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore
Work Mode : Remote (Must be open to travel for occasional team meetups)
Notice Period : Immediate Joiners or Serving Notice
Interview Process :
- R1 : Tech Interview (60 mins)
- R2 : Technical Interview
- R3 : (Optional) Interview with Client
Job Summary :
We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.
The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.
You will work on high-scale systems, collaborating closely with cross-functional teams.
Mandatory Skills :
Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI
Key Responsibilities :
- Design and implement scalable backend services using Node.js or Java.
- Build and manage complex workflow orchestrations using Temporal.io.
- Integrate with IAM solutions like Keycloak for role-based access control.
- Work with React (v17+), TypeScript, and component-driven frontend design.
- Use PostgreSQL for structured data persistence and optimized queries.
- Manage infrastructure using Terraform and orchestrate via Kubernetes.
- Leverage Azure Services like Blob Storage, API Gateway, and AKS.
- Write and maintain API documentation using Swagger/Postman/Insomnia.
- Conduct unit and integration testing using Jest.
- Participate in code reviews and contribute to architectural decisions.
Must-Have Skills :
- Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
- Node.js + TypeScript (preferred) or strong Java experience
- React.js (v17+) and component-driven UI development
- Keycloak IAM, PostgreSQL, and modern API design
- Infrastructure automation with Terraform, Kubernetes
- Experience in using GitFlow, OpenAPI, Jest for testing
Nice-to-Have Skills :
- Blockchain integration experience for secure KYC/identity flows
- Custom Camunda Connectors or exporter plugin development
- CI/CD experience using Azure DevOps or GitHub Actions
- Identity-based task completion authorization enforcement

Senior Generative AI Engineer
Job Id: QX016
About Us:
QX impact was launched with a mission to make AI accessible and affordable, and to deliver AI products/solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will struggle to understand their customers and may even lose them; without insights, they cannot deliver differentiated products and services; and without insights, they cannot achieve the level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.
Job Summary:
We seek a highly experienced Senior Generative AI Engineer to focus on the development, implementation, and engineering of Gen AI applications using the latest LLMs and frameworks. This role requires hands-on expertise in Python programming, cloud platforms, and advanced AI techniques, along with additional skills in front-end technologies, data modernization, and API integration. The Senior Gen AI Engineer will be responsible for building applications from the ground up, ensuring robust, scalable, and efficient solutions.
Responsibilities:
· Build GenAI solutions such as virtual assistants, data augmentation, automated insights, and predictive analytics
· Design, develop, and fine-tune generative AI models (GANs, VAEs, Transformers).
· Handle data preprocessing, augmentation, and synthetic data generation.
· Work with NLP, text generation, and contextual comprehension tasks.
· Develop backend services using Python or .NET for LLM-powered applications.
· Build and deploy AI applications on cloud platforms (Azure, AWS, GCP).
· Optimize AI pipelines and ensure scalability.
· Stay updated with advancements in AI and ML.
Skills & Requirements:
- Strong knowledge of machine learning, deep learning, and NLP.
- Proficiency in Python, TensorFlow, PyTorch, and Keras.
- Experience with cloud services, containerization (Docker, Kubernetes), and AI model deployment.
- Understanding of LLMs, embeddings, and retrieval-augmented generation (RAG).
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or a related field.
- 6+ years of experience in Gen AI or related roles.
- Experience with AI/ML model integration into data pipelines.
Core Competencies for Generative AI Engineers:
1. Programming & Software Development
a. Python – Proficiency in writing efficient and scalable code, with strong knowledge of NumPy, Pandas, TensorFlow, PyTorch, and Scikit-learn.
b. LLM Frameworks – Experience with Hugging Face Transformers, LangChain, OpenAI API, and similar tools for building and deploying large language models.
c. API development and integration using FastAPI, Flask, Django, RESTful APIs, or WebSockets.
d. Knowledge of Version Control, containerization, CI/CD Pipelines and Unit Testing.
2. Vector Database & Cloud AI Solutions
a. Pinecone, FAISS, ChromaDB, Neo4j
b. Azure Redis/ Cognitive Search
c. Azure OpenAI Service
d. Azure ML Studio Models
e. AWS (Relevant Services)
3. Data Engineering & Processing
- Handling large-scale structured & unstructured datasets.
- Proficiency in SQL, NoSQL (PostgreSQL, MongoDB), Spark, and Hadoop.
- Feature engineering and data augmentation techniques.
4. NLP & Computer Vision
- NLP: Tokenization, embeddings (Word2Vec, BERT, T5, LLaMA).
- CV: Image generation using GANs, VAEs, Stable Diffusion.
- Document Embedding – Experience with vector databases (FAISS, ChromaDB, Pinecone) and embedding models (BGE, OpenAI, SentenceTransformers).
- Text Summarization – Knowledge of extractive and abstractive summarization techniques using models like T5, BART, and Pegasus.
- Named Entity Recognition (NER) – Experience in fine-tuning NER models and using pre-trained models from SpaCy, NLTK, or Hugging Face.
- Document Parsing & Classification – Hands-on experience with OCR (Tesseract, Azure Form Recognizer), NLP-based document classifiers, and tools like LayoutLM, PDFMiner.
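As a small illustration of the extractive side of summarization mentioned above, the sketch below scores sentences by summed word frequency and keeps the top one; abstractive systems would instead use models like T5, BART, or Pegasus (the stop-word list here is a hypothetical minimal one):

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    # Split into sentences, then score each by the summed corpus frequency
    # of its non-stop words; keep the top-scoring sentences in original order.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "is", "of", "and", "to", "in", "it", "for"}
    freq = Counter(w for w in words if w not in stop)

    def score(s: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

text = ("Vector search powers retrieval. "
        "Vector search uses embeddings. "
        "The weather is nice.")
summary = extractive_summary(text, 1)
```

Frequency scoring is the simplest extractive baseline; TextRank or embedding-based sentence scoring are common upgrades before reaching for an abstractive model.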
5. Model Deployment & Optimization
- Model compression (quantization, pruning, distillation).
- Deployment using Azure CI/CD, ONNX, TensorRT, and OpenVINO on Azure, AWS, or GCP.
- Model monitoring (MLflow, Weights & Biases) and automated workflows (Azure Pipelines).
- API integration with front-end applications.
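To make the quantization item above concrete, here is a sketch of symmetric post-training int8 quantization in NumPy; real deployments would rely on the toolchains listed above (ONNX, TensorRT, OpenVINO), and the weight values here are made up:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric post-training quantization: one scale per tensor, mapping
    # the largest-magnitude weight to +/-127.
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights for accuracy comparison.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())
```

The int8 tensor takes a quarter of the float32 memory, at the cost of a bounded round-off error of at most half the scale per weight; pruning and distillation attack model size from different angles.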
6. AI Ethics & Responsible AI
- Bias detection, interpretability (SHAP, LIME), and security (adversarial attacks).
7. Mathematics & Statistics
- Linear Algebra, Probability, and Optimization (Gradient Descent, Regularization, etc.).
8. Machine Learning & Deep Learning
a. Expertise in supervised, unsupervised, and reinforcement learning.
b. Proficiency in TensorFlow, PyTorch, and JAX.
c. Experience with Transformers, GANs, VAEs, Diffusion Models, and LLMs (GPT, BERT, T5).
Personal Attributes:
- Strong problem-solving skills with a passion for data architecture.
- Excellent communication skills with the ability to explain complex data concepts to non-technical stakeholders.
- Highly collaborative, capable of working with cross-functional teams.
- Ability to thrive in a fast-paced, agile environment while managing multiple priorities effectively.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
Ready to make an impact? Apply today and become part of the QX impact team!


🚀 Job Title : Python AI/ML Engineer
💼 Experience : 3+ Years
📍 Location : Gurgaon (Work from Office, 5 Days/Week)
📅 Notice Period : Immediate
Summary :
We are looking for a Python AI/ML Engineer with strong experience in developing and deploying machine learning models on Microsoft Azure.
🔧 Responsibilities :
- Build and deploy ML models using Azure ML.
- Develop scalable Python applications with cloud-first design.
- Create data pipelines using Azure Data Factory, Blob Storage & Databricks.
- Optimize performance, fix bugs, and ensure system reliability.
- Collaborate with cross-functional teams to deliver intelligent features.
✅ Requirements :
- 3+ Years of software development experience.
- Strong Python skills; experience with scikit-learn, pandas, NumPy.
- Solid knowledge of SQL and relational databases.
- Hands-on with Azure ML, Data Factory, Blob Storage.
- Familiarity with Git, REST APIs, Docker.
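As a small taste of the model-building side of this role, the sketch below fits an ordinary least-squares model with NumPy, a stand-in for the kind of scikit-learn model one would train and then register in Azure ML (the data is synthetic and the coefficients are made up):

```python
import numpy as np

# Synthetic regression data: y = X @ true_w + small Gaussian noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.standard_normal(100)

# Fit by least squares (the normal-equation solution, computed stably).
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In practice this step would be wrapped in a training script, tracked as an Azure ML experiment, and fed by pipelines built in Azure Data Factory or Databricks.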
Solution Engineer
Primary Responsibilities
● Serve as the primary resource during the client implementation/onboarding phase
● Identify, document, and define customer business and technical needs
● Develop clear user documentation, instructions, and standard procedures
● Deliver training sessions on solution administration and usage
● Participate in customer project calls and serve as a subject matter expert on solutions
● Coordinate tasks across internal and client project teams, ensuring accountability and progress tracking
● Perform hands-on configuration, scripting, data imports, testing, and knowledge transfer activities
● Translate business requirements into technical specifications for product configuration or enhancements
● Collaborate with global team members across multiple time zones, including the U.S., India, and China
● Build and maintain strong customer relationships to gather and validate requirements
● Contribute to the development of implementation best practices and suggest improvements to processes
● Execute other tasks and duties as assigned
Note: Salary offered will depend on the candidate's qualifications and experience.
Required Skills & Experience
● Proven experience leading software implementation projects from presales through delivery
● Strong organizational skills with the ability to manage multiple detailed and interdependent tasks
● 2–5 years of experience in JavaScript and web development, including prior implementation work in a software company
● Proficiency in some or all of the following:
○ JavaScript, PascalScript, MS SQL Script, RESTful APIs, Azure, Postman
○ Embarcadero RAD Studio, Delphi
○ Basic SQL and debugging
○ SMS integration and business intelligence tools
● General knowledge of database structures and data migration processes
● Familiarity with project management tools and methodologies
● Strong interpersonal skills with a focus on client satisfaction and relationship-building
● Self-starter with the ability to work productively in a remote, distributed team environment
● Experience in energy efficiency retrofits, construction, or utility demand-side management is a plus

Responsibilities:
● Technical Leadership:
○ Architect and design complex software systems
○ Lead the development team in implementing software solutions
○ Ensure adherence to coding standards and best practices
○ Conduct code reviews and provide constructive feedback
○ Troubleshoot and resolve technical issues
● Project Management:
○ Collaborate with project managers to define project scope and requirements
○ Estimate project timelines and resource needs
○ Track project progress and ensure timely delivery
○ Manage risks and identify mitigation strategies
● Team Development:
○ Mentor and coach junior developers
○ Foster a collaborative and supportive team environment
○ Conduct performance evaluations and provide feedback
○ Identify training and development opportunities for team members
● Innovation:
○ Stay abreast of emerging technologies and industry trends
○ Evaluate and recommend new technologies for adoption
○ Encourage experimentation and innovation within the team
Qualifications
● Experience:
○ 12+ years of experience in software development
○ 4+ years of experience in a leadership role
○ Proven track record of delivering successful software projects
● Skills:
○ Strong proficiency in the C# programming language
○ Good knowledge of Java for reporting
○ Strong SQL skills and experience with Microsoft Azure
○ Expertise in software development methodologies (e.g., Agile, Scrum)
○ Excellent problem-solving and analytical skills
○ Strong communication and interpersonal skills
○ Ability to work independently and as part of a team

Job Role - React UI Lead
Experience Required - 8+ Years
Location - Indore/Pune
Hybrid Work Model (Wed and Thurs WFO)
Work Timings - 12:30 PM to 9:30 PM
As a React UI Lead at Infobeans, you will play a pivotal role in driving our digital transformation journey. You will work with a team of talented front-end developers to deliver robust and intuitive user interfaces that enhance our clients' digital banking experience. Your expertise in React.js and front-end development will be crucial in shaping our applications' architecture, ensuring scalability, performance, and maintainability.
Responsibilities:
- Lead a team of front-end developers in designing and implementing user interfaces using React.js.
- Collaborate closely with product managers, UX designers, and backend engineers to deliver high-quality, scalable applications.
- Architect efficient and reusable front-end systems that drive complex web applications.
- Mentor and coach team members, fostering a culture of continuous learning and technical excellence.
- Conduct code reviews and ensure adherence to coding best practices and standards.
- Stay updated on emerging technologies and industry trends to drive innovation in digital banking solutions.
Requirements:
- 7+ years of proven experience as a React.js developer, with expertise in building large-scale applications.
- Prior GRC/Banking experience highly desired
- Solid understanding of web development fundamentals including HTML5, CSS3, and JavaScript.
- Experience with state management libraries (e.g., Redux, MobX) and modern front-end build pipelines and tools.
- Knowledge of RESTful APIs and asynchronous request handling.
- Bachelor’s degree in Computer Science or a related field.
Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
Mandatory Skills :
- Test Automation : Selenium, Playwright, or DeepEval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities
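To illustrate the API-testing skill set above, here is a lightweight sketch of the response checks an API test (in Postman, REST Assured, or pytest) would assert against a live endpoint; the schema and payloads are hypothetical:

```python
import json

def validate_response(raw: str, required: dict) -> list:
    """Return a list of validation errors for a JSON API response body."""
    errors = []
    try:
        body = json.loads(raw)
    except json.JSONDecodeError:
        return ["response body is not valid JSON"]
    # Check that every required field exists and has the expected type.
    for field, expected_type in required.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

schema = {"id": int, "status": str}
ok = validate_response('{"id": 1, "status": "PASSED"}', schema)
bad = validate_response('{"id": "1"}', schema)
```

In a CI/CD pipeline these checks run after each deployment, with the raw body fetched from the service under test rather than hard-coded.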