Windows Azure Jobs in Bangalore (Bengaluru)


Apply to 50+ Windows Azure Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Windows Azure Job opportunities across top companies like Google, Amazon & Adobe.

JobTwine

Posted by Ariba Khan
Bengaluru (Bangalore)
4 - 5 yrs
Up to ₹25L / yr (Varies)
Amazon Web Services (AWS)
Windows Azure
Docker
Kubernetes
Shell Scripting
+2 more

About JobTwine

JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI interviews, human decisions, zero compromises: we combine AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.


Role Overview

We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment.


Key Skills & Requirements

  • 4–5 years of hands-on DevOps experience
  • Experience in product-based companies and startups
  • Strong expertise in CI/CD pipelines
  • Hands-on experience with AWS / GCP / Azure
  • Experience with Docker & Kubernetes
  • Strong knowledge of Linux and Shell scripting
  • Infrastructure as Code: Terraform / CloudFormation
  • Monitoring & logging: Prometheus, Grafana, ELK stack
  • Experience in scalability, reliability and automation


What You Will Do

  • Work closely with Sandip, CTO of JobTwine, on Gen AI DevOps initiatives
  • Build, optimize, and scale infrastructure supporting AI-driven products
  • Ensure high availability, security and performance of production systems
  • Collaborate with engineering teams to improve deployment and release processes


Why Join JobTwine?

  • Direct exposure to leadership and real product decision-making
  • Steep learning curve with high ownership and accountability
  • Opportunity to build and scale a core B2B SaaS product
MIC Global

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
10yrs+
Up to ₹50L / yr (Varies)
SQL
Python
PowerBI
Stakeholder management
Data Analytics
+4 more

About Us

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.

We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


About the Team 

As a Lead Data Specialist at MIC Global, you will play a key role in transforming data into actionable insights that inform strategic and operational decisions. You will work closely with Product, Engineering, and Business teams to analyze trends, build dashboards, and ensure that data pipelines and reporting structures are accurate, automated, and scalable.

This is a hands-on, analytical, and technically focused role ideal for someone experienced in data analytics and engineering practices. You will use SQL, Python, and modern BI tools to interpret large datasets, support pricing models, and help shape the data-driven culture across MIC Global.


Key Roles and Responsibilities 

Data Analytics & Insights

  • Analyze complex datasets to identify trends, patterns, and insights that support business and product decisions.
  • Partner with Product, Operations, and Finance teams to generate actionable intelligence on customer behavior, product performance, and risk modeling.
  • Contribute to the development of pricing models, ensuring accuracy and commercial relevance.
  • Deliver clear, concise data stories and visualizations that drive executive and operational understanding.
  • Develop analytical toolkits for underwriting, pricing, and claims.

Data Engineering & Pipeline Management

  • Design, implement, and maintain reliable data pipelines and ETL workflows.
  • Write clean, efficient scripts in Python for data cleaning, transformation, and automation.
  • Ensure data quality, integrity, and accessibility across multiple systems and environments.
  • Work with Azure data services to store, process, and manage large datasets efficiently.
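
To make the "clean, efficient scripts in Python" bullet concrete, here is a minimal, hedged sketch of a cleaning step: it parses CSV text, normalizes header and field whitespace, and drops duplicate rows. The column names and sample data are invented for illustration; a production version would more likely read from Azure storage than from an inline string.

```python
import csv
import io

def clean_rows(raw_csv: str) -> list:
    """Parse CSV text, normalize headers and values, and drop duplicate rows."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    seen = set()
    cleaned = []
    for row in reader:
        # Normalize: lowercase and trim headers, trim values, treat missing as "".
        normalized = {k.strip().lower(): (v or "").strip() for k, v in row.items()}
        key = tuple(sorted(normalized.items()))
        if key not in seen:  # keep only the first copy of each exact duplicate
            seen.add(key)
            cleaned.append(normalized)
    return cleaned

raw = "Country ,Premium\nIN, 100\nIN, 100\nUS, 250\n"
print(clean_rows(raw))
# → [{'country': 'IN', 'premium': '100'}, {'country': 'US', 'premium': '250'}]
```

Normalizing before deduplicating matters: two rows that differ only in stray whitespace should count as the same record.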

Business Intelligence & Reporting

  • Develop, maintain, and optimize dashboards and reports using Power BI (or similar tools).
  • Automate data refreshes and streamline reporting processes for cross-functional teams.
  • Track and communicate key business metrics, providing proactive recommendations.

Collaboration & Innovation

  • Collaborate with engineers, product managers, and business leads to align analytical outputs with company goals.
  • Support the adoption of modern data tools and agentic AI frameworks to improve insight generation and automation.
  • Continuously identify opportunities to enhance data-driven decision-making across the organization.

Ideal Candidate Profile

  • 10+ years of relevant experience in data analysis or business intelligence, ideally within product-based SaaS, fintech, or insurance environments.
  • Proven expertise in SQL for data querying, manipulation, and optimization.
  • Hands-on experience with Python for data analytics, automation, and scripting.
  • Strong proficiency in Power BI, Tableau, or equivalent BI tools.
  • Experience working in Azure or other cloud-based data ecosystems.
  • Solid understanding of data modeling, ETL processes, and data governance.
  • Ability to translate business questions into technical analysis and communicate findings effectively.

Preferred Attributes

  • Experience in insurance or fintech environments, especially operations and claims analytics.
  • Exposure to agentic AI and modern data stack tools (e.g., dbt, Snowflake, Databricks).
  • Strong attention to detail, analytical curiosity, and business acumen.
  • Collaborative mindset with a passion for driving measurable impact through data.

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development
MIC Global

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Best in industry
Python
SQL
ETL
DBA
Windows Azure
+1 more

About Us

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.

We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


About the Team 

We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.


Key Roles and Responsibilities 

Database Administration & Optimization

  • Support infrastructure team with production database issues and troubleshooting
  • Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries
  • Optimize stored procedures, functions, and views for better performance
  • Perform query tuning, index optimization, and execution plan analysis
  • Design and develop complex stored procedures, functions, and views
  • Support the reporting team with complex SQL queries and database design

Data Engineering & Pipelines

  • Design and build ETL/ELT pipelines using Azure Data Factory and Python
  • Implement data quality checks and validation rules before data enters pipelines
  • Develop data integration solutions to connect various data sources and systems
  • Create automated data validation, quality monitoring, and alerting mechanisms
  • Develop Python scripts for data processing, transformation, and automation
  • Build and maintain data models to support reporting and analytics requirements
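
The "data quality checks and validation rules before data enters pipelines" bullet can be sketched as a simple rule-driven gate. The rule names and record fields below are hypothetical; in practice such checks might run as a step inside an Azure Data Factory pipeline or via a dedicated framework such as Great Expectations.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    passed: list = field(default_factory=list)
    rejected: list = field(default_factory=list)  # (record, failed_rule_names) pairs

def validate(records, rules):
    """Split records into passed/rejected according to named predicate rules."""
    report = ValidationReport()
    for record in records:
        failed = [name for name, check in rules.items() if not check(record)]
        if failed:
            report.rejected.append((record, failed))
        else:
            report.passed.append(record)
    return report

# Example rules for a toy claims feed (field names are invented):
rules = {
    "has_policy_id": lambda r: bool(r.get("policy_id")),
    "positive_amount": lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] > 0,
}

records = [
    {"policy_id": "P-1", "amount": 120.0},
    {"policy_id": "", "amount": 50.0},   # fails has_policy_id
    {"policy_id": "P-3", "amount": -5},  # fails positive_amount
]

report = validate(records, rules)
print(len(report.passed), len(report.rejected))  # → 1 2
```

Rejected records carry the names of the rules they failed, so they can be routed to a quarantine table and drive the automated alerting mentioned above.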

Support & Collaboration

  • Help data analytics team build visualizations and dashboards by providing data models and queries
  • Support reporting team with data extraction, transformation, and complex reporting queries
  • Collaborate with development teams to support application database requirements
  • Provide technical guidance and best practices for database design and query optimization

Azure & Cloud

  • Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML
  • Implement cloud-based data solutions following Azure best practices
  • Support cloud database migrations and optimizations
  • Work with Agentic AI concepts and tools to build intelligent data solutions

Ideal Candidate Profile

Essential

  • 5-8 years of experience in data engineering and database administration
  • Strong expertise in MS SQL Server (2016+) administration and development
  • Proficient in writing complex SQL queries, stored procedures, functions, and views
  • Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
  • Strong Python scripting skills for data processing and automation
  • Experience with ETL/ELT design and implementation
  • Knowledge of database performance tuning, query optimization, and indexing strategies
  • Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
  • Understanding of data modeling and dimensional design concepts
  • Knowledge of Agile methodology and experience working in Agile teams
  • Strong problem-solving and analytical skills
  • Understanding of Agentic AI concepts and tools
  • Excellent communication skills and ability to work with cross-functional teams

Desirable

  • Knowledge of insurance or financial services domain
  • Experience with Azure ML and machine learning pipelines
  • Experience with Azure DevOps and CI/CD pipelines
  • Familiarity with data visualization tools (Power BI, Tableau)
  • Experience with NoSQL databases (Cosmos DB, MongoDB)
  • Knowledge of Spark, Databricks, or other big data technologies
  • Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
  • Experience with version control systems (Git, Azure Repos)

Tech Stack

  • MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development
Borderless Access

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13yrs+
₹32L - ₹35L / yr
Python
Java
NodeJS (Node.js)
Spring Boot
JavaScript
+14 more

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members across teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms (AWS, Azure, or GCP; Azure preferred).
  • Knowledge of containerization technologies such as Docker and Kubernetes.


E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM / IDS / IPS platforms.
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty and Azure Security Center.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments.
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities.
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
  • Must have experience in B2B SaaS product companies.
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS, with exposure to SOC2, GDPR, or HIPAA compliance environments.


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Strong ownership mindset, proactive security-first thinking, and the ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center, and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining the quality of your work, share your ideas, and keep learning on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.

Tarento Group

Posted by Reshika Mendiratta
Stockholm (Sweden), Bengaluru (Bangalore)
8yrs+
Best in industry
DevOps
Microsoft Windows Server
Microsoft IIS administration
Windows Azure
Powershell
+2 more

About Tarento:

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Scope of Work:

  • Support the migration of applications from Windows Server 2008 to Windows Server 2019 or 2022 in an IaaS environment.
  • Migrate IIS websites, Windows Services, and related application components.
  • Assist with migration considerations for SQL Server connections, instances, and basic data-related dependencies.
  • Evaluate and migrate message queues (MSMQ or equivalent technologies).
  • Document the existing environment, migration steps, and post-migration state.
  • Work closely with DevOps, development, and infrastructure teams throughout the project.


Required Skills & Experience:

  • Strong hands-on experience with IIS administration, configuration, and application migration.
  • Proven experience migrating workloads between Windows Server versions, ideally legacy to modern.
  • Knowledge of Windows Services setup, configuration, and troubleshooting.
  • Practical understanding of SQL Server (connection strings, service accounts, permissions).
  • Experience with queues (IBM/MSMQ or similar) and their migration considerations.
  • Ability to identify migration risks, compatibility constraints, and remediation options.
  • Strong troubleshooting and analytical skills.
  • Familiarity with Microsoft technologies (.NET, etc.)
  • Networking and Active Directory-related knowledge

Desirable / Nice-to-Have

  • Exposure to CI/CD tools, especially TeamCity and Octopus Deploy.
  • Familiarity with Azure services and related tools (Terraform, etc.)
  • PowerShell scripting for automation or configuration tasks.
  • Understanding of enterprise change management and documentation practices.
  • Security

Soft Skills

  • Clear written and verbal communication.
  • Ability to work independently while collaborating with cross-functional teams.
  • Strong attention to detail and a structured approach to execution.
  • Troubleshooting
  • Willingness to learn.


Location & Engagement Details

We are looking for a Senior DevOps Consultant for an onsite role in Stockholm (Sundbyberg office). This opportunity is open to candidates currently based in Bengaluru who are willing to relocate to Sweden for the assignment.

The role will start with an initial 6-month onsite engagement, with the possibility of extension based on project requirements and performance.

Tarento Group

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Up to ₹30L / yr (Varies)
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+7 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
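For illustration, the reflection/caching idea above can be sketched in miniature. This is a hedged sketch using SQLite as a stand-in (Dremio reflections are transparent to the query engine and far more capable; all table and column names here are hypothetical):

```python
import sqlite3

# Hypothetical sketch of the idea behind a Dremio "reflection": materialize an
# aggregate once so repeated analytical queries hit the small precomputed
# table instead of rescanning raw rows. SQLite stands in for the lakehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("south", 10.0), ("south", 15.0), ("north", 7.0)],
)

# The "reflection": an aggregation materialized ahead of query time.
conn.execute(
    """CREATE TABLE agg_sales AS
       SELECT region, SUM(amount) AS total
       FROM raw_events GROUP BY region"""
)

# BI consumers read the aggregate, not the raw table.
totals = dict(conn.execute("SELECT region, total FROM agg_sales ORDER BY region"))
print(totals)  # {'north': 7.0, 'south': 25.0}
```

The trade is one upfront aggregation paying for many cheap reads, which is what a well-chosen reflection buys a BI dashboard.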


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
MyYogaTeacher

at MyYogaTeacher

1 video
7 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
5 yrs+
Upto ₹32L / yr (Varies)
Go Programming (Golang)
Amazon Web Services (AWS)
Windows Azure
PostgreSQL
MongoDB
+1 more

Senior Backend Engineer

As a Senior Backend Engineer, you will play a critical role in designing and building highly scalable, reliable backend systems that power our global Yoga & Fitness platform. You will own backend architecture, make key technical decisions, and ensure our systems perform reliably at scale in production environments.


Responsibilities

● Design, build, and own scalable, high-performance backend systems and APIs.

● Make key architectural and technical decisions with long-term scalability and reliability in mind.

● Lead backend development from system design to production rollout.

● Own production systems including monitoring, performance tuning, and incident response.

● Handle scale, performance, and availability challenges across distributed systems.

● Ensure strong security, data protection, and compliance standards.

● Collaborate closely with product and frontend teams to deliver impactful features.

● Mentor engineers and lead knowledge-sharing sessions.

● Champion AI-assisted development and automation to improve engineering productivity.


Qualifications

● 5+ years of experience in backend software engineering with production-scale systems.

● Strong expertise in Go (Golang) and backend service development.

● Experience designing scalable, distributed system architectures.

● Strong experience with SQL and NoSQL databases and performance optimization.

● Hands-on experience with AWS, GCP, or Azure and cloud-native architectures.

● Strong understanding of scalability, performance, and reliability engineering.

● Excellent communication and technical decision-making skills.


Required Skills

● Go (Golang)

● Backend system design and architecture

● Distributed systems and scalability

● Database design and optimization

● Cloud-native development

● Production debugging and performance tuning

Preferred Skills

● Experience with AI-assisted development tools and workflows

● Knowledge of agentic workflows or MCP

● Experience scaling consumer-facing or marketplace platforms

● Strong DevOps and observability experience


About the Company

● MyYogaTeacher is a fast-growing health-tech startup focused on improving physical and mental well-being worldwide.

● We connect highly qualified Yoga and Fitness coaches with customers globally through personalized 1-on-1 live sessions.

● 200,000+ customers, 335,000+ 5-star reviews, and 95% of sessions rated 5 stars.

● Headquartered in California with operations in Bangalore.

Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
8 - 12 yrs
₹18L - ₹25L / yr
Dot Net
Windows Azure
SQL
C#
Web api
+2 more

Skills required:

  • Strong expertise in .NET Core / ASP.NET MVC
  • Candidate must have 8+ years of experience in .NET.
  • Candidate must have experience with Angular.
  • Hands-on experience with Entity Framework & LINQ
  • Experience with SQL Server (performance tuning, stored procedures, indexing)
  • Understanding of multi-tenancy architecture
  • Experience with Microservices / API development (REST, GraphQL)
  • Hands-on experience in Azure Services (App Services, Azure SQL, Blob Storage, Key Vault, Functions, etc.)
  • Experience in CI/CD pipelines with Azure DevOps
  • Knowledge of security best practices in cloud-based applications
  • Familiarity with Agile/Scrum methodologies
  • Flexible in using Copilot or any other AI tool to write automated test cases and speed up coding

Roles and Responsibilities:

- Good communication skills are a must.

- Develop features across multiple subsystems within our applications, including collaboration in requirements definition, prototyping, design, coding, testing, and deployment.

- Understand how our applications operate, how they are structured, and how customers use them.

- Provide engineering support (when necessary) to our technical operations staff when they are building, deploying, configuring, and supporting systems for customers.

Read more
Appiness Interactive
Bengaluru (Bangalore)
8 - 14 yrs
₹14L - ₹20L / yr
DevOps
Windows Azure
Powershell
cicd
yaml
+1 more

Job Description :


We are looking for an experienced DevOps Engineer with strong expertise in Azure DevOps, CI/CD pipelines, and PowerShell scripting, who has worked extensively with .NET-based applications in a Windows environment.


Mandatory Skills

  • Strong hands-on experience with Azure DevOps
  • GIT version control
  • CI/CD pipelines (Classic & YAML)
  • Excellent experience in PowerShell scripting
  • Experience working with .NET-based applications
  • Understanding of Solutions, Project files, MSBuild
  • Experience using Visual Studio / MSBuild tools
  • Strong experience in Windows environment
  • End-to-end experience in build, release, and deployment pipelines
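As a rough mental model for the end-to-end pipeline point above (this is not Azure DevOps syntax): a build/release pipeline is a DAG of stages that the orchestrator runs in dependency order. Stage names below are invented:

```python
from graphlib import TopologicalSorter

# Illustrative only: a pipeline's stages form a DAG, and the orchestrator
# executes them in a dependency-respecting order.
stages = {
    "build":   set(),            # no prerequisites
    "test":    {"build"},        # runs after build
    "package": {"build"},
    "release": {"test", "package"},
}
order = list(TopologicalSorter(stages).static_order())
print(order)
# build comes first, release last; test/package may appear in either order
```

Azure DevOps YAML expresses the same idea declaratively with `dependsOn` between stages.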


Good to Have Skills

  • Terraform (optional / good to have)
  • Experience with JFrog Artifactory
  • SonarQube integration knowledge


Read more
Top MNC


Agency job
Bengaluru (Bangalore)
7 - 12 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
MLFlow
MLOps


JD:


• Master’s degree in Computer Science, Computational Sciences, Data Science, Machine Learning, Statistics, Mathematics, or any quantitative field

• Expertise with object-oriented programming (Python, C++)

• Strong expertise in Python libraries like NumPy, Pandas, PyTorch, TensorFlow, and Scikit-learn

• Proven experience in designing and deploying ML systems on cloud platforms (AWS, GCP, or Azure).

• Hands-on experience with MLOps frameworks, model deployment pipelines, and model monitoring tools.

• Track record of scaling machine learning solutions from prototype to production. 

• Experience building scalable ML systems in fast-paced, collaborative environments.

• Working knowledge of adversarial machine learning techniques and their mitigation

• Experience with Agile and Waterfall methodologies.

• Personally invested in continuous improvement and innovation.

• Motivated, self-directed individual who works well with minimal supervision.

Read more
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Best in industry
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
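The unit-testing-with-mocks requirement above is about isolating a class from its collaborators. The posting's stack is Java/JUnit with a mocking framework; as a language-neutral sketch, the same pattern looks like this in Python's stdlib (all names are hypothetical):

```python
from unittest.mock import Mock

# Service under test with an injected dependency, replaced by a mock in tests.
class PriceService:
    def __init__(self, repo):
        self.repo = repo

    def total(self, order_id):
        items = self.repo.items_for(order_id)
        return sum(item["price"] for item in items)

# Unit test: stub the repository so no database is touched.
repo = Mock()
repo.items_for.return_value = [{"price": 10.0}, {"price": 2.5}]
svc = PriceService(repo)
assert svc.total("ord-42") == 12.5
repo.items_for.assert_called_once_with("ord-42")
```

In Java the equivalent is constructor-injecting a mocked repository (e.g. via Mockito) and asserting both the return value and the interaction.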

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain.
  1. Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack.
  2. Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of trends related to at least two technologies.
  3. Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA), and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance.
  4. Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA.
  5. Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.).
  6. Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific timesheets, capacity planning tools, etc.).
  7. Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics.
  8. Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates.
  9. Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT).
  10. Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications).
  11. Requirement Gathering and Analysis: Demonstrates working knowledge of gathering non-functional requirements; analysis of functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards) and techniques (business analysis, process mapping, etc.); requirements management tools (e.g. MS Excel); and basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs.
  12. Solution Structuring: Demonstrates working knowledge of service offerings and products.


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background and end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have experience in GCP and retail domain.

 

Skills: DevOps, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15 days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹20L / yr
Microsoft Dynamics
C#
Office 365
Git
Microsoft Dynamics CRM
+13 more

Job Description

We are seeking a skilled Microsoft Dynamics 365 Developer with 4–7 years of hands-on experience in designing, customizing, and developing solutions within the Dynamics 365 ecosystem. The ideal candidate should have strong technical expertise, solid understanding of CRM concepts, and experience integrating Dynamics 365 with external systems.

 

Key Responsibilities

  • Design, develop, and customize solutions within Microsoft Dynamics 365 CE.
  • Work on entity schema, relationships, form customizations, and business logic components.
  • Develop custom plugins, workflow activities, and automation.
  • Build and enhance integrations using APIs, Postman, and related tools.
  • Implement and maintain security models across roles, privileges, and access levels.
  • Troubleshoot issues, optimize performance, and support deployments.
  • Collaborate with cross-functional teams and communicate effectively with stakeholders.
  • Participate in version control practices using GIT.

 

Must-Have Skills

Core Dynamics 365 Skills

  • Dynamics Concepts (Schema, Relationships, Form Customization): Advanced
  • Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)
  • Actions & Custom Workflows: Intermediate
  • Security Model: Intermediate
  • Integrations: Intermediate (API handling, Postman, error handling, authorization & authentication, DLL merging)

 

Coding & Versioning

  • C# Coding Skills: Intermediate (Able to write logic using if-else, switch, loops, error handling)
  • GIT: Basic

 

Communication

  • Communication Skills: Intermediate (Ability to clearly explain technical concepts and work with business users)

 

Good-to-Have Skills (Any 3 or More)

Azure & Monitoring

  • Azure Functions: Basic (development, debugging, deployment)
  • Azure Application Insights: Intermediate (querying logs, pushing logs)

 

Reporting & Data

  • Power BI: Basic (building basic reports)
  • Data Migration: Basic (data import with lookups, awareness of migration tools)

 

Power Platform

  • Canvas Apps: Basic (building basic apps using Power Automate connector)
  • Power Automate: Intermediate (flows & automation)
  • PCF (PowerApps Component Framework): Basic

 

Skills: Microsoft Dynamics, JavaScript, Plugins


Must-Haves

Microsoft Dynamics 365 (4-7 years), Plugin Development (Advanced), C# (Intermediate), Integrations (Intermediate), GIT (Basic)

Core Dynamics 365 Skills

Dynamics Concepts (Schema, Relationships, Form Customization): Advanced

Plugin Development: Advanced (writing and optimizing plugins, calling actions, updating related entities)

Actions & Custom Workflows: Intermediate

Security Model: Intermediate

Integrations: Intermediate

(API handling, Postman, error handling, authorization & authentication, DLL merging)

Coding & Versioning

C# Coding Skills: Intermediate

(Able to write logic using if-else, switch, loops, error handling)

GIT: Basic


Notice period - Immediate to 15 days

Locations: Bangalore only



Nice to Haves

(Any 3 or More)

Azure & Monitoring

Azure Functions: Basic (development, debugging, deployment)

Azure Application Insights: Intermediate (querying logs, pushing logs)

Reporting & Data

Power BI: Basic (building basic reports)

Data Migration: Basic

(data import with lookups, awareness of migration tools)

Power Platform

Canvas Apps: Basic (building basic apps using Power Automate connector)

Power Automate: Intermediate (flows & automation)

PCF (PowerApps Component Framework): Basic

Read more
Technology, Information and Internet Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
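On the caching point in the criteria above: the eviction policy behind most service-side caches is LRU. A minimal in-process sketch (production systems would typically reach for Redis or Memcached; the keys and capacity here are invented):

```python
from collections import OrderedDict

# Tiny LRU cache of the kind a backend service might put in front of a hot
# read path. OrderedDict tracks recency; oldest entries are evicted first.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("user:1", {"name": "a"})
cache.put("user:2", {"name": "b"})
cache.get("user:1")                  # touch user:1 so it becomes most recent
cache.put("user:3", {"name": "c"})   # capacity exceeded: user:2 is evicted
print(cache.get("user:2"))  # None
print(cache.get("user:1"))  # {'name': 'a'}
```

The interview-relevant part is reasoning about what the touch on `user:1` changes: without it, `user:1` would have been the eviction victim instead.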


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why This Company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
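On the descriptive-analytics side, a first pass over a business metric often looks like this (the figures are invented; the point is that a large mean-median gap flags a day worth investigating):

```python
import statistics

# Invented daily order counts for a quick descriptive pass.
daily_orders = [120, 135, 128, 210, 125, 131, 127]

summary = {
    "mean": round(statistics.mean(daily_orders), 1),
    "median": statistics.median(daily_orders),
    "stdev": round(statistics.stdev(daily_orders), 1),
}
print(summary)  # the 210 outlier pulls the mean well above the median
```

From here the analyst's job is framing the follow-up question (campaign spike? data-quality issue?) rather than just reporting the numbers.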

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, Sql


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 


Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST

Read more
BlogVault

at BlogVault

3 candid answers
1 recruiter
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹35L / yr (Varies)
Ruby
NodeJS (Node.js)
Go Programming (Golang)
React.js
Angular (2+)
+3 more

We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.


What You’ll Do

  • Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
  • Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
  • Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
  • Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
  • Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
  • Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.


What Makes You a Strong Fit

  • You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
  • You think in systems and second-order effects—not just in ticket-by-ticket outputs.
  • You prefer well-reasoned defaults over overengineering.
  • You take ownership—not just of code, but of the outcomes it enables.
  • You work cleanly, write clear code, and make life easier for those who come after you.
  • You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.


Bonus if You Have Experience With

  • Building tools or workflows that accelerate other developers.
  • Working with AI coding tools and integrating them meaningfully into your workflow.
  • Building for SaaS products, especially those with large user bases or self-serve motions.
  • Working in small, fast-moving product teams with a high bar for ownership.


Why Join Us

  • A small team that values craftsmanship, curiosity, and momentum.
  • A product-driven culture where engineering decisions are informed by customer outcomes.
  • A chance to work on multiple zero-to-one opportunities with strong PMF.
  • No vanity perks—just meaningful work with people who care.
Read more
AI company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
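To make the reflections point concrete: a reflection is a precomputed materialization the query engine can substitute for a raw scan. A toy sketch using Python's built-in sqlite3 as a stand-in (this is not Dremio's actual API; the table and column names are invented for illustration):

```python
import sqlite3

# Toy stand-in for a lakehouse engine: a "reflection" is a precomputed
# aggregate the optimizer can answer queries from instead of scanning raw data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 100.0), ("EU", 50.0), ("US", 200.0)])

# Raw query against the base table.
raw = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

# "Reflection": materialize the aggregate once...
conn.execute("CREATE TABLE sales_by_region AS "
             "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
# ...and serve the same answer from the materialization.
reflected = dict(conn.execute("SELECT region, total FROM sales_by_region"))

assert raw == reflected  # same result, cheaper to serve
print(reflected)
```

The architect's job in Dremio is deciding which such materializations pay for themselves for the observed query patterns.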


Ideal Candidate

  • Bachelor’s or master’s in computer science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Product Innovation Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Project Management
Program Management
Stakeholder management
IT program management
Software project management
+25 more

ROLES AND RESPONSIBILITIES:

Standardization and Governance:

  • Establishing and maintaining project management standards, processes, and methodologies.
  • Ensuring consistent application of project management policies and procedures.
  • Implementing and managing project governance processes.


Resource Management:

  • Facilitating the sharing of resources, tools, and methodologies across projects.
  • Planning and allocating resources effectively.
  • Managing resource capacity and forecasting future needs.


Communication and Reporting:

  • Ensuring effective communication and information flow among project teams and stakeholders.
  • Monitoring project progress and reporting on performance.
  • Communicating strategic work progress, including risks and benefits.


Project Portfolio Management:

  • Supporting strategic decision-making by aligning projects with organizational goals.
  • Selecting and prioritizing projects based on business objectives.
  • Managing project portfolios and ensuring efficient resource allocation across projects.


Process Improvement:

  • Identifying and implementing industry best practices into workflows.
  • Improving project management processes and methodologies.
  • Optimizing project delivery and resource utilization.


Training and Support:

  • Providing training and support to project managers and team members.
  • Offering project management tools, best practices, and reporting templates.


Other Responsibilities:

  • Managing documentation of project history for future reference.
  • Coaching project teams on implementing project management steps.
  • Analysing financial data and managing project costs.
  • Interfacing with functional units (Domain, Delivery, Support, Devops, HR etc).
  • Advising and supporting senior management.


IDEAL CANDIDATE:

  • 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
  • Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
  • Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
  • Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
  • Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
  • Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
  • Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
  • Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune, Gurugram, Bhopal, Jaipur, Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹12L / yr
Windows Azure
SQL
Data Structures
databricks

Hiring: Azure Data Engineer

⭐ Experience: 2+ Years

📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

Passport: Mandatory & Valid

(Only immediate joiners & candidates serving notice period)


Mandatory Skills:

Azure Synapse, Azure Databricks, Azure Data Factory (ADF), SQL, Delta Lake, ADLS, ETL/ELT, PySpark.


Responsibilities:

  • Build and maintain data pipelines using ADF, Databricks, and Synapse.
  • Develop ETL/ELT workflows and optimize SQL queries.
  • Implement Delta Lake for scalable lakehouse architecture.
  • Create Synapse data models and Spark/Databricks notebooks.
  • Ensure data quality, performance, and security.
  • Collaborate with cross-functional teams on data requirements.
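The Delta Lake point above centers on MERGE-style upserts: a new batch updates existing keys and inserts unseen ones. A minimal pure-Python sketch of those merge semantics (illustrative only; a real pipeline would run `MERGE INTO` on a Delta table via Spark, and the field names here are invented):

```python
# Hypothetical stand-in for a Delta Lake MERGE (upsert).
def merge_upsert(target, batch, key="id"):
    # Index existing rows by key (copied, so target is not mutated)...
    merged = {row[key]: dict(row) for row in target}
    # ...then overlay the batch: matched keys update, new keys insert.
    for row in batch:
        merged.setdefault(row[key], {}).update(row)
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
batch  = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
result = merge_upsert(target, batch)
print(result)  # id 2 updated, id 3 inserted, id 1 untouched
```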


Nice to Have:

Azure DevOps, Python, Streaming (Event Hub/Kafka), Power BI, Azure certifications (DP-203).


Biofourmis

at Biofourmis

44 recruiters
Roopa Ramalingamurthy
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki
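One pattern behind "automate routine tasks" is retrying transient failures with exponential backoff before escalating to an incident. A hedged sketch (the function and parameter names are invented, not from any particular library):

```python
import time

# Retry a flaky operation with exponential backoff: 0.01s, 0.02s, 0.04s, ...
def retry(op, attempts=4, base_delay=0.01):
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: let the failure surface
            time.sleep(base_delay * (2 ** i))

calls = {"n": 0}
def flaky():
    # Simulated transient failure: errors twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)
print(result)  # succeeds on the third attempt
```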


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹10L / yr
DevOps
Windows Azure
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
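As a rough illustration of the alerting side (this is not the Prometheus or Datadog API; the class, window size, and threshold are invented), a sliding-window error-rate rule might look like:

```python
from collections import deque

# Fire when the error rate over the last `window` requests exceeds `threshold`.
class ErrorRateAlert:
    def __init__(self, window=100, threshold=0.05):
        self.samples = deque(maxlen=window)  # 1 = error, 0 = success
        self.threshold = threshold

    def record(self, is_error):
        self.samples.append(1 if is_error else 0)

    @property
    def firing(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold

alert = ErrorRateAlert(window=10, threshold=0.2)
for _ in range(8):
    alert.record(False)
assert not alert.firing          # 0% error rate
alert.record(True); alert.record(True); alert.record(True)
print(alert.firing)              # 3 errors in the last 10 samples: 30% > 20%
```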

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

One of Leading Software Company

Agency job
via Harel Consulting by Shantpriya Chandra
Bengaluru (Bangalore), Pune, Hyderabad, Chennai
10 - 20 yrs
₹25L - ₹35L / yr
Internet of Things (IOT)
Cloud Computing
skill iconAmazon Web Services (AWS)
Windows Azure
Message Queuing Telemetry Transport (MQTT)

We are looking for an "IoT Migration Architect (Azure to AWS)" - Contract to Hire role.


"IoT Migration Architect (Azure to AWS)" – Role 1

Salary between 28LPA -33 LPA -Fixed

 

We have Other Positions as well in IOT.

  1. IoT Solutions Engineer - Role 2
  2. IoT Architect – 8+ Yrs - Role -3

Design end-to-end IoT architecture, define strategy, and integrate hardware/software/cloud components.

Skills - Cloud Platforms, AWS IoT, Azure IoT, Networking Protocols.

Experience in Large Scale IoT Deployment.



Contract to Hire role.

Location – Pune/Hyderabad/Chennai/ Bangalore

Work Mode - Hybrid, 2-3 days from office per week.

Duration - Long term, with potential for full-time conversion based on performance and business needs.

 

Notice period we can consider: 15-25 days (not more than that).


Client Company – One of Leading Technology Consulting

Payroll Company – One of Leading IT Services & Staffing Company (with a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East).


Highlights of this role.

  • It's a long-term role.

  • High possibility of conversion within 6 months, or after 6 months, if you perform well.

  • Interview - 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any location: Pune/Hyderabad/Bangalore/Chennai.


Points to remember:

1. You should have valid experience and relieving letters from all your past employers.

2. Must be available to join within 15 days.

3. Must be ready to work 2-3 days a week from the client office.

4. Must have a continuous PF service history for the last 4 years.


What we offer during the role:

  • Competitive Salary
  • Flexible working hours and hybrid work mode.
  • Potential for full-time conversion, including comprehensive benefits: PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16.


How to apply:

  1. Please fill in the summary sheet below.
  2. Please provide your UAN service history.
  3. Attach your latest photo.



IoT Migration Architect (Azure to AWS) - Job Description

Job Title: IoT Migration Architect (Azure to AWS)

Experience Range: 10+ Years


Role Summary

The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.


Required Technical Skills & Qualifications

10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.

Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).

Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).

Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.

Proficiency with IoT protocols such as MQTT, AMQP, HTTPS, and securing device communication (X.509).
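MQTT's '+' (single level) and '#' (all remaining levels) topic wildcards come up constantly in IoT work. A minimal matcher for those semantics (illustrative; in production the broker, e.g. AWS IoT Core or Azure IoT Hub, performs this matching):

```python
# Match an MQTT topic filter against a concrete topic.
# '+' matches exactly one topic level; '#' matches all remaining levels.
def topic_matches(topic_filter, topic):
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                      # '#' swallows the rest
        if i >= len(t_parts):
            return False                     # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False                     # literal level mismatch
    return len(f_parts) == len(t_parts)      # no trailing topic levels

assert topic_matches("devices/+/temp", "devices/d42/temp")
assert topic_matches("devices/#", "devices/d42/battery/level")
assert not topic_matches("devices/+/temp", "devices/d42/humidity")
```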

Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).

Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS Code Pipeline, GitHub Actions).

Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).

Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.



Your full name (please write your complete name) –

Contact NO –

Alternate Contact No-

Email ID –

Alternate Email ID-

Total Experience –

Experience in IoT –

Experience in AWS IoT-

Experience in Azure IoT –

Experience in Kubernetes –

Experience in Docker –

Experience in EKS-

Do you have a valid passport –

Current CTC –

Expected CTC –

What is your notice period in your current Company-

Are you currently working or not-

If not working then when you have left your last company –

Current location –

Preferred Location –

It’s a Contract to Hire role, Are you ok with that-

Highest Qualification –

Current Employer (Payroll Company Name)

Previous Employer (Payroll Company Name)-

2nd Previous Employer (Payroll Company Name) –

3rd Previous Employer (Payroll Company Name)-

Are you holding any Offer –

Are you Expecting any offer -

Are you open to considering a Contract to Hire (C2H) role –

PF Deduction is happening in Current Company –

PF Deduction happened in 2nd last Employer-

PF Deduction happened in 3rd last Employer –

Latest Photo –

UAN Service History -


Shantpriya Chandra

Director & Head of Recruitment.

Harel Consulting India Pvt Ltd

https://www.linkedin.com/in/shantpriya/

www.harel-consulting.com

Bengaluru (Bangalore)
4 - 7 yrs
₹5L - ₹13L / yr
Linux administration
Active Directory
DNS
DHCP
VMWare
+4 more

Job Title: Infrastructure Engineer

Experience: 4.5+ Years

Location: Bangalore

Employment Type: Full-Time

Joining: Immediate Joiner Preferred


💼 Job Summary

We are looking for a skilled Infrastructure Engineer to manage, maintain, and enhance our on-premise and cloud-based systems. The ideal candidate will have strong experience in server administration, virtualization, hybrid cloud environments, and infrastructure automation. This role requires hands-on expertise, strong troubleshooting ability, and the capability to collaborate with cross-functional teams.


Roles & Responsibilities

  • Install, configure, and manage Windows and Linux servers.
  • Maintain and administer Active Directory, DNS, DHCP, and file servers.
  • Manage virtualization platforms such as VMware or Hyper-V.
  • Monitor system performance, logs, and uptime to ensure high availability.
  • Provide L2/L3 support, diagnose issues, and maintain detailed technical documentation.
  • Deploy and manage cloud servers and resources in AWS, Azure, or Google Cloud.
  • Design, build, and maintain hybrid environments (on-premises + cloud).
  • Administer data storage systems and implement/test backup & disaster recovery plans.
  • Handle cloud services such as cloud storage, networking, and identity (IAM, Azure AD).
  • Ensure compliance with security standards like ISO, SOC, GDPR, PCI DSS.
  • Integrate and manage monitoring and alerting tools.
  • Support CI/CD pipelines and automation for infrastructure deployments.
  • Collaborate with Developers, DevOps, and Network teams for seamless system integration.
  • Troubleshoot and resolve complex infrastructure & system-level issues.

Key Skills Required

  • Windows Server & Linux Administration
  • VMware / Hyper-V / Virtualization technologies
  • Active Directory, DNS, DHCP administration
  • Knowledge of CI/CD and Infrastructure as Code
  • Hands-on experience in AWS, Azure, or GCP
  • Experience with cloud migration and hybrid cloud setups
  • Proficiency in backup, replication, and disaster recovery tools
  • Familiarity with automation tools (Terraform, Ansible, etc. preferred)
  • Strong troubleshooting and documentation skills
  • Understanding of networking concepts (TCP/IP, VPNs, firewalls, routing) is an added advantage


Euphoric Thought Technologies
Bengaluru (Bangalore)
10 - 15 yrs
₹18L - ₹32L / yr
ASP.NET
MVC Framework
dot net core
skill iconAngular (2+)
Entity Framework
+6 more

Knowledge / Skills / Abilities


  • Bachelor’s Degree or equivalent in Computer Science or a related numerate discipline
  • Strong application development experience within a Microsoft .NET based environment
  • Demonstrates good written and verbal communication skills in leading a small group of Developers and liaising with Business and IT stakeholders on agreed delivery commitments
  • Demonstrates capability of proactive end-to-end ownership of product delivery including delivery to estimates, robust hosting, smooth release management, and consistency in quality to ensure overall satisfied product enhancement experience for the Product Owner and the wider, global end-user community
  • Demonstrates good proficiency with established ageing tech stack but has also gained familiarity with new technologies as per market trends, which will be key to mutual success for the candidate and the organization in context with ongoing Business and IT transformation.


Essential skills


  • Proficiency in C#, ASP.NET Webforms, MVC, WebAPI, jQuery, Angular, Entity Framework, SQL Server, XML, XSLT, JSON, .NET Core
  • Familiarity with WPF, WCF, SSIS, SSRS, Azure DevOps, Cloud Technologies (Azure)
  • Good analytical and communication skills
  • Proficiency with Agile and Waterfall methodologies


Desirable skills


Familiarity with hosting and server infrastructure

Required Experience: 10+ years

GLOBAL DIGITAL TRANSFORMATION SOLUTIONS PROVIDER

Agency job
via Peak Hire Solutions by Dhara Thakkar
Thiruvananthapuram, Trivandrum, Bengaluru (Bangalore), Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Hyderabad, Kochi (Cochin), Kolkata, Calcutta, Noida, Pune
8 - 12 yrs
₹20L - ₹40L / yr
skill icon.NET
Agile/Scrum
skill iconVue.js
Software Development
API
+21 more

Job Position: Lead II - Software Engineering

Domain: Information technology (IT)

Location: India - Thiruvananthapuram

Salary: Best in Industry

Job Positions: 1

Experience: 8 - 12 Years

Skills: .Net, Sql Azure, Rest Api, Vue.Js

Notice Period: Immediate – 30 Days


Job Summary:

We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.


Key Responsibilities:

  • Design, develop, and maintain scalable and secure .NET backend APIs.
  • Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
  • Lead and contribute to Agile software delivery processes (Scrum, Kanban).
  • Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
  • Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
  • Implement unit and integration testing to ensure code quality and system stability.
  • Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
  • Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
  • Mentor and coach engineers within a co-located or distributed team environment.
  • Maintain best practices in code versioning, testing, and documentation.


Mandatory Skills:

  • 7+ years of .NET development experience, including API design and development
  • Strong experience with Azure Cloud services, including:
  • Web/Container Apps
  • Azure Functions
  • Azure SQL Server
  • Solid understanding of Agile development methodologies (Scrum/Kanban)
  • Experience in CI/CD pipeline design and implementation
  • Proficient in Infrastructure as Code (IaC) – preferably Terraform
  • Strong knowledge of RESTful services and JSON-based APIs
  • Experience with unit and integration testing techniques
  • Source control using Git
  • Strong understanding of HTML, CSS, and cross-browser compatibility


Good-to-Have Skills:

  • Experience with Kubernetes and Docker
  • Knowledge of JavaScript UI frameworks, ideally Vue.js
  • Familiarity with JIRA and Agile project tracking tools
  • Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
  • Experience mentoring or coaching junior developers
  • Strong problem-solving and communication skills
Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
skill iconPython
Shell Scripting
CI/CD
skill iconKubernetes
+3 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities
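The CI/CD responsibility above reduces, at its core, to running stages in dependency order. A sketch using Python's standard `graphlib` (the stage names are invented, not tied to any particular CI system):

```python
from graphlib import TopologicalSorter

# Pipeline stages and what each one depends on.
stages = {
    "build":          set(),
    "test":           {"build"},
    "package":        {"test"},
    "deploy_staging": {"package"},
    "deploy_prod":    {"deploy_staging"},
}

# Derive a valid execution order; for this linear chain it is unique.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

A real pipeline definition (GitLab CI, GitHub Actions, etc.) encodes exactly this graph declaratively; the scheduler does the sorting.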


If interested, kindly share your updated resume on 82008 31681.

Bluecopa

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
skill iconPython
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities

MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
skill iconJava
skill iconSpring Boot
skill iconPHP
GraphQL
Algorithms
+8 more

Job Title: Java Spring Boot Engineer

📍 Location: Bangalore

🧾 Experience: 3–4 Years

📝 Employment Type: Contract (1 Year + Extendable)


Required Skills & Qualifications:

  • Strong expertise in Java, Spring Boot, and backend development.
  • Hands-on experience with PHP.
  • Good understanding of data structures and algorithms.
  • Experience with GraphQL and RESTful APIs.
  • Proficiency in working with SQL & NoSQL databases.
  • Experience using Git for version control.
  • Familiarity with CI/CD pipelines, Docker, Kubernetes, and cloud platforms (AWS, Azure).
  • Exposure to monitoring and logging tools like Grafana, New Relic, and Splunk.
  • Strong problem-solving skills and ability to work in a collaborative team environment.


Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹6L - ₹14L / yr
databricks
SQL Azure
Windows Azure

We are hiring a Senior Data Engineer with strong expertise in Databricks, Azure Data Factory, and PySpark.

Must Have:

  • Databricks, ADF, PySpark
  • Mastery: AWS/Azure/SAP, ELT, Data Modeling
  • Skills: Data Integration & Processing, GitHub/GitHub Actions, Azure DevOps, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest

Responsibilities:

  • Build and optimize scalable data pipelines
  • Architect & implement ELT/data models
  • Manage data ingestion, integration, and processing workflows
  • Enable CI/CD with DevOps tools
  • Ensure code quality & reliability
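As a rough illustration of the pipeline and ELT responsibilities above, here is a minimal sketch in which stdlib sqlite3 stands in for the warehouse (Databricks/Synapse in the posting); all table and column names are invented. The defining ELT move is landing raw data first and doing the casting and filtering inside the warehouse in SQL:

```python
import csv, io, sqlite3

def run_elt(csv_text):
    """Tiny ELT sketch: load raw rows first, then transform in SQL."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT)")
    # Extract + Load: land the data untouched, types and all
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    db.executemany("INSERT INTO raw_orders VALUES (:order_id, :amount)", rows)
    # Transform: casting, filtering and aggregation happen in the warehouse
    db.execute("""
        CREATE TABLE orders_clean AS
        SELECT order_id, CAST(amount AS REAL) AS amount
        FROM raw_orders
        WHERE amount != ''
    """)
    total, = db.execute("SELECT SUM(amount) FROM orders_clean").fetchone()
    return total
```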


Digitide
Bengaluru (Bangalore)
6 - 9 yrs
₹5L - ₹15L / yr
Windows Azure
Data engineering
databricks
Data Factory

1. Design, develop, and maintain data pipelines using Azure Data Factory.

2. Create and manage data models in PostgreSQL, ensuring efficient data storage and retrieval.

3. Optimize query and database performance in PostgreSQL, including indexing, query tuning, and performance monitoring.

4. Strong knowledge of data modeling and of mapping data from various sources to the data model.

5. Develop and maintain logging mechanisms in Azure Data Factory to monitor and troubleshoot data pipelines.

6. Strong knowledge of Key Vault, Azure Data Lake, and PostgreSQL.

7. Manage file handling within Azure Data Factory, including reading, writing, and transforming data from various file formats.

8. Strong SQL query skills, with the ability to handle multiple scenarios and optimize query performance.

9. Excellent problem-solving skills and the ability to handle complex data scenarios.

10. Collaborate with business stakeholders, data architects, and POs to understand and meet their data requirements.

11. Ensure data quality and integrity through validation and quality checks.

12. Power BI knowledge, including creating and configuring semantic models and reports, is preferred.
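The PostgreSQL indexing and query-tuning work in items 3 and 8 can be seen end to end even with the stdlib's sqlite3 (in PostgreSQL you would reach for EXPLAIN ANALYZE instead; the schema and index names here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
db.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
               [(i % 100, "x") for i in range(1000)])

def plan(sql):
    # First step of the query plan; enough to see full scan vs. index search
    return db.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

# Same query before and after adding an index on the filter column
before = plan("SELECT * FROM events WHERE user_id = 7")
db.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan("SELECT * FROM events WHERE user_id = 7")
```

The plan flips from a full table scan to a search using `idx_events_user`, which is the shape of most day-to-day query-tuning wins.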


Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune, Mumbai, Bengaluru (Bangalore)
8 - 22 yrs
Best in industry
Windows Azure
Microsoft Windows Azure
VMWare
Active Directory
IaaS
+1 more

About Wissen Technology

Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.

 

Here’s why Wissen Technology stands out:

 

Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.

Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.

Recognitions: Great Place to Work® Certified.

Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).

Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.

Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.

 

For more details:

 

Website: www.wissen.com 

Wissen Thought leadership : https://www.wissen.com/articles/ 

 

LinkedIn: Wissen Technology



We are seeking a highly skilled Azure Cloud professional to manage, design, and engineer scalable and secure cloud infrastructure on Microsoft Azure. The ideal candidate will have a strong background in cloud infrastructure, system administration, and architecture design.


Responsibilities

  • Manage and maintain Azure resources including Virtual Machines, Virtual Networks, Storage Accounts, Azure AD, and Resource Groups.
  • Monitor and optimize resource usage and performance.
  • Implement and manage backup, disaster recovery, and high availability.
  • Ensure security and compliance using Azure Security Center, Defender, and Azure Policies.
  • Design scalable, resilient, and cost-effective cloud architectures aligned with business goals.
  • Define and document Azure architecture standards, governance models, and best practices.
  • Collaborate with stakeholders to translate business requirements into technical solutions.
  • Plan and lead cloud migration strategies from on-premises or other cloud providers to Azure.

Required Skills & Qualifications:

  • 5–10 years of experience with Microsoft Azure platform in admin, architect, or engineer roles.
  • Strong understanding of networking, identity, security, and governance in Azure.
  • Hands-on experience with Azure IaaS, PaaS, and hybrid cloud environments.
  • Proficiency with PowerShell, Azure CLI, and scripting for automation.
  • Experience in monitoring and performance tuning using Azure Monitor, Log Analytics.
  • Familiarity with Azure Cost Management and optimization strategies.

Preferred Qualifications:

  • Microsoft certifications such as:
  • AZ-104: Microsoft Azure Administrator
  • AZ-305: Azure Solutions Architect Expert
  • AZ-400: Azure DevOps Engineer Expert
  • Knowledge of compliance standards (ISO, HIPAA, etc.)
  • Experience with hybrid infrastructure (on-prem + Azure)
  • Familiarity with other cloud platforms (AWS, GCP) is a plus.


Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune, Bengaluru (Bangalore), Mumbai, Gurugram, Hyderabad, Chennai
8 - 10 yrs
₹10L - ₹30L / yr
React.js
.NET
Windows Azure
DevOps
Kubernetes
+1 more

🚀 Hiring: .NET Full Stack Developer at Deqode

⭐ Experience: 8+ Years

📍 Location: Bangalore | Mumbai | Pune | Gurgaon | Chennai | Hyderabad

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We’re looking for an experienced .NET Full Stack Developer with strong hands-on skills in ReactJS, .NET Core, and Azure Cloud Services (Azure Functions, Azure SQL, APIM, etc.).


⭐ Must-Have Skills:

➡️ Design and develop scalable web applications using ReactJS, C#, and .NET Core.

➡️ Azure (Functions, App Services, SQL, APIM, Service Bus)

➡️ Familiarity with DevOps practices, CI/CD pipelines, Docker, and Kubernetes.

➡️ Advanced experience in Entity Framework Core and SQL Server.

➡️ Expertise in RESTful API development and microservices.


QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Bengaluru (Bangalore)
3 - 8 yrs
₹17L - ₹25L / yr
MongoDB
Python
Flask
Django
Windows Azure
+4 more

Employment type: Contract basis


Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
  • Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses.
  • Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
  • Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
  • Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
  • Maintain documentation and implement best practices for data architecture, governance, and security.

⚙️ Required Skills

  • Programming: Proficient in PySpark, Python, SQL, and MongoDB
  • Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
  • Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
  • Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
  • Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
  • CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.

🧰 Preferred Qualifications

  • Bachelor's or Master's in Computer Science, Engineering, or related field.
  • Certifications in Azure/AWS are highly desirable.
  • Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Bengaluru (Bangalore)
3 - 8 yrs
₹9L - ₹15L / yr
Salesforce development
Oracle Application Express (APEX)
Salesforce Lightning
SQL
ETL
+6 more

1. Software Development Engineer - Salesforce

What we ask for

We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect & develop robust applications.

You will work closely with business and product teams to build applications which provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.

Develop systems that are scalable, secure, highly resilient, and low-latency by applying software development principles and clean-code practices.

Should be open to working in a start-up environment and have the confidence to deal with complex issues, keeping focus on solutions and project objectives as your guiding North Star.


Technical Skills:

● Strong hands-on frontend development using JavaScript and LWC

● Expertise in backend development using Apex, Flows, and Async Apex

● Understanding of database concepts: SOQL, SOSL, and SQL

● Hands-on experience in API integration using SOAP, REST, and GraphQL

● Experience with ETL tools, data migration, and data governance

● Experience with Apex design patterns, integration patterns, and the Apex testing framework

● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, and Bitbucket

● Should have worked with at least one programming language (Java, Python, C++) and have a good understanding of data structures


Preferred qualifications

● Graduate degree in engineering

● Experience developing with India stack

● Experience in fintech or banking domain

Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Noida, Bengaluru (Bangalore), Pune
6 - 9 yrs
₹10L - ₹18L / yr
Windows Azure
SQL Azure
SQL
Data Warehouse (DWH)
Data Analytics
+3 more

Hybrid work mode


(Azure) EDW: Experience loading star-schema data warehouses using framework architectures, including loading Type 2 dimensions. Experience ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.

Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure cloud services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).

appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹10L / yr
Amazon Web Services (AWS)
Windows Azure
DevOps
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Senior DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Senior DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Hyderabad
5 - 10 yrs
₹10L - ₹18L / yr
Data Analytics
SQL
databricks
Amazon Web Services (AWS)
Windows Azure
+4 more

Position : Senior Data Analyst

Experience Required : 5 to 8 Years

Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)

Shift Timing : 11:00 AM – 8:00 PM IST

Notice Period : Immediate Joiners Only


Job Summary :

We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.

The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.


Key Responsibilities :

  • Lead the design, execution, and delivery of advanced data analysis projects.
  • Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
  • Create and maintain interactive dashboards, reports, and visualizations.
  • Perform root cause analysis and uncover meaningful patterns from large datasets.
  • Present analytical findings to senior leaders and non-technical audiences.
  • Maintain data integrity, quality, and governance in all reporting and analytics solutions.
  • Mentor junior analysts and support their professional development.
  • Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.

Must-Have Skills :

  • Strong proficiency in SQL and Databricks
  • Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
  • Sound understanding of data warehousing concepts and BI best practices

Good-to-Have :

  • Experience with AWS
  • Exposure to machine learning and predictive analytics
  • Industry-specific analytics experience (preferred but not mandatory)
appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore), Surat
3 - 5 yrs
₹4.8L - ₹11L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

Job Title: Lead DevOps Engineer

Experience Required: 4 to 5 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Bengaluru and chennai based tech startup

Bengaluru and chennai based tech startup

Agency job
via Recruit Square by Priyanka choudhary
Bengaluru (Bangalore), Chennai
6 - 12 yrs
₹19L - ₹35L / yr
Linux/Unix
TCP/IP
Windows Azure
Amazon Web Services (AWS)
SaaS
+2 more

Has substantial hands-on expertise in Linux OS, HTTPS, proxies, and Perl/Python scripting.

Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.

Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid proxy, HTTPS, and equivalent scripting. Further aligns the network with the Company’s objectives through continuous development, improvement, and automation.

Preferably 10+ years of experience in network design and delivery of technology centric, customer-focused services.

Preferably 3+ years in modern software-defined network and preferably, in cloud-based environments.

Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.

Preferably possess a valid RHCE (Red Hat Certified Engineer) certification

Preferably possess a vendor proxy certification (Forcepoint / Websense / Blue Coat / equivalent)

Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.

Fundamental understanding of proxy servers and PAC files.

Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.

Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.

Coding skills such as Perl, Python, Shell scripting will be advantageous.

Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.

Excellent communication skills – oral, written and presentation, across various types of target audiences.

Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, able to cope under pressure and will roll-up his/her sleeves to drive a project to success in a challenging environment.

Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune, Jaipur, Kolkata, Indore
4 - 6 yrs
₹5L - ₹18L / yr
.NET
C#
Angular (2+)
Windows Azure
Amazon Web Services (AWS)

Job Description:

Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.

Key Responsibilities:

  • Develop and maintain web applications using .NET Core, C#, and Angular.
  • Design and implement RESTful APIs and integrate with front-end components.
  • Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
  • Deploy and manage applications on cloud platforms (AWS or Azure).
  • Write clean, scalable, and efficient code following best practices.
  • Participate in code reviews and provide constructive feedback.
  • Troubleshoot and debug applications to ensure optimal performance.
  • Stay updated with emerging technologies and propose improvements to existing systems.

Required Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Minimum of 4 years of professional experience in software development.
  • Proficiency in .NET Core, C#, and Angular.
  • Experience with cloud services (either AWS or Azure).
  • Strong understanding of RESTful API design and implementation.
  • Familiarity with version control systems like Git.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work independently and collaboratively in a team environment.

Preferred Qualifications:

  • Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with Agile/Scrum methodologies.
  • Strong communication and interpersonal skills.

What We Offer:

  • Competitive salary and performance-based incentives.
  • Flexible working hours and remote work options.
  • Opportunities for professional growth and career advancement.
  • Collaborative and inclusive work environment.
  • Access to the latest tools and technologies.


TechMynd Consulting

at TechMynd Consulting

2 candid answers
Suraj N
Posted by Suraj N
Bengaluru (Bangalore), Gurugram, Mumbai
4 - 8 yrs
₹10L - ₹24L / yr
Data Science
PostgreSQL
Python
Apache
Amazon Web Services (AWS)
+5 more

Senior Data Engineer


Location: Bangalore, Gurugram (Hybrid)


Experience: 4-8 Years


Type: Full Time | Permanent


Job Summary:


We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.


This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.


Key Responsibilities:


PostgreSQL & Data Modeling


· Design and optimize complex SQL queries, stored procedures, and indexes


· Perform performance tuning and query plan analysis


· Contribute to schema design and data normalization


Data Migration & Transformation


· Migrate data from multiple sources to cloud or ODS platforms


· Design schema mapping and implement transformation logic


· Ensure consistency, integrity, and accuracy in migrated data


Python Scripting for Data Engineering


· Build automation scripts for data ingestion, cleansing, and transformation


· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)


· Maintain reusable script modules for operational pipelines
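A typical ingestion-and-cleansing script for the bullets above, stdlib only; the field names and cleansing rules are invented for illustration, and a real pipeline would add REST or Boto3 sources:

```python
import csv, io, json

def cleanse(json_lines):
    """Normalize raw JSON-lines records and emit CSV.

    Illustrative rules: strip whitespace from names, drop records
    without an id, default a missing country to 'unknown'.
    """
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["id", "name", "country"])
    writer.writeheader()
    for line in json_lines.splitlines():
        if not line.strip():
            continue  # skip blank lines in the feed
        rec = json.loads(line)
        if rec.get("id") is None:
            continue  # unusable without a key
        writer.writerow({
            "id": rec["id"],
            "name": str(rec.get("name", "")).strip(),
            "country": rec.get("country") or "unknown",
        })
    return out.getvalue()
```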


Data Orchestration with Apache Airflow


· Develop and manage DAGs for batch/stream workflows


· Implement retries, task dependencies, notifications, and failure handling


· Integrate Airflow with cloud services, data lakes, and data warehouses
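Airflow itself is too heavy to sketch inline, but the core ideas in these bullets (dependency ordering, retries, failure handling) fit in a toy pure-Python runner; task names are illustrative, and a real implementation would be an Airflow DAG with operator-level `retries`:

```python
from graphlib import TopologicalSorter

def run_dag(tasks, deps, retries=2):
    """Toy orchestrator: run callables in dependency order with retries.

    tasks: {name: callable}; deps: {name: {upstream_name, ...}}.
    Returns the execution order; re-raises if a task still fails
    after its retries, mirroring a failed DAG run.
    """
    order = []
    for name in TopologicalSorter(deps).static_order():
        for attempt in range(retries + 1):
            try:
                tasks[name]()
                break
            except Exception:
                if attempt == retries:
                    raise  # out of retries: fail the run
        order.append(name)
    return order
```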


Cloud Platforms (AWS / Azure / GCP)


· Manage data storage (S3, GCS, Blob), compute services, and data pipelines


· Set up permissions, IAM roles, encryption, and logging for security


· Monitor and optimize cost and performance of cloud-based data operations


Data Marts & Analytics Layer


· Design and manage data marts using dimensional models


· Build star/snowflake schemas to support BI and self-serve analytics


· Enable incremental load strategies and partitioning


Modern Data Stack Integration


· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka


· Support modular pipeline design and metadata-driven frameworks


· Ensure high availability and scalability of the stack


BI & Reporting Tools (Power BI / Superset / Supertech)


· Collaborate with BI teams to design datasets and optimize queries


· Support development of dashboards and reporting layers


· Manage access, data refreshes, and performance for BI tools




Required Skills & Qualifications:


· 4–6 years of hands-on experience in data engineering roles


· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)


· Advanced Python scripting skills for automation and ETL


· Proven experience with Apache Airflow (custom DAGs, error handling)


· Solid understanding of cloud architecture (especially AWS)


· Experience with data marts and dimensional data modeling


· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)


· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI


· Version control (Git) and CI/CD pipeline knowledge is a plus


· Excellent problem-solving and communication skills

Kenscio
Parikshith D B
Posted by Parikshith D B
Bengaluru (Bangalore)
1 - 4 yrs
₹4L - ₹10L / yr
NodeJS (Node.js)
MySQL
TypeScript
Amazon Web Services (AWS)
Windows Azure
+1 more

A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, what customers need may be a mobile stack, a Web stack, or a native application stack.


You will be responsible for:


• Build reusable code and libraries for future use.

• Own & build new modules/features end-to-end independently.

• Collaborate with other team members and stakeholders.


Required Skills :


• Thorough understanding of Node.js and TypeScript.

• Excellence in at least one framework like StrongLoop LoopBack, Express.js, Sails.js, etc.

• Basic architectural understanding of modern-day web applications.

• Diligence with coding standards.

• Must be good with Git and Git workflows.

• Experience with external integrations is a plus.

• Working knowledge of AWS, GCP, or Azure; expertise with Linux-based systems.

• Experience with CI/CD tools like Jenkins is a plus.

• Experience with testing and automation frameworks.

• Extensive understanding of RDBMS systems.

Bengaluru (Bangalore), Pune, Chennai, Coimbatore
5 - 10 yrs
₹7L - ₹18L / yr
ASP.NET
Windows Azure
React.js

Job Title: Full Stack Developer

Job Description:

We are looking for a skilled Full Stack Developer with hands-on experience in building scalable web applications using .NET Core and ReactJS. The ideal candidate will have a strong understanding of backend development, cloud services, and modern frontend technologies.

Key Skills:

  • .NET Core, C#
  • SQL Server
  • React JS
  • Azure (Functions, Services)
  • Entity Framework
  • Microservices Architecture

Responsibilities:

  • Design, develop, and maintain full-stack applications
  • Build scalable microservices using .NET Core
  • Implement and consume Azure Functions and Services
  • Develop efficient database queries with SQL Server
  • Integrate front-end components using ReactJS
  • Collaborate with cross-functional teams to deliver high-quality solutions


Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹22L / yr
Python
Django
Amazon Web Services (AWS)
Flask
Windows Azure

About the Role:


  • We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.
  • This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.


Key Responsibilities:

  • Design, develop, test, and maintain backend services using Python.
  • Develop RESTful APIs and ensure their performance, responsiveness, and scalability.
  • Work with popular Python frameworks such as Django or Flask for rapid development.
  • Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
  • Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
  • Apply object-oriented programming principles to solve real-world problems efficiently.
  • Implement and support event-driven architectures where applicable.
  • Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
  • Write clean, maintainable, and reusable code with proper documentation.
  • Contribute to system architecture and code review processes.
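As an illustration of the REST API work described above, here is a minimal, framework-agnostic sketch using only the Python standard library. The `/tasks` endpoint, the record fields, and the in-memory store are hypothetical stand-ins; in practice a framework such as Django or Flask would handle routing, serialization, and error handling with far less boilerplate.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

# Hypothetical in-memory "database"; a real service would use a proper datastore.
TASKS = {1: {"id": 1, "title": "write docs", "done": False}}

class TaskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route GET /tasks -> all tasks as JSON; anything else is a 404.
        if self.path == "/tasks":
            body = json.dumps(list(TASKS.values())).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging

def fetch_tasks() -> list:
    """Start the server on an ephemeral port, GET /tasks once, then shut down."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), TaskHandler)
    port = server.server_address[1]
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        with urlopen(f"http://127.0.0.1:{port}/tasks") as resp:
            return json.loads(resp.read())
    finally:
        server.shutdown()
```

Calling `fetch_tasks()` spins up the server on an ephemeral port, performs one GET request, and returns the decoded JSON list.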


Required Skills and Qualifications:


  • Minimum of 5 years of hands-on experience in Python development.
  • Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
  • Proficiency in building and consuming REST APIs.
  • Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
  • Hands-on experience with Python frameworks like Django, Flask, or similar.
  • Familiarity with event-driven programming and asynchronous processing.
  • Excellent problem-solving, debugging, and troubleshooting skills.
  • Strong communication and collaboration abilities to work effectively in a team environment.
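The event-driven, asynchronous-processing familiarity listed above can be sketched with `asyncio` from the standard library. The queue-and-worker shape shown here is one common pattern; the worker names and the `None` stop sentinel are illustrative conventions, not any specific production design.

```python
import asyncio

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    # Consume events until a None sentinel tells this worker to stop.
    while True:
        event = await queue.get()
        if event is None:
            queue.task_done()
            break
        results.append(f"{name}:{event}")
        queue.task_done()

async def run_pipeline(events: list) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    # Two concurrent consumers share the same queue of events.
    workers = [asyncio.create_task(worker(f"w{i}", queue, results)) for i in range(2)]
    for event in events:
        await queue.put(event)
    for _ in workers:
        await queue.put(None)  # one stop sentinel per worker
    await queue.join()  # wait until every queued item has been processed
    await asyncio.gather(*workers)
    return results
```

Each event is processed exactly once, by whichever worker dequeues it first; the result order is therefore nondeterministic, which is typical of event-driven consumers.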


Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
.NET
Windows Azure
SQL
C#

JD:

The Senior Software Engineer works closely with our development team, product manager, DevOps engineers, and business analysts to build our SaaS platform, supporting efficient, end-to-end business processes across the industry using modern, flexible technologies such as GraphQL, Kubernetes, and React.


Technical Skills: C#, Angular, and Azure, preferably with .NET


Responsibilities

· Develops and maintains back-end and front-end applications and cloud services using C#, Angular, and Azure

· Accountable for delivering high quality results

· Mentors less experienced members of the team

· Thrives in a test-driven development organization with high quality standards

· Contributes to architecture discussions as needed

· Collaborates with Business Analyst to understand user stories and requirements to meet functional needs

· Supports product team’s efforts to produce product roadmap by providing estimates for enhancements

· Supports user acceptance testing and user story approval processes on development items

· Participates in sessions to resolve product issues

· Escalates high priority issues to appropriate internal stakeholders as necessary and appropriate

· Maintains a professional, friendly, open, approachable, positive attitude


Location : Bangalore

Ideal Work Experience and Skills

· 7 to 15 years' experience working in a software development environment

· Bachelor's degree in software development or a related field preferred

· Development experience with Angular and .NET is beneficial but not required

· Highly self-motivated and able to work effectively with virtual teams of diverse backgrounds

· Strong desire to learn and grow professionally

· A track record of following through on commitments; Excellent planning, organizational, and time management skills


Wekan Enterprise Solutions
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Node.js
MongoDB
NestJS
TypeScript
Microservices
+5 more

Backend - Software Development Engineer II

 

Experience - 4+ yrs 

 

About Wekan Enterprise Solutions


Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.

Job Description

We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in fast-paced and challenging environments, in constantly upskilling, and in learning new technologies and expanding their domain knowledge into new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?

You will be working on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute to mission-critical projects, interacting directly with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.

Location - Bangalore

Basic qualifications:


  • Good problem solving skills
  • Deep understanding of software development life cycle
  • Excellent verbal and written communication skills
  • Strong focus on quality of work delivered
  • Relevant experience of 4+ years building high-performance backend applications, with at least 2 projects implemented using the required technologies

 

Required Technical Skills:


  • Extensive hands-on experience building high-performance web back-ends using Node.js, including at least 3 years of hands-on experience with Node.js and JavaScript/TypeScript
  • Hands-on project experience with Nest.Js
  • Strong experience with Express.Js framework
  • Hands-on experience in data modeling and schema design in MongoDB 
  • Experience integrating with third-party services such as cloud SDKs, payments, push notifications, authentication, etc.
  • Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
  • Strong experience writing and maintaining clear documentation

 

Good to have skills:

  • Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
  • Experience with microservice architecture
  • Experience working with other Relational and NoSQL Databases
  • Experience with technologies such as Kafka and Redis
  • Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


Gipfel & Schnell Consultings Pvt Ltd
Bengaluru (Bangalore)
5 - 12 yrs
Best in industry
DevOps
Azure
Terraform
Powershell
Apache Kafka
+1 more

Mandatory Skills:


  • AZ-104 (Azure Administrator) experience
  • CI/CD migration expertise
  • Proficiency in Windows deployment and support
  • Infrastructure as Code (IaC) in Terraform
  • Automation using PowerShell
  • Understanding of SDLC for C# applications (build/ship/run strategy)
  • Apache Kafka experience
  • Azure Web Apps


Good to Have Skills:


  • AZ-400 (Azure DevOps Engineer Expert)
  • AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
  • Apache Pulsar
  • Windows containers
  • Active Directory and DNS
  • SAST and DAST tool understanding
  • MSSQL database
  • Postgres database
  • Azure security
BlueCloud

at BlueCloud

1 recruiter
Samyaka Lokhande
Posted by Samyaka Lokhande
Bengaluru (Bangalore)
4 - 10 yrs
₹20L - ₹32L / yr
CI/CD
Windows Azure
MongoDB

Work Mode: Hybrid (2 days WFO)


We are looking for a Data Engineer who is a self-starter to work in a diverse and fast-paced environment within our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns at a regional and global level, utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering experience.


Responsibilities


· Design, Architect, and Develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements


· Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data


· Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development


· Develop a good understanding of how data flows and is stored across an organization's applications, such as CRM, broker & sales tools, Finance, HR, etc.


· Unify, enrich, and analyze a variety of data to derive insights and opportunities


· Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities


· Develop POCs to influence platform architects, product managers, and software engineers to validate solution proposals and support migration


· Develop data lake solution to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to modern technology platform


· Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org


· Mentor other members of the team and organization and contribute to the organization's growth.
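The ingest/cleanse/normalize responsibilities above can be sketched as a small, dependency-free transformation step. The field names (`email`, `name`, `signup`) and the input date format are hypothetical; a production pipeline would typically implement this logic in PySpark or a similar engine rather than plain Python, but the shape of the work is the same.

```python
from datetime import datetime

def cleanse(records: list) -> list:
    """Normalize raw CRM-style records: trim and lowercase emails,
    title-case names, convert dates to ISO 8601, and drop rows
    missing the required key field."""
    cleaned = []
    for rec in records:
        if not rec.get("email"):
            continue  # email is our required key; skip incomplete rows
        cleaned.append({
            "email": rec["email"].strip().lower(),
            "name": (rec.get("name") or "").strip().title(),
            "signup": (datetime.strptime(rec["signup"], "%d/%m/%Y").date().isoformat()
                       if rec.get("signup") else None),
        })
    return cleaned
```

Normalizing keys and date formats at ingest time keeps every downstream consumer (reporting, analytics, ML features) working from one consistent representation.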


Soft Skills


· Independent and able to manage, prioritize & lead workloads


· A reliable, self-motivated, and self-disciplined team player capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams


· Strong communication and collaboration skills, with the ability to work effectively in a team environment.


Technical Skills


· 8+ years' work experience and a bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.


· Hands-on engineer who is curious about technology, able to quickly adapt to change, and who understands the technologies supporting areas such as cloud computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, networking, security, etc.


· 5 or more years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, Azure Data Factory and Azure Synapse Analytics, Git integration with Azure DevOps, etc.


· Experience with designing & developing data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities


· Experience with building, testing and enhancing data curation pipelines and integrating data from a wide variety of sources like DBMS, File systems, APIs and streaming systems for various KPIs and metrics development with high data quality and integrity


· Experience with maintaining the health and monitoring of assigned data engineering capabilities that span analytic functions: triaging maintenance issues, ensuring high availability of the platform, monitoring workload demands, working with Infrastructure Engineering teams to maintain the data platform, and serving as an SME for one or more applications


· 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools                                                 
