50+ Windows Azure Jobs in Bangalore (Bengaluru)
Apply to 50+ Windows Azure job openings in Bangalore (Bengaluru) on CutShort.io.

Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
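For a flavour of the hands-on work above, here is a minimal sketch (deployment and namespace names are illustrative; assumes kubeconfig access and the official Kubernetes Python client) that checks whether a Deployment synced by ArgoCD has fully rolled out:

```python
# Minimal sketch: verify that a Deployment is fully rolled out after a GitOps sync.
from kubernetes import client, config

def rollout_complete(name: str, namespace: str) -> bool:
    config.load_kube_config()                    # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name, namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    return ready == desired

if __name__ == "__main__":
    print(rollout_complete("payments-api", "staging"))   # illustrative names
```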
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 3-6 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
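As a small illustration of the automation responsibilities above, a sketch (assumes configured AWS credentials and the boto3 SDK; the required "owner" tag is made up) that flags EC2 instances missing a mandatory tag:

```python
# Minimal sketch: list EC2 instances missing an "owner" tag, the kind of
# repetitive audit this role would script and schedule.
import boto3

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if "owner" not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print(untagged_instances())
```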
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can contact us directly: 9316120132.
Job Title: Actuate Developer (Reporting Tool Developer)
Type: On-site (Hyderabad or Bangalore)
Experience: 4+ Years
Contract Duration: 1 Year
Timings: 9am - 6pm
Job Overview:
We are looking for an experienced Actuate Developer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining server-side solutions using Actuate technologies (such as OpenText BIRT or Actuate iServer). The role requires expertise in server-side report generation, deployment, and performance optimization for large-scale applications.
Key Responsibilities:
Design, develop, and maintain server-side Actuate applications, including report generation, scheduling, and management.
Configure and administer Actuate iServer or BIRT Integration Server for optimal performance and scalability.
Develop and deploy report templates, custom report engines, and extensions on the server-side to meet business needs.
Integrate Actuate solutions with back-end systems, databases (SQL/NoSQL), and data sources.
Work closely with business analysts to translate reporting requirements into server-side solutions.
Optimize report generation, performance, and data flow to handle large volumes of data.
Automate the generation and distribution of reports via email or web portals.
Troubleshoot and resolve server-side issues related to report generation, data processing, and integration with other systems.
Monitor server-side applications for performance and reliability, making necessary adjustments and improvements.
Ensure high levels of security, availability, and disaster recovery for Actuate-based reporting solutions.
Maintain and support Actuate environments, including backups, upgrades, and patches.
Collaborate with cross-functional teams to develop and implement business intelligence solutions.
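Actuate/BIRT development itself is Java-based; purely as an illustration of the report generation and distribution duties above, here is a minimal Python sketch (standard library only; the table and column names are made up) that pulls rows from a database and writes a dated CSV extract:

```python
# Minimal illustration: query a database and write a dated CSV report extract.
import csv, sqlite3
from datetime import date

def export_daily_report(db_path: str = "reports.db") -> str:
    out_path = f"daily_report_{date.today().isoformat()}.csv"
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"
        ).fetchall()
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["region", "total"])
        writer.writerows(rows)
    return out_path

if __name__ == "__main__":
    print(export_daily_report())
```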
Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
Proven experience in server-side development using Actuate (OpenText BIRT, Actuate iServer, or similar technologies).
Strong proficiency in Java and JavaScript, including the development of server-side components.
Advanced knowledge of SQL and experience with relational databases such as Oracle, MySQL, or MS SQL Server.
Familiarity with server-side technologies like Apache Tomcat, WebLogic, or other application servers.
Experience with integrating Actuate reporting solutions into enterprise applications and data sources.
Hands-on experience in configuring Actuate iServer or BIRT Integration Server for report scheduling, security, and performance tuning.
Knowledge of RESTful APIs, web services, and data integration techniques.
Strong understanding of report generation, data modeling, and optimization for high-performance data processing.
Experience with version control systems (e.g., Git) and development tools.
Ability to write and optimize complex queries for large-scale reporting systems.
Preferred Qualifications:
Familiarity with cloud environments such as AWS, Azure, or Google Cloud for hosting Actuate solutions.
Knowledge of additional BI tools and frameworks such as Tableau, Power BI, or similar tools.
Experience in deploying Actuate solutions in a multi-server or clustered environment.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Experience with Agile software development methodologies.
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, focusing on Large Language Models (LLMs) and other deep learning applications.
- End-to-End Model Development: Lead the entire lifecycle of AI models—from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
- Fine-Tuning & Customization: Leverage techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications.
- Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1, exploring their applications in enterprise AI workflows.
- Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
- Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
- MLOps & CI/CD Pipelines: Implement best practices for MLOps, ensuring automated training, deployment, monitoring, and continuous improvement of AI models.
- Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
- API Development & Microservices: Develop RESTful APIs and microservices to integrate AI models seamlessly into enterprise applications.
- Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
- Collaboration & Stakeholder Engagement: Work closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
- Mentorship & Technical Leadership: Guide and mentor junior engineers, fostering best practices in AI/ML development, model fine-tuning, and software engineering.
- Research & Innovation: Stay updated with emerging AI trends, conduct experiments with cutting-edge architectures and fine-tuning techniques, and drive innovation within the team.
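To illustrate the fine-tuning work described above, a minimal sketch using Hugging Face PEFT (the base model, target modules, and hyperparameters are illustrative only):

```python
# Minimal sketch of a LoRA-style fine-tuning setup with Hugging Face PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in for a larger LLM
lora_cfg = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projections to adapt (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # only the adapter weights are trainable
```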
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 5-8 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure interaction with existing enterprise systems.
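A minimal sketch of the model deployment and API integration item above, using FastAPI (the summarise() stub stands in for a real model call):

```python
# Minimal sketch: expose a model behind a REST endpoint that enterprise systems can call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummariseRequest(BaseModel):
    text: str

class SummariseResponse(BaseModel):
    summary: str

def summarise(text: str) -> str:
    return text[:100]   # placeholder for an actual model/LLM inference call

@app.post("/summarise", response_model=SummariseResponse)
def summarise_endpoint(req: SummariseRequest) -> SummariseResponse:
    return SummariseResponse(summary=summarise(req.text))

# run with: uvicorn app:app --reload   (assuming this file is app.py)
```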
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure, spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
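A minimal sketch of the MLflow tracking mentioned above (the experiment name and metric values are dummies; in practice this would run inside an Azure ML or Azure DevOps pipeline step):

```python
# Minimal sketch: log parameters and metrics for a training run with MLflow.
import mlflow

mlflow.set_experiment("demand-forecast")      # illustrative experiment name
with mlflow.start_run():
    mlflow.log_param("model_type", "xgboost")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("rmse", 12.4)           # would come from a validation step
    mlflow.log_metric("training_seconds", 311)
```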
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix, or improve anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need a person with the highest title to say it out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
Has substantial hands-on expertise in Linux OS, HTTPS, proxies, and Perl/Python scripting.
Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.
Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid proxy, HTTPS, and scripting. Further aligns the network to meet the Company’s objectives through continuous development, improvement, and automation.
Preferably 10+ years of experience in network design and delivery of technology centric, customer-focused services.
Preferably 3+ years in modern software-defined network and preferably, in cloud-based environments.
Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.
Preferably possess a valid RHCE (Red Hat Certified Engineer) certification
Preferably possess any vendor Proxy certification (Forcepoint/ Websense/ bluecoat / equivalent)
Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.
Fundamental understanding of proxies and PAC files.
Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.
Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.
Coding skills such as Perl, Python, Shell scripting will be advantageous.
Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.
Excellent communication skills – oral, written and presentation, across various types of target audiences.
Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, able to cope under pressure and will roll-up his/her sleeves to drive a project to success in a challenging environment.
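As a small example of the HTTPS and certificate management work involved, a standard-library Python sketch that reports how many days remain before a server certificate expires:

```python
# Minimal sketch: fetch a server certificate and report days until expiry.
import socket, ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expiry_ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expiry_ts - datetime.now(timezone.utc).timestamp()) // 86400)

if __name__ == "__main__":
    print(days_until_expiry("example.com"))   # illustrative host
```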

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
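A minimal PySpark sketch of the batch ETL work described above (the paths and column names are illustrative): read raw CSV, aggregate, and write a partitioned Parquet layer:

```python
# Minimal sketch: daily batch ETL with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

orders = (
    spark.read.option("header", True).option("inferSchema", True)
    .csv("s3://raw-bucket/orders/")              # assumed raw landing zone
)

daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders_daily/"          # assumed curated zone
)
```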
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
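A minimal sketch of the orchestration duties above, written against recent Airflow (2.4+); the task bodies are stubs and the DAG id is illustrative:

```python
# Minimal sketch: a daily Airflow DAG with retries and task dependencies.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source APIs / databases")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("load into the warehouse / data mart")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```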
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
We're on the hunt for a Backend Developer who not only writes clean, efficient code but also thinks in systems and structures. If you enjoy crafting microservices, solving real-world problems using solid design principles, and love optimizing performance — this one’s for you!
🧠 Responsibilities:
- Design, develop, and maintain scalable and high-performance backend services
- Build and manage RESTful APIs and microservices
- Architect and implement Low-Level Design (LLD) for core backend features
- Apply Data Structures and Algorithms (DSA) to write optimal, scalable solutions
- Collaborate with frontend and product teams to integrate user-facing elements
- Ensure code quality through reviews, unit tests, and automation
- Optimize applications for speed, performance, and scalability
- Troubleshoot, debug, and upgrade existing systems
🛠️ Required Skills:
- 2+ years of experience in Java / Python / Node.js / GoLang
- Strong knowledge of Object-Oriented Programming (OOP) and Design Patterns
- Good grasp of Low-Level Design (LLD) and System Design fundamentals
- Proficient in Data Structures and Algorithms (DSA) — must know how to use them, not just define them 😎
- Experience with REST APIs and Microservices Architecture
- Good understanding of SQL and/or NoSQL Databases (e.g., MySQL, MongoDB, PostgreSQL)
- Familiarity with version control systems like Git
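As a small example of applying a data structure rather than just naming it, a minimal LRU cache sketch (the capacity and keys are illustrative):

```python
# Minimal sketch: an LRU cache built on OrderedDict.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)           # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
print(list(cache._store))   # ['a', 'c']  ('b' was evicted)
```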
⭐ Nice-to-Haves:
- Experience with cloud platforms (AWS, GCP, Azure)
- Familiarity with Docker, Kubernetes, or container orchestration
- Exposure to CI/CD pipelines
Location: Bangalore Indiranagar
Position: Project Manager
Location: Bengaluru, India (Hybrid/Remote flexibility available)
Company: PGAGI Consultancy Pvt. Ltd
About PGAGI
At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.
Position Summary
PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.
Key Responsibilities
• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.
• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.
• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.
• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.
• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.
• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.
• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.
• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.
• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.
Qualifications:
• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.
• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.
• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).
• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.
• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.
• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.
• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.
Why Join PGAGI?
• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.
• Be part of a fast-growing, innovation-driven startup environment.
• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.
• Competitive compensation and performance-based rewards.
• Access to learning resources, mentoring, and AI/DevOps communities.
A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, what customers need may be a mobile stack, a Web stack, or a native application stack.
You will be responsible for:
Build reusable code and libraries for future use.
Own & build new modules/features end-to-end independently.
Collaborate with other team members and stakeholders.
Required Skills :
Thorough understanding of Node.js and Typescript.
Excellence in at least one framework such as StrongLoop LoopBack, Express.js, Sails.js, etc.
Basic architectural understanding of modern day web applications
Diligence for coding standards
Must be good with git and git workflow
Experience of external integrations is a plus
Working knowledge of AWS, GCP, or Azure
Expertise with Linux-based systems
Experience with CI/CD tools like Jenkins is a plus.
Experience with testing and automation frameworks.
Extensive understanding of RDBMS systems


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
- Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
- Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
- Cloud Assets: Public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills
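As an illustration of the cloud asset tracking described above, a minimal sketch using the Azure SDK for Python (assumes the azure-identity and azure-mgmt-compute packages and a valid subscription) that builds an inventory entry for every VM:

```python
# Minimal sketch: collect a cloud-asset inventory entry for each Azure VM.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

def vm_inventory(subscription_id: str) -> list[dict]:
    client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
    assets = []
    for vm in client.virtual_machines.list_all():
        assets.append({
            "id": vm.id,
            "name": vm.name,
            "location": vm.location,
            "size": vm.hardware_profile.vm_size,
            "tags": vm.tags or {},
        })
    return assets

if __name__ == "__main__":
    print(vm_inventory(os.environ["AZURE_SUBSCRIPTION_ID"]))
```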

Job Title: Full Stack Developer
Job Description:
We are looking for a skilled Full Stack Developer with hands-on experience in building scalable web applications using .NET Core and ReactJS. The ideal candidate will have a strong understanding of backend development, cloud services, and modern frontend technologies.
Key Skills:
- .NET Core, C#
- SQL Server
- React JS
- Azure (Functions, Services)
- Entity Framework
- Microservices Architecture
Responsibilities:
- Design, develop, and maintain full-stack applications
- Build scalable microservices using .NET Core
- Implement and consume Azure Functions and Services
- Develop efficient database queries with SQL Server
- Integrate front-end components using ReactJS
- Collaborate with cross-functional teams to deliver high-quality solutions


About the Role:
- We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.
- This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.
Key Responsibilities:
- Design, develop, test, and maintain backend services using Python.
- Develop RESTful APIs and ensure their performance, responsiveness, and scalability.
- Work with popular Python frameworks such as Django or Flask for rapid development.
- Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
- Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
- Apply object-oriented programming principles to solve real-world problems efficiently.
- Implement and support event-driven architectures where applicable.
- Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Write clean, maintainable, and reusable code with proper documentation.
- Contribute to system architecture and code review processes.
Required Skills and Qualifications:
- Minimum of 5 years of hands-on experience in Python development.
- Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
- Proficiency in building and consuming REST APIs.
- Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
- Hands-on experience with Python frameworks like Django, Flask, or similar.
- Familiarity with event-driven programming and asynchronous processing.
- Excellent problem-solving, debugging, and troubleshooting skills.
- Strong communication and collaboration abilities to work effectively in a team environment.
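A minimal sketch of the asynchronous processing mentioned above (the fetch() body stands in for a real HTTP or database call):

```python
# Minimal sketch: fan out several backend calls concurrently with asyncio.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)          # stands in for an HTTP/DB call
    return f"{name} done"

async def main():
    results = await asyncio.gather(
        fetch("users", 0.3),
        fetch("orders", 0.2),
        fetch("inventory", 0.1),
    )
    print(results)      # completes in ~0.3s rather than ~0.6s

asyncio.run(main())
```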


JD:
The Senior Software Engineer works closely with our development team, product manager, DevOps engineers, and business analysts to build our SaaS platform to support efficient, end-to-end business processes across the industry using modern, flexible technologies such as GraphQL, Kubernetes, and React.
Technical Skills: C#, Angular, and Azure, preferably with .NET
Responsibilities
· Develops and maintains back-end and front-end applications and cloud services using C#, Angular, and Azure
· Accountable for delivering high quality results
· Mentors less experienced members of the team
· Thrives in a test-driven development organization with high quality standards
· Contributes to architecture discussions as needed
· Collaborates with Business Analyst to understand user stories and requirements to meet functional needs
· Supports product team’s efforts to produce product roadmap by providing estimates for enhancements
· Supports user acceptance testing and user story approval processes on development items
· Participates in sessions to resolve product issues
· Escalates high priority issues to appropriate internal stakeholders as necessary and appropriate
· Maintains a professional, friendly, open, approachable, positive attitude
Location : Bangalore
Ideal Work Experience and Skills
· 7 to 15 years’ experience working in a software development environment
· Prefer Bachelor’s degree in software development or related field
· Development experience with Angular and .NET is beneficial but not required
· Highly self-motivated and able to work effectively with virtual teams of diverse backgrounds
· Strong desire to learn and grow professionally
· A track record of following through on commitments; Excellent planning, organizational, and time management skills
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
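As a small example of the observability work above, a sketch using the prometheus_client library (the port and metric names are illustrative) that exposes request counts and latency for Prometheus/Grafana to scrape:

```python
# Minimal sketch: instrument an application so Prometheus can scrape its metrics.
import random, time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request():
    REQUESTS.labels(endpoint="/health").inc()
    time.sleep(random.uniform(0.01, 0.05))   # simulated work

if __name__ == "__main__":
    start_http_server(9100)                  # metrics served on :9100/metrics
    while True:
        handle_request()
```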
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
About SAP Fioneer
Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.
A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.
Product Technology Stack
- Languages & Tools: PowerShell, MgGraph (Microsoft Graph PowerShell), Git
- Storage & Databases: Azure Storage, Azure Databases
Role Overview
As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.
Key Responsibilities
Architecture & Design
- Design and document architecture blueprints and solution patterns for Azure-based applications.
- Implement hierarchical organizational governance using Azure Management Groups.
- Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.
Development & Implementation
- Build closed-loop, data-driven DevOps architectures using Azure Insights.
- Apply code-driven administration practices with PowerShell, MgGraph, and Git.
- Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
- Develop IAM standards with RBAC and EntraID.
Leadership & Collaboration
- Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
- Support the delivery of SaaS solutions on Azure.
Required Qualifications & Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in cloud solutions architecture and DevOps engineering.
- Extensive expertise in Azure services, core web technologies, and security best practices.
- Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
- Strong understanding of IAM, security best practices, and governance models in Azure.
- Experience working in Scrum-based environments with backlog management.
- Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.
Benefits
- Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
- Flexible work environment encouraging creativity and innovation.
- Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.
Diversity & Inclusion
At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.
Backend - Software Development Engineer II
Experience - 4+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in fast-paced, challenging environments; in constantly upskilling, learning new technologies, and expanding their domain knowledge into new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
Location - Bangalore
Basic qualifications:
- Good problem solving skills
- Deep understanding of software development life cycle
- Excellent verbal and written communication skills
- Strong focus on quality of work delivered
- 4+ years of relevant experience building high-performance backend applications, with at least 2 projects implemented using the required technologies
Required Technical Skills:
- Extensive hands-on experience building high-performance web back-ends using Node.js, with a minimum of 3+ years of hands-on experience in Node.js and JavaScript/TypeScript
- Hands-on project experience with Nest.js
- Strong experience with the Express.js framework
- Hands-on experience in data modeling and schema design in MongoDB
- Experience integrating with 3rd-party services such as cloud SDKs, payments, push notifications, authentication, etc.
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Experience with microservice architecture
- Experience working with other Relational and NoSQL Databases
- Experience with technologies such as Kafka and Redis
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face; techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
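For illustration only (not a requirement of this posting), a minimal Python sketch of calling an Azure OpenAI chat deployment of the kind referenced above might look like the following; the endpoint, key, API version, and deployment name are all placeholder assumptions.

```python
# Illustrative only: calling an Azure OpenAI chat deployment.
# Endpoint, key, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # use the version your resource supports
)

response = client.chat.completions.create(
    model="gpt-4o",  # name of your Azure OpenAI *deployment*, not the base model
    messages=[
        {"role": "system", "content": "You are a concise assistant. Think step by step."},
        {"role": "user", "content": "Summarize the key risks of deploying LLMs in production."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```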
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security
Work Mode: Hybrid (2 days WFO)
We are looking for a Data Engineer who is a self-starter to work in a diverse and fast-paced environment within our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns, at regional and global levels, by utilizing in-depth knowledge of data, infrastructure, and technologies along with data engineering experience.
Responsibilities
· Design, Architect, and Develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements
· Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data
· Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development
· Develop a good understanding of how data flows and is stored across an organization, spanning multiple applications such as CRM, Broker & Sales tools, Finance, HR, etc.
· Unify, enrich, and analyze variety of data to derive insights and opportunities
· Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities
· Develop POCs to influence platform architects, product managers and software engineers, to validate solution proposals and drive migration
· Develop data lake solution to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to modern technology platform
· Contribute and adhere to CI/CD processes, development best practices and strengthen the discipline in Data Engineering Org
· Mentor other members of the team and organization and contribute to the organization's growth.
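Purely as an illustrative sketch of the ingest/cleanse/persist work described above, assuming PySpark and a hypothetical Azure Data Lake layout (the paths and column names are invented):

```python
# Minimal PySpark sketch of an ingest -> cleanse -> persist step.
# Paths, column names, and formats are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("crm-ingest").getOrCreate()

raw = spark.read.json("abfss://landing@datalake.dfs.core.windows.net/crm/")  # raw landing zone

cleansed = (
    raw.dropDuplicates(["customer_id"])                       # de-duplicate on a business key
       .withColumn("email", F.lower(F.trim(F.col("email"))))  # normalize casing/whitespace
       .filter(F.col("customer_id").isNotNull())              # drop unusable records
)

# Persist a curated, query-friendly copy for downstream analytics.
cleansed.write.mode("overwrite").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/crm/customers/"
)
```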
Soft Skills
· Independent and able to manage, prioritize & lead workloads
· Team player, Reliable, self-motivated, and self-disciplined individual capable of executing on multiple projects simultaneously within a fast-paced environment working with cross functional teams
· Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Technical Skills
· 8+ years’ work experience and bachelor’s degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science.
· Hands-on engineer who is curious about technology, able to quickly adapt to change, and who understands the technologies supporting areas such as Cloud Computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, networking, security, etc.
· 5 or more years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, Azure Data Factory and Azure Synapse Analytics, Git integration with Azure DevOps, etc.
· Experience with designing & developing data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities
· Experience with building, testing and enhancing data curation pipelines and integrating data from a wide variety of sources like DBMS, File systems, APIs and streaming systems for various KPIs and metrics development with high data quality and integrity
· Experience maintaining the health and monitoring of assigned data engineering capabilities that span analytic functions: triaging maintenance issues; ensuring high availability of the platform; monitoring workload demands; working with Infrastructure Engineering teams to maintain the data platform; serving as an SME for one or more applications
· 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, and managed IT services, as well as strong problem-solving capabilities.
Key Responsibilities:
We are looking for a proactive, hands-on Information Technology Manager to oversee and evolve our technology infrastructure.
· In this role, the Manager will manage all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments.
· This position will ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals.
· IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
· Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
· Software & Systems Administration: Manage Microsoft 365 administration.
· Cybersecurity: Enhance our cybersecurity posture using tools such as SentinelOne, Sophos Firewall, and other tools
· Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
· Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
· Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
· Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA; other duties as needed
· IT Audit & Compliance: Conduct regular audits to ensure IT processes are compliant with security regulations and best practices (GDPR, SOC2, ISO 27001), ensuring readiness for internal and external audit.
· Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations.
Preferred Skills:
- Experience with SOC 2, ISO 27001, or similar security frameworks.
- Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues.
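As a hedged illustration of the automation and monitoring scripting touched on above (not part of the formal requirements), a small Python health-check script might look like this; the endpoint URLs and exit behaviour are placeholder assumptions:

```python
# Illustrative monitoring-style script: poll a few health endpoints and log failures.
# URLs are hypothetical examples.
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

ENDPOINTS = [
    "https://api.example.com/healthz",
    "https://auth.example.com/healthz",
]

def check(url: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(url, timeout=timeout)
        logging.info("%s -> %s", url, resp.status_code)
        return resp.status_code == 200
    except requests.RequestException as exc:
        logging.error("%s unreachable: %s", url, exc)
        return False

if __name__ == "__main__":
    failures = [u for u in ENDPOINTS if not check(u)]
    # In a real pipeline this result would feed an alerting tool (PagerDuty, Slack, etc.).
    raise SystemExit(1 if failures else 0)
```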
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
Requirements:
• Bachelor’s degree in computer science, Engineering, or a related field.
• Strong understanding of distributed data processing platforms like Databricks and BigQuery.
• Proficiency in Python, PySpark, and SQL programming languages.
• Experience with performance optimization for large datasets.
• Strong debugging and problem-solving skills.
• Fundamental knowledge of cloud services, preferably Azure or GCP.
• Excellent communication and teamwork skills.
Nice to Have:
• Experience in data migration projects.
• Understanding of technologies like Delta Lake/warehouse.
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
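Purely illustrative of the FastAPI responsibilities above, here is a minimal sketch of exposing a text-generation model behind a REST endpoint; the generate_text helper is a hypothetical stand-in for a real model client:

```python
# Minimal FastAPI sketch: expose a text-generation model via a REST endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-service")

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    completion: str

def generate_text(prompt: str, max_tokens: int) -> str:
    # Placeholder: in practice this would call an LLM (e.g. via an SDK or LangChain).
    return f"[model output for: {prompt[:50]}...]"

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(completion=generate_text(req.prompt, req.max_tokens))

# Run locally with: uvicorn main:app --reload
```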
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Experience:
○ 2-4 years of hands-on experience with Microsoft Power Automate (Flow).
○ Experience with Power Apps, Power BI, and Power Platform technologies.
○ Experience in integrating REST APIs, SOAP APIs, and custom connectors.
○ Proficiency in using tools like Microsoft SharePoint, Azure, and Dataverse.
○ Familiarity with Microsoft 365 apps like Teams, Outlook, and Excel.
● Technical Skills:
○ Knowledge of JSON, OData, HTML, JavaScript, and other web-based technologies.
○ Strong understanding of automation, data integration, and process optimization.
○ Experience with D365 (Dynamics 365) and Azure Logic Apps is a plus.
○ Proficient in troubleshooting, problem-solving, and debugging automation workflows.
● Soft Skills:
○ Excellent communication skills to liaise with stakeholders and technical teams.
○ Strong analytical and problem-solving abilities.
○ Self-motivated and capable of working independently as well as part of a team.
Educational Qualifications:
● Bachelor's Degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent practical experience).
Good to have Qualifications:
● Microsoft Certified: Power Platform certifications (e.g., Power Platform Functional Consultant, Power Automate RPA Developer) would be advantageous.
● Experience with Agile or Scrum methodologies.
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
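For illustration only, a small boto3 sketch of the kind of inspection scripting that complements this role; the region, bucket name, and filter values are placeholder assumptions, and real provisioning would normally go through Terraform/CloudFormation as described above:

```python
# Illustrative boto3 sketch: list running EC2 instances and a few S3 objects.
# Names and regions are placeholders.
import boto3  # pip install boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
s3 = boto3.client("s3", region_name="ap-south-1")

# List running instances and their instance types.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])

# List the first few objects in a (hypothetical) bucket.
for obj in s3.list_objects_v2(Bucket="example-assets-bucket", MaxKeys=5).get("Contents", []):
    print(obj["Key"], obj["Size"])
```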
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience, whether in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
Job Description - Manager Sales
Min 15 years of experience.
Should have experience in sales of the Cloud IT SaaS product portfolio that Savex deals with.
Team management experience, leading the cloud business including teams.
Sales Manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45 yrs
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested, reply with your CV and the details below:
Total experience -
Current CTC -
Expected CTC -
Notice period -
Current location -
Qualification -
Total experience in channel sales -
What Cloud IT products have you done sales for?
What is the annual revenue generated through sales?
- Bachelor of Computer Science or Equivalent Education
- At least 5 years of experience in a relevant technical position.
- Azure and/or AWS experience
- Strong in CI/CD concepts and technologies like GitOps (Argo CD)
- Hands-on experience with DevOps Tools (Jenkins, GitHub, SonarQube, Checkmarx)
- Experience with Helm Charts for package management
- Strong in Kubernetes, OpenShift, and Container Network Interface (CNI)
- Experience with programming and scripting languages (Spring Boot, NodeJS, Python)
- Strong container image management experience using Docker and distroless concepts
- Familiarity with Shared Libraries for code reuse and modularity
- Excellent communication skills (verbal, written, and presentation)
Note: Looking for immediate joiners only.

GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell). Knowing Python is a must.
- Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux based infrastructure.
- Excellent problem-solving and troubleshooting skills
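As an illustrative sketch of the Python automation expected on Google Cloud (the bucket and file names are invented placeholders, and credentials are assumed to come from Application Default Credentials):

```python
# Illustrative GCP automation sketch: list buckets and upload a backup artifact.
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()

# Inventory existing buckets (e.g. as part of a DR/backup audit script).
for bucket in client.list_buckets():
    print(bucket.name)

# Upload a local backup file to a (hypothetical) backup bucket.
bucket = client.bucket("example-dr-backups")
blob = bucket.blob("nightly/app-db-2024-01-01.dump")
blob.upload_from_filename("/tmp/app-db.dump")
print("uploaded", blob.name)
```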
ROLE AND RESPONSIBILITIES
Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data and cleansing and transforming data into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
Technical Bachelor’s Degree.
Non-Technical Degree holders should have 1+ years of relevant experience.
We are looking "Sr.Software Engineer(Devops)" for Reputed Client @ Bangalore Permanent Role.
Experience: 4+ Yrs
Responsibilities:
• As part of a team, you will design, develop, and maintain a scalable multi-cloud DevOps blueprint.
• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering & legacy application modernization
• Continuously improve CI/CD pipelines, tools, processes, procedures, and systems relating to developer productivity
• Collaborate continuously with the product development teams to implement CI/CD pipelines.
• Contribute subject-matter expertise on Developer Productivity, DevOps, and Infrastructure Automation best practices.
Mandatory Skills:
• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.
• Strong scripting skills (Java or Python) are a must.
• Experience with automation tools such as Ansible, Chef, Puppet etc.
• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle
• Hands-on working experience in developing or deploying microservices is a must.
• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.
• Knowledge about microservices hosted in leading cloud environments
• Experience with containerizing applications (Docker preferred) is a must
• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.
• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
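Purely as an illustration of automating containerized deployments with Python (not a stated requirement of this role), a minimal sketch using the official Kubernetes Python client follows; the deployment name and namespace are placeholders, and in practice such changes usually flow through CI/CD rather than ad-hoc scripts:

```python
# Illustrative sketch with the Kubernetes Python client: list deployments and scale one.
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # local kubeconfig; in-cluster code would use load_incluster_config()
apps = client.AppsV1Api()

namespace = "default"
for dep in apps.list_namespaced_deployment(namespace).items:
    print(dep.metadata.name, dep.spec.replicas)

# Scale a (hypothetical) deployment to 3 replicas.
apps.patch_namespaced_deployment_scale(
    name="payments-api",
    namespace=namespace,
    body={"spec": {"replicas": 3}},
)
```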
Desirable Skills:
• Experience working with Secret management services such as HashiCorp Vault is desirable.
• Experience working with Identity and access management services such as Okta, Cognito is desirable.
• Experience with monitoring systems such as Prometheus, Grafana is desirable.
Educational Qualifications and Experience:
• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)
• 4 to 6 years of hands-on experience in server-side application development & DevOps
FINTECH CANDIDATES ONLY
About the job:
Emint is a fintech startup with the mission to 'Make the best investing product that Indian consumers love to use, with simplicity & intelligence at the core'. We are creating a platform that gives a holistic view of market dynamics and helps our users make smart & disciplined investment decisions. Emint is founded by a stellar team of individuals who come with decades of experience of investing in Indian & global markets. We are building a team of highly skilled & disciplined professionals and are looking for equally motivated individuals to be part of Emint. We are currently looking to hire a DevOps engineer to join our team at Bangalore.
Job Description :
Must Have:
• Hands-on experience with AWS DevOps
• Experience in Unix with Bash scripting is a must
• Experience working with Kubernetes and Docker
• Experience with GitLab, GitHub, or Bitbucket, and artifact repositories (e.g., Artifactory)
• Packaging and deployment
• CI/CD pipeline experience (Jenkins is preferable)
• CI/CD best practices
Good to Have:
• Startup Experience
• Knowledge of source code management guidelines
• Experience with deployment tools like Ansible/Puppet/Chef is preferable
• IAM knowledge
• Coding knowledge of Python adds value
• Test automation setup experience
Qualifications:
• Bachelor's degree or equivalent experience in Computer Science or related field
• Graduates from IIT / NIT/ BITS / IIIT preferred
• Professionals with fintech ( stock broking / banking ) preferred
• Experience in building & scaling B2C apps preferred
Job Brief
QwikSkills is seeking an extremely knowledgeable Azure Cloud Engineer with a passion for problem-solving. You will make recommendations and help to create and maintain cloud services for developers that use this infrastructure for their software. You will need great collaboration and communication skills as you will spend a large part of your role interacting with developers and non-technical stakeholders.
Who We Are
QwikSkills is a one-stop platform to learn & practice hands-on cloud skills, cloud certification preparation and practice needs. We offer affordable world-class online certification practice tests and hands-on cloud labs for individuals as well as teams for AWS, Google Cloud, Azure, VMware etc.
As an Azure Cloud Engineer, your responsibilities include:-
- Developing and deploying Cloud solutions in collaboration with the cloud team.
- Managing and maintaining Cloud Labs portfolio.
- Creating, debugging, testing, and documenting Cloud Labs.
- Identifying, analysing, and resolving Cloud infrastructure vulnerabilities and application deployment issues.
- Regularly reviewing existing systems and making recommendations for improvements.
- Generating curriculum for technical training.
- Working with the internal team to design, own, and deliver advanced training courses.
- Handling a group of 50-100 mentees/students and constantly guiding them through the course curriculum.
What We Are Looking for, in a Candidate
- Energy and enthusiasm to work in a fast-paced start-up culture
- Valuable degree in Computer Science or other technical discipline or equivalent
- Good understanding of the various Azure services and how they work together
- Superior programming knowledge; Python is required
- Good creativity and ideas for creating new Lab content that is engaging, relevant, and useful for the learners
- Good understanding of finding quality content for Labs Doc Creation
- Demonstrable problem-solving skills and logical thinking techniques, and have excellent attention to detail
- Excellent teamwork/collaboration skills
- Effective time management skills
- Proven work experience in a similar role
Fringe Benefits of Working with Us
- Assured, attractive compensation
- An inclusive team of like-minded professionals
- An energetic and positive working space
- Collateral creative freedom
- Personal and professional development
- A major scope for career progression

we’d love to speak with you.
Skills and Qualifications:
Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI.
Proficiency in scripting languages such as Python, Bash, or Ruby.
Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.
Experience with cloud platforms such as AWS, Azure, or GCP.
Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift.
Experience with version control systems such as Git.
Familiarity with Agile methodologies and practices.
Understanding of networking concepts and principles.
Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.
Good understanding of security and data protection principles.
Roles and responsibilities:
● Building and setting up new development tools and infrastructure
● Working on ways to automate and improve development and release processes
● Deploy updates and fixes
● Helping to ensure information security best practices
● Provide Level 2 technical support
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
Objectives :
- Building and setting up new development tools and infrastructure
- Working on ways to automate and improve development and release processes
- Testing code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
Daily and Monthly Responsibilities :
- Deploy updates and fixes
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Skills and Qualifications :
- Degree in Computer Science or Software Engineering or BSc in Computer Science, Engineering or relevant field
- 3+ years of experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good logical skills and knowledge of programming concepts (OOP, data structures)
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Perform and run blameless RCAs on incidents and outages aggressively, looking for answers that will prevent the incident from ever happening again
Do you have the right ingredients? (Requirements)
- Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies ( AWS cloud provider preferred)
Bread puns are encouraged but not required
Hiring for Azure Data Engineers.
Location: Bangalore
Employment type: Full-time, permanent
website: www.amazech.com
Qualifications:
B.E./B.Tech/M.E./M.Tech in Computer Science, Information Technology, Electrical or Electronic with good academic background.
Experience and Required Skill Sets:
• Minimum 5 years of hands-on experience with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob, Azure Storage Explorer
• Experience in Data warehouse/analytical systems using Azure Synapse.
• Proficient in creating Azure Data Factory pipelines for ETL processing: copy activity, custom Azure development, Synapse, etc.
• Knowledge of Azure Data Catalog, Event Grid, Service Bus, SQL, and Purview.
• Good technical knowledge in Microsoft SQL Server BI Suite (ETL, Reporting, Analytics, Dashboards) using SSIS, SSAS, SSRS, Power BI
• Design and develop batch and real-time streaming of data loads to data warehouse systems
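As a hedged illustration of one small piece of the batch-load work described above, the following Python sketch lands an extract in Azure Blob Storage so a downstream Data Factory/Synapse pipeline can pick it up; the connection string, container, and blob paths are placeholder assumptions:

```python
# Illustrative sketch: land a batch extract in Azure Blob Storage for a downstream
# Data Factory / Synapse load. Connection string, container, and blob names are placeholders.
import os
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
container = service.get_container_client("raw-landing")

with open("daily_sales_extract.csv", "rb") as data:
    container.upload_blob(
        name="sales/2024/01/01/daily_sales_extract.csv",
        data=data,
        overwrite=True,
    )

print("Extract landed; the downstream ADF copy activity can now run.")
```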
Other Requirements:
A Bachelor's or Master's degree (Engineering or computer-related degree preferred)
Strong understanding of Software Development Life Cycles including Agile/Scrum
Responsibilities:
• Ability to create complex, enterprise-transforming applications that meet and exceed client expectations.
• Responsible for the bottom line. Strong project management abilities. Ability to encourage the team to stick to timelines.

Golang Developer
Location: Chennai/ Hyderabad/Pune/Noida/Bangalore
Experience: 4+ years
Notice Period: Immediate/ 15 days
Job Description:
- Must have at least 3 years of experience working with Golang.
- Strong Cloud experience is required for day-to-day work.
- Experience with the Go programming language is necessary.
- Good communication skills are a plus.
- Skills: AWS, GCP, Azure, Golang
Job Title: AWS-Azure Data Engineer with Snowflake
Location: Bangalore, India
Experience: 4+ years
Budget: 15 to 20 LPA
Notice Period: Immediate joiners or less than 15 days
Job Description:
We are seeking an experienced AWS-Azure Data Engineer with expertise in Snowflake to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, implementing, and maintaining data infrastructure and systems using AWS, Azure, and Snowflake. Your primary focus will be on developing scalable and efficient data pipelines, optimizing data storage and processing, and ensuring the availability and reliability of data for analysis and reporting.
Responsibilities:
- Design, develop, and maintain data pipelines on AWS and Azure to ingest, process, and transform data from various sources.
- Optimize data storage and processing using cloud-native services and technologies such as AWS S3, AWS Glue, Azure Data Lake Storage, Azure Data Factory, etc.
- Implement and manage data warehouse solutions using Snowflake, including schema design, query optimization, and performance tuning.
- Collaborate with cross-functional teams to understand data requirements and translate them into scalable and efficient technical solutions.
- Ensure data quality and integrity by implementing data validation, cleansing, and transformation processes.
- Develop and maintain ETL processes for data integration and migration between different data sources and platforms.
- Implement and enforce data governance and security practices, including access control, encryption, and compliance with regulations.
- Collaborate with data scientists and analysts to support their data needs and enable advanced analytics and machine learning initiatives.
- Monitor and troubleshoot data pipelines and systems to identify and resolve performance issues or data inconsistencies.
- Stay updated with the latest advancements in cloud technologies, data engineering best practices, and emerging trends in the industry.
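Purely illustrative of the Snowflake work described above, here is a minimal Python connector sketch running a post-load quality check; the account, warehouse, database, and table names are placeholder assumptions:

```python
# Illustrative Snowflake sketch: connect and run a simple post-load quality check.
# Account/warehouse/table names are placeholders.
import os
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],  # e.g. xy12345.ap-south-1
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Simple row-count / null-key validation after an ETL load.
    cur.execute("SELECT COUNT(*), COUNT_IF(customer_id IS NULL) FROM stg_orders")
    total_rows, null_keys = cur.fetchone()
    print(f"loaded {total_rows} rows, {null_keys} with missing customer_id")
finally:
    conn.close()
```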
Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of experience as a Data Engineer, with a focus on AWS, Azure, and Snowflake.
- Strong proficiency in data modelling, ETL development, and data integration.
- Expertise in cloud platforms such as AWS and Azure, including hands-on experience with data storage and processing services.
- In-depth knowledge of Snowflake, including schema design, SQL optimization, and performance tuning.
- Experience with scripting languages such as Python or Java for data manipulation and automation tasks.
- Familiarity with data governance principles and security best practices.
- Strong problem-solving skills and ability to work independently in a fast-paced environment.
- Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and stakeholders.
- Immediate joiner or notice period less than 15 days preferred.
If you possess the required skills and are passionate about leveraging AWS, Azure, and Snowflake to build scalable data solutions, we invite you to apply. Please submit your resume and a cover letter highlighting your relevant experience and achievements in the AWS, Azure, and Snowflake domains.
- Role: IoT Application Development (Java)
Skill Set:
- Proficiency in Java 11.
- Strong knowledge of Spring Boot framework.
- Experience with Kubernetes.
- Familiarity with Kafka.
- Understanding of Azure Cloud services.
Experience: 3 to 5 years
Location: Bangalore
Notice Period: Immediate joiners
- Job Description: We are seeking an experienced IoT Application Developer with expertise in Java to join our team in Bangalore. As a Java Developer, you will be responsible for designing, developing, and deploying IoT applications. You should have a solid understanding of Java 11 and the Spring Boot framework. Experience with Kubernetes and Kafka is also required. Familiarity with Azure Cloud services is essential. Your role will involve collaborating with the development team to build scalable and efficient IoT solutions using Java and related technologies.


RESPONSIBILITIES
- Become an expert in our technology and platforms
- Provide top-tier, highly skilled and attentive support to our consumer app teams.
- Perform production support, troubleshooting and maintenance tasks with a focus on quality and timeliness
- Work with the Implementations and Delivery Teams on defect resolution and solution delivery
- As an active member of the backend team you would utilize and promote best practices and standards
- You’ll develop new technical skills and gain industry knowledge
- You’ll provide occasional “after-hours” incident response and support
- Maintain software to integrate with internal back-end systems
- Build tools to reduce the occurrence of errors and improve customer experience
- Manage system troubleshooting and maintenance.
Required skills and qualifications
• 4+ years’ experience supporting Web and Backend based application software and environments preferred
• Ability to prioritize effectively and handle shifting priorities professionally
• Experience with version management systems like Git, Subversion
• Knowledge of payment gateways (Razorpay, Juspay)
• Experience with web application servers; basic knowledge of AWS and Azure
• Databases (SQL and NoSQL) and the SQL query language.
• Good understanding of networking principles.
• An understanding of and background in object-oriented programming languages
• Full-stack development knowledge is required.
Preferred skills and qualifications
• Bachelor of science degree (or equivalent) in computer science, engineering, or relevant field
About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, is working with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
The Role
Power BI front-end developer in the Data Domain (Manufacturing, Sales & Marketing, Purchasing, Logistics, …). Responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in the Compressor Technique. You always take a quality-first approach, ensuring the data is visualized in a clear, accurate, and user-friendly manner. You always ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and the implementation of bug fixes and enhancements.
Skills & Experience
• The ideal candidate has a degree in Computer Science, Information Technology or equal through experience.
• Strong knowledge on BI development principles, time intelligence, functions, dimensional modeling and data visualization is required.
• Advanced knowledge and 5-10 years experience with professional BI development & data visualization is preferred.
• You are familiar with data warehouse concepts.
• Knowledge on MS Azure (data lake, databricks, SQL) is considered as a plus.
• Experience and knowledge on scripting languages such as PowerShell and Python to setup and automate Power BI platform related activities is an asset.
• Good knowledge (oral and written) of English is required.
· Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
· BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders in building and managing BlueYonder's technology assets in the Data Platform and Services.
· This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor other people in the team on architecture and design in a hands-on manner. You are responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.
· Core responsibilities include architecting and designing (along with counterparts and distinguished Architects) a ground-up, cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.
· The team currently comprises 60+ global associates across the US, India (COE), and the UK and is expected to grow rapidly. The incumbent will need to have leadership qualities to also mentor junior and mid-level software associates in our team. This person will lead the Data Platform architecture - Streaming and Bulk - with Snowflake/Elasticsearch/other tools.
Our current technical environment:
· Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
· Application Architecture: Scalable, resilient, event-driven, secure multi-tenant microservices architecture
· Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
· Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite