50+ AWS (Amazon Web Services) Jobs in Mumbai | AWS (Amazon Web Services) Job openings in Mumbai


- 5+ years of IT development experience, with a minimum of 3+ years of hands-on experience in Snowflake.
- Strong experience in building/designing data warehouses, data lakes, and data marts end to end, focusing on large enterprise-scale Snowflake implementations on any of the hyperscalers.
- Strong experience building productionized data ingestion and data pipelines in Snowflake.
- Good knowledge of Snowflake's architecture, features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities.
- Good experience with Snowflake RBAC and data security.
- Strong experience with Snowflake features, including newly released ones.
- Good experience in Python/PySpark.
- Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF).
- Experience/knowledge of orchestration and scheduling tools like Airflow.
- Good understanding of ETL/ELT processes and ETL tools.


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.


- Able to manage a product with millions of hits and low latency
- 8+ years of overall experience
- 5+ years on the MEAN stack, especially the backend (Node.js, MongoDB)
- 2-3 years of experience as a Lead
- Individual contributor
- Recruit and manage team members, write code, and take ownership of the product!

Job Title : Full Stack Drupal Developer
Experience : Minimum 5 Years
Location : Hyderabad / Bangalore / Mumbai / Pune / Chennai / Gurgaon (Hybrid or On-site)
Notice Period : Immediate to 15 Days Preferred
Job Summary :
We are seeking a skilled and experienced Full Stack Drupal Developer with a strong background in Drupal (version 8 and above) for both front-end and back-end development. The ideal candidate will have hands-on experience in AWS deployments, Drupal theming and module development, and a solid understanding of JavaScript, PHP, and core Drupal architecture. Acquia certifications and contributions to the Drupal community are highly desirable.
Mandatory Skills :
Drupal 8+, PHP, JavaScript, Custom Module & Theming Development, AWS (EC2, Lightsail, S3, CloudFront), Acquia Certified, Drupal Community Contributions.
Key Responsibilities :
- Develop and maintain full-stack Drupal applications, including both front-end (theming) and back-end (custom module) development.
- Deploy and manage Drupal applications on AWS using services like EC2, Lightsail, S3, and CloudFront.
- Work with the Drupal theming layer and module layer to build custom and reusable components.
- Write efficient and scalable PHP code integrated with JavaScript and core JS concepts.
- Collaborate with UI/UX teams to ensure high-quality user experiences.
- Optimize performance and ensure high availability of applications in cloud environments.
- Contribute to the Drupal community and utilize contributed modules effectively.
- Follow best practices for code versioning, documentation, and CI/CD deployment processes.
Required Skills & Qualifications :
- Minimum 5 Years of hands-on experience in Drupal development (Drupal 8 onwards).
- Strong experience in front-end (theming, JavaScript, HTML, CSS) and back-end (custom module development, PHP).
- Experience with Drupal deployment on AWS, including services such as EC2, Lightsail, S3, and CloudFront.
- Proficiency in JavaScript, core JS concepts, and PHP coding.
- Acquia certifications such as:
- Drupal Developer Certification
- Site Management Certification
- Acquia Certified Developer (preferred)
- Experience with contributed modules and active participation in the Drupal community is a plus.
- Familiarity with version control (Git), Agile methodologies, and modern DevOps tools.
Preferred Certifications :
- Acquia Certified Developer.
- Acquia Site Management Certification.
- Any relevant AWS certifications are a bonus.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
Requirements
Education: B.Tech (Comp. Sc., IT) or equivalent
Experience: 3+ years of experience developing applications on Django, Angular/React, HTML, and CSS
Behavioural Skills:
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills:
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills
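To illustrate the kind of serverless transformation logic this role involves, here is a minimal sketch of an AWS Lambda handler that cleans a batch of records. The event shape, field names, and transform rules are hypothetical; a real job would read from and write to S3 or Redshift rather than returning rows in the response.

```python
import json

def transform(record):
    """Normalize one raw record: trim the name, coerce the amount to float."""
    return {
        "id": record["id"],
        "name": record.get("name", "").strip(),
        "amount": float(record.get("amount", 0)),
    }

def lambda_handler(event, context):
    """Entry point in the standard AWS Lambda signature.

    Expects event["records"] as a list of raw dicts and returns the
    cleaned rows plus a record count in the body.
    """
    cleaned = [transform(r) for r in event.get("records", [])]
    return {
        "statusCode": 200,
        "body": json.dumps({"count": len(cleaned)}),
        "rows": cleaned,
    }
```

Keeping the transform pure (no AWS calls) makes it easy to unit-test locally before deployment.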
Position Overview
We're seeking a skilled Full Stack Developer to build and maintain scalable web applications using modern technologies. You'll work across the entire development stack, from database design to user interface implementation.
Key Responsibilities
- Develop and maintain full-stack web applications using Node.js and TypeScript
- Design and implement RESTful APIs and microservices
- Build responsive, user-friendly front-end interfaces
- Design and optimize SQL databases and write efficient queries
- Collaborate with cross-functional teams on feature development
- Participate in code reviews and maintain high code quality standards
- Debug and troubleshoot application issues across the stack
Required Skills
- Backend: 3+ years experience with Node.js and TypeScript
- Database: Proficient in SQL (PostgreSQL, MySQL, or similar)
- Frontend: Experience with modern JavaScript frameworks (React, Vue, or Angular)
- Version Control: Git and collaborative development workflows
- API Development: RESTful services and API design principles
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of containerization (Docker)
- Familiarity with testing frameworks (Jest, Mocha, or similar)
- Understanding of CI/CD pipelines
What We Offer
- Competitive salary and benefits
- Flexible work arrangements
- Professional development opportunities
- Collaborative team environment

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
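"Ensure high data quality and integrity" usually means gating rows on validation rules before they reach the warehouse. A minimal sketch in plain Python, with hypothetical field names; in a Glue/PySpark job the same rules would typically be expressed as DataFrame filters.

```python
def validate_rows(rows, required=("id", "event_ts"), numeric=("amount",)):
    """Split rows into (good, bad): required fields must be present and
    non-null, and numeric fields must parse as floats."""
    good, bad = [], []
    for row in rows:
        ok = all(row.get(f) is not None for f in required)
        for f in numeric:
            try:
                float(row.get(f, 0))
            except (TypeError, ValueError):
                ok = False
        (good if ok else bad).append(row)
    return good, bad
```

Routing the bad rows to a quarantine location, rather than dropping them, preserves an audit trail for debugging upstream sources.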
Job Summary:
We are looking for an experienced Java Developer with 4+years of hands-on experience to join our dynamic team. The ideal candidate will have a strong background in Java development, problem-solving skills, and the ability to work independently as well as part of a team. You will be responsible for designing, developing, and maintaining high-performance and scalable applications.
Key Responsibilities:
- Design, develop, test, and maintain Java-based applications.
- Write well-designed, efficient, and testable code following best software development practices.
- Troubleshoot and resolve technical issues during development and production support.
- Collaborate with cross-functional teams including QA, DevOps, and Product teams.
- Participate in code reviews and provide constructive feedback.
- Maintain proper documentation for code, processes, and configurations.
- Support deployment and post-deployment monitoring during night shift hours.
Required Skills:
- Strong programming skills in Java 8 or above.
- Experience with Spring Framework (Spring Boot, Spring MVC, etc.).
- Proficiency in RESTful APIs, Microservices Architecture, and Web Services.
- Familiarity with SQL and relational databases like MySQL, PostgreSQL, or Oracle.
- Hands-on experience with version control systems like Git.
- Understanding of Agile methodologies.
- Experience with build tools like Maven/Gradle.
- Knowledge of unit testing frameworks (JUnit/TestNG).
Preferred Skills (Good to Have):
- Experience with cloud platforms (AWS, Azure, or GCP).
- Familiarity with CI/CD pipelines.
- Basic understanding of frontend technologies like JavaScript, HTML, CSS.
Job Description: We are looking for a talented and motivated Software Engineer with expertise in both Windows and Linux operating systems and solid experience in Java technologies. The ideal candidate should be proficient in data structures and algorithms, as well as frameworks like Spring MVC, Spring Boot, and Hibernate. Hands-on experience working with MySQL databases is also essential for this role.
Responsibilities:
● Design, develop, test, and maintain software applications using Java technologies.
● Implement robust solutions using Spring MVC, Spring Boot, and Hibernate frameworks.
● Develop and optimize database operations with MySQL.
● Analyze and solve complex problems by applying knowledge of data structures and algorithms.
● Work with both Windows and Linux environments to develop and deploy solutions.
● Collaborate with cross-functional teams to deliver high-quality products on time.
● Ensure application security, performance, and scalability.
● Maintain thorough documentation of technical solutions and processes.
● Debug, troubleshoot, and upgrade legacy systems when required.
Requirements:
● Operating Systems: Expertise in Windows and Linux environments.
● Programming Languages & Technologies: Strong knowledge of Java (Core Java, Java 8+).
● Frameworks: Proficiency in Spring MVC, Spring Boot, and Hibernate.
● Algorithms and Data Structures: Good understanding and practical application of DSA concepts.
● Databases: Experience with MySQL – writing queries, stored procedures, and performance tuning.
● Version Control Systems: Experience with tools like Git.
● Deployment: Knowledge of CI/CD pipelines and tools such as Jenkins, Docker (optional)
About Fundly
- Fundly is building a retailer-centric pharma supply chain platform and marketplace for over 10 million pharma retailers in India
- Founded by experienced industry professionals with cumulative experience of 30+ years
- Has grown to 60+ people in 12 cities in less than 2 years
- Monthly disbursement of INR 50 Cr
- Raised venture capital of USD 5M so far from Accel Partners, the biggest VC fund in India
Opportunity at Fundly
- Building a retailer-centric ecosystem in pharma supply chain
- Fast growing: 3000+ retailers, 36,000 transactions, and 200+ Cr disbursement in the last 2 years
- Build a new platform from scratch
Responsibilities -
- Collaborating with Product & Data teams to understand the business flows, information architecture
- Designing and implementing scalable and fault tolerant Data platform for supporting Business Intelligence, Reporting and building AI/ML models
- Ensure data accuracy and quality during ETL/ELT processes
- Monitoring and managing infrastructure, ensuring optimal performance, security, and scalability.
- Troubleshooting and resolving issues related to data pipelines and data accuracy
- Ensuring compliance with industry best practices and organizational policies.
Qualifications
- Bachelor’s degree in computer science, information technology, or a related field.
- 2+ years of experience as a Data Engineer with AWS Data Engineering stack or other relevant Data Engineering technologies/frameworks.
- Strong experience with AWS.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and collaboration skills.
Who are you?
1. Loves to understand and solve problems, be it technology, business, or people problems
2. Likes to take responsibility and accountability
3. Loves to develop excellent technology solutions with modular design and follows good coding practices

Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
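At the 100-million-row scale this posting mentions, batch jobs typically walk the table with keyset (seek) pagination, since `OFFSET` re-scans skipped rows and degrades as it grows. A stdlib-only sketch with an in-memory list standing in for the MySQL table; the `id` column and batch size are illustrative.

```python
def fetch_batch(table, last_id, batch_size):
    """Stand-in for `SELECT ... WHERE id > %s ORDER BY id LIMIT %s`;
    a real job would run this against MySQL via a DB driver."""
    rows = sorted((r for r in table if r["id"] > last_id), key=lambda r: r["id"])
    return rows[:batch_size]

def process_in_batches(table, batch_size=1000):
    """Walk the whole table via keyset pagination, carrying the last seen
    id forward so each batch starts from an indexed seek, not an offset."""
    last_id, seen = 0, 0
    while True:
        batch = fetch_batch(table, last_id, batch_size)
        if not batch:
            return seen
        seen += len(batch)
        last_id = batch[-1]["id"]
```

The same pattern ports directly to a shell-scripted loop or a Glue job reading partitioned slices.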

Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5 to 7 Years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
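CloudFormation declares infrastructure as a JSON/YAML template with an `AWSTemplateFormatVersion` and a `Resources` map. As a quick sketch of the template anatomy, here is a minimal template for one S3 bucket built as a Python dict; the logical ID and bucket name are placeholders, not real resources.

```python
import json

def s3_bucket_template(bucket_name):
    """Return a minimal CloudFormation template (as a dict) declaring
    a single S3 bucket under the logical ID "DataLakeBucket"."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataLakeBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

# Serialized body as it would be passed to a create-stack call.
template_body = json.dumps(s3_bucket_template("example-data-lake-raw"), indent=2)
```

Generating templates programmatically like this is one way teams keep repeated resources consistent before handing the JSON to CloudFormation.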

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
Role: ETL Developer
Work Mode: Hybrid
Experience: 4+ years
Location: Pune, Gurgaon, Bengaluru, Mumbai
Required Skills: AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience in AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
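"Tracking KPIs/metrics for database performance" usually reduces to summarizing query latencies into a few headline numbers. A small illustrative helper, assuming latency samples in milliseconds and a simple nearest-rank p95; production reporting would more likely pull these from CloudWatch or a monitoring stack.

```python
def latency_kpis(samples_ms):
    """Summarize query latencies into the KPIs a DB-performance
    report usually tracks: average, p95 (nearest-rank), and worst case."""
    ordered = sorted(samples_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg_ms": round(sum(ordered) / len(ordered), 2),
        "p95_ms": ordered[p95_index],
        "max_ms": ordered[-1],
    }
```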
Sr. DevOps Engineer – 12+ Years of Experience
Key Responsibilities:
Design, implement, and manage CI/CD pipelines for seamless deployments.
Optimize cloud infrastructure (AWS, Azure, GCP) for high availability and scalability.
Manage and automate infrastructure using Terraform, Ansible, or CloudFormation.
Deploy and maintain Kubernetes, Docker, and container orchestration tools.
Ensure security best practices in infrastructure and deployments.
Implement logging, monitoring, and alerting solutions (Prometheus, Grafana, ELK, Datadog).
Troubleshoot and resolve system and network issues efficiently.
Collaborate with development, QA, and security teams to streamline DevOps processes.
Required Skills:
Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD).
Hands-on experience with cloud platforms (AWS, GCP, or Azure).
Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible).
Experience with containerization and orchestration (Docker, Kubernetes).
Knowledge of networking, security, and monitoring tools.
Proficiency in scripting languages (Python, Bash, Go).
Strong troubleshooting and performance tuning skills.
Preferred Qualifications:
Certifications in AWS, Kubernetes, or DevOps.
Experience with service mesh, GitOps, and DevSecOps.


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.



Job Title: .NET Developer
Location: Pan India (Hybrid)
Employment Type: Full-Time
Join Date: Immediate / Within 15 Days
Experience: 4+ Years
Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core.
- Work on RESTful APIs and integrate third-party services.
- Collaborate with UI/UX designers and front-end developers using Angular or React.
- Deploy, monitor, and maintain applications on AWS or Azure.
- Participate in code reviews, technical discussions, and architecture planning.
- Write clean, well-structured, and testable code following best practices.
Must-Have Skills:
- 4+ years of experience in software development using .NET Core.
- Proficiency with Angular or React for front-end development.
- Strong working knowledge of AWS or Microsoft Azure.
- Experience with SQL/NoSQL databases.
- Excellent communication and team collaboration skills.
Education:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation

Job Description: Data Engineer
Role Overview:
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
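The responsibilities above center on ETL/ELT pipelines. As a rough illustration of the extract–transform–load pattern, here is a minimal pure-Python sketch; in production these stages would typically run inside AWS Glue or Lambda and read/write S3, but the field names (`order_id`, `amount`) and in-memory I/O here are hypothetical stand-ins:

```python
import csv
import io
import json

def extract(csv_text: str) -> list[dict]:
    """Read raw CSV rows (stand-in for reading a file from S3)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop incomplete rows and normalize types (a simple validation step)."""
    clean = []
    for row in rows:
        if not row.get("order_id") or not row.get("amount"):
            continue  # skip records that fail validation
        clean.append({"order_id": row["order_id"],
                      "amount": round(float(row["amount"]), 2)})
    return clean

def load(rows: list[dict]) -> str:
    """Serialize to newline-delimited JSON (stand-in for a warehouse load)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "order_id,amount\nA1,19.999\n,5.0\nA2,3.5\n"
result = load(transform(extract(raw)))
```

The same three-stage shape maps directly onto a Glue job or a Lambda-triggered pipeline; only the extract and load boundaries change.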


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
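A reusable ingestion module of the kind described above might normalize several file formats into one record shape. This is a small sketch covering the CSV and JSON cases only (XML and cloud-SDK sources such as Boto3 readers are omitted; the function name is illustrative):

```python
import csv
import io
import json

def parse_records(payload: str, fmt: str) -> list[dict]:
    """Normalize CSV or JSON input into a common list-of-dicts shape."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "json":
        data = json.loads(payload)
        # Accept either a JSON array or a single object.
        return data if isinstance(data, list) else [data]
    raise ValueError(f"unsupported format: {fmt}")

csv_rows = parse_records("id,city\n1,Mumbai\n2,Pune\n", "csv")
json_rows = parse_records('[{"id": "3", "city": "Delhi"}]', "json")
```

Downstream cleansing and transformation code can then operate on one shape regardless of source format.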
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
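The retry and failure-handling behavior mentioned above is configuration in a real Airflow DAG (the operator's `retries` and `retry_delay` arguments), but the underlying logic can be sketched in plain Python to show what those settings do; the helper and task names below are hypothetical:

```python
import time

def run_with_retries(task, retries: int = 2, delay: float = 0.0):
    """Mimic Airflow-style task retries: re-run a failing task up to `retries` extra times."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return task(), attempts
        except Exception:
            if attempts > retries:
                raise  # final failure: in Airflow this would fire a failure callback/alert
            time.sleep(delay)  # the retry_delay equivalent

calls = {"n": 0}
def flaky_task():
    """Fails twice, then succeeds, simulating a transient upstream outage."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, attempts = run_with_retries(flaky_task, retries=2)
```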
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
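A star schema as described above pairs one fact table with foreign keys into dimension tables. As a toy sketch (table and column names are invented for illustration), SQLite is enough to show the shape and the typical BI aggregation over it:

```python
import sqlite3

# Tiny star schema: one fact table joined to two dimensions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
""")
con.executemany("INSERT INTO dim_date VALUES (?, ?)", [(1, "Jan"), (2, "Feb")])
con.executemany("INSERT INTO dim_product VALUES (?, ?)", [(10, "Widget")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 10, 50.0), (2, 10, 75.0)])

# Typical BI query: aggregate the fact table grouped by a dimension attribute.
rows = con.execute("""
SELECT d.month, SUM(f.amount)
FROM fact_sales f JOIN dim_date d ON f.date_key = d.date_key
GROUP BY d.month ORDER BY d.month
""").fetchall()
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimension tables.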
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.


Job Title: Fullstack Developer
Experience Level: 5+ Years
Location: Borivali, Mumbai
About the Role:
We are seeking a talented and experienced Fullstack Developer to join our dynamic engineering team. The ideal candidate will have at least 5 years of hands-on experience in building scalable web applications using modern technologies. You will be responsible for developing and maintaining both front-end and back-end components, ensuring high performance and responsiveness to requests from the front-end.
Key Responsibilities:
- Design, develop, test, and deploy scalable web applications using Node.js, React, and Python.
- Build and maintain APIs and microservices that support high-volume traffic and data.
- Develop front-end components and user interfaces using React.js.
- Leverage AWS services for deploying and managing applications in a cloud environment.
- Collaborate with cross-functional teams including UI/UX designers, product managers, and QA engineers.
- Participate in code reviews and ensure adherence to best practices in software development.
- Troubleshoot, debug and upgrade existing systems.
- Continuously explore and implement new technologies to maximize development efficiency.
Required Skills & Qualifications:
- 5+ years of experience in fullstack development.
- Strong proficiency in Node.js, React.js, and Python.
- Hands-on experience with AWS (e.g., Lambda, EC2, S3, CloudFormation, RDS).
- Solid understanding of RESTful APIs and web services.
- Familiarity with DevOps practices and CI/CD pipelines is a plus.
- Experience working with relational and NoSQL databases.
- Proficient understanding of code versioning tools, such as Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
Nice to Have:
- Experience with serverless architecture.
- Knowledge of TypeScript.
- Exposure to containerization (Docker, Kubernetes).
- Familiarity with agile development methodologies.
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and ensuring the smooth functioning of all technology operations. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience:
- 4 to 6 years of experience: salary up to 22 LPA
- 5 to 8 years of experience: salary up to 30 LPA
- More than 8 years of experience: salary up to 40 LPA

Job Description: Data Engineer
Role Overview:
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.
Must be:
- Based in Mumbai
- Comfortable with Work from Office
- Available to join immediately
Responsibilities:
- Manage, monitor, and scale production systems across cloud (AWS/GCP) and on-prem.
- Work with Kubernetes, Docker, Lambdas to build reliable, scalable infrastructure.
- Build tools and automation using Python, Go, or relevant scripting languages.
- Ensure system observability using tools like NewRelic, Prometheus, Grafana, CloudWatch, PagerDuty.
- Optimize for performance and low-latency in real-time systems using Kafka, gRPC, RTP.
- Use Terraform, CloudFormation, Ansible, Chef, Puppet for infra automation and orchestration.
- Load testing using Gatling, JMeter, and ensuring fault tolerance and high availability.
- Collaborate with dev teams and participate in on-call rotations.
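Tools like Gatling and JMeter drive a target system with concurrent requests and summarize latency percentiles. The essence of that loop can be sketched with the standard library (the stand-in workload and report fields here are illustrative, not a JMeter replacement):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stand-in for the system under test; returns observed latency in seconds."""
    start = time.perf_counter()
    sum(range(10_000))  # simulated work
    return time.perf_counter() - start

def load_test(concurrency: int, total_requests: int) -> dict:
    """Fire requests from a thread pool and summarize latency percentiles."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: handle_request(), range(total_requests)))
    latencies.sort()
    return {
        "requests": len(latencies),
        "p50": statistics.median(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }

report = load_test(concurrency=8, total_requests=100)
```

Real load tests additionally model ramp-up, think time, and error rates, which is where dedicated tools earn their keep.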
Requirements:
- B.E./B.Tech in CS, Engineering or equivalent experience.
- 3+ years in production infra and cloud-based systems.
- Strong background in Linux (RHEL/CentOS) and shell scripting.
- Experience managing hybrid infrastructure (cloud + on-prem).
- Strong testing practices and code quality focus.
- Experience leading teams is a plus.
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Be the 1st person to report the incident.
- Debug production issues across services and levels of the stack.
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realise it.
- Building automated tools in Python / Java / GoLang / Ruby etc.
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead design of software components and systems, to ensure availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating and reporting on Production incidents, ensuring the SLAs are met and driving Problem Management for permanent remediation.
- Participate in on-call rotation to ensure coverage for planned/unplanned events.
- Perform other tasks such as load testing and generating system health reports.
- Periodically check that all dashboards are ready.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Working with your SRE and Engineering counterparts for driving Game days, training and other response readiness efforts.
- Participate in the 24x7 support coverage as needed, troubleshooting and solving complex issues with thorough root cause analysis on customer and SRE production environments.
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production and consistently achieve our market leading SLA.
- Improving the scalability and reliability of our systems in production.
- Evaluating, designing and implementing new system architectures.
Some specific Requirements:
- B.E./B.Tech. in Engineering, Computer Science, technical degree, or equivalent work experience
- At least 3 years of managing production infrastructure. Leading / managing a team is a huge plus.
- Experience with cloud platforms like - AWS, GCP.
- Experience developing and operating large-scale distributed systems with Kubernetes, Docker, and serverless (Lambdas)
- Experience running real-time, low-latency, highly available applications (Kafka, gRPC, RTP)
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic / Zabbix / Prometheus / Grafana / CloudWatch / Kafka / PagerDuty etc.
- Experience with one or more orchestration, deployment tools, e.g. CloudFormation / Terraform / Ansible / Packer / Chef.
- Experience with configuration management systems such as Ansible / Chef / Puppet.
- Knowledge of load testing methodologies and tools like Gatling, Apache JMeter.
- Comfortable working in a Unix shell.
- Experience running hybrid clouds and on-prem infrastructures on Red Hat Enterprise Linux / CentOS
- A focus on delivering high-quality code through strong testing practices.
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Access to an experienced therapist for better mental health, improved productivity, and work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
Profile: Product Support Engineer
🔴 Experience: 1 year as Product Support Engineer.
🔴 Location: Mumbai (Andheri).
🔴 5 days of working from office.
Skills Required:
🔷 Experience in providing support for ETL or data warehousing is preferred.
🔷 Good Understanding on Unix and Databases concepts.
🔷 Experience working with SQL and NoSQL databases and writing simple queries to get data for debugging issues.
🔷 Ability to creatively come up with solutions for various problems and implement them.
🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.
🔷 Quick troubleshooting and diagnosing skills.
🔷 Knowledge of customer success processes.
🔷 Experience in document creation.
🔷 High availability for fast response to customers.
🔷 Language knowledge required in one of NodeJs, Python, Java.
🔷 Background in AWS, Docker, Kubernetes, Networking - an advantage.
🔷 Experience in SAAS B2B software companies - an advantage.
🔷 Ability to join the dots across multiple events occurring concurrently and spot patterns.
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the cross of hardware, software, content and services with focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with some of our notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, AR/VR headsets for consumers and enterprise space.
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
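The real-time pipeline responsibility above would use a Kafka client in practice (e.g., kafka-python or confluent-kafka). As a library-free sketch of the producer/consumer pattern those pipelines are built on, a thread-safe queue can stand in for a topic; the event names and the uppercase transformation are invented for illustration:

```python
import queue
import threading

# A Kafka-like topic approximated with a thread-safe queue (illustrative only).
topic = queue.Queue()
SENTINEL = object()  # signals end of stream
consumed = []

def producer(events):
    """Publish events to the topic, then signal completion."""
    for event in events:
        topic.put(event)
    topic.put(SENTINEL)

def consumer():
    """Pull events off the topic and apply a per-event transformation."""
    while True:
        event = topic.get()
        if event is SENTINEL:
            break
        consumed.append(event.upper())

t = threading.Thread(target=consumer)
t.start()
producer(["click", "view", "purchase"])
t.join()
```

A real Kafka consumer adds partitioning, consumer groups, and offset commits on top of this basic loop.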
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java : Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
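The data-structures requirement above (arrays, linked lists, trees) is the classic interview territory for this role. A small sketch of the kind of fluency expected, a singly linked list with in-place reversal, not tied to any of the listed frameworks:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A node in a singly linked list."""
    value: int
    next: Optional["Node"] = None

def from_list(values):
    """Build a linked list from a Python list."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def reverse(head: Optional[Node]) -> Optional[Node]:
    """Reverse the list in place by re-pointing each node's `next`."""
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev

def to_list(head: Optional[Node]) -> list:
    """Flatten a linked list back into a Python list."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

reversed_values = to_list(reverse(from_list([1, 2, 3, 4])))
```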
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.

Job Objectives:
- Integrate user-facing elements developed by front-end developers with server-side logic
- Optimize web applications for maximum speed and scalability, supporting diverse clients from high-powered desktops to small mobile devices
- Implement security and data protection
- Collaborate with senior management, operations, and business teams
- Build REST APIs and maintain database optimizations
Looking For:
- Great understanding of PHP, Node.js, React, Express, Socket.io, and MVVM frameworks
- Experience with e-commerce websites, server handling, and deployment scripts
- At least 2 years of experience working on a customer-facing product
- Strong knowledge of the MERN stack
- Experience designing scalable, fault-tolerant systems and databases
As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.
Responsibilities
✅ Design, implement, and manage cloud infrastructure (primarily Azure)
✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)
✅ Monitor and optimize server performance & uptime (100% uptime goal)
✅ Work with cPanel-based hosting environments and ensure seamless operation
✅ Implement security best practices & compliance measures
✅ Troubleshoot system issues, scale infrastructure, and enhance reliability
Requirements
🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)
🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)
🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes
🔹 Strong knowledge of Linux system administration & networking
🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)
🔹 Security-first mindset & automation-driven approach
Why Join Us?
🚀 Work at a fast-growing startup backed by Microsoft
💡 Lead high-impact DevOps projects in a cloud-native environment
🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai
💰 Competitive salary ₹12-30 LPA based on experience
How to Apply?
📩 Apply now & follow us for future updates:
🔗 X (Twitter): https://x.com/CygenHost
🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/
🔗 Instagram: https://www.instagram.com/cygenhost


Skill Set: 10+ years as a full stack Java/JavaScript Developer
Microservices, Distributed Systems
Cloud Services: AWS (EC2, S3, Lambda, Load Balancing, Serverless)
Programming Backend: Node.js
Programming Frontend: React.js or Angular
Queuing: RabbitMQ / Kafka
Methodologies: Agile Scrum
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Manage security vulnerabilities.
Be a full individual contributor, able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem solving
Go-getter attitude
Extremely humble and polite
Experience in product companies and managing small teams is a plus


Minimum 7 years' experience: Individual Contributor (Back-end plus Cloud Infra)
Banking/Payments domain: worked on projects from scratch and scaled them
Experience with payment gateway companies is a plus
Tech Stack
Java Spring Boot, AWS, GCP, REST
Worked on CI/CD (setup and management)
SQL and NoSQL databases: PostgreSQL, MongoDB
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Manage security vulnerabilities.
Be a full individual contributor, able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem solving
Go-getter attitude
Extremely humble and polite
Experience in product companies and managing small teams is a plus


iSchoolConnect is an online platform that makes the University Admissions process hassle-free, fun and accessible to students around the globe. Using our unique AI technology, we allow students to apply to multiple universities with a single application. iSchoolConnect also connects with institutions worldwide and aids them in the transformation of their end-to-end admission processes using our various cutting-edge use cases.
Designation : Senior Fullstack Developer
We are seeking an experienced and highly skilled Senior Full Stack Developer to join our growing development team. The ideal candidate will have extensive experience in building scalable, high-performance web applications and will be responsible for delivering robust backend services and modern, user-friendly frontend solutions. This role will also involve working with cloud services, databases, and ensuring the technical success of projects from inception to deployment.
Responsibilities:
- End-to-End Development: Lead the development and maintenance of both frontend and backend applications. Write clean, scalable, and efficient code for web applications.
- Backend Development: Develop RESTful APIs and microservices using technologies like Node.js, Express.js, and Nest.js.
- Frontend Development: Implement and maintain modern, responsive web applications using frameworks such as React and Angular.
- Database Management: Design and maintain scalable databases, including MongoDB and MySQL, to ensure data consistency, performance, and reliability.
- Cloud Services: Manage cloud infrastructure on AWS and Google Cloud, ensuring optimal performance, scalability, and cost-efficiency.
- Collaboration: Work closely with product managers, designers, and other engineers to deliver new features and improvements.
- Code Quality & Testing: Follow best practices for code quality and maintainability, utilizing Test-Driven Development (TDD), and write unit and integration tests using Jest and Postman.
- Mentorship: Provide guidance to junior developers, perform code reviews, and ensure high standards of development across the team.
Requirements:
- Experience: 5+ years of hands-on experience in full stack development, with a proven track record in both backend and frontend development.
- Backend Technologies: Proficiency in Node.js, Express.js, and Nest.js for building scalable backend services and APIs.
- Frontend Technologies: Strong experience with React, Angular, or similar frameworks to build dynamic and responsive user interfaces.
- Databases: Strong knowledge of both relational (MySQL) and NoSQL (MongoDB) databases.
- Cloud Infrastructure: Hands-on experience with AWS and Google Cloud for managing cloud services, databases, and deployments.
- Version Control: Proficient in Git for version control and collaboration.
- Testing: Experience in writing unit and integration tests with Jest and Postman.
- Problem Solving: Strong analytical and problem-solving skills to work with complex systems.
- Communication: Excellent communication and teamwork skills, with the ability to collaborate cross-functionally.
Nice-to-Have:
- Experience with Docker, Kubernetes, and CI/CD tools.
- Familiarity with GraphQL and Microservices Architecture.
- Experience working in an Agile/Scrum environment.
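Since the stack above is Node/React, tests in this role would use Jest; still, the red/green TDD rhythm the posting asks for can be sketched language-agnostically. Below is a minimal illustration in Python's unittest, with a hypothetical slugify helper whose tests were written first:

```python
import unittest

# A tiny red/green TDD loop. slugify is a hypothetical helper: the tests
# below are written first (red), then the function is filled in until
# they pass (green).
def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Senior Fullstack Developer"),
                         "senior-fullstack-developer")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

# Run the suite in-process rather than via unittest.main(), so the script
# does not exit after the tests.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

The same discipline carries over to Jest: write the failing spec, implement the smallest change that passes, then refactor.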

Job Description below :
Required Skills
BSc/B.E./B.Tech in Computer Science or an equivalent field.
7-10 years' solid commercial experience in software development using Java 8, Spring Boot, Hibernate, Spring Cloud and related frameworks.
Experience with Angular 8+ versions, RxJS 6, JS/TypeScript
Knowledge of HTML/CSS
Good understanding of Design Patterns
Proficiency with SQL database development, including data modelling and DB performance tuning
Ability to work with customers, gather requirements and create solutions independently
Active participation within and among teams and colleagues distributed globally
Excellent problem-solving skills, in particular a methodical approach to dealing with problems across distributed systems.
Agile development experience
Desired Skills
Experience with angular forms
Experience with dynamic forms/ dynamic angular components
Experience with java Spring boot
Knowledge of Kafka Stream Processing
Understanding of secure software development concepts, especially in a cloud platform
Good communication skills.
Strong organisational skills.
Understanding of test management and automation software (e.g. ALM, Jira, JMeter).
Familiarity with Agile frameworks and Regression testing.
Previous experience within the Financial domain.

Job Description:
Please find the details below:
Experience: 5+ years
Location: Bangalore
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
- AWS Glue for ETL/ELT.
- S3 for storage.
- Redshift or Athena for data warehousing and querying.
- Lambda for serverless compute.
- Kinesis or SNS/SQS for data streaming.
- IAM Roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.
Optional Skills
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
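The pipeline responsibilities above (extract from sources, validate, load into a central store) can be sketched without any AWS dependencies. In the role itself the source would be S3 and the sink Redshift or Athena via Glue; here a CSV string and an in-memory SQLite database are stand-ins, and all names are illustrative:

```python
import csv
import io
import sqlite3

# Minimal extract-transform-load sketch. The RAW_CSV input, orders table,
# and column names are all hypothetical stand-ins for real S3 objects and
# a warehouse table.
RAW_CSV = """order_id,amount,currency
1001,250.00,INR
1002,99.50,inr
1003,,INR
"""

def extract(text):
    """Parse CSV text into dict rows (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows missing an amount; normalise currency codes."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # basic data-quality filter
        out.append((int(r["order_id"]), float(r["amount"]), r["currency"].upper()))
    return out

def load(rows, conn):
    """Load cleaned rows into the warehouse table (the 'load' step)."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INT, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # → (2, 349.5)
```

On AWS the same three steps typically become a Glue job (or Lambda) reading from S3 and writing to Redshift, with the validation logic living in the transform stage.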
We're looking for a Backend Lead Engineer to join our Engineering Team. The Engineering Team forms the backbone of our core business. We build and iterate over our core platform that handles orders, payments, delivery promises, order tracking, logistics integrations to name a few. Our products are actively used by Fynd users, Operations, Delights, and Finance teams. Our team consists of generalist engineers who work on building REST APIs, Internal tools, and Infrastructure for all these users.
What will you do at Fynd?
- Determining project requirements and developing work schedules for the team.
- Managing and mentoring a team of 2-5 engineers, Delegating tasks, and achieving daily, weekly, and monthly goals.
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases with the team.
- Evolve our Infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code, and decide on the technologies and tools to deliver as well operate large-scale applications on AWS
- Give back to the open-source community through contributions on code and blog posts
- This is a startup so everything can change as we experiment with more product improvements
- Write technical documentation
Some Specific Requirements
- At least 2 years of Tech Lead experience, 5+ years of development experience
- You have prior experience developing and working on consumer-facing web/app products
- Good understanding of Data Structures, Algorithms, and Operating Systems
- Hands-on experience in JavaScript / Python / Node.js. Exceptions can be made if you’re really good at any other language with experience in building web-app-based tech products
- Good knowledge of async programming using Callbacks, Promises, and Async/Await
- Experience in at least one of the following frameworks is a plus - Flask, Sanic, FastAPI
- Experience in at least one of the following frameworks - Express.js, Koa.js, Socket.io
- Hands-on experience with frontend codebases using HTML, CSS, and AJAX, plus knowledge of frontend frameworks and experience in at least one of the following - React.js, Angular, Vue.js
- Deep knowledge of MongoDB, Redis, or MySQL
- Basic knowledge of event-based systems using Kafka or RabbitMQ
- You should have experience with deploying and scaling Microservices.
- You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
- Deep Knowledge of Cloud-Native Application Architecture: Docker, Kubernetes, Microservices
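The async-programming requirement above (callbacks, Promises, async/await) has a direct Python analogue in asyncio. A minimal sketch, with a hypothetical fetch_price stub standing in for a real third-party API call:

```python
import asyncio

# asyncio.gather awaits several coroutines concurrently - the Python
# analogue of JavaScript's Promise.all over async functions.
async def fetch_price(sku: str):
    """Hypothetical stub for a third-party API call."""
    await asyncio.sleep(0.01)   # simulated network latency
    return sku, len(sku) * 10   # dummy price derived from the SKU

async def main():
    # Fan out all requests at once instead of awaiting them one by one.
    results = await asyncio.gather(*(fetch_price(s) for s in ["tee", "jeans", "cap"]))
    return dict(results)

prices = asyncio.run(main())
print(prices)  # → {'tee': 30, 'jeans': 50, 'cap': 30}
```

The three simulated calls overlap, so the whole batch completes in roughly one latency period rather than three - the same win async/await gives in Node.js.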


Role Title
Team Lead| Software Development
Description
The candidate will be required to build towards the future by constantly innovating and problem-solving. Should be passionate about the customer experience journey, highly motivated, intellectually curious and analytical. Conceptualize, rationalize and drive multiple simultaneous projects to deliver engineering work across the portfolio within deadlines and budgets. Experience in managing high performing individuals, building roadmaps, defining processes, delivering projects as well as analyzing cost/benefit of feature selection and communicating results throughout the organization.
Responsibilities
Work across teams to organize and accelerate delivery by ensuring all teams are delivering in a coordinated manner.
Understand the business strategy and design approaches within product, program or domain effectively.
Regularly review metrics and proactively seek out new and improved data / mechanisms for visibility, ensuring the program stays aligned with organization objectives.
Proactively identify risks & issues and ensure mitigation efforts are being carried out throughout the software development lifecycle.
Keep abreast of evolving technology landscape.
Will provide technical leadership to software engineers by coaching and monitoring throughout end-to-end software development, maintenance and lifecycle.
Analyse and provide input for the requirements and provide impact assessment for the features or bug fixes.
Collaborate with external partner teams, business teams and other cross-functional teams effectively in driving the projects, and be the point of contact on the team.
Skills
Process: Data Structures and Algorithms | API, Version Control, Data Structures | DevSecOps | Microservices | Application Security
Technical: Full-Stack Development – React, Angular, Node.js, Java, Express.js, Passport, Spring Boot | CI/CD Tools | Cloud Computing – AWS preferred | User Experience Design | Operating Systems – Linux and Windows | Database – MySQL, Oracle | Web servers – Nginx, Apache | Containerization, Service Mesh
Product : Corporate Banking | Payments and collections | Treasury | Trade Finance
Human: Mentor & Guide | Creative Vision | Collaboration and Co-operation | Risk Taking
Level Information
Education: BTech / MTech
Span of Influence: Multiple Teams


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
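The API responsibilities above are framework-specific in practice (Flask or Django), but both frameworks sit on the WSGI interface, which can be shown with the standard library alone. A minimal sketch with a hypothetical /health route:

```python
import json
from wsgiref.util import setup_testing_defaults

# Minimal WSGI application (PEP 3333). Flask and Django both wrap this
# interface; the /health endpoint here is a hypothetical example route.
def app(environ, start_response):
    if environ["PATH_INFO"] == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Exercise the app in-process, the way WSGI test clients do: build a fake
# request environ and capture what the app passes to start_response.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/health"
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(app(environ, start_response))
print(captured["status"], result)  # → 200 OK b'{"status": "ok"}'
```

In Flask the same route would be a two-line view function; the value of knowing the layer underneath is mostly in debugging middleware and deployment (Gunicorn, AWS load balancers) issues.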


- Looking for a candidate who is enthusiastic about working in a startup environment and building things from scratch individually
- Candidate has past experience in scalable, consumer-facing applications, managing latency and traffic
- Full-stack individual contributor with experience coding at speed and taking full product ownership
Experience: minimum 8 years
Location: Vile Parle (E), Mumbai
Skill Set: 8+ years as a full-stack Java/JavaScript developer
Microservices, Distributed Systems
Cloud Services: AWS (EC2, S3, Lambda, Load Balancing, Serverless), Kubernetes
Backend: Node.js, MongoDB, Java Spring, PostgreSQL
Frontend: Angular/React
Queuing: Kafka
Methodologies: Agile Scrum
Responsibilities:
End-to-end coding, from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Management of security vulnerabilities.
Be a full individual contributor: able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem-solving
Go-getter attitude
Extremely humble and polite
Experience in Product companies and Managing small teams is a plus
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on current CTC)
Note:
Only immediate to 15-day joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with 5 years of relevant experience required
Must have: AWS platform, Terraform, Redshift / Snowflake, Python / Shell scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, incl. Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, incl. Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technically hands-on
Provide Incident and Problem management on the AWS IaaS and PaaS platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
Job Overview
We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration & management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.
Key Responsibilities
- Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
- Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
- Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
- Provide guidance on selecting the appropriate cloud services for various workloads.
- Design, implement, and optimize CI/CD pipelines to streamline software delivery.
- Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
- Collaborate with development and operations teams to enhance the overall DevOps culture.
- Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
- Architect and optimize Kubernetes clusters for high availability and scalability.
- Engage in research and development activities to stay abreast of industry trends and emerging technologies.
- Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
- Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
- Ensure high-performance levels and reliability in production environments.
- Design scalable and high-performance database architectures tailored to meet business needs.
- Execute database migrations with a keen focus on data consistency, integrity, and performance.
- Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
- Optimize database workflows to enhance efficiency and reliability.
- Work closely with clients to assess and enhance the quality of existing architectures.
- Implement best practices to ensure robust, secure, and well-architected solutions.
- Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
- Provide technical leadership and mentorship to junior team members.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Relevant industry experience in a Solution Architect role.
- Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
- Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
- In-depth knowledge of Kubernetes and container orchestration.
- Strong background in scaling architectures to handle significant workloads.
- Sound knowledge in database migrations
- Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
A) Skills Required
Essential Skills (two top skills): 3 possible combinations.
1. Candidate having expertise in both ElasticSearch and Kafka, preferably.
OR
2. Candidate having expertise in ElasticSearch and willing to learn Kafka.
OR
3. Candidate having expertise in Kafka and willing to learn ElasticSearch.
B) Other Information
Educational Qualifications: Graduate
Experience: Mid-Level (6+ years)
Minimum Qualifications:
ElasticSearch/OpenSearch
· Software lifecycle/programming skills
· Linux
· Python
· Ingestion tools (Logstash, OpenSearch Ingestion, Fluentd, Fluent Bit, Harness, CloudFormation, containers, images, ECS, Lambda)
· SQL queries
· JSON
· AWS knowledge
Kafka/MSK
· Linux
· In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors
· Understanding of Kafka topic design and creation
· Good knowledge of replication and high availability for Kafka systems
· Good understanding of producers and consumer groups
· Understanding of Kafka partitions and scaling up
· Kafka latency/lag and throughput
· Integrating Kafka Connect with various data sources, internal or external
· Kafka security using SSL/certs
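The list above asks for an understanding of partitions and keyed producers. The core routing idea can be sketched in a few lines: all records sharing a key land on the same partition, which is what preserves per-key ordering. Real Kafka clients hash keys with murmur2; SHA-1 is used here only as an assumption to keep the sketch deterministic and dependency-free, and the partition count is a hypothetical topic setting:

```python
import hashlib

NUM_PARTITIONS = 6  # hypothetical topic configuration

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Simplified sketch of a keyed producer's partitioner.

    Real clients (Java, librdkafka) use murmur2 hashing; SHA-1 stands in
    here purely for illustration.
    """
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Two records for customer-42 hash to the same partition, so a consumer
# in the group reads them in order.
orders = [b"customer-42", b"customer-7", b"customer-42"]
routes = [partition_for(k) for k in orders]
print(routes)
assert routes[0] == routes[2]
```

This is also why "scaling up" partitions is delicate: changing the partition count changes the modulus, so existing keys remap and per-key ordering guarantees only hold going forward.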
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.
Location : Mumbai – Tech Data Office
Experience : 5 - 8 years.
Duration : 1-year contract (extendable)
Job Overview
We are seeking a highly skilled Sales Engineer (L2/L3) with in-depth expertise in Palo Alto Networks solutions. This role requires designing, implementing, and supporting cutting-edge network and security solutions to meet customers' technical and business needs. The ideal candidate will have strong experience in sales engineering and advanced skills in deploying, troubleshooting, and optimizing Palo Alto products and related technologies, with the ability to assist in implementation tasks when required.
Key Responsibilities
Solution Design & Technical Consultation:
- Collaborate with sales teams and customers to understand business and technical requirements.
- Design and propose solutions leveraging Palo Alto Networks technologies, including Next-Generation Firewalls (NGFW), Prisma Access, Panorama, SD-WAN, and Threat Prevention.
- Prepare detailed technical proposals, configurations, and proof-of-concept (POC) demonstrations tailored to client needs.
- Optimize existing customer deployments, ensuring alignment with industry best practices.
Customer Engagement & Implementation:
- Present and demonstrate Palo Alto solutions to stakeholders, addressing technical challenges and business objectives.
- Conduct customer and partner workshops, enablement sessions, and product training.
- Provide post-sales support to address implementation challenges and fine-tune deployments.
- Lead and assist with hands-on implementations of Palo Alto Networks products when required.
Support & Troubleshooting:
- Provide L2-L3 level troubleshooting and issue resolution for Palo Alto Networks products, including advanced debugging and system analysis.
- Assist with upgrades, migrations, and integration of Palo Alto solutions with other security and network infrastructures.
- Develop runbooks, workflows, and documentation for post-sales handover to operations teams.
Partner Enablement & Ecosystem Management:
- Collaborate with channel partners to build technical competency and promote adoption of Palo Alto solutions.
- Support certification readiness and compliance for both internal and partner teams.
- Participate in events, workshops, and seminars to showcase technical expertise.
Skills and Qualifications
Technical Skills:
- Advanced expertise in Palo Alto Networks technologies, including NGFW, Panorama, Prisma Access, SD-WAN, and GlobalProtect.
- Strong knowledge of networking protocols (e.g., TCP/IP, BGP, OSPF) and security frameworks (e.g., Zero Trust, SASE).
- Proficiency in troubleshooting and root-cause analysis for complex networking and security issues.
- Experience with security automation tools and integrations (e.g., API scripting, Ansible, Terraform).
Soft Skills:
- Excellent communication and presentation skills, with the ability to convey technical concepts to diverse audiences.
- Strong analytical and problem-solving skills, with a focus on delivering customer-centric solutions.
- Ability to manage competing priorities and maintain operational discipline under tight deadlines.
Experience:
- 5+ years of experience in sales engineering, solution architecture, or advanced technical support roles in the IT security domain.
- Hands-on experience in designing and deploying large-scale Palo Alto Networks solutions in enterprise environments.
Education and Certifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as PCNSA, PCNSE, or equivalent vendor certifications (e.g., CCNP Security, NSE4) are highly preferred.
Data Engineer + Integration Engineer + Support Specialist
Exp: 5-8 years
Necessary Skills:
· SQL & Python / PySpark
· AWS Services: Glue, AppFlow, Redshift
· Data warehousing
· Data modelling
Job Description:
· Experience of implementing and delivering data solutions and pipelines on the AWS Cloud Platform. Design, implement, and maintain the data architecture for all AWS data services
· A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
· Work with stakeholders to identify business needs and requirements for data-related projects
· Strong SQL and/or Python or PySpark knowledge
· Creating data models that can be used to extract information from various sources & store it in a usable format
· Optimize data models for performance and efficiency
· Write SQL queries to support data analysis and reporting
· Monitor and troubleshoot data pipelines
· Collaborate with software engineers to design and implement data-driven features
· Perform root cause analysis on data issues
· Maintain documentation of the data architecture and ETL processes
· Identifying opportunities to improve performance by improving database structure or indexing methods
· Maintaining existing applications by updating existing code or adding new features to meet new requirements
· Designing and implementing security measures to protect data from unauthorized access or misuse
· Recommending infrastructure changes to improve capacity or performance
Experience in the process industry
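The indexing bullet above ("improving database structure or indexing methods") can be shown concretely: adding an index changes the query plan for a selective lookup from a full scan to an index search. A small sketch using the stdlib sqlite3 module, with a hypothetical orders schema (the exact plan wording varies by SQLite version):

```python
import sqlite3

# Hypothetical orders table standing in for a real warehouse schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return SQLite's query-plan detail text for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)  # no index yet: a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now an index search

print(before)  # e.g. "SCAN orders"
print(after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same before/after comparison works in Redshift or PostgreSQL with EXPLAIN, which is the usual first step when "identifying opportunities to improve performance".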
Data Engineer + Integration Engineer + Support Specialist
Exp: 3-5 years
Necessary Skills:
· SQL & Python / PySpark
· AWS Services: Glue, AppFlow, Redshift
· Data warehousing basics
· Data modelling basics
Job Description:
· Experience of implementing and delivering data solutions and pipelines on the AWS Cloud Platform.
· A strong understanding of data modelling, data structures, databases (Redshift)
· Strong SQL and/or Python or PySpark knowledge
· Design and implement ETL processes to load data into the data warehouse
· Creating data models that can be used to extract information from various sources & store it in a usable format
· Optimize data models for performance and efficiency
· Write SQL queries to support data analysis and reporting
· Collaborate with team to design and implement data-driven features
· Monitor and troubleshoot data pipelines
· Perform root cause analysis on data issues
· Maintain documentation of the data architecture and ETL processes
· Maintaining existing applications by updating existing code or adding new features to meet new requirements
· Designing and implementing security measures to protect data from unauthorized access or misuse
· Identifying opportunities to improve performance by improving database structure or indexing methods
· Recommending infrastructure changes to improve capacity or performance
Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement and maintain CI/CD pipelines using GitHub, Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters using AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub, Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, managed IT services, and strong problem-solving capabilities.
Key Responsibilities:
The Information Technology Manager is a proactive, hands-on role overseeing and evolving our technology infrastructure.
· In this role, the Manager will handle all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments.
· This position will ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals.
· IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
· Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
· Software & Systems Administration: Administer Microsoft 365.
· Cybersecurity: Enhance our cybersecurity posture using SentinelOne, Sophos Firewall, and other security tools.
· Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
· Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
· Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
· Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA; other duties as needed.
· IT Audit & Compliance: Conduct regular audits to ensure IT processes comply with security regulations and best practices (GDPR, SOC 2, ISO 27001), ensuring readiness for internal and external audits.
· Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations.
Preferred Skills:
· Experience with SOC 2, ISO 27001, or similar security frameworks.
· Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person