50+ AWS (Amazon Web Services) Jobs in Mumbai | AWS (Amazon Web Services) Job openings in Mumbai
Apply to 50+ AWS (Amazon Web Services) Jobs in Mumbai on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.

Position Overview
We're seeking a skilled Full Stack Developer to build and maintain scalable web applications using modern technologies. You'll work across the entire development stack, from database design to user interface implementation.
Key Responsibilities
- Develop and maintain full-stack web applications using Node.js and TypeScript
- Design and implement RESTful APIs and microservices
- Build responsive, user-friendly front-end interfaces
- Design and optimize SQL databases and write efficient queries
- Collaborate with cross-functional teams on feature development
- Participate in code reviews and maintain high code quality standards
- Debug and troubleshoot application issues across the stack
Required Skills
- Backend: 3+ years experience with Node.js and TypeScript
- Database: Proficient in SQL (PostgreSQL, MySQL, or similar)
- Frontend: Experience with modern JavaScript frameworks (React, Vue, or Angular)
- Version Control: Git and collaborative development workflows
- API Development: RESTful services and API design principles
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of containerization (Docker)
- Familiarity with testing frameworks (Jest, Mocha, or similar)
- Understanding of CI/CD pipelines
What We Offer
- Competitive salary and benefits
- Flexible work arrangements
- Professional development opportunities
- Collaborative team environment

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
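A Glue job, at its core, follows the extract, transform, load shape described in the responsibilities above. A minimal sketch in plain Python, standing in for the PySpark/DynamicFrame API (the column names and the quality rule are invented for illustration):

```python
# Minimal extract -> transform -> load sketch. Plain Python stands in
# for the Glue/PySpark DynamicFrame API; column names are invented.

def extract(rows):
    """Pretend source read (e.g. from S3 or a JDBC Glue connection)."""
    return [dict(r) for r in rows]

def transform(rows):
    """Drop incomplete records and normalise a column."""
    out = []
    for r in rows:
        if r.get("order_id") is None:
            continue  # basic data-quality rule: require an order id
        r["country"] = r.get("country", "").strip().upper()
        out.append(r)
    return out

def load(rows, sink):
    """Pretend write (e.g. Parquet to S3, or Redshift via COPY)."""
    sink.extend(rows)
    return len(rows)

source = [{"order_id": 1, "country": " in "},
          {"order_id": None, "country": "US"}]
sink = []
loaded = load(transform(extract(source)), sink)
```

In a real Glue job the same three stages would operate on DynamicFrames and be driven by job parameters rather than in-memory lists.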
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
Job Summary:
We are looking for an experienced Java Developer with 4+ years of hands-on experience to join our dynamic team. The ideal candidate will have a strong background in Java development, problem-solving skills, and the ability to work independently as well as part of a team. You will be responsible for designing, developing, and maintaining high-performance, scalable applications.
Key Responsibilities:
- Design, develop, test, and maintain Java-based applications.
- Write well-designed, efficient, and testable code following best software development practices.
- Troubleshoot and resolve technical issues during development and production support.
- Collaborate with cross-functional teams including QA, DevOps, and Product teams.
- Participate in code reviews and provide constructive feedback.
- Maintain proper documentation for code, processes, and configurations.
- Support deployment and post-deployment monitoring during night shift hours.
Required Skills:
- Strong programming skills in Java 8 or above.
- Experience with Spring Framework (Spring Boot, Spring MVC, etc.).
- Proficiency in RESTful APIs, Microservices Architecture, and Web Services.
- Familiarity with SQL and relational databases like MySQL, PostgreSQL, or Oracle.
- Hands-on experience with version control systems like Git.
- Understanding of Agile methodologies.
- Experience with build tools like Maven/Gradle.
- Knowledge of unit testing frameworks (JUnit/TestNG).
Preferred Skills (Good to Have):
- Experience with cloud platforms (AWS, Azure, or GCP).
- Familiarity with CI/CD pipelines.
- Basic understanding of frontend technologies like JavaScript, HTML, CSS.
Job Description: We are looking for a talented and motivated Software Engineer with expertise in both Windows and Linux operating systems and solid experience in Java technologies. The ideal candidate should be proficient in data structures and algorithms, as well as frameworks like Spring MVC, Spring Boot, and Hibernate. Hands-on experience working with MySQL databases is also essential for this role.
Responsibilities:
● Design, develop, test, and maintain software applications using Java technologies.
● Implement robust solutions using Spring MVC, Spring Boot, and Hibernate frameworks.
● Develop and optimize database operations with MySQL.
● Analyze and solve complex problems by applying knowledge of data structures and algorithms.
● Work with both Windows and Linux environments to develop and deploy solutions.
● Collaborate with cross-functional teams to deliver high-quality products on time.
● Ensure application security, performance, and scalability.
● Maintain thorough documentation of technical solutions and processes.
● Debug, troubleshoot, and upgrade legacy systems when required.
Requirements:
● Operating Systems: Expertise in Windows and Linux environments.
● Programming Languages & Technologies: Strong knowledge of Java (Core Java, Java 8+).
● Frameworks: Proficiency in Spring MVC, Spring Boot, and Hibernate.
● Algorithms and Data Structures: Good understanding and practical application of DSA concepts.
● Databases: Experience with MySQL – writing queries, stored procedures, and performance tuning.
● Version Control Systems: Experience with tools like Git.
● Deployment: Knowledge of CI/CD pipelines and tools such as Jenkins, Docker (optional)
About Fundly
- Fundly is building a retailer centric Pharma Supply Chain platform and Marketplace for over 10 million pharma retailers in India
- Founded by experienced industry professionals with cumulative experience of 30+ years
- Has grown to 60+ people in 12 cities in less than 2 years
- Monthly disbursement of INR 50 Cr
- Raised venture capital of USD 5M so far from Accel Partners, one of India's biggest VC funds
Opportunity at Fundly
- Building a retailer centric ecosystem in Pharma Supply Chain
- Fast growing: 3,000+ retailers, 36,000 transactions, and INR 200+ Cr disbursement in the last 2 years
- Build new platform from scratch
Responsibilities -
- Collaborating with Product & Data teams to understand business flows and information architecture
- Designing and implementing scalable and fault tolerant Data platform for supporting Business Intelligence, Reporting and building AI/ML models
- Ensure data accuracy and quality during ETL/ELT processes
- Monitoring and managing infrastructure, ensuring optimal performance, security, and scalability.
- Troubleshooting and resolving issues related to data pipelines and data accuracy
- Ensuring compliance with industry best practices and organizational policies.
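The data accuracy and quality responsibility above usually comes down to rule-based checks run inside the ETL/ELT step. A small illustrative sketch (the field names and rules are invented, not Fundly's actual schema):

```python
# Sketch of rule-based data-quality checks inside an ETL step.
# Field names and rules are illustrative only.

RULES = {
    "retailer_id": lambda v: isinstance(v, int) and v > 0,
    "amount":      lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(records):
    """Split records into clean rows and violations for alerting."""
    clean, violations = [], []
    for i, rec in enumerate(records):
        failed = [f for f, ok in RULES.items() if not ok(rec.get(f))]
        if failed:
            violations.append({"row": i, "failed": failed})
        else:
            clean.append(rec)
    return clean, violations

records = [{"retailer_id": 7, "amount": 120.5},
           {"retailer_id": -1, "amount": 10}]
clean, violations = validate(records)
```

Violations would typically be written to a quarantine table and surfaced through monitoring rather than silently dropped.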
Qualifications
- Bachelor’s degree in computer science, information technology, or a related field.
- 2+ years of experience as a Data Engineer with AWS Data Engineering stack or other relevant Data Engineering technologies/frameworks.
- Strong experience with AWS.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and collaboration skills.
Who are you -
1. Loves to understand and solve problems, be it technology, business, or people problems
2. Likes to take responsibility and accountability
3. Loves to develop excellent technology solutions with modular design and follows good coding practices

AuxoAI is seeking a Senior Platform Engineer (Lead) with strong AWS administration expertise to architect and manage cloud infrastructure while contributing to data engineering initiatives. The ideal candidate will have a deep understanding of cloud platforms, infrastructure as code, and data platform integrations. This role requires both technical leadership and hands-on implementation to ensure scalable, secure, and efficient infrastructure across the organization.
Key Responsibilities:
- Define, implement, and manage scalable platform architecture aligned with business and technical requirements.
- Lead AWS infrastructure design and operations including IAM, VPC, networking, security, and cost optimization.
- Design and optimize cloud-based storage, compute resources, and orchestration workflows to support data platforms.
- Collaborate with Data Engineers and DevOps teams to streamline deployment of data pipelines and infrastructure components.
- Automate infrastructure provisioning and management using Terraform, CloudFormation, or similar Infrastructure as Code (IaC) tools.
- Integrate platform capabilities with internal tools, analytics platforms, and business applications.
- Establish cloud engineering best practices including infrastructure security, reliability, and observability.
- Provide technical mentorship to engineering team members and lead knowledge-sharing initiatives.
- Monitor system performance, troubleshoot production issues, and implement solutions for reliability and scalability.
- Drive best practices in cloud engineering, security, and infrastructure as code (IaC).
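Infrastructure as Code, as referenced in the responsibilities above, means describing the environment declaratively. A minimal hypothetical CloudFormation template, built here as a Python dict so the example stays self-contained (the resource name and properties are illustrative, not AuxoAI's infrastructure):

```python
import json

# A minimal CloudFormation template as a Python dict: a versioned,
# non-public S3 bucket for a data platform. Names are invented.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: versioned S3 bucket for a data platform",
    "Resources": {
        "DataLakeBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
                "PublicAccessBlockConfiguration": {
                    "BlockPublicAcls": True,
                    "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True,
                    "RestrictPublicBuckets": True,
                },
            },
        }
    },
}

body = json.dumps(template, indent=2)  # the JSON you would deploy
```

The same resource could equally be expressed in Terraform HCL; the point is that the infrastructure lives in reviewable, versioned text.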
Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field; or equivalent work experience.
- 6+ years of hands-on experience in platform engineering, DevOps, or cloud infrastructure roles.
- Expertise in AWS core services (IAM, EC2, S3, VPC, CloudWatch, etc.) and managing secure, scalable environments.
- Proficiency in Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
- Strong understanding of data platforms, pipelines, and workflow orchestration in cloud-native environments.
- Experience integrating infrastructure with CI/CD tools and workflows (e.g., GitHub Actions, Jenkins, GitLab CI).
- Familiarity with cloud security best practices, access management, and cost optimization strategies.
- Strong problem-solving and troubleshooting skills across cloud and data systems.
- Prior experience in a leadership or mentoring role is a plus.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
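Query tuning at the 100-million-row scale mentioned above usually starts with indexing. A small demonstration using SQLite in place of MySQL (the table and column names are invented), showing the query plan switching from a full scan to an index search:

```python
import sqlite3

# Index-driven query tuning, with SQLite standing in for MySQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return the query plan details as one string."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE customer_id = 42")
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 42")
```

In MySQL the equivalent check is `EXPLAIN SELECT ...`, where the `type` column moving from `ALL` to `ref` signals the same improvement.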
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.

Profile: AWS Data Engineer
Mode - Hybrid
Experience - 5 to 7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
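The Lambda responsibility above can be illustrated with the shape of a handler. This sketch mimics an S3-triggered event and makes no real AWS calls (the object keys and routing logic are invented):

```python
import json

# Shape of a Lambda handler for serverless record processing.
# The event layout mimics an S3 trigger; keys are illustrative.
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # real code would fetch and process the object via boto3;
        # here we only collect the key to show the control flow
        processed.append(key)
    return {"statusCode": 200,
            "body": json.dumps({"processed": processed})}

result = handler(
    {"Records": [{"s3": {"object": {"key": "raw/2024/01/data.csv"}}}]}
)
```

Because the handler is a plain function, it can be unit-tested locally with synthetic events before deployment.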
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience with AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
Sr. DevOps Engineer – 12+ Years of Experience
Key Responsibilities:
Design, implement, and manage CI/CD pipelines for seamless deployments.
Optimize cloud infrastructure (AWS, Azure, GCP) for high availability and scalability.
Manage and automate infrastructure using Terraform, Ansible, or CloudFormation.
Deploy and maintain Kubernetes, Docker, and container orchestration tools.
Ensure security best practices in infrastructure and deployments.
Implement logging, monitoring, and alerting solutions (Prometheus, Grafana, ELK, Datadog).
Troubleshoot and resolve system and network issues efficiently.
Collaborate with development, QA, and security teams to streamline DevOps processes.
Required Skills:
Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD).
Hands-on experience with cloud platforms (AWS, GCP, or Azure).
Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible).
Experience with containerization and orchestration (Docker, Kubernetes).
Knowledge of networking, security, and monitoring tools.
Proficiency in scripting languages (Python, Bash, Go).
Strong troubleshooting and performance tuning skills.
Preferred Qualifications:
Certifications in AWS, Kubernetes, or DevOps.
Experience with service mesh, GitOps, and DevSecOps.


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.



Job Title: .NET Developer
Location: Pan India (Hybrid)
Employment Type: Full-Time
Join Date: Immediate / Within 15 Days
Experience: 4+ Years
Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core.
- Work on RESTful APIs and integrate third-party services.
- Collaborate with UI/UX designers and front-end developers using Angular or React.
- Deploy, monitor, and maintain applications on AWS or Azure.
- Participate in code reviews, technical discussions, and architecture planning.
- Write clean, well-structured, and testable code following best practices.
Must-Have Skills:
- 4+ years of experience in software development using .NET Core.
- Proficiency with Angular or React for front-end development.
- Strong working knowledge of AWS or Microsoft Azure.
- Experience with SQL/NoSQL databases.
- Excellent communication and team collaboration skills.
Education:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocol-level details such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in applying automation and DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (e.g., Ansible)
- Experience configuring and operating HAProxy, Nginx, SSH, and MySQL
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation

Job Description: Data Engineer
Position Overview:
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
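Monitoring and troubleshooting data workflows, as described above, often begins with simply measuring each step so that slow or costly stages stand out. A lightweight, illustrative timing decorator (the task name and logic are invented):

```python
import time
from functools import wraps

# Lightweight step monitoring for a data workflow: record how long
# each task takes. In production these timings would be emitted to
# CloudWatch or a similar metrics store rather than kept in a dict.
timings = {}

def monitored(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__] = time.perf_counter() - start
    return wrapper

@monitored
def transform_batch(rows):
    """A stand-in pipeline step."""
    return [r * 2 for r in rows]

out = transform_batch([1, 2, 3])
```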


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
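Handling mixed file formats with the standard library, as the bullets above describe, might look like this sketch that normalises CSV and JSON feeds into one record shape (the field names are invented):

```python
import csv
import io
import json

# Normalise mixed-format feeds (CSV and JSON) into one record shape
# using only the standard library; field names are illustrative.
def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def read_json(text):
    return json.loads(text)

def normalise(records):
    return [{"sku": r["sku"], "qty": int(r["qty"])} for r in records]

csv_feed = "sku,qty\nA1,3\nB2,5\n"
json_feed = '[{"sku": "C3", "qty": 7}]'
rows = normalise(read_csv(csv_feed)) + normalise(read_json(json_feed))
```

The same pattern extends to XML via `xml.etree.ElementTree` and to remote sources via a REST client or Boto3, with each reader converging on the shared record shape.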
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
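Airflow's core idea is ordering tasks by their declared dependencies. Without installing Airflow, the same ordering can be sketched with the standard library's `graphlib.TopologicalSorter` (the task names are invented):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, mirroring how an
# Airflow DAG declares upstream dependencies. Task names are invented.
deps = {
    "load":      {"transform"},
    "transform": {"extract", "validate"},
    "validate":  {"extract"},
    "extract":   set(),
}
order = list(TopologicalSorter(deps).static_order())
```

Airflow adds scheduling, retries, and failure notifications on top of this ordering, but the dependency resolution itself is exactly this topological sort.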
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
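A star schema, as described above, pairs a central fact table with dimension tables. A tiny illustrative example in SQLite (the tables and values are invented):

```python
import sqlite3

# One fact table joined to one dimension: the minimal star schema.
# Table names and values are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES dim_product,
                          amount REAL);
INSERT INTO dim_product VALUES (1, 'pharma'), (2, 'devices');
INSERT INTO fact_sales  VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 25.0);
""")
rows = con.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```

A snowflake schema further normalises the dimensions; BI tools query either shape with the same join-then-aggregate pattern shown here.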
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
Job Title: Senior Backend Engineer – Java, AI & Automation
Experience: 4+ Years
Location: Any Cognizant location (India)
Work Mode: Hybrid
Interview Rounds:
- Virtual
- Face-to-Face (In-person)
Job Description:
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.


Job Title: Fullstack Developer
Experience Level: 5+ Years
Location: Borivali, Mumbai
About the Role:
We are seeking a talented and experienced Fullstack Developer to join our dynamic engineering team. The ideal candidate will have at least 5 years of hands-on experience in building scalable web applications using modern technologies. You will be responsible for developing and maintaining both front-end and back-end components, ensuring high performance and responsiveness to requests from the front-end.
Key Responsibilities:
- Design, develop, test, and deploy scalable web applications using Node.js, React, and Python.
- Build and maintain APIs and microservices that support high-volume traffic and data.
- Develop front-end components and user interfaces using React.js.
- Leverage AWS services for deploying and managing applications in a cloud environment.
- Collaborate with cross-functional teams including UI/UX designers, product managers, and QA engineers.
- Participate in code reviews and ensure adherence to best practices in software development.
- Troubleshoot, debug and upgrade existing systems.
- Continuously explore and implement new technologies to maximize development efficiency.
Required Skills & Qualifications:
- 5+ years of experience in fullstack development.
- Strong proficiency in Node.js, React.js, and Python.
- Hands-on experience with AWS (e.g., Lambda, EC2, S3, CloudFormation, RDS).
- Solid understanding of RESTful APIs and web services.
- Familiarity with DevOps practices and CI/CD pipelines is a plus.
- Experience working with relational and NoSQL databases.
- Proficient understanding of code versioning tools, such as Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
Nice to Have:
- Experience with serverless architecture.
- Knowledge of TypeScript.
- Exposure to containerization (Docker, Kubernetes).
- Familiarity with agile development methodologies.
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and keeping all technology operations running smoothly. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience with Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note, the salary bracket will vary according to the candidate's experience -
- Experience from 4 yrs to 6 yrs - Salary upto 22 LPA
- Experience from 5 yrs to 8 yrs - Salary upto 30 LPA
- Experience more than 8 yrs - Salary upto 40 LPA

Job Description: Data Engineer
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
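As a small, hedged sketch of the extract-transform-load flow the responsibilities above describe: here the source and sink are pure-Python stand-ins (in the role itself they would be AWS services such as S3 or Redshift), and the column names are invented.

```python
import csv
import io

def extract(raw_csv: str) -> list:
    """Extract: parse raw CSV text into row dicts (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Transform: drop rows with missing amounts and normalise types."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # basic data-quality validation
        out.append({"order_id": row["order_id"], "amount": float(row["amount"])})
    return out

def load(rows: list, sink: list) -> int:
    """Load: append to an in-memory sink (stand-in for a warehouse COPY/INSERT)."""
    sink.extend(rows)
    return len(rows)

raw = "order_id,amount\nA1,10.5\nA2,\nA3,7.25\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)  # A2 is dropped by validation
```

The same extract/transform/load split maps naturally onto Glue jobs or Lambda steps, with each stage monitored and retried independently.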
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.
Must be:
- Based in Mumbai
- Comfortable with Work from Office
- Available to join immediately
Responsibilities:
- Manage, monitor, and scale production systems across cloud (AWS/GCP) and on-prem.
- Work with Kubernetes, Docker, Lambdas to build reliable, scalable infrastructure.
- Build tools and automation using Python, Go, or relevant scripting languages.
- Ensure system observability using tools like NewRelic, Prometheus, Grafana, CloudWatch, PagerDuty.
- Optimize for performance and low-latency in real-time systems using Kafka, gRPC, RTP.
- Use Terraform, CloudFormation, Ansible, Chef, Puppet for infra automation and orchestration.
- Load testing using Gatling and JMeter, and ensuring fault tolerance and high availability.
- Collaborate with dev teams and participate in on-call rotations.
Requirements:
- B.E./B.Tech in CS, Engineering or equivalent experience.
- 3+ years in production infra and cloud-based systems.
- Strong background in Linux (RHEL/CentOS) and shell scripting.
- Experience managing hybrid infrastructure (cloud + on-prem).
- Strong testing practices and code quality focus.
- Experience leading teams is a plus.
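Load-testing tools like Gatling and JMeter, mentioned above, report latency percentiles; as a hedged, minimal sketch, here is how a p95/p99 summary can be computed from raw samples in Python. The nearest-rank method shown is one common convention, not necessarily what those tools use internally, and the sample numbers are invented.

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample >= p% of the data."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank, 1) - 1]

# Invented latency samples in milliseconds, including two slow outliers.
latencies_ms = [12, 15, 11, 240, 13, 14, 16, 18, 500, 17]
summary = {
    "p50": percentile(latencies_ms, 50),
    "p95": percentile(latencies_ms, 95),
    "p99": percentile(latencies_ms, 99),
}
```

Tail percentiles (p95/p99), rather than averages, are what fault-tolerance and high-availability targets are usually stated against.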
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Be the 1st person to report the incident.
- Debug production issues across services and levels of the stack.
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realise it.
- Building automated tools in Python / Java / GoLang / Ruby etc.
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead design of software components and systems, to ensure availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating and reporting on Production incidents, ensuring the SLAs are met and driving Problem Management for permanent remediation.
- Participate in on-call rotation to ensure coverage for planned/unplanned events.
- Perform other tasks like load testing and generating system health reports.
- Periodically check all dashboards for readiness.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Working with your SRE and Engineering counterparts for driving Game days, training and other response readiness efforts.
- Participate in 24x7 support coverage as needed, troubleshooting and solving complex issues with thorough root-cause analysis on customer and SRE production environments
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production and consistently achieve our market leading SLA.
- Improving the scalability and reliability of our systems in production.
- Evaluating, designing and implementing new system architectures.
Some specific Requirements:
- B.E./B.Tech. in Engineering, Computer Science, technical degree, or equivalent work experience
- At least 3 years of managing production infrastructure. Leading / managing a team is a huge plus.
- Experience with cloud platforms like - AWS, GCP.
- Experience developing and operating large scale distributed systems with Kubernetes, Docker, and serverless (Lambdas)
- Experience running real-time, low-latency, highly available applications (Kafka, gRPC, RTP)
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic / Zabbix / Prometheus / Grafana / CloudWatch / Kafka / PagerDuty etc.
- Experience with one or more orchestration, deployment tools, e.g. CloudFormation / Terraform / Ansible / Packer / Chef.
- Experience with configuration management systems such as Ansible / Chef / Puppet.
- Knowledge of load-testing methodologies and tools like Gatling and Apache JMeter.
- Work your way around Unix shell.
- Experience running hybrid clouds and on-prem infrastructures on Red Hat Enterprise Linux / CentOS
- A focus on delivering high-quality code through strong testing practices.
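Much of the tooling described above (incident detection, automated checks, remediation) comes down to resilient polling. A hedged sketch in Python of a retry-with-exponential-backoff wrapper; the probe and sleep functions are injected, so nothing here depends on a real service, and all names are illustrative.

```python
import time

def check_with_backoff(probe, attempts: int = 4, base_delay: float = 0.01,
                       sleep=time.sleep) -> bool:
    """Run `probe` until it returns True, doubling the delay between attempts.

    `probe` is any zero-argument callable returning bool; `sleep` is
    injectable so tests can skip real waiting. Returns True on success,
    False once attempts are exhausted.
    """
    delay = base_delay
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(delay)   # back off before the next try
            delay *= 2     # exponential growth: base, 2x, 4x, ...
    return False

# Example: a flaky endpoint that only succeeds on the third probe.
calls = {"n": 0}
def flaky() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

ok = check_with_backoff(flaky, sleep=lambda d: None)
```

Injecting the clock/sleep dependency is the same testing trick used when unit-testing alerting and remediation scripts that would otherwise hit CloudWatch or PagerDuty.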
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improve productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
Profile: Product Support Engineer
🔴 Experience: 1 year as Product Support Engineer.
🔴 Location: Mumbai (Andheri).
🔴 5 days of working from office.
Skills Required:
🔷 Experience in providing support for ETL or data warehousing is preferred.
🔷 Good Understanding on Unix and Databases concepts.
🔷 Experience working with SQL and NoSQL databases and writing simple queries to get data for debugging issues.
🔷 Being able to creatively come up with solutions for various problems and implement them.
🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.
🔷 Quick troubleshooting and diagnosing skills.
🔷 Knowledge of customer success processes.
🔷 Experience in document creation.
🔷 High availability for fast response to customers.
🔷 Language knowledge required in one of NodeJs, Python, Java.
🔷 Background in AWS, Docker, Kubernetes, Networking - an advantage.
🔷 Experience in SAAS B2B software companies - an advantage.
🔷 Ability to join the dots around multiple events occurring concurrently and spot patterns.
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the cross of hardware, software, content and services with focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with some of our notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, AR/VR headsets for consumers and enterprise space.
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java : Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
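The good-to-have list mentions in-memory stores like Redis and Memcached, whose core idea is key-value storage with expiry. A hedged, minimal Python sketch of a TTL cache in that spirit (a toy dict-based stand-in, nothing like a production store; the injectable clock is purely for demonstration):

```python
import time

class TTLCache:
    """Tiny key-value cache with per-entry time-to-live, loosely in the
    spirit of Redis SET with an expiry. Clock is injectable for testing."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds: float) -> None:
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value

# Controlled clock for demonstration.
now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.set("session:1", "alice", ttl_seconds=5)
hit = cache.get("session:1")    # still fresh
now[0] = 6.0
miss = cache.get("session:1")   # past expiry, evicted
```

A real store adds eviction policies, persistence, and network protocol on top, but the TTL contract is the same.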

Job Objectives:
- Integration of user-facing elements developed by front-end developers with server-side logic
- Optimize web applications to maximize speed and scale; support diverse clients from high-powered desktop computers to small mobile devices
- Optimization of the application for maximum speed and scalability
- Implementation of security and data protection
- Collaborate with senior management, operations & business teams
- Build REST APIs and maintain database optimizations
Looking For:
- Great understanding of PHP, Node, React, Express, Socket.io, and the MVVM framework
- Worked on e-commerce websites, server handling, and deployment scripts
- 2+ years working on a customer-facing product
- Strong knowledge of the MERN stack
- Experience designing scalable, fault-tolerant systems and databases.
As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.
Responsibilities
✅ Design, implement, and manage cloud infrastructure (primarily Azure)
✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)
✅ Monitor and optimize server performance & uptime (100% uptime goal)
✅ Work with cPanel-based hosting environments and ensure seamless operation
✅ Implement security best practices & compliance measures
✅ Troubleshoot system issues, scale infrastructure, and enhance reliability
Requirements
🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)
🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)
🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes
🔹 Strong knowledge of Linux system administration & networking
🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)
🔹 Security-first mindset & automation-driven approach
Why Join Us?
🚀 Work at a fast-growing startup backed by Microsoft
💡 Lead high-impact DevOps projects in a cloud-native environment
🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai
💰 Competitive salary ₹12-30 LPA based on experience
How to Apply?
📩 Apply now & follow us for future updates:
🔗 X (Twitter): https://x.com/CygenHost
🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/
🔗 Instagram: https://www.instagram.com/cygenhost


Skill Set: 10+ years as a full-stack Java/JavaScript developer
Microservices, Distributed Systems
Cloud Services: AWS (EC2, S3, Lambda, Load Balancing, Serverless)
Backend Programming: Node.js
Frontend Programming: React.js or Angular
Queuing: RabbitMQ / Kafka
Methodologies: Agile Scrum
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Manage security vulnerabilities.
Be a full individual contributor, able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem solving
Go-getter attitude
Extremely humble and polite
Experience in product companies and managing small teams is a plus


Minimum 7 years of experience: Individual Contributor (back-end plus cloud infra)
Banking/Payments domain: worked on projects from scratch and scaled them
Experience with payment gateway companies is a plus
Tech Stack
Java Spring Boot, AWS, GCP, REST
Worked on CI/CD (manage and set up)
SQL and NoSQL databases: PostgreSQL, MongoDB
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Manage security vulnerabilities.
Be a full individual contributor, able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem solving
Go-getter attitude
Extremely humble and polite
Experience in product companies and managing small teams is a plus


iSchoolConnect is an online platform that makes the University Admissions process hassle-free, fun and accessible to students around the globe. Using our unique AI technology, we allow students to apply to multiple universities with a single application. iSchoolConnect also connects with institutions worldwide and aids them in the transformation of their end-to-end admission processes using our various cutting-edge use cases.
Designation : Senior Fullstack Developer
We are seeking an experienced and highly skilled Senior Full Stack Developer to join our growing development team. The ideal candidate will have extensive experience in building scalable, high-performance web applications and will be responsible for delivering robust backend services and modern, user-friendly frontend solutions. This role will also involve working with cloud services, databases, and ensuring the technical success of projects from inception to deployment.
Responsibilities:
- End-to-End Development: Lead the development and maintenance of both frontend and backend applications. Write clean, scalable, and efficient code for web applications.
- Backend Development: Develop RESTful APIs and microservices using technologies like Node.js, Express.js, and Nest.js.
- Frontend Development: Implement and maintain modern, responsive web applications using frameworks such as React and Angular.
- Database Management: Design and maintain scalable databases, including MongoDB and MySQL, to ensure data consistency, performance, and reliability.
- Cloud Services: Manage cloud infrastructure on AWS and Google Cloud, ensuring optimal performance, scalability, and cost-efficiency.
- Collaboration: Work closely with product managers, designers, and other engineers to deliver new features and improvements.
- Code Quality & Testing: Follow best practices for code quality and maintainability, utilizing Test-Driven Development (TDD), and write unit and integration tests using Jest and Postman.
- Mentorship: Provide guidance to junior developers, perform code reviews, and ensure high standards of development across the team.
Requirements:
- Experience: 5+ years of hands-on experience in full stack development, with a proven track record in both backend and frontend development.
- Backend Technologies: Proficiency in Node.js, Express.js, and Nest.js for building scalable backend services and APIs.
- Frontend Technologies: Strong experience with frameworks such as React and Angular to build dynamic, responsive user interfaces.
- Databases: Strong knowledge of both relational (MySQL) and NoSQL (MongoDB) databases.
- Cloud Infrastructure: Hands-on experience with AWS and Google Cloud for managing cloud services, databases, and deployments.
- Version Control: Proficient in Git for version control and collaboration.
- Testing: Experience in writing unit and integration tests with Jest and Postman.
- Problem Solving: Strong analytical and problem-solving skills to work with complex systems.
- Communication: Excellent communication and teamwork skills, with the ability to collaborate cross-functionally.
Nice-to-Have:
- Experience with Docker, Kubernetes, and CI/CD tools.
- Familiarity with GraphQL and Microservices Architecture.
- Experience working in an Agile/Scrum environment.

Job Description below :
Required Skills
BSc/B.E./B.Tech in Computer Science or an equivalent field.
7-10 years' solid commercial experience in software development using Java 8, Spring Boot, Hibernate, Spring Cloud, and related frameworks.
Experience with Angular 8+ versions, RxJS 6, JS/TypeScript
Knowledge of HTML/CSS
Good understanding of Design Patterns
Proficiency with SQL database development, including data modelling and DB performance tuning
Ability to work with customers, gather requirements and create solutions independently
Active participation within and among teams and colleagues distributed globally
Excellent problem-solving skills, in particular a methodical approach to dealing with problems across distributed systems.
Agile development experience
Desired Skills
Experience with angular forms
Experience with dynamic forms/ dynamic angular components
Experience with Java Spring Boot
Knowledge of Kafka Stream Processing
Understanding of secure software development concepts, especially in a cloud platform
Good communication skills.
Strong organisational skills.
Understanding of test management and automation software (e.g. ALM, Jira, JMeter).
Familiarity with Agile frameworks and Regression testing.
Previous experience within the Financial domain.

Job Description:
Please find below details:
Experience: 5+ Years
Location: Bangalore
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
- AWS Glue for ETL/ELT.
- S3 for storage.
- Redshift or Athena for data warehousing and querying.
- Lambda for serverless compute.
- Kinesis or SNS/SQS for data streaming.
- IAM Roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.
Optional Skills
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
We're looking for a Backend Lead Engineer to join our Engineering Team. The Engineering Team forms the backbone of our core business. We build and iterate over our core platform that handles orders, payments, delivery promises, order tracking, logistics integrations to name a few. Our products are actively used by Fynd users, Operations, Delights, and Finance teams. Our team consists of generalist engineers who work on building REST APIs, Internal tools, and Infrastructure for all these users.
What will you do at Fynd?
- Determining project requirements and developing work schedules for the team.
- Managing and mentoring a team of 2-5 engineers, Delegating tasks, and achieving daily, weekly, and monthly goals.
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases with the team.
- Evolve our Infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code, and decide on the technologies and tools to deliver as well operate large-scale applications on AWS
- Give back to the open-source community through contributions on code and blog posts
- This is a startup so everything can change as we experiment with more product improvements
- Write technical documentation
Some Specific Requirements
- At least 2 years of tech lead experience and 5+ years of development experience
- You have prior experience developing and working on consumer-facing web/app products
- Good understanding of Data Structures, Algorithms, and Operating Systems
- Hands-on experience in JavaScript / Python / Node.js. Exceptions can be made if you’re really good at any other language with experience in building web-app-based tech products
- Good knowledge of async programming using Callbacks, Promises, and Async/Await
- Experience in at least one of the following frameworks - Flask, Sanic; FastAPI would be a plus
- Experience in at least one of the following frameworks - Express.js, Koa.js, Socket.io
- Hands-on experience with frontend codebases using HTML, CSS, and AJAX, plus knowledge of frontend frameworks and experience in at least one of the following - React.js, Angular, Vue.js.
- Deep knowledge of MongoDB, Redis, or MySQL
- Basic knowledge of event-based systems using Kafka or RabbitMQ
- You should have experience with deploying and scaling Microservices.
- You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
- Deep Knowledge of Cloud-Native Application Architecture: Docker, Kubernetes, Microservices
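The async-programming requirement above (callbacks, promises, async/await) can be illustrated with a minimal sketch. The posting names JavaScript/Node.js, but the same pattern in Python's asyncio looks like this; `fetch_order` is a hypothetical helper standing in for a real network call:

```python
import asyncio

async def fetch_order(order_id: int) -> dict:
    # Hypothetical I/O call; sleep stands in for a network round trip.
    await asyncio.sleep(0.01)
    return {"id": order_id, "status": "shipped"}

async def main() -> list:
    # Launch the lookups concurrently instead of awaiting them one by one.
    return await asyncio.gather(*(fetch_order(i) for i in range(3)))

results = asyncio.run(main())
print([o["id"] for o in results])  # → [0, 1, 2]
```

The same shape in Node.js would use `Promise.all` over an array of async calls; the point is that awaiting sequentially serializes the latency while gathering overlaps it.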


About Company: Codezen Tech Solutions is a technical consultancy and development firm that believes in increasing transparency and efficiency using cloud-based platforms.
Responsibilities and Duties:
- Writing and maintaining reliable Ruby code.
- Integrating user-facing elements designed by the front-end team.
- Connecting applications with additional web servers.
- Maintaining APIs.
- The ideal candidate will be fluent in Ruby and proficient with Ruby on Rails
- Extensive experience developing web applications in Object-Oriented Perl, Python, or Java can be substituted as long as there is a strong desire to work in Ruby
- Design, build, and maintain efficient, reusable, and reliable Ruby code
- Architecting, designing and developing scalable backend systems with RoR
- Solid understanding of deploying and maintaining Rails apps within the AWS environment.
- Set up workers and deploy across multiple instances.
- Work on complex modules and be hands-on with the product code as and when required
Required Experience, Skills and Qualifications:
- 2-4 years of experience required
- Experience with Ruby on Rails, along with other common libraries such as RSpec and Resque
- Good understanding of the syntax of Ruby and its nuances
- Solid understanding of object-oriented programming
- Good understanding of server-side templating languages (such as Liquid, Slim, etc., depending on your technology stack)
- Good understanding of server-side CSS preprocessors (such as Sass, based on project requirements)
- A knack for writing clean, readable Ruby code
- Proficient understanding of code versioning tools (e.g. Git, Mercurial or SVN)
- Familiarity with development-aiding tools (such as Bower, Bundler, Rake, etc.)
- Familiarity with continuous integration
- Familiarity with testing tools.
- Ability to quickly adapt & independently work in a fast-paced Agile environment with minimum supervision.


Role Title
Team Lead| Software Development
Description
The candidate will be required to build towards the future by constantly innovating and problem-solving. Should be passionate about the customer experience journey, highly motivated, intellectually curious and analytical. Conceptualize, rationalize and drive multiple simultaneous projects to deliver engineering work across the portfolio within deadlines and budgets. Experience in managing high performing individuals, building roadmaps, defining processes, delivering projects as well as analyzing cost/benefit of feature selection and communicating results throughout the organization.
Responsibilities
Work across teams to organize and accelerate delivery by ensuring all teams are delivering in a coordinated manner.
Understand the business strategy and design approaches within product, program or domain effectively.
Regularly review metrics and proactively seek out new and improved data / mechanisms for visibility, ensuring the program stays aligned with organization objectives.
Proactively identify risks & issues and ensure mitigation efforts are being carried out throughout the software development lifecycle.
Keep abreast of evolving technology landscape.
Will provide technical leadership to software engineers by coaching and monitoring throughout end-to-end software development, maintenance and lifecycle.
Analyse and provide input for the requirements and provide impact assessment for the features or bug fixes.
Collaborate effectively with external partner teams, business teams, and other cross-functional teams in driving projects, and act as the point of contact on the team.
Skills
Process : Data Structures and Algorithms | API, Version Control, Data Structures | DevSecOps | Microservices | Application Security
Technical : Full-Stack Development – React, Angular, Node.js, Java, ExpressJS, Passport, Spring Boot | CI/CD Tools | Cloud Computing – AWS preferred | User Experience Design | Operating Systems – Linux and Windows | Database – MySQL, Oracle | Web Servers – Nginx, Apache | Containerization, Service Mesh
Product : Corporate Banking | Payments and collections | Treasury | Trade Finance
Human : Mentor & Guide | Creative Vision | Collaboration and Co-operation | Risk Taking
Level Information
Education : BTech / MTech
Span of Influence : Multiple Teams


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code processes and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
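As a self-contained illustration of the API work described above: the posting names Flask and Django, but the shape of a JSON endpoint can be sketched with a bare WSGI callable (the interface both frameworks build on). The `/health` route and its payload are invented for the example:

```python
import json

def app(environ, start_response):
    """Minimal WSGI callable returning JSON — the pattern Flask/Django wrap."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Invoke the callable directly, as a WSGI server (or test client) would.
captured = {}
def start_response(status, headers):
    captured["status"] = status

payload = json.loads(b"".join(app({"PATH_INFO": "/health"}, start_response)))
print(captured["status"], payload)  # → 200 OK {'status': 'ok'}
```

In Flask the same endpoint would be a decorated view function; exercising the app object directly, as above, is also how framework test clients keep API tests fast.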


- Looking for a candidate who is enthusiastic to work in a startup environment and build things from scratch individually
- The candidate has past experience in scalable consumer-facing applications, managing latency and traffic
- Full-stack individual contributor with the experience to code at speed and take full product ownership
Experience: Minimum 8 years
Location: Vile parle (E), Mumbai
Skill Set: 8+ years as a full-stack Java/JavaScript developer
Microservices, Distributed Systems
Cloud Services: AWS (EC2, S3, Lambda, Load Balancing, Serverless), Kubernetes
Programming (Backend): Node.js, MongoDB, Java Spring, PostgreSQL
Programming (Frontend): Angular/React
Queuing: Kafka
Methodologies: Agile Scrum
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentor the team on best coding practices and make sure modules go live on time.
Manage security vulnerabilities.
Be a full individual contributor, which means being able to work in a team as well as alone.
Attitude:
Passion for tech innovation and problem-solving
Go-getter attitude
Extremely humble and polite
Experience in Product companies and Managing small teams is a plus
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on current CTC)
Note:
Only immediate joiners (up to 15 days) will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rules (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
Job Overview
We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration & management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.
Key Responsibilities
- Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
- Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
- Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
- Provide guidance on selecting the appropriate cloud services for various workloads.
- Design, implement, and optimize CI/CD pipelines to streamline software delivery.
- Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
- Collaborate with development and operations teams to enhance the overall DevOps culture.
- Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
- Architect and optimize Kubernetes clusters for high availability and scalability.
- Engage in research and development activities to stay abreast of industry trends and emerging technologies.
- Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
- Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
- Ensure high-performance levels and reliability in production environments.
- Design scalable and high-performance database architectures tailored to meet business needs.
- Execute database migrations with a keen focus on data consistency, integrity, and performance.
- Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
- Optimize database workflows to enhance efficiency and reliability.
- Work closely with clients to assess and enhance the quality of existing architectures.
- Implement best practices to ensure robust, secure, and well-architected solutions.
- Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
- Provide technical leadership and mentorship to junior team members.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Relevant industry experience in a Solution Architect role.
- Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
- Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
- In-depth knowledge of Kubernetes and container orchestration.
- Strong background in scaling architectures to handle significant workloads.
- Sound knowledge of database migrations
- Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
Required Skills
· Experienced developer with hands-on expertise in Java Spring Boot: Spring Boot Data, Spring Boot Integration, Spring Boot Messaging, Spring Boot Web, Spring Boot Security, and Spring Boot AOP; plus MySQL, REST APIs, SOAP APIs, JavaScript, Angular, Tomcat, Drools, Docker, Kubernetes, and MongoDB.
· Strong understanding of microservices architecture, design patterns, and best practices.
· Designing and developing web applications using microservices on Java Spring Boot, testing using JUnit & JMeter, deploying using Maven, Jenkins, Docker & Kubernetes, and monitoring using ELK and associated technologies.
· Excellent problem-solving skills and ability to troubleshoot technical issues effectively. Troubleshoot and resolve technical issues, ensuring the stability and performance of applications.
Responsibilities
· Implement low-level designs and coding for complex software systems.
· Collaborate with cross-functional teams and Business Analysts to understand business requirements and translate them into technical solutions.
· Implement best practices for software development, code reviews, and quality assurance.
A) Skills Required
Essential Skills (two top skills)
3 possible combinations:
1. Candidate having expertise in both ElasticSearch and Kafka, preferably.
OR
2. Candidate having expertise in ElasticSearch and willing to learn Kafka.
OR
3. Candidate having expertise in Kafka and willing to learn ElasticSearch.
B) Other Information
Educational Qualifications Graduate
Experience Mid-Level (6+ years)
Minimum Qualifications:
ElasticSearch/OpenSearch
· Software lifecycle / programming skills
· Linux
· Python
· Ingestion tools (Logstash, OpenSearch Ingestion, Fluentd, Fluent Bit, Harness, CloudFormation, containers, images, ECS, Lambda)
· SQL queries
· JSON
· AWS knowledge
Kafka/MSK
· Linux
· In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors
· Understand Kafka topic design and creation.
· Good knowledge of replication and high availability for Kafka systems
· Good understanding of producers and consumer groups
· Understanding of Kafka partitions and scaling up
· Kafka latency/lag and throughput
· Integrating Kafka Connect with various data sources, which can be internal or external
· Kafka security using SSL/Certs
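The partitioning and consumer-group concepts in the list above can be sketched without a broker. Kafka routes a keyed message to `hash(key) mod num_partitions` (the real client uses murmur2; the CRC32 below is a stand-in), and a consumer group spreads partitions across its members so each partition has exactly one reader:

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Kafka's default partitioner hashes the key (murmur2 in the real
    # client; crc32 is a stand-in) modulo the partition count.
    return zlib.crc32(key) % num_partitions

# Messages with the same key always land in the same partition,
# which is what preserves per-key ordering.
assert partition_for(b"order-42") == partition_for(b"order-42")

def assign(partitions: list, consumers: list) -> dict:
    # Simplified round-robin assignment; real Kafka uses a configurable
    # assignor (range, round-robin, sticky) chosen at rebalance time.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign(list(range(NUM_PARTITIONS)), ["c1", "c2", "c3"]))
# → {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

This is also why "scaling up" has a ceiling: with 6 partitions, a 7th consumer in the same group would sit idle.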
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.
Location : Mumbai – Tech Data Office
Experience : 5 - 8 years.
Duration : 1-year contract (extendable)
Job Overview
We are seeking a highly skilled Sales Engineer (L2/L3) with in-depth expertise in Palo Alto Networks solutions. This role requires designing, implementing, and supporting cutting-edge network and security solutions to meet customers' technical and business needs. The ideal candidate will have strong experience in sales engineering and advanced skills in deploying, troubleshooting, and optimizing Palo Alto products and related technologies, with the ability to assist in implementation tasks when required.
Key Responsibilities
Solution Design & Technical Consultation:
- Collaborate with sales teams and customers to understand business and technical requirements.
- Design and propose solutions leveraging Palo Alto Networks technologies, including Next-Generation Firewalls (NGFW), Prisma Access, Panorama, SD-WAN, and Threat Prevention.
- Prepare detailed technical proposals, configurations, and proof-of-concept (POC) demonstrations tailored to client needs.
- Optimize existing customer deployments, ensuring alignment with industry best practices.
Customer Engagement & Implementation:
- Present and demonstrate Palo Alto solutions to stakeholders, addressing technical challenges and business objectives.
- Conduct customer and partner workshops, enablement sessions, and product training.
- Provide post-sales support to address implementation challenges and fine-tune deployments.
- Lead and assist with hands-on implementations of Palo Alto Networks products when required.
Support & Troubleshooting:
- Provide L2-L3 level troubleshooting and issue resolution for Palo Alto Networks products, including advanced debugging and system analysis.
- Assist with upgrades, migrations, and integration of Palo Alto solutions with other security and network infrastructures.
- Develop runbooks, workflows, and documentation for post-sales handover to operations teams.
Partner Enablement & Ecosystem Management:
- Collaborate with channel partners to build technical competency and promote adoption of Palo Alto solutions.
- Support certification readiness and compliance for both internal and partner teams.
- Participate in events, workshops, and seminars to showcase technical expertise.
Skills and Qualifications
Technical Skills:
- Advanced expertise in Palo Alto Networks technologies, including NGFW, Panorama, Prisma Access, SD-WAN, and GlobalProtect.
- Strong knowledge of networking protocols (e.g., TCP/IP, BGP, OSPF) and security frameworks (e.g., Zero Trust, SASE).
- Proficiency in troubleshooting and root-cause analysis for complex networking and security issues.
- Experience with security automation tools and integrations (e.g., API scripting, Ansible, Terraform).
Soft Skills:
- Excellent communication and presentation skills, with the ability to convey technical concepts to diverse audiences.
- Strong analytical and problem-solving skills, with a focus on delivering customer-centric solutions.
- Ability to manage competing priorities and maintain operational discipline under tight deadlines.
Experience:
- 5+ years of experience in sales engineering, solution architecture, or advanced technical support roles in the IT security domain.
- Hands-on experience in designing and deploying large-scale Palo Alto Networks solutions in enterprise environments.
Education and Certifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as PCNSA, PCNSE, or equivalent vendor certifications (e.g., CCNP Security, NSE4) are highly preferred.
Data Engineer + Integration Engineer + Support Specialist
Experience – 5-8 years
Necessary Skills:
· SQL & Python / PySpark
· AWS Services: Glue, AppFlow, Redshift
· Data warehousing
· Data modelling
Job Description:
· Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform. Design, implement, and maintain the data architecture for all AWS data services
· A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
· Work with stakeholders to identify business needs and requirements for data-related projects
· Strong SQL and/or Python or PySpark knowledge
· Creating data models that can be used to extract information from various sources & store it in a usable format
· Optimize data models for performance and efficiency
· Write SQL queries to support data analysis and reporting
· Monitor and troubleshoot data pipelines
· Collaborate with software engineers to design and implement data-driven features
· Perform root cause analysis on data issues
· Maintain documentation of the data architecture and ETL processes
· Identifying opportunities to improve performance by improving database structure or indexing methods
· Maintaining existing applications by updating existing code or adding new features to meet new requirements
· Designing and implementing security measures to protect data from unauthorized access or misuse
· Recommending infrastructure changes to improve capacity or performance
· Experience in the process industry
Data Engineer + Integration Engineer + Support Specialist
Experience – 3-5 years
Necessary Skills:
· SQL & Python / PySpark
· AWS Services: Glue, AppFlow, Redshift
· Data warehousing basics
· Data modelling basics
Job Description:
· Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform.
· A strong understanding of data modelling, data structures, and databases (Redshift)
· Strong SQL and/or Python or PySpark knowledge
· Design and implement ETL processes to load data into the data warehouse
· Creating data models that can be used to extract information from various sources & store it in a usable format
· Optimize data models for performance and efficiency
· Write SQL queries to support data analysis and reporting
· Collaborate with team to design and implement data-driven features
· Monitor and troubleshoot data pipelines
· Perform root cause analysis on data issues
· Maintain documentation of the data architecture and ETL processes
· Maintaining existing applications by updating existing code or adding new features to meet new requirements
· Designing and implementing security measures to protect data from unauthorized access or misuse
· Identifying opportunities to improve performance by improving database structure or indexing methods
· Recommending infrastructure changes to improve capacity or performance
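The pipeline duties listed above (load data, write reporting queries, tune with indexes) can be sketched end to end with the stdlib's sqlite3 standing in for Redshift; the `sales` table and its columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Load: create a target table and ingest a few source rows,
# the final step of a toy ETL run.
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 50.0)],
)

# Query: the kind of aggregation that supports analysis and reporting.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('north', 170.0), ('south', 80.0)]

# Tune: add an index on the filter/group column, one of the
# "improving database structure or indexing" levers mentioned above.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")
```

On a columnar warehouse like Redshift the tuning levers differ (distribution and sort keys rather than B-tree indexes), but the load/query/tune loop is the same.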
Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement and maintain CI/CD pipelines using GitHub, Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters using AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub, Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, managed IT services, and strong problem-solving capabilities.
Key Responsibilities:-
We are looking for a proactive, hands-on Information Technology Manager to oversee and evolve our technology infrastructure
· In this role, the Manager will manage all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments
· This position will ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals
· IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
· Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
· Software & Systems Administration: Manage Microsoft 365 administration.
· Cybersecurity: Enhance our cybersecurity posture using tools like SentinelOne, Sophos Firewall, and others
· Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
· Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
· Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
· Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA; other duties as needed
· IT Audit & Compliance: Conduct regular audits to ensure IT processes are compliant with security regulations and best practices (GDPR, SOC2, ISO 27001), ensuring readiness for internal and external audit.
· Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations.
Preferred Skills:-
- Experience with SOC 2, ISO 27001, or similar security frameworks.
- Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person
Job Description
Position - SRE developer / DevOps Engineer
Location - Mumbai
Experience- 3- 10 years
About HaystackAnalytics:
HaystackAnalytics is a company working in the deep technology of genomics, computing, and data science to create a first-of-its-kind clinical reporting engine in healthcare. We are a new but well-funded company with a tremendous amount of pedigree in the team (IIT founders, IIT & IIM core team). Some of the technologies we have created are a global first in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our Technology and R&D teams are our crown jewels. With the early success of our products in India, we are now expanding to take our products to international shores.
Inviting Passionate Engineers to join a new age enterprise:
At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.
We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack.
Our ideal candidate has experience building enterprise products and has an understanding of and experience working with new-age frontend technologies, web frameworks, APIs, databases, distributed computing, backend languages, caching, security, message-based architectures, and more.
You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state of the art enterprise standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are Unit Testable, Automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML 5
- CSS framework ( LESS/ SASS )
- ES6 / TypeScript
- Electron app / TAURI
- Component library ( Webcomponents / radix / material )
- CSS ( tailwind)
- State management --> Redux / Zustand / Recoil
- Build tools --> Webpack / Vite / Parcel / Turborepo
- Frameworks --> Next.js
- Design patterns
- Test Automation Frameworks (Cypress, Playwright, etc.)
- Functional Programming concepts
- Scripting ( bash , python )
Backend Skills
- Runtimes: Node.js / Deno / Bun; frameworks: Express / NestJS
- Languages: TypeScript / Python / Rust
- REST / GraphQL
- SOLID design principles
- Storage (MongoDB / object storage / PostgreSQL)
- Caching (Redis / in-memory data grid)
- Pub/sub (Kafka / SQS / SNS / EventBridge / RabbitMQ)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift)
- GitOps
- Automation (Terraform, Serverless)
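As a small illustration of the caching item above, here is a hedged sketch of an in-memory TTL cache in TypeScript. All names here (such as `TtlCache`) are hypothetical and for illustration only; a production service would typically use Redis or an in-memory data grid rather than a hand-rolled cache.

```typescript
// Minimal in-memory cache with per-entry time-to-live (TTL).
// Expired entries are evicted lazily on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy eviction of expired entries
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache<string>(60_000); // 60 s TTL
cache.set("report:42", "cached-json");
console.log(cache.get("report:42")); // "cached-json"
```

Lazy eviction keeps the sketch simple; a real implementation would also sweep expired keys periodically to bound memory.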
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
We are looking for a Java Full Stack Developer / Java Developer
Relevant Experience (In years):4+
Job Location: Mumbai
Skills: Java 8+, Spring Boot, Spring, JUnit, MySQL, JSP, Servlet, JPA/Hibernate, REST web services, SOAP web services, jQuery, Ajax, HTML, JS, JSON.
Role Category: Software Development
Education: BE / BTech in Any Specialization, MCA, BSC (IT)
Desired Competencies (Technical / Behavioral Competency)
Experience working with front-end technologies: CSS, HTML, and JavaScript/jQuery.
Hands-on experience with the MySQL database.
Good verbal and written communication and collaboration skills to effectively communicate with both business and technical teams.
Comfortable working in a fast-paced, result-oriented environment.
Able to understand requirements quickly and efficiently, and to work independently.
Significant Java programming experience (J2EE).
Experience working with Agile Sprint and Scrum methodology is preferred.
Able to understand requirements clearly and document them whenever required.
Able to write reusable and scalable components and to build product aligned to platform architecture.
Follow coding guidelines and write efficient code.
Perform unit testing and preparation of unit test cases effectively.
Explore new technologies and carry out POCs.
Willingness to explore and innovate on new ideas with team members, appreciating individual inputs and strengths.
Good communication skills.
Support the product and team with a positive attitude and full dedication.


Job Title - Cloud Fullstack Engineer
Experience Required - 5 Years
Location - Mumbai
Immediate joiners are preferred.
About the Job
As a Cloud Fullstack Engineer you will design, develop, and maintain end-to-end solutions for cloud-based applications. The Cloud Fullstack Engineer will be responsible for building both frontend and backend components, integrating them seamlessly, and ensuring they work efficiently within a cloud infrastructure.
What You’ll Be Doing
- Frontend Development
- Design and implement user-friendly and responsive web interfaces using modern frontend technologies (e.g., React, Angular, Vue.js).
- Ensure cross-browser compatibility and mobile responsiveness of web applications.
- Collaborate with UX/UI designers to translate design specifications into functional and visually appealing interfaces.
- Backend Development
- Develop scalable and high-performance backend services and APIs to support frontend functionalities.
- Design and manage cloud-based databases and data storage solutions.
- Implement authentication, authorization, and security best practices in backend services.
- Cloud Integration
- Build and deploy cloud-native applications using platforms such as AWS, Google Cloud Platform (GCP), or Azure.
- Leverage cloud services for computing, storage, and networking to enhance application performance and scalability.
- Implement and manage CI/CD pipelines for seamless deployment of applications and updates.
- End-to-End Solution Development
- Architect and develop fullstack applications that integrate frontend and backend components efficiently.
- Ensure data flow between frontend and backend is seamless and secure.
- Troubleshoot and resolve issues across the stack, from UI bugs to backend performance problems.
- Performance Optimization
- Monitor and optimize application performance, including frontend load times and backend response times.
- Implement caching strategies, load balancing, and other performance-enhancing techniques.
- Conduct performance testing and address bottlenecks and scalability issues.
- Security and Compliance
- Implement security best practices for both frontend and backend components to protect against vulnerabilities.
- Ensure compliance with relevant data protection regulations and industry standards.
- Conduct regular security assessments and audits to maintain application integrity.
- Collaboration and Communication
- Work closely with cross-functional teams, including product managers, designers, and other engineers, to deliver high-quality solutions.
- Participate in code reviews, technical discussions, and project planning sessions.
- Document code, processes, and architecture to facilitate knowledge sharing and maintainability.
- Continuous Improvement
- Stay updated with the latest trends and advancements in frontend and backend development, as well as cloud technologies.
- Contribute to the development of best practices and standards for fullstack development within the team.
- Participate in knowledge-sharing sessions and provide mentorship to junior engineers.
What We Need To See
- Strong experience in both frontend and backend development, as well as expertise in cloud technologies and services.
- Experience in fullstack development, with a strong focus on both frontend and backend technologies.
- Proven experience with cloud platforms (AWS, GCP, Azure) and cloud-native application development.
- Experience with modern frontend frameworks (e.g., React, Angular, Vue.js) and backend technologies (e.g., Node.js, Java, Python).
- Technical Expertise:
1. Frontend
- Hands-on experience with HTML5, CSS, JavaScript, React, Next.js, Redux, and jQuery
2. Proficiency in Backend Development
- Strong experience with backend runtimes and languages such as Node.js and Python.
- Expertise in working with frameworks such as NestJS, Express.js, or Django.
3. Microservices Architecture
- Experience designing and implementing microservices architectures.
- Knowledge of service discovery, API gateways, and distributed tracing.
4. API Development
- Proficiency in designing, building, and maintaining RESTful and GraphQL APIs.
- Experience with API security, rate limiting, and authentication mechanisms (e.g., JWT, OAuth).
5. Database Management
- Strong knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g. MongoDB).
- Experience in database schema design, optimization, and management.
6. Cloud Services
- Hands-on experience with cloud platforms such as Azure, AWS, or Google Cloud.
7. Security
- Knowledge of security best practices and experience implementing secure coding practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Ability to manage multiple priorities and work in a fast-paced, dynamic environment.
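The JWT-based authentication mentioned under API Development above can be sketched with Node's built-in `crypto` module. This is a minimal, hedged illustration of HS256-style signing and verification (the `sign`/`verify` names are hypothetical helpers, not from any library; a production service would use a vetted package such as `jsonwebtoken` and also check claims like expiry):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Encode a JSON value as a base64url string (the JWT wire format).
const b64url = (value: object): string =>
  Buffer.from(JSON.stringify(value)).toString("base64url");

// Produce an HS256-style token: header.payload.signature
function sign(payload: object, secret: string): string {
  const header = b64url({ alg: "HS256", typ: "JWT" });
  const body = b64url(payload);
  const sig = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Recompute the signature and compare in constant time;
// return the decoded claims on success, null on failure.
function verify(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

const token = sign({ sub: "user-1", role: "engineer" }, "dev-secret");
console.log(verify(token, "dev-secret"));   // { sub: 'user-1', role: 'engineer' }
console.log(verify(token, "wrong-secret")); // null
```

`timingSafeEqual` avoids leaking signature information through comparison timing, which is one of the API-security concerns the requirement list alludes to.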

Role Overview:
As a Senior Full Stack Developer with React, you will be responsible for architecting, developing, and maintaining scalable backend services and user-friendly frontend interfaces. You will collaborate closely with cross-functional teams to deliver high-quality software solutions that meet our business goals and client needs. Your expertise will be vital in guiding the technical direction of our projects and mentoring junior developers.
Job Description
Job Title: Techno - Functional Project Manager
Location: Mumbai
Experience: 5+ Years
Qualification- B.E /B.Tech in Computer Science or M.E/M.Tech in Computer Science
Key Responsibilities:
- Strong technical experience in Node.js/NestJS, React Native/Java, and Oracle/MySQL
- Should have worked in BFSI preferably banking domain
- Should have handled large or complex projects and should be able to manage multiple project track activities independently
- Experience in native mobile app and web development, with strong problem-solving skills.
- Strong analytical capabilities, including experience measuring application performance at the design stage.
- Hands-on experience developing and designing mobile and web applications.
- Vendor management experience, with the ability to design API specifications and application functional flows.
- Hands-on experience with SQL/Oracle DB queries and database design. Experience with AWS/Azure cloud would be an added advantage.