50+ DevOps Jobs in India
Apply to 50+ DevOps Jobs on CutShort.io. Find your next job, effortlessly. Browse DevOps Jobs and apply today!
Job Title: DevOps Engineer
Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.
Responsibilities:
Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.
Automate application deployment and environment provisioning using AWS and containerization tools.
Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).
Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible. (A brief illustrative sketch follows this list.)
Build and manage containerized environments using Docker (Kubernetes is a plus).
Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.
Ensure system security, data integrity, and high availability across environments.
Collaborate with development teams to streamline builds, testing, and deployments.
Troubleshoot and resolve infrastructure and deployment-related issues.
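To make the IaC item above concrete, here is a minimal, purely illustrative Python sketch. It assumes boto3 credentials are already configured and a hypothetical `network.yaml` CloudFormation template exists; it is not a prescribed toolchain for this role.

```python
# Hypothetical illustration: deploying a CloudFormation stack from a script.
# Assumes AWS credentials are configured and a template file "network.yaml" exists.
import boto3

def deploy_stack(stack_name: str, template_path: str) -> str:
    """Create a CloudFormation stack and wait until it is ready."""
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()

    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
    )
    # Block until the stack reaches CREATE_COMPLETE (or raise if creation fails).
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)
    return stack_name

if __name__ == "__main__":
    deploy_stack("demo-network", "network.yaml")
```

In practice the same pattern is usually driven from a CI/CD job rather than run by hand, with the template and stack parameters version-controlled alongside the application.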
Required Skills:
AWS (EC2, ECS, RDS, S3, IAM, Lambda)
CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy
Infrastructure as Code: Terraform or AWS CloudFormation
Configuration Management: Ansible
Containers: Docker (Kubernetes preferred)
Scripting: Bash, Python
Version Control: Git, GitHub, GitLab
Web Servers: Apache, Nginx (preferred)
Databases: MySQL, MongoDB (preferred)
Qualifications:
3+ years of experience as a DevOps Engineer in a production environment.
Proven experience supporting Laravel, Node.js, and Python-based applications.
Strong understanding of CI/CD, containerization, and automation practices.
Experience with infrastructure monitoring, logging, and performance optimization.
Familiarity with agile and collaborative development processes.
ROLES AND RESPONSIBILITIES:
Standardization and Governance:
- Establishing and maintaining project management standards, processes, and methodologies.
- Ensuring consistent application of project management policies and procedures.
- Implementing and managing project governance processes.
Resource Management:
- Facilitating the sharing of resources, tools, and methodologies across projects.
- Planning and allocating resources effectively.
- Managing resource capacity and forecasting future needs.
Communication and Reporting:
- Ensuring effective communication and information flow among project teams and stakeholders.
- Monitoring project progress and reporting on performance.
- Communicating strategic work progress, including risks and benefits.
Project Portfolio Management:
- Supporting strategic decision-making by aligning projects with organizational goals.
- Selecting and prioritizing projects based on business objectives.
- Managing project portfolios and ensuring efficient resource allocation across projects.
Process Improvement:
- Identifying and implementing industry best practices into workflows.
- Improving project management processes and methodologies.
- Optimizing project delivery and resource utilization.
Training and Support:
- Providing training and support to project managers and team members.
- Offering project management tools, best practices, and reporting templates.
Other Responsibilities:
- Managing documentation of project history for future reference.
- Coaching project teams on implementing project management steps.
- Analysing financial data and managing project costs.
- Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
- Advising and supporting senior management.
IDEAL CANDIDATE:
- 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
- Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
- Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
- Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
- Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
- Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
- Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
- Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
ROLES AND RESPONSIBILITIES:
- Plan, schedule, and manage all releases across product and customer projects.
- Define and maintain the release calendar, identifying dependencies and managing risks proactively.
- Partner with engineering, QA, DevOps, and product management to ensure release readiness.
- Create release documentation (notes, guides, videos) for both internal stakeholders and customers.
- Run a release review process with product leads before publishing.
- Publish releases and updates to the company website release section.
- Drive communication of release details to internal teams and customers in a clear, concise way.
- Manage post-release validation and rollback procedures when required.
- Continuously improve release management through automation, tooling, and process refinement.
IDEAL CANDIDATE:
- 3+ years of experience in Release Management, DevOps, or related roles.
- Strong knowledge of CI/CD pipelines, source control (Git), and build/deployment practices.
- Experience creating release documentation and customer-facing content (videos, notes, FAQs).
- Excellent communication and stakeholder management skills; able to translate technical changes into business impact.
- Familiarity with SaaS, iPaaS, or enterprise software environments is a strong plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive salary package.
- Opportunity to learn from and work with senior leadership & founders.
- Build solutions for large enterprises that move from concept to real-world impact.
- Exceptional career growth pathways in a highly innovative and rapidly scaling environment.
Company Overview
McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.
Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era
Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.
Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700M+ decision-makers, all in just a few clicks.
At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:
- Precision prospecting
- Intent-based targeting
- Data enrichment from 16+ premium sources
- AI-driven workflows to book more meetings, faster
We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.
EXPERIENCE
Duties you'll be entrusted with:
- Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
- Writing efficient, reusable, testable, and scalable code.
- Understanding, analyzing, and implementing – business needs and feature modification requests, and their conversion into software components
- Integration of user-oriented elements into different applications and data storage solutions
- Developing – backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, and highly responsive web applications
- Designing and implementing – high-availability and low-latency applications, data protection, and security features
- Performance tuning and automation of applications, and enhancing the functionalities of current software systems.
- Keeping abreast with the latest technology and trends.
Expectations from you:
Basic Requirements
- Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
- Experience with Cloud platforms (AWS, Azure, GCP).
- Strong understanding of monitoring, logging, and observability practices.
- Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
- Expertise in designing, implementing, and optimizing Elasticsearch.
- Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
- Experience integrating Generative AI APIs.
- Working experience with high user-concurrency systems.
- Experience with scaled databases handling millions of records (indexing, retrieval, etc.).
Technical Skills
- Demonstrable experience in web application development with expertise in Node.js or Nest.js.
- Knowledge of database technologies and agile development methodologies.
- Experience working with databases, such as MySQL or MongoDB.
- Familiarity with web development frameworks, such as Express.js.
- Understanding of microservices architecture and DevOps principles.
- Well-versed with AWS and serverless architecture.
Soft Skills
- A quick and critical thinker who can generate a range of ideas on a topic and bring fresh, innovative ideas to the table to enhance the impact of our content.
- Potential to apply innovative and exciting ideas, concepts, and technologies.
- Stay up-to-date with the latest design trends, animation techniques, and software advancements.
- Multi-tasking and time-management skills, with the ability to prioritize tasks.
THRIVE
Some of the extensive benefits of being part of our team:
- We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
- The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
- The McKinley Cares Program has a wide range of benefits:
- The wellness program covers mental wellness and fitness sessions and offers health insurance.
- In-house benefits have a referral bonus window and sponsored social functions.
- An expanded leave basket that includes paid maternity, paternity, and rejuvenation leaves, in addition to the regular 20 leaves per annum.
- Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
- In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
- We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.
Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)
Location: Pune
Type: Full-Time
Who We Are:
After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.
If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!
About The Role:
As a Senior Backend Engineer at Azodha, you’ll play a key role in architecting, solutioning, and driving development of our AI-led interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.
What You’ll Do:
* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.
* Data Management and Integrity: Work with Prisma or TypeORM and relational databases such as PostgreSQL and MySQL.
* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team.
* Utilize ORMs such as Prisma or TypeORM to interact with databases and ensure data integrity.
* Follow Agile sprint methodology for development.
* Conduct code reviews to maintain code quality and adherence to best practices.
* Optimize API performance to deliver smooth user experiences.
* Participate in the entire development lifecycle, from initial planning and design through maintenance.
* Troubleshoot and debug issues to ensure system stability.
* Collaborate with QA teams to ensure high-quality releases.
* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.
Requirements
* Bachelor's degree in Computer Science, Software Engineering, or a related field.
* 5+ years of hands-on experience in backend development using Node.js and TypeScript.
* Experience working with PostgreSQL or MySQL.
* Proficiency in TypeScript and its application in Node.js.
* Experience with an ORM such as Prisma or TypeORM.
* Familiarity with Agile development methodologies.
* Strong analytical and problem-solving skills.
* Ability to work independently and in a team-oriented, fast-paced environment.
* Excellent written and oral communication skills.
* Self-motivated and proactive attitude.
Preferred:
* Experience with other backend technologies and languages.
* Familiarity with continuous integration and deployment processes.
* Contributions to open-source projects related to backend development.
Note: Please do not apply if your primary database is not PostgreSQL.
Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.
DevOps Engineer (1–2 Years Experience)
Job Description
We are seeking a motivated and enthusiastic DevOps Engineer with 1–2 years of hands-on experience in cloud and DevOps technologies. The ideal candidate will work closely with development and operations teams to support, automate, and optimize the deployment pipeline, cloud infrastructure, and application performance across AWS and Azure environments.
Roles & Responsibilities
- Deploy, manage, and monitor applications in AWS and Azure cloud environments.
- Work with Linux-based systems to ensure reliable performance, configuration, and maintenance.
- Build and maintain CI/CD pipelines using modern DevOps tools (GitHub Actions, GitLab CI, Jenkins, etc.).
- Create and maintain containerized applications using Docker.
- Assist in implementing IaC (Infrastructure as Code) and automation processes.
- Troubleshoot infrastructure, deployment, and application-related issues.
- Collaborate with development teams to enable smooth DevOps processes.
- Write scripts using Bash or Python to automate routine operational tasks.
- Participate in continuous improvement of deployment reliability and scalability.
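To give a flavour of the scripting item above, here is a minimal, hypothetical Python sketch (boto3; the tag names and policy are invented for illustration) that stops tagged development EC2 instances, the kind of routine task such scripts typically automate.

```python
# Hypothetical example: stop running EC2 instances tagged Environment=dev.
# Assumes boto3 credentials/region are configured; tag names are invented for illustration.
import boto3

def stop_dev_instances() -> list[str]:
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_dev_instances())
```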
Required Skills
- 1–2 years of hands-on experience in DevOps or Cloud Engineering.
- Practical working knowledge of AWS and Azure cloud services.
- Strong understanding and experience with Linux operating systems.
- Experience using Docker, Git, and CI/CD pipeline tools.
- Basic knowledge of Kubernetes concepts (pods, deployments, services).
- Ability to write automation scripts using Bash or Python.
- Strong analytical thinking, problem-solving, and troubleshooting skills.
Good to Have Skills
- Experience with Terraform or other Infrastructure-as-Code tools.
- Familiarity with monitoring and logging tools (Prometheus, Grafana, CloudWatch, ELK, etc.).
- Understanding of networking fundamentals (TCP/IP, DNS, load balancing, routing).
- Exposure to microservices architecture and container orchestration concepts.
Job Type
- Full-time
Location
- On-site
Salary
- As per industry standards
We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.
Key Responsibilities
- Solution Development:
- Design and build custom applications using Power Apps (Canvas & Model-Driven).
- Develop automated workflows using Power Automate for business process optimization.
- Create interactive dashboards and reports using Power BI for data visualization and analytics.
- Configure and manage Dataverse for secure data storage and modelling.
- Develop and maintain Power Pages for external-facing portals.
- Integration & Customization:
- Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
- Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
- Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
- Governance & Security:
- Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
- Ensure compliance with security, data governance, and licensing guidelines.
- Implement role-based access control and manage user permissions.
- Performance & Optimization:
- Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
- Troubleshoot and resolve technical issues promptly.
- Collaboration & Documentation:
- Work closely with business stakeholders to gather requirements and translate them into technical solutions.
- Document architecture, workflows, and processes for maintainability.
Required Skills & Qualifications
- Technical Expertise:
- Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
- Experience with Microsoft 365, Dynamics 365, and Azure services.
- Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
- Familiarity with SQL, DAX, and data modeling.
- Additional Skills:
- Understanding of ALM practices, solution packaging, and deployment pipelines.
- Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
- Strong problem-solving and analytical skills.
- Certifications (Preferred):
- Microsoft Certified: Power Platform Developer Associate.
- Microsoft Certified: Power Platform Solution Architect Expert.
Soft Skills
- Excellent communication and collaboration skills.
- Ability to work in agile environments and manage multiple priorities.
- Strong documentation and presentation abilities.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
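As a rough sketch of the operational automation items above (standard library only; the mount points, services, and thresholds are assumptions for the example, not a prescribed implementation):

```python
# Illustrative system-check script using only the standard library.
# Mount points, service names, and thresholds below are assumptions for the example.
import shutil
import subprocess

DISK_ALERT_PERCENT = 85
MOUNT_POINTS = ["/", "/var"]
SERVICES = ["nginx", "cron"]

def check_disks() -> list[str]:
    alerts = []
    for mount in MOUNT_POINTS:
        usage = shutil.disk_usage(mount)
        used_pct = usage.used / usage.total * 100
        if used_pct >= DISK_ALERT_PERCENT:
            alerts.append(f"{mount} is {used_pct:.0f}% full")
    return alerts

def check_services() -> list[str]:
    alerts = []
    for svc in SERVICES:
        result = subprocess.run(["systemctl", "is-active", "--quiet", svc])
        if result.returncode != 0:
            alerts.append(f"service {svc} is not active")
    return alerts

if __name__ == "__main__":
    for problem in check_disks() + check_services():
        print("ALERT:", problem)
```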
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
We are looking for a Cloud Security Engineer to join our organization. The ideal candidate will have strong hands-on experience in ensuring robust security controls across both applications and organizational data, and is expected to work closely with multiple stakeholders to architect, implement, and monitor effective safeguards. They will champion secure design, conduct risk assessments, drive vulnerability management, and promote data protection best practices for the organization.
Responsibilities
- Design and implement security measures for website and API applications.
- Conduct security-first code reviews, vulnerability assessments, and posture audits for business-critical applications.
- Conduct security testing activities like SAST & DAST by integrating them within the project’s CI/CD pipelines and development workflows.
- Manage all penetration testing activities including working with external vendors for security certification of business-critical applications.
- Develop and manage data protection policies and RBAC controls for sensitive organizational data like PII, revenue, secrets, etc.
- Oversee encryption, key management, and secure data storage solutions.
- Monitor threats and respond to incidents involving application and data breaches.
- Collaborate with engineering, data, product and compliance teams to achieve security-by-design principles.
- Ensure compliance with regulatory standards (GDPR, HIPAA, etc.) and internal organizational policies.
- Automate recurrent security tasks using scripts and security tools.
- Maintain documentation around data flows, application architectures, and security controls.
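As an illustration of the "automate recurrent security tasks" responsibility above, a minimal sketch, assuming boto3 credentials with read access to bucket settings, that flags S3 buckets whose public access block is missing or incomplete (the audit rule itself is an example, not this team's actual policy):

```python
# Hypothetical audit: list S3 buckets whose public access block is missing or incomplete.
# Assumes boto3 credentials are configured with permission to read bucket settings.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_block() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                flagged.append(name)
        except ClientError as err:
            # No configuration at all counts as missing.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_block():
        print("Public access block missing/partial:", name)
```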
Requirements
- 10+ years’ experience in application security and/or data security engineering.
- Strong understanding of security concepts including zero trust architecture, threat modeling, security frameworks (like SOC 2, ISO 27001), and best practices in corporate security environments.
- Strong knowledge of modern web/mobile application architectures and common vulnerabilities (like OWASP Top 10, etc.)
- Proficiency in secure coding practices and code reviews for major programming languages and frameworks, including Java, .NET, Python, JavaScript, TypeScript, React, etc.
- Hands-on experience with at least two software tools for vulnerability scanning and static/dynamic analysis, such as Checkmarx, Veracode, SonarQube, Burp Suite, AppScan, etc.
- Advanced understanding of data encryption, key management, and secure storage (SQL, NoSQL, Cloud) and secure transfer mechanisms.
- Working experience in Cloud Environments like AWS & GCP and familiarity with the recommended security best practices.
- Familiarity with regulatory frameworks such as GDPR, HIPAA, PCI DSS and the controls needed to implement them.
- Experience integrating security into DevOps/CI/CD processes.
- Hands-on Experience with automation in any of the scripting languages (Python, Bash, etc.)
- Ability to conduct incident response and forensic investigations related to application/data breaches.
- Excellent communication and documentation skills.
Good To Have:
- Cloud Security certifications in any one of the below:
- AWS Certified Security – Specialty
- GCP Professional Cloud Security Engineer
- Experience with container security (Docker, Kubernetes) and cloud security tools (AWS, Azure, GCP).
- Experience safeguarding data storage solutions like GCP GCS, BigQuery, etc.
- Hands-on work with any SIEM/SOC platforms for monitoring and alerting.
- Knowledge of data loss prevention (DLP) solutions and IAM (identity and access management) systems.
Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave
Job Description:
Experience - 5 to 8 years
Role - Senior consultant
Work mode - Hybrid (3 days WFO)
Location - Bangalore / Pune
JOB DESCRIPTION :
Application Security Specialists are instrumental in fortifying the security framework that underpins the software delivery processes of our clients. These experts thrive in collaborative settings, engaging with diverse teams across various disciplines to pinpoint and mitigate vulnerabilities in code, systems architecture, and infrastructure. With a profound technical acumen rooted in security practices and a keen understanding of agile methodologies, they advocate for security integration as a fundamental aspect of software development. Their work transcends mere compliance; it is about embedding a culture of security that aligns with agile and DevOps philosophies, ensuring that security measures enhance, rather than hinder, organizational objectives. By guiding teams and clients through the nuances of security automation and best practices, Application Security Specialists not only safeguard digital assets but also champion a mindset where security and development go hand in hand towards achieving superior outcomes.
Job Responsibilities:
As an Application Security Specialist, you will play a crucial role in enhancing the security posture of our software delivery process.
Embed security throughout the software delivery lifecycle, ensuring secure application development from start to finish.
Build and define comprehensive security practices tailored to our delivery methodologies.
Automate and optimize security measures in line with the application lifecycle, ensuring efficient and effective security protocols.
Serve as a consultant and advisor to both the delivery team and clients, providing expert guidance on security best practices and risk mitigation strategies.
Work closely with delivery, DevOps and Cloud teams to identify and reduce risks associated with code development, system architecture, and infrastructure.
Job Qualifications:
BFSI experience is preferred.
Experience as a security engineer with direct involvement in working with delivery teams to identify vulnerabilities in code and systems architecture.
Demonstrated experience with implementing security automation and familiarity with agile development methodologies.
Ability to collaborate effectively with software product delivery teams, speaking their language and working towards common goals.
Technical Skills:
In-depth knowledge and experience with OWASP and SANS standards.
Proficiency in manual and automated penetration testing tools and techniques.
Experience with SAST, DAST, dependency checking, and container vulnerability assessment tools such as Checkmarx, Burp, ZAP, Fortify, Trivy, etc.
Knowledge and experience in password/secret management tools and techniques.
Understanding of DevSecOps and experience in security automation.
Comprehensive understanding of web technologies, common web frameworks, their vulnerabilities, and mitigations.
Basic understanding of firewall, virtualization, container, networking, and OS security.
Knowledge of cloud security best practices and basic knowledge of cloud providers like AWS, Azure and GCP.
Professional Skills:
Excellent communication and interpersonal skills, with the ability to manage relationships at senior levels of leadership.
Strong consulting skills, including the ability to promote security awareness and influence decision-making.
Ability to anticipate problems and understand the long-term implications of decisions and actions.
Experience in developing security testing plans and integrating them into the software development lifecycle.
Preferred Skills:
Experience with manual and automated security code review.
Basic knowledge of security policies and standards such as PCI-DSS, ISO 27001 (ISMS), and GDPR.
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC):
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, and secure access policies and AWS organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/Bash, Python, or similar) for automation.
- Bachelor’s or Master’s degree.
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Review Criteria
- Strong QA Automation / SDET Engineer Profile
- 3+ YOE in Automation QA Testing
- Must have proven experience with Playwright for end-to-end test automation and strong hands-on experience with Java or JavaScript
- Must have strong proficiency in TestNG and/or JUnit
- Solid understanding of QA principles, testing strategies, and best practices
- Excellent debugging and troubleshooting capabilities
- Product Company only
Preferred
- Working knowledge of Gradle
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- From which college did you complete your undergraduation (UG)?
- How many years of experience do you have with the Playwright framework?
Role & Responsibilities
We are seeking a highly skilled Senior QA Automation Engineer to join our dynamic team. This role is ideal for a passionate automation expert who thrives on building robust test frameworks, mentoring team members, and driving quality excellence across our products. You will play a pivotal role in shaping our automation strategy while ensuring the delivery of reliable, scalable test solutions.
Key Responsibilities:
- Design, develop, and maintain automated test suites using Playwright for end-to-end testing (a brief illustrative sketch follows this list)
- Implement and optimize test frameworks using TestNG/JUnit to ensure comprehensive test coverage
- Build and manage automation projects using Gradle build tool
- Develop and execute comprehensive testing strategies that align with product requirements and business objectives
- Mentor and guide junior QA engineers in automation best practices and technical skills development
- Conduct thorough code reviews to maintain high-quality automation code standards
- Debug and troubleshoot complex test failures, identifying root causes and implementing solutions
- Collaborate with cross-functional teams including developers, product managers, and DevOps to integrate automation into CI/CD pipelines
- Drive continuous improvement initiatives in test automation processes and methodologies
- Champion quality assurance best practices across the organization
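A minimal sketch of a Playwright end-to-end check, shown with Playwright's Python sync API purely to keep the examples on this page in one language (the role itself calls for Java or JavaScript); the URL, selectors, and credentials are invented for illustration.

```python
# Hypothetical end-to-end login check using Playwright's sync Python API.
# URL, selectors, and credentials are invented for illustration only.
from playwright.sync_api import sync_playwright

def test_login_shows_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/login")
        page.fill("#email", "qa.user@example.com")
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # Wait for a post-login element before asserting.
        dashboard = page.wait_for_selector("text=Dashboard")
        assert dashboard is not None and dashboard.is_visible()
        browser.close()
```

The equivalent Java/TestNG or TypeScript test follows the same structure: navigate, act, then wait for a stable post-condition before asserting.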
Ideal Candidate
- Programming Proficiency: Strong hands-on experience with Java or JavaScript
- Automation Expertise: Proven experience with Playwright for end-to-end test automation
- Testing Frameworks: Proficiency in TestNG and/or JUnit
- Build Tools: Working knowledge of Gradle
- Testing Methodologies: Solid understanding of QA principles, testing strategies, and best practices
- Problem-Solving Skills: Excellent debugging and troubleshooting capabilities
- Leadership: Demonstrated ability to mentor junior engineers and conduct effective code reviews
- Passion: Strong enthusiasm for test automation and commitment to quality
Preferred Qualifications:
- Experience with AI/ML technologies and their application in testing
- Knowledge of additional automation tools and frameworks
- Experience with CI/CD integration and DevOps practices
- Familiarity with API testing and performance testing tools
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (Linux is a must, i.e., bash/sh/zsh, etc.; Windows is nice to have)
3. Basic knowledge of Python, Java, or JS (Python preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CI/CD
8. Docker/containers (K8s/Terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Familiarity with data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to:
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues (a brief illustrative sketch follows this list).
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed
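One illustrative way to approach the detection items above, assuming AWS since the JD prefers it: a boto3 sketch that creates a CloudWatch CPU alarm wired to an alerting topic. The instance ID, topic ARN, and thresholds are placeholders, not a prescribed setup.

```python
# Hypothetical alerting setup: a CloudWatch CPU alarm that notifies an SNS topic.
# The instance ID and topic ARN are placeholders; assumes boto3 credentials are configured.
import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str) -> None:
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                # 5-minute samples
        EvaluationPeriods=3,       # sustained for 15 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # notify the on-call channel via SNS
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```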
Job Title: DevOps Engineer – Fintech (Product-Based)
Experience: 5+ Years
Location: Mumbai
Job Type: Full-Time | Product Company
Role Summary:
We are hiring a DevOps Engineer with strong product-based experience to manage infrastructure for a Fintech platform built on stateful microservices.
The role involves working across hybrid cloud + on-prem, with deep expertise in Kubernetes, Helm, GitOps, IaC, and Cloud Networking.
Mandatory Skills:
Product-based experience, deep Kubernetes (managed & self-managed), custom Helm Chart development, ArgoCD/FluxCD (GitOps), strong AWS/Azure cloud networking & security, IaC module development (Terraform/Pulumi/CloudFormation), experience with stateful microservices (DBs/queues/caches), multi-tenant deployments, HA/load balancing/SSL/TLS/cert management.
Key Responsibilities:
- Deploy and manage stateful microservices in production.
- Handle both managed & self-managed Kubernetes clusters.
- Develop and maintain custom Helm Charts.
- Implement GitOps pipelines using ArgoCD/FluxCD.
- Architect and operate secure infra on AWS/Azure (VPC, IAM, networking).
- Build reusable IaC modules using Terraform/CloudFormation/Pulumi.
- Design multi-tenant cluster deployments.
- Manage HA, load balancers, certificates, DNS, and networking.
Mandatory Skills:
- Product-based company experience.
- Strong Kubernetes (EKS/AKS/GKE + self-managed).
- Custom Helm Chart development.
- GitOps tools: ArgoCD/FluxCD.
- AWS/Azure cloud networking & security.
- IaC module development (Terraform/Pulumi/CloudFormation).
- Experience with stateful components (DBs, queues, caches).
- Understanding of multi-tenant deployments, HA, SSL/TLS, ingress, LB.
Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Job Description
We are seeking an experienced DevOps Engineer to build and manage infrastructure for a FinTech product company operating with stateful microservices. The deployment environments include hybrid cloud and on-premise setups. The ideal candidate must have strong production experience with Kubernetes, cloud platforms, and infrastructure automation.
Key Responsibilities
- Design, build, and manage infrastructure for stateful microservices (databases, queues, caching layers).
- Work on Kubernetes environments—both managed (EKS/AKS/GKE) and self-managed clusters.
- Build, enhance, and maintain custom Helm Charts for complex deployments.
- Set up and manage CI/CD pipelines using ArgoCD, FluxCD, or similar GitOps tools.
- Architect and optimize multi-tenant deployment models.
- Implement and manage high availability, load balancing, certificate management (SSL/TLS).
- Design deployment architectures based on business requirements.
- Manage cloud infrastructure on AWS/Azure including VPC, IAM, cloud networking, and security.
- Work with Infrastructure-as-Code (IaC) tools (Terraform/CloudFormation/Pulumi), including writing reusable modules.
- Monitor, troubleshoot, and optimize performance across production environments.
- Ensure security best practices in networking, access control, and secrets management.
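As a small sketch of the monitoring and troubleshooting item above, using the official Kubernetes Python client with whatever kubeconfig context points at the target cluster (managed or self-managed); this is an illustration of the kind of check involved, not a tool mandated by this role.

```python
# Illustrative check: list pods that are not Running/Succeeded across all namespaces.
# Assumes a kubeconfig for the target cluster is available.
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # or config.load_incluster_config() when run inside a cluster
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```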
Mandatory Skills
- 5+ years of DevOps experience in product-based companies (not services/consulting).
- Strong hands-on experience with stateful microservices in production.
- Deep expertise in Kubernetes (managed + self-managed).
- Strong ability to write custom Helm Charts.
- Experience with multi-tenant production environments.
- Expertise in AWS or Azure (cloud networking, IAM, VPC, security groups, etc.).
- Experience setting up GitOps-based CI/CD (ArgoCD/FluxCD).
- Strong understanding of HA, load balancing, DNS, SSL/TLS certificates.
- Ability to justify architectural decisions and propose deployment designs.
- Hands-on experience with IaC tools and writing custom Terraform/Pulumi modules.
Nice to Have
- Exposure to hybrid cloud deployments
- Knowledge of on-premise orchestration & networking
- Experience with service mesh (e.g., Istio, Linkerd)
- Experience with monitoring/logging tools (Prometheus, Grafana, Loki, ELK)
Role: Lead Software Engineer (Backend)
Salary: INR 28L to INR 40L per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Hulimavu, Bangalore, India
Experience: 6-10 years
About AbleCredit:
AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.
The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).
What Work You’ll Do
- Build best-in-class AI systems that enterprises can trust, where reliability and explainability are not optional.
- Operate in founder mode — build, patch, or fork, whatever it takes to ship today, not next week.
- Work at the frontier of AI x Systems — making AI models behave predictably to solve real, enterprise-grade problems.
- Own end-to-end feature delivery — from requirement scoping to design, development, testing, deployment, and post-release optimization.
- Design and implement complex, distributed systems that support large-scale workflows and integrations for enterprise clients.
- Operate with full technical ownership — make architectural decisions, review code, and mentor junior engineers to maintain quality and velocity.
- Build scalable, event-driven services leveraging AWS Lambda, SQS/SNS, and modern asynchronous patterns.
- Work with cross-functional teams to design robust notification systems, third-party integrations, and data pipelines that meet enterprise reliability and security standards.
The Skills You Have
- Strong background as an Individual Contributor — capable of owning systems from concept to production without heavy oversight.
- Expertise in system design, scalability, and fault-tolerant architecture.
- Proficiency in Node.js (bonus) or another backend language such as Go, Java, or Python.
- Deep understanding of SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB) systems.
- Hands-on experience with AWS services — Lambda, API Gateway, S3, CloudWatch, ECS/EKS, and event-based systems.
- Experience in designing and scaling notification systems and third-party API integrations.
- Proficiency in event-driven architectures and multi-threading/concurrency models.
- Strong understanding of data modeling, security practices, and performance optimization.
- Familiarity with CI/CD pipelines, automated testing, and monitoring tools.
- Strong debugging, performance tuning, and code review skills.
What You Should Have Done in the Past
- Delivered multiple complex backend systems or microservices from scratch in a production environment.
- Led system design discussions and guided teams on performance, reliability, and scalability trade-offs.
- Mentored SDE-1 and SDE-2 engineers, enabling them to deliver features independently.
- Owned incident response and root cause analysis for production systems.
- (Bonus) Built or contributed to serverless systems using AWS Lambda, with clear metrics on uptime, throughput, and cost-efficiency.
Highlights:
- PTO & Holidays
- Opportunity to work with a core Gen AI startup.
- Flexible hours and an extremely positive work environment
Position: QA Engineer – Machine Learning Systems (5 - 7 years)
Location: Remote (Company in Mumbai)
Company: Big Rattle Technologies Private Limited
Immediate Joiners only.
Summary:
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.
Key Responsibilities:
Test Strategy & Governance
- Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
- Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.
Data Quality & Transformation
- Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
- Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
- Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
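A minimal sketch of the automated data-quality checks described above, using pytest and pandas; the file path, column names, and thresholds are invented placeholders rather than the project's actual data contract.

```python
# Hypothetical data-quality tests for a landed extract, run under pytest.
# File path, required columns, and thresholds are assumptions for illustration.
import pandas as pd
import pytest

REQUIRED_COLUMNS = {"site_id", "material_id", "timestamp", "price", "volume"}
MAX_NULL_FRACTION = 0.01

@pytest.fixture(scope="module")
def extract() -> pd.DataFrame:
    return pd.read_parquet("landed/daily_extract.parquet")

def test_schema_contract(extract):
    missing = REQUIRED_COLUMNS - set(extract.columns)
    assert not missing, f"missing columns: {missing}"

def test_null_thresholds(extract):
    null_fractions = extract[list(REQUIRED_COLUMNS)].isna().mean()
    offenders = null_fractions[null_fractions > MAX_NULL_FRACTION]
    assert offenders.empty, f"null fraction too high: {offenders.to_dict()}"

def test_no_duplicate_keys(extract):
    dupes = extract.duplicated(subset=["site_id", "material_id", "timestamp"]).sum()
    assert dupes == 0, f"{dupes} duplicate rows found"
```

The same checks can be wired into the CI pipeline and into the AML data-gen run so a failing contract blocks downstream training.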
Model Training & Evaluation
- Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
- Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
- Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
- Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
Predictions, Optimization & Guardrails
- Validate batch predictions: result shapes, coverage, latency, and failure handling.
- Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
- Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
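As a sketch of the idempotency and retry/backoff expectations in the API-push item above (Python requests/urllib3; the endpoint, header name, and payload shape are invented, and real third-party contracts will differ):

```python
# Hypothetical push of optimization results with retries, backoff, and an idempotency key.
# Endpoint, header name, and payload shape are invented for illustration.
import uuid
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session() -> requests.Session:
    retry = Retry(
        total=5,
        backoff_factor=1,                      # 1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 502, 503, 504],
        allowed_methods=["POST"],              # retry POSTs only because the call is idempotent
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def push_result(payload: dict) -> dict:
    session = build_session()
    response = session.post(
        "https://thirdparty.example.com/v1/prices",
        json=payload,
        headers={"Idempotency-Key": str(uuid.uuid4())},  # downstream dedupes on this key
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # delivery receipt from the downstream system
```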
Pipelines & E2E
- Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
- Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
Automation & Tooling
- Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
- Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
- Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).
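And a companion sketch for the metric-gate idea in the first automation item above: a pytest check that fails promotion when MAPE exceeds a threshold or regresses against the champion. In practice the metrics would be loaded from the AML run/registry; the numbers and thresholds here are placeholders.

```python
# Hypothetical promotion gate: challenger must beat a MAPE threshold and not regress vs. champion.
# Metric values would normally come from the AML run/registry; values here are placeholders.
import numpy as np

MAPE_THRESHOLD = 0.12     # assumed acceptance gate
MAX_REGRESSION = 0.01     # challenger may be at most 1 point worse than the champion

def mape(actual, predicted) -> float:
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

def test_challenger_meets_gate():
    actual = np.array([100.0, 120.0, 90.0, 110.0])
    challenger_pred = np.array([98.0, 125.0, 88.0, 112.0])
    champion_mape = 0.05  # placeholder: previous production model's score

    challenger_mape = mape(actual, challenger_pred)
    assert challenger_mape <= MAPE_THRESHOLD, f"MAPE {challenger_mape:.3f} above gate"
    assert challenger_mape <= champion_mape + MAX_REGRESSION, "regression vs. champion"
```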
Reporting & Quality Ops
- Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
- Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.
Required Skills (hands-on experience in the following):
- Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
- Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
- Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
- API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) plus idempotency and retry patterns.
- Familiar with feature stores/feature engineering concepts and reproducibility.
- Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
- Certification in Azure Data or ML Engineer Associate is a plus.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, E-commerce, etc. We also specialise in Product Development for our clients.
Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
- Opportunity to work on diverse projects for Fortune 500 clients.
- Competitive salary and performance-based growth.
- Dynamic, collaborative, and growth-oriented work environment.
- Direct impact on product quality and client satisfaction.
- 5-day hybrid work week.
- Certification reimbursement.
- Healthcare coverage.
How to Apply:
Interested candidates are invited to submit their resume detailing their experience. Please describe your work experience and the kinds of projects you have worked on, highlighting your contributions and accomplishments.
As a Google Cloud Infrastructure / DevOps Engineer, you will design, implement, and maintain cloud infrastructure while enabling efficient development operations. This role bridges development and operations, with a strong focus on automation, scalability, reliability, and collaboration. You will work closely with cross-functional teams to optimize systems and enhance CI/CD pipelines.
Key Responsibilities:
Cloud Infrastructure Management
- Manage and monitor Google Cloud Platform (GCP) services and components.
- Ensure high availability, scalability, and security of cloud resources.
CI/CD Pipeline Implementation
- Design and implement automated pipelines for application releases.
- Build and maintain CI/CD workflows.
- Collaborate with developers to streamline deployment processes.
- Automate testing, deployment, and rollback procedures.
Infrastructure as Code (IaC)
- Use Terraform (or similar tools) to define and manage infrastructure.
- Maintain version-controlled infrastructure code.
- Ensure environment consistency across dev, staging, and production.
Monitoring & Troubleshooting
- Monitor system performance, resource usage, and application health.
- Troubleshoot cloud infrastructure and deployment pipeline issues.
- Implement proactive monitoring and alerting.
Security & Compliance
- Apply cloud security best practices.
- Ensure compliance with industry standards and internal policies.
- Collaborate with security teams to address vulnerabilities.
Collaboration & Documentation
- Work closely with development, operations, and QA teams.
- Document architecture, processes, and configurations.
- Share knowledge and best practices with the team.
Qualifications:
Education
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience
- Minimum 3 years of industry experience.
- At least 1 year designing and managing production systems on GCP.
- Familiarity with GCP services (Compute Engine, GKE, Cloud Storage, etc.).
- Exposure to Docker, Kubernetes, and microservices architecture.
Skills
- Proficiency in Python or Bash for automation.
- Strong understanding of DevOps principles.
- Knowledge of Jenkins or other CI/CD tools.
- Experience with GKE for container orchestration.
- Familiarity with event streaming platforms (Kafka, Google Cloud Pub/Sub).
About the Company: Bits In Glass – India
Industry Leader
- Established for 20+ years with global operations in the US, Canada, UK, and India.
- In 2021, Bits In Glass joined hands with Crochet Technologies, strengthening global delivery capabilities.
- Offices in Pune, Hyderabad, and Chandigarh.
- Specialized Pega Partner since 2017, ranked among the top 30 Pega partners globally.
- Long-standing sponsor of the annual PegaWorld event.
- Elite Appian partner since 2008 with deep industry expertise.
- Dedicated global Pega Center of Excellence (CoE) supporting customers and development teams worldwide.
Employee Benefits
- Career Growth: Clear pathways for advancement and professional development.
- Challenging Projects: Work on innovative, high-impact global projects.
- Global Exposure: Collaborate with international teams and clients.
- Flexible Work Arrangements: Supporting work-life balance.
- Comprehensive Benefits: Competitive compensation, health insurance, paid time off.
- Learning Opportunities: Upskill on AI-enabled Pega solutions, data engineering, integrations, cloud migration, and more.
Company Culture
- Collaborative Environment: Strong focus on teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Diverse and respectful workplace culture.
- Continuous Learning: Encourages certifications, learning programs, and internal knowledge sessions.
Core Values
- Integrity: Ethical practices and transparency.
- Excellence: Commitment to high-quality work.
- Client-Centric Approach: Delivering solutions tailored to client needs.
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
About the job
Key Responsibilities
1. Design, develop, and maintain dynamic, responsive web applications using React.js and Node.js.
2. Build efficient, reusable front-end and back-end components for seamless performance.
3. Integrate RESTful APIs and third-party services.
4. Collaborate with UI/UX designers, mobile teams, and backend developers to deliver high-quality products.
5. Write clean, maintainable, and efficient code following modern coding standards.
6. Debug, test, and optimize applications for maximum speed and scalability.
7. Manage databases using MongoDB and handle cloud deployment (AWS, Firebase, or similar).
8. Participate in code reviews, architectural discussions, and agile development cycles.
Required Skills & Experience
1. 1-5 years of proven experience in Full Stack Web Development using the MERN stack.
2. Proficiency in React.js, Node.js, Express.js, MongoDB, and JavaScript (ES6+).
3. Strong understanding of HTML5, CSS3, Bootstrap, and Tailwind CSS.
4. Hands-on experience with API design, state management (Redux, Context API), and authentication (JWT/OAuth).
5. Familiarity with version control tools (Git, Bitbucket).
6. Good understanding of database design, schema modeling, and RESTful architecture.
7. Strong problem-solving skills and ability to work in a collaborative team environment.
Perks & Benefits
- Onsite opportunity in our modern Greater Noida office.
- Competitive salary based on skills and experience.
- Exposure to real-world projects and latest tech stacks.
- Work with a creative and talented team.
- Career growth opportunities in full-stack and cross-platform development.
Benefits:
- Health insurance
- Leave encashment
- Paid sick time
- Paid time off
- Work from home
Application Question(s):
- How many years of _____ experience do you have?
- What is your current CTC?
- What is your expected CTC?
Job Summary
We are looking for an experienced Backend Developer proficient in .NET, Node.js, and MS SQL Server to join our technical team. The candidate will be responsible for building, maintaining, and optimizing scalable backend services and APIs, ensuring system reliability, performance, and security.
Key Responsibilities
- Design, develop, and maintain backend applications and APIs using .NET (Core/ASP.NET) and Node.js.
- Develop and manage MS SQL Server databases, including schema design, stored procedures, indexing, and performance optimization.
- Integrate backend logic with various third-party systems and APIs.
- Ensure scalability, high performance, and security across backend systems.
- Write clean, maintainable, and well-documented code following best practices.
- Debug and resolve production issues, ensuring system stability and reliability.
- Collaborate with QA engineers, DevOps, and other backend developers to deliver end-to-end solutions.
- Participate in Agile development processes including sprint planning, daily stand-ups, and retrospectives.
- Stay updated with emerging backend technologies and contribute to continuous improvement.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 3–4 years of professional experience in backend development.
- Strong hands-on experience with .NET Core / ASP.NET / C#.
- Strong hands-on experience with Node.js (Express.js or NestJS preferred).
- Proficiency in MS SQL Server (T-SQL, stored procedures, performance tuning, query optimization).
- Experience developing and consuming RESTful APIs.
- Knowledge of API security standards (JWT, OAuth2, etc.).
- Familiarity with Git or other version control systems.
- Experience in Agile/Scrum development environments.
Nice to Have
- Experience with cloud platforms like Azure or AWS.
- Familiarity with ORM frameworks (Entity Framework, Sequelize).
- Exposure to CI/CD pipelines and containerization (Docker).
- Understanding of Redis or message queue systems (RabbitMQ, Kafka).
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and teamwork skills.
- High sense of responsibility and ownership of assigned projects.
- Ability to work independently under minimal supervision.
Compensation
- Competitive salary based on experience and technical expertise.
- Performance-based bonuses and career growth opportunities.
Job Type: Full-time
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
Role: DevOps Engineer
Experience: 2–3+ years
Location: Pune
Work Mode: Hybrid (3 days Work from office)
Mandatory Skills:
- Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
- Proficiency in scripting languages (Bash, Python, PowerShell)
- Hands-on experience with containerization (Docker) and container management
- Proven experience managing infrastructure (On-premise or AWS/VMware)
- Experience with version control systems (Git/Bitbucket/GitHub)
- Familiarity with monitoring and logging tools for system performance tracking
- Knowledge of security best practices and compliance standards
- Bachelor's degree in Computer Science, Engineering, or related field
- Willingness to support production issues during odd hours when required
Preferred Qualifications:
- Certifications in AWS, Docker, or VMware
- Experience with configuration management tools like Ansible
- Exposure to Agile and DevOps methodologies
- Hands-on experience with Virtual Machines and Container orchestration
Review Criteria
- Strong DevOps /Cloud Engineer Profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.); a minimal exporter sketch follows this list.
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
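For the monitoring and alerting item above, one hedged illustration is a small custom exporter built with the prometheus_client Python library. The metric name, port, and the queue-depth check are placeholders rather than part of any actual stack.

```python
# Minimal sketch: expose a custom gauge for Prometheus to scrape.
# Assumes the prometheus_client package; metric name and port are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge(
    "app_background_queue_depth",
    "Number of jobs waiting in the background queue",
)


def sample_queue_depth() -> int:
    # Placeholder for a real check (e.g., querying Redis or a database).
    return random.randint(0, 50)


if __name__ == "__main__":
    start_http_server(9102)  # metrics served at http://localhost:9102/metrics
    while True:
        QUEUE_DEPTH.set(sample_queue_depth())
        time.sleep(15)
```

A Prometheus server would then scrape the endpoint, and Grafana dashboards or Alertmanager rules could alert on the gauge.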
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
Designation: Senior Python Django Developer
Position: Senior Python Developer
Job Types: Full-time, Permanent
Pay: Up to ₹800,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Bhopal Indrapuri (MP) and Bangalore JP Nagar
Experience: Back-end development: 4 years (Required)
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 4 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (a minimal pytest sketch follows this list).
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
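As a minimal illustration of the TDD expectation above, here is a pytest-style sketch around a hypothetical settlement reconciliation helper; the function and its fields are invented for illustration and are not part of any specific codebase.

```python
# Minimal TDD sketch: a hypothetical settlement reconciliation helper plus its tests.
# Function and field names are illustrative only; run with `pytest`.
from decimal import Decimal


def reconcile(ledger_total: Decimal, gateway_total: Decimal,
              tolerance: Decimal = Decimal("0.01")) -> bool:
    """Return True when ledger and gateway totals agree within the tolerance."""
    return abs(ledger_total - gateway_total) <= tolerance


def test_reconcile_within_tolerance():
    assert reconcile(Decimal("1000.00"), Decimal("1000.01"))


def test_reconcile_flags_mismatch():
    assert not reconcile(Decimal("1000.00"), Decimal("990.00"))
```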
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise to transform customer businesses by making the right use of data.
About the Role
We're seeking a talented and versatile Full Stack Developer with a strong foundation in mobile app development to join our dynamic team. You'll play a pivotal role in designing, developing, and maintaining high-quality software applications across various platforms.
Responsibilities
- Full Stack Development: Design, develop, and implement both front-end and back-end components of web applications using modern technologies and frameworks.
- Mobile App Development: Develop native mobile applications for iOS and Android platforms using Swift and Kotlin, respectively.
- Cross-Platform Development: Explore and utilize cross-platform frameworks (e.g., React Native, Flutter) for efficient mobile app development.
- API Development: Create and maintain RESTful APIs for integration with front-end and mobile applications.
- Database Management: Work with databases (e.g., MySQL, PostgreSQL) to store and retrieve application data.
- Code Quality: Adhere to coding standards, best practices, and ensure code quality through regular code reviews.
- Collaboration: Collaborate effectively with designers, project managers, and other team members to deliver high-quality solutions.
Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong programming skills in [relevant programming languages, e.g., JavaScript, Python, Java, etc.].
- Experience with [relevant frameworks and technologies, e.g., React, Angular, Node.js, Swift, Kotlin, etc.].
- Understanding of software development methodologies (e.g., Agile, Waterfall).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.
Preferred Skills (Optional)
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Knowledge of DevOps practices and tools.
- Experience with serverless architectures.
- Contributions to open-source projects.
What We Offer
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and supportive work environment.
- A chance to work on cutting-edge projects.
About the Role:
We are seeking an experienced Data Engineer to lead and execute the migration of existing Databricks-based pipelines to Snowflake. The role requires strong expertise in PySpark/Spark, Snowflake, DBT, and Airflow, with additional exposure to DevOps and CI/CD practices. The candidate will be responsible for re-architecting data pipelines, ensuring data consistency, scalability, and performance in Snowflake, and enabling robust automation and monitoring across environments.
Key Responsibilities
Databricks to Snowflake Migration
· Analyze and understand existing pipelines and frameworks in Databricks (PySpark/Spark).
· Re-architect pipelines for execution in Snowflake using efficient SQL-based processing.
· Translate Databricks notebooks/jobs into Snowflake/DBT equivalents (see the sketch after this list).
· Ensure a smooth transition with data consistency, performance, and scalability.
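As a hedged illustration of the migration work above, the snippet below shows the kind of PySpark aggregation a Databricks notebook might contain before it is re-expressed as a Snowflake SQL query or DBT model. Table and column names are placeholders.

```python
# Minimal sketch of a Databricks-style PySpark aggregation that would be
# re-expressed as Snowflake SQL / a DBT model during migration.
# Table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

orders = spark.read.table("raw.orders")  # source table (placeholder)

daily_rollup = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("gross_amount"),
    )
)

# In Snowflake/DBT the same logic becomes a SELECT ... GROUP BY model.
daily_rollup.write.mode("overwrite").saveAsTable("analytics.orders_daily")
```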
Snowflake
· Hands-on experience with storage integrations, staging (internal/external), Snowpipe, tables/views, COPY INTO, CREATE OR ALTER, and file formats.
· Implement RBAC (role-based access control), data governance, and performance tuning.
· Design and optimize SQL queries for large-scale data processing.
DBT (with Snowflake)
· Implement and manage models, macros, materializations, and SQL execution within DBT.
· Use DBT for modular development, version control, and multi-environment deployments.
Airflow (Orchestration)
· Design and manage DAGs to automate workflows and ensure reliability (a minimal DAG sketch follows this list).
· Handle task dependencies, error recovery, monitoring, and integrations (Cosmos, Astronomer, Docker).
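A minimal Airflow DAG sketch for the orchestration duties above, assuming Airflow 2.x; the DAG ID, schedule, and dbt commands are placeholders rather than a prescribed design.

```python
# Minimal Airflow 2.x DAG sketch: two dependent tasks with retries.
# DAG id, schedule, and bash commands are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="dbt_daily_run",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    dbt_run = BashOperator(task_id="dbt_run", bash_command="dbt run --target prod")
    dbt_test = BashOperator(task_id="dbt_test", bash_command="dbt test --target prod")

    dbt_run >> dbt_test  # build models first, then run tests
```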
DevOps & CI/CD
· Develop and manage CI/CD pipelines for Snowflake and DBT using GitHub Actions, Azure DevOps, or equivalent.
· Manage version-controlled environments and ensure smooth promotion of changes across dev, test, and prod.
Monitoring & Observability
· Implement monitoring, alerting, and logging for data pipelines.
· Build self-healing or alert-driven mechanisms for critical/severe issue detection.
· Ensure system reliability and proactive issue resolution.
Required Skills & Qualifications
· 5+ years of experience in data engineering with focus on cloud data platforms.
· Strong expertise in:
· Databricks (PySpark/Spark) – analysis, transformations, dependencies.
· Snowflake – architecture, SQL, performance tuning, security (RBAC).
· DBT – modular model development, macros, deployments.
· Airflow – DAG design, orchestration, and error handling.
· Experience in CI/CD pipeline development (GitHub Actions, Azure DevOps).
· Solid understanding of data modeling, ETL/ELT processes, and best practices.
· Excellent problem-solving, communication, and stakeholder collaboration skills.
Good to Have
· Exposure to Docker/Kubernetes for orchestration.
· Knowledge of Azure Data Services (ADF, ADLS) or similar cloud tools.
· Experience with data governance, lineage, and metadata management.
Education
· Bachelor’s / Master’s degree in Computer Science, Engineering, or related field.

Global digital transformation solutions provider.
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes:
Interpret the application/feature/component design to develop the same in accordance with specifications.
Code, debug, test, document, and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
Optimise efficiency, cost, and quality.
Influence and improve customer satisfaction
Set FAST goals for self/team; provide feedback on FAST goals of team members
Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to project schedule / timelines
Number of technical issues uncovered during the execution of the project
Number of defects in the code
Number of defects post-delivery
Number of non-compliance issues
On time completion of mandatory compliance trainings
Outputs Expected:
Code:
Code as per design
Follow coding standards, templates, and checklists
Review code – for team and peers
Documentation:
Create/review templates, checklists, guidelines, and standards for design/process/development
Create/review deliverable documents: design documentation, requirements, test cases/results
Configure:
Define and govern configuration management plan
Ensure compliance from the team
Test:
Review and create unit test cases, scenarios, and execution
Review test plan created by testing team
Provide clarifications to the testing team
Domain relevance:
Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions to customers
Complete relevant domain certifications
Manage Project:
Manage delivery of modules and/or manage user stories
Manage Defects:
Perform defect RCA and mitigation
Identify defect trends and take proactive measures to improve quality
Estimate:
Create and provide input for effort estimation for projects
Manage knowledge:
Consume and contribute to project-related documents, SharePoint, libraries, and client universities
Review the reusable documents created by the team
Release:
Execute and monitor release process
Design:
Contribute to creation of design (HLD, LLD, SAD)/architecture for Applications/Features/Business Components/Data Models
Interface with Customer:
Clarify requirements and provide guidance to development team
Present design options to customers
Conduct product demos
Manage Team:
Set FAST goals and provide feedback
Understand aspirations of team members and provide guidance, opportunities, etc.
Ensure team is engaged in project
Certifications:
Take relevant domain/technology certification
Skill Examples:
Explain and communicate the design / development to the customer
Perform and evaluate test results against product specifications
Break down complex problems into logical components
Develop user interfaces and business software components
Use data models
Estimate time and effort required for developing / debugging features / components
Perform and evaluate test in the customer or target environment
Make quick decisions on technical/project related challenges
Manage a team: mentor and handle people-related issues in the team
Maintain high motivation levels and positive dynamics in the team.
Interface with other teams, designers, and other parallel practices
Set goals for self and team. Provide feedback to team members
Create and articulate impactful technical presentations
Follow high level of business etiquette in emails and other business communication
Drive conference calls with customers addressing customer questions
Proactively ask for and offer help
Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
Build confidence with customers by meeting the deliverables on time with quality.
Estimate time, effort, and resources required for developing/debugging features/components
Make appropriate utilization of software/hardware.
Strong analytical and problem-solving abilities
Knowledge Examples:
Appropriate software programs / modules
Functional and technical designing
Programming languages – proficient in multiple skill clusters
DBMS
Operating Systems and software platforms
Software Development Life Cycle
Agile – Scrum or Kanban Methods
Integrated development environment (IDE)
Rapid application development (RAD)
Modelling technology and languages
Interface definition languages (IDL)
Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
About the Role: We are looking for a Senior Software Developer with strong experience in .NET development and Microsoft Azure to help build and scale our next-generation FinTech platforms. You will work on secure, high-availability systems that power core financial services, collaborating with cross-functional teams to deliver features that directly impact our customers. You’ll play a key role in developing backend services, cloud integrations, and microservices that are performant, secure, and compliant with financial regulations.
Key Responsibilities:
- Design, develop, and maintain backend services and APIs using C# and .NET Core.
- Build and deploy cloud-native applications on Microsoft Azure, leveraging services such as App Services, Azure Functions, Key Vault, Service Bus, and Azure SQL.
- Contribute to architecture decisions and write clean, maintainable, well-tested code.
- Participate in code reviews, technical planning, and sprint ceremonies in an Agile environment.
- Collaborate with QA, DevOps, Product, and Security teams to deliver robust, secure solutions.
- Ensure applications meet high standards of security, reliability, and scalability, especially in a regulated FinTech environment.
- Support and troubleshoot production issues and contribute to continuous improvement.
Required Skills & Qualifications:
- 5–8 years of experience in software development, primarily with C# / .NET Core.
- Strong hands-on experience with Microsoft Azure, including Azure App Services, Azure Functions, Azure SQL, Key Vault, and Service Bus.
- Experience building RESTful APIs, microservices, and integrating with third-party services.
- Proficiency with Azure DevOps, Git, and CI/CD pipelines.
- Solid understanding of software design principles, object-oriented programming, and secure coding practices.
- Familiarity with Agile/Scrum development methodologies.
- Bachelor’s degree in Computer Science, Engineering, or a related field.
Skills: .NET, C#, Azure
Must-Haves
.NET with Azure Developer - Required: Function Apps, Logic Apps, Event Grid, Service Bus, Durable Functions
We are looking for a skilled Backend Developer with strong experience in building, scaling, and optimizing server-side systems. The ideal candidate is proficient in Node.js, FastAPI (Python), and database design, with hands-on experience in cloud infrastructure on AWS or GCP.
Responsibilities:
Design, develop, and maintain scalable backend services and APIs using Node.js and FastAPI (a minimal FastAPI sketch follows this list)
Build robust data models and optimize performance for SQL and NoSQL databases
Architect and deploy backend services on GCP/AWS, leveraging managed cloud services.
Implement clean, modular, and testable code with proper CI/CD and observability (logs, metrics, alerts)
Ensure system reliability, security, and high availability for production environments
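A minimal FastAPI sketch of the kind of backend service described above; the routes and request model are illustrative placeholders.

```python
# Minimal FastAPI sketch: a health endpoint and one typed POST route.
# Models and paths are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orders-service")


class OrderIn(BaseModel):
    sku: str
    quantity: int


@app.get("/healthz")
def healthcheck() -> dict:
    return {"status": "ok"}


@app.post("/orders")
def create_order(order: OrderIn) -> dict:
    # A real implementation would persist to PostgreSQL/MongoDB and emit events.
    return {"sku": order.sku, "quantity": order.quantity, "status": "accepted"}
```

Assuming the file is saved as main.py, it can be served locally with `uvicorn main:app --reload`.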
Requirements:
2–5 years of backend development experience
Strong proficiency in Node.js, FastAPI, REST APIs, and microservice architecture
Solid understanding of PostgreSQL/MySQL, MongoDB/Redis or similar NoSQL systems
Hands-on experience with AWS or GCP, Docker, and modern DevOps workflows
Experience with caching, queueing, authentication, and API performance optimization
Good to Have:
Experience with event-driven architecture, WebSockets, or serverless functions
Familiarity with Kubernetes or Terraform
Job Location: Gurugram, Haryana, India
Industry: Artificial Intelligence
About the Role:
Join Nitor Infotech as a DevOps Architect, where you will drive CI/CD pipeline and infrastructure automation initiatives. Collaborate with development teams to ensure seamless application deployment and maintenance.
Responsibilities
- CI/CD Pipeline Development: Design and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, or GitHub Actions.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Ansible, Terraform, or AWS CloudFormation.
- Cloud Platform Management: Optimize cloud infrastructure on AWS, Azure, or GCP.
- Monitoring and Alerting: Implement monitoring solutions with Prometheus and Grafana for proactive issue identification.
- DevOps Culture Promotion: Foster collaboration between development and operations teams.
- Team Leadership: Mentor junior DevOps engineers and support their career development.
- Problem Solving: Troubleshoot complex technical issues related to infrastructure and deployments.
Must-Have Skills and Qualifications
- 8+ years in DevOps or related fields.
- 3-5 years experience as a DevOps Architect or Solution Architect
- Proficient in CI/CD tools (Docker, Jenkins, GitHub Actions).
- Expertise in infrastructure automation (Ansible, Terraform).
- In-depth knowledge of cloud platforms (AWS, Azure, GCP).
- Experience with monitoring tools (Prometheus, Grafana).
- Strong scripting skills (Bash, Python).
- Excellent problem-solving and communication skills.
- Familiarity with Agile development methodologies.
Good-to-Have Skills and Qualifications
- Experience with configuration management tools (Ansible, Puppet).
- Knowledge of security best practices in DevOps.
- Familiarity with container orchestration (Kubernetes).
What We Offer
- Competitive salary and performance bonuses.
- Comprehensive health and wellness benefits.
- Opportunities for professional growth.
- Dynamic and inclusive work culture.
- Flexible work arrangements.
Key Required Skills: DevOps Architect, Terraform, Kubernetes, CI/CD Pipeline, Azure DevOps, GitHub Actions.
Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands on experience with AWS cloud or Google Cloud.
- Knowledge of container technology like Docker.
- Expertise in scripting languages. (Shell scripting or Python scripting)
- Working knowledge of LAMP/LEMP stack, networking, and version control systems like GitLab or GitHub.
Job Description:
The incumbent would be responsible for:
- Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
- Server monitoring, analysis and troubleshooting.
- Deploying multi-tier architectures using microservices.
- Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
- Automating workflow with python or shell scripting.
- CI and CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, summarizing development & service issues.
- Prepare & install solutions by determining and designing system specifications, standards & programming.
Job Description:
Position - Cloud Developer
Experience - 5 - 8 years
Location - Mumbai & Pune
Responsibilities:
- Design, develop, and maintain robust software applications using the most common and popular coding languages suitable for the application design, with a strong focus on clean, maintainable, and efficient code.
- Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
- Develop RESTful APIs and backend services aligned with modern architectural practices.
- Apply object-oriented programming principles and design patterns to build scalable systems.
- Build and maintain automated test frameworks and scripts to ensure high product quality.
- Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
- Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
- Use Git and related version control practices effectively in a team-based development environment.
- Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
Skills:
- 5+ years of experience
- Experience with IaC Module
- Terraform coding experience along with Terraform Module as a part of central platform team
- Azure/GCP cloud experience is a must
- Experience with C#/Python/Java coding is good to have
Director - Backend | Snapmint
About Snapmint
Experience: 12+ years, Location: Gurgaon.
Founded by serial entrepreneurs from IIT Bombay, Snapmint is challenging the way banking is done by building the banking experience from the ground up. Our first product provides purchase financing at 0% interest rates to 300 million banked consumers in India who do not have credit cards, using instant credit scoring and advanced underwriting systems. We look at hundreds of variables, going far beyond traditional credit models. With real-time credit approval, seamless digital loan servicing, and repayments technology, we are revolutionizing the way banking is done for today’s smartphone-wielding Indian. https://snapmint.com/
Job Overview: As Director - Backend, you will lead a team of backend engineers, driving the development of scalable, reliable, and performant systems. You will work closely with product management, front-end engineers, and other cross-functional teams to deliver high-quality solutions while ensuring alignment with the company’s technical and business goals. You will play a key role in coaching and mentoring engineers, promoting best practices, and helping to grow the backend engineering capabilities.
Key Responsibilities:
- Lead, mentor, and manage a team of backend engineers, ensuring high-quality delivery and fostering a collaborative work environment.
- Collaborate with product managers, engineers, and other stakeholders to define technical solutions and design scalable backend architectures.
- Own the development and maintenance of backend systems, APIs, and services.
- Drive technical initiatives, including infrastructure improvements, performance optimizations, and platform scalability.
- Guide the team in implementing industry best practices for code quality, security, and performance.
- Participate in code reviews, providing constructive feedback and maintaining high coding standards.
- Promote agile methodologies and ensure the team adheres to sprint timelines and goals.
- Develop and track key performance indicators (KPIs) to measure team productivity and system reliability.
- Foster a culture of continuous learning, experimentation, and improvement within the backend engineering team.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 12+ years of experience in backend development with a proven track record of leading engineering teams.
- Strong experience with a backend language, i.e., Node.js
- Experience working with databases (SQL, NoSQL), caching systems, and RESTful APIs.
- Familiarity with cloud platforms like AWS, GCP, or Azure and containerization technologies (e.g., Docker, Kubernetes).
- Solid understanding of software development principles, version control, and CI/CD practices.
- Excellent problem-solving skills and the ability to architect complex systems.
- Strong leadership, communication, and interpersonal skills.
- Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Roles & Responsibilities:
- Lead UI development for various components of our Web based Enterprise Sensing Capability.
- Take ownership of complex software projects from conception to deployment.
- Manage software delivery scope, risk, and timeline.
- Possess strong rapid prototyping skills and can quickly translate concepts into working code.
- Provide technical guidance and mentorship to junior developers.
- Contribute to front-end development using cloud technology.
- Develop innovative solutions using generative AI technologies.
- Conduct code reviews to ensure code quality and adherence to best practices.
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
- Identify and resolve technical challenges effectively.
- Stay updated with the latest trends and advancements in UI development.
- Work closely with the product team, business team, and other stakeholders.
- Design, develop, and implement user interfaces and modules, including custom reports, interfaces, and enhancements.
- Analyze and understand the functional and technical requirements of applications, solutions, and systems and translate them into software architecture and design specifications.
- Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software.
- Identify and resolve software bugs and performance issues.
- Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time.
- Maintain detailed documentation of software designs, code, and development processes.
- Customize modules to meet specific business requirements.
- Work on integrating with other systems and platforms to ensure seamless data flow and functionality.
- Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.
- Develop intuitive and responsive user interfaces (UIs) that enable users to efficiently interact with multiple datasets, leveraging modern front-end frameworks and libraries.
- Collaborate with UX designers to translate design mockups and wireframes into interactive and visually appealing user interfaces.
- Implement UI animations and transitions to enhance the user experience and provide feedback to users.
- Optimize UI performance by identifying and addressing bottlenecks, ensuring smooth and fast interactions.
- Ensure accessibility standards are met, making the UI usable for people with disabilities.
- Participate in Agile ceremonies (Daily Scrum/Refinement/Retro) to partner with the product owner and team to discuss, set, and deliver two-week developmental goals.
- Keep abreast of technology upgrades and advancements and provide recommendations to improve process efficiencies.
Mandatory Criteria
- Looking for candidates from Bangalore with a notice period (NP) of 20 days or less
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 6+ years of experience in quality assurance or software testing, with at least 3 years focused on test automation and 2+ years in a leadership or senior role.
- Solid knowledge of SQL. Experience in architecting and implementing automated testing frameworks. Strong in programming/scripting languages such as Python, Java, or JavaScript for automation development.
- Expertise with automation tools like Selenium, Playwright, Appium, or RestAssured, and integrating them into CI/CD workflows.
- Proven leadership skills, including mentoring junior engineers and managing team deliverables in an agile environment.
- Experience with test management tools (e.g., TestRail, qTest) and defect tracking systems (e.g., Jira).
- Deep understanding of testing best practices, including functional, regression, performance, and security testing.
- Ability to analyze system architecture and identify key areas for test coverage and risk mitigation.
- Experience with containerization technologies (Docker, Kubernetes) and cloud platforms (AWS preferred). Understanding of performance, security, and load testing tools (e.g., JMeter, OWASP ZAP).
- Familiarity with observability and monitoring tools (e.g., ELK Stack, Datadog, Prometheus, Grafana) for test environment analysis.
If interested kindly share your updated resume on 82008 31681
💥 We’re Hiring a TECH MAVERICK!
🔥 Hands-on Technical Engineering Manager - MEAN Stack | Mobile Apps | AWS (serverless)
📍 Location: Hyderabad | 🕒 Experience: 10+ Years
🎯 Industry: AI in Education
🚨 Tired of building “just another app”?
At OAKS & SIYA, we’re reinventing education with AI-powered digital learning experiences that impact millions of learners across India. From immersive mobile apps to automated assessments, we’re building the future of edtech — and we need someone bold enough to lead it.
We’re looking for a Full-Stack Tech Leader who thrives on ownership — from shaping architecture to mentoring devs, and from AWS automation to pixel-perfect mobile apps.
🛠️ What You’ll Do:
· Architect & build dynamic platforms with MEAN Stack
· Develop sleek hybrid mobile apps (Ionic/Capacitor)
· Own & automate AWS SAM deployments (Serverless FTW ⚡)
· Lead sprints, mentor juniors & ensure production-grade releases
· Collaborate with product, design & content teams to deliver real impact
🎯 Your Superpowers:
· MEAN Stack Pro: MongoDB, Express, Angular, Node.js
· Mobile-first Mindset: Ionic, Capacitor, app stores
· AWS Ninja: SAM, Lambda, CI/CD pipelines
· Agile Leader: Can drive teams, not just tasks
· Obsessed with clean, scalable, secure code
🌈 Why Join Us?
· 🚀 High-impact role: Lead core tech for 2 fast-scaling AI edtech products
· 🌟 Creative freedom: Your architecture, your call
· 💡 Purpose-driven work: Shaping how kids learn with AI in education
· 🎙️ Visibility & growth: Your code goes LIVE to thousands of users
Role & responsibilities
- Develop and maintain server-side applications using Go Lang.
- Design and implement scalable, secure, and maintainable RESTful APIs and microservices.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Optimize applications for performance, reliability, and scalability.
- Write clean, efficient, and reusable code that adheres to best practices.
Preferred candidate profile
- Minimum 5 years of working experience in Go Lang development.
- Proven experience in developing RESTful APIs and microservices.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
- Familiarity with CI/CD pipelines and DevOps practices
CTC: up to 20 LPA
Required Skills:
- Strong experience in SAP EWM Technical Development.
- Proficiency in ABAP (Reports, Interfaces, Enhancements, Forms, BAPIs, BADIs).
- Hands-on experience with RF developments, PPF framework, and queue monitoring.
- Understanding of EWM master data, inbound/outbound processes, and warehouse tasks.
- Experience with SAP integration technologies (IDoc, ALE, Web Services).
- Good analytical, problem-solving, and communication skills.
Nice to Have:
- Exposure to S/4HANA EWM.
- Knowledge of Functional EWM processes.
- Experience in Agile / DevOps environments.
If interested kindly share your updated resume on 82008 31681

A global technology-driven performance apparel retailer
Core Focus:
- Operate with a full DevOps mindset, owning the software lifecycle from development through production support.
- Participate in Agile ceremonies and global team collaboration, including on-call support.
Mandatory/Strong Technical Skills (6–8+ years of relevant experience required):
- Java: 4.5 to 6.5 years of experience
- AWS: Strong knowledge and working experience with cloud technologies; minimum 2 years.
- Kafka: 2 years of strong knowledge and working experience with data integration technologies
- Databases: Experience with SQL/NoSQL databases (e.g., Postgres, MongoDB).
Other Key Technologies & Practices:
- Python, Spring Boot, and API-based system design.
- Containers/Orchestration (Kubernetes).
- CI/CD tooling (GitLab) and observability tools (Splunk, Datadog).
- Familiarity with Terraform and Airflow.
- Experience in Agile methodology (Jira, Confluence).
🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, Basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)
Role: Lead Java Developer
Work Location: Chennai, Pune
No of years’ experience: 8+ years
Hybrid (3 days office and 2 days home)
Type: Fulltime
Skill Set: Java + Spring Boot + Sql + Microservices + DevOps
Job Responsibilities:
Design, develop, and maintain high-quality software applications using Java and Spring Boot.
Develop and maintain RESTful APIs to support various business requirements.
Write and execute unit tests using TestNG to ensure code quality and reliability.
Work with NoSQL databases to design and implement data storage solutions.
Collaborate with cross-functional teams in an Agile environment to deliver high-quality software solutions.
Utilize Git for version control and collaborate with team members on code reviews and merge requests.
Troubleshoot and resolve software defects and issues in a timely manner.
Continuously improve software development processes and practices.
Description:
8 years of professional experience in backend development using Java and leading a team.
Strong expertise in Spring Boot, Apache Camel, Hibernate, JPA, and REST API design
Hands-on experience with PostgreSQL, MySQL, or other SQL-based databases
Working knowledge of AWS cloud services (EC2, S3, RDS, etc.)
Experience in DevOps activities.
Proficiency in using Docker for containerization and deployment.
Strong understanding of object-oriented programming, multithreading, and performance tuning
Self-driven and capable of working independently with minimal supervision
We are seeking a disciplined, serious, and highly skilled Lead JavaScript Engineer with hands-on experience in SvelteKit and Generative AI (RAG & MCP).
Key Responsibilities:
- Lead and mentor a small team of 5 Software Developers.
- Architect, develop, and maintain web and native applications.
- Integrate Generative AI solutions (RAG & MCP).
- Manage authentication, security, and DevOps pipelines.
- Enforce disciplined coding practices, processes, and timely delivery.
- Collaborate with product and design teams to implement scalable solutions.
Requirements:
- Strong expertise in JavaScript/TypeScript, mainly with the SvelteKit framework, is a must.
- Hands-on experience with Generative AI (RAG & MCP).
- Experience leading small teams.
- Deep understanding of web app architecture, security, and DevOps.
- Knowledge of native app frameworks is a plus.
- Disciplined, self-driven, and result-oriented mindset.
- Local candidates preferred.
Additional Notes:
- Full-time office role, 6 days/week.
- Long-term commitment required; frequent job hoppers need not apply.
- Immediate joining.
Job Title : DevOps Engineer
Experience : 3+ Years
Location : Mumbai
Employment Type : Full-time
Job Overview :
We’re looking for an experienced DevOps Engineer to design, build, and manage Kubernetes-based deployments for a microservices data discovery platform.
The ideal candidate has strong hands-on expertise with Helm, Docker, CI/CD pipelines, and cloud networking — and can handle complex deployments across on-prem, cloud, and air-gapped environments.
Mandatory Skills :
✅ Helm, Kubernetes, Docker
✅ Jenkins, ArgoCD, GitOps
✅ Cloud Networking (VPCs, bare metal vs. VMs)
✅ Storage (MinIO, Ceph, NFS, S3/EBS)
✅ Air-gapped & multi-tenant deployments
Key Responsibilities :
- Build and customize Helm charts from scratch.
- Implement CI/CD pipelines using Jenkins, ArgoCD, GitOps.
- Manage containerized deployments across on-prem/cloud setups (see the sketch after this list).
- Work on air-gapped and restricted environments.
- Optimize for scalability, monitoring, and security (Prometheus, Grafana, RBAC, HPA).
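As a hedged illustration of managing containerized deployments, the sketch below uses the official Kubernetes Python client to report Deployment readiness in a namespace; the namespace name is a placeholder and cluster access via a local kubeconfig is assumed.

```python
# Minimal sketch: report Deployment readiness with the official Kubernetes Python client.
# Assumes the `kubernetes` package and a reachable kubeconfig; names are placeholders.
from kubernetes import client, config


def report_deployments(namespace: str = "default") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        state = "OK" if ready == desired else "DEGRADED"
        print(f"{namespace}/{dep.metadata.name}: {ready}/{desired} ready [{state}]")


if __name__ == "__main__":
    report_deployments("data-discovery")
```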
Exp: 10 to 15 Years
CTC: up to 25 LPA
Core skill required:
- In-depth knowledge of Angular 8 or above, TypeScript, JavaScript, HTML, and CSS
- Should have adequate knowledge of API Development Technologies to guide the Team to develop the API code and get it tested
- Excellent communication and interpersonal skills, with the ability to lead and mentor technical teams
- Should have good knowledge of the current Technology trends to implement techniques which can enhance the security, performance and stability of the product
- Should have good knowledge in preparing the Low Level Design and ensure the developers have a full understanding before work commences
- Good Knowledge of the DevOps process for CI/CD will be an added advantage
- Should have a solid understanding of the SDLC process using Waterfall, Iterative, or Agile methodology
- Good Knowledge of Quality Processes and Quality Standards
- Have experience in handling risk and providing mitigation strategies to the Product Manager
Primary skills:
- 8+ years of experience Angular 8+ version, Type Script
- Minimum 5 years of experience in web application development: HTML, CSS, JavaScript/jQuery, Entity Framework, and LINQ queries
- Been in a lead role and led a team of 3–5 people for a period of 1–2 years
- Must have a good exposure on query writing and DB management for writing stored procedures/ user-defined functions
- Should have a very good understanding of the project architecture
- Should provide Technical guidance to the team to get the task completed on time.
- Assist project manager in the project coordination/management
Kindly share your resume on 82008 31681
Data Engineer
Experience: 4–6 years
Key Responsibilities
- Design, build, and maintain scalable data pipelines and workflows.
- Manage and optimize cloud-native data platforms on Azure with Databricks and Apache Spark (1–2 years).
- Implement CI/CD workflows and monitor data pipelines for performance, reliability, and accuracy.
- Work with relational databases (Sybase, DB2, Snowflake, PostgreSQL, SQL Server) and ensure efficient SQL query performance.
- Apply data warehousing concepts including dimensional modelling, star schema, data vault modelling, Kimball and Inmon methodologies, and data lake design.
- Develop and maintain ETL/ELT pipelines using open-source frameworks such as Apache Spark and Apache Airflow.
- Integrate and process data streams from message queues and streaming platforms (Kafka, RabbitMQ).
- Collaborate with cross-functional teams in a geographically distributed setup.
- Leverage Jupyter notebooks for data exploration, analysis, and visualization.
Required Skills
- 4+ years of experience in data engineering or a related field.
- Strong programming skills in Python with experience in Pandas, NumPy, Flask.
- Hands-on experience with pipeline monitoring and CI/CD workflows.
- Proficiency in SQL and relational databases.
- Familiarity with Git for version control.
- Strong communication and collaboration skills with ability to work independently.
Job Title : Senior Technical Consultant (Polyglot)
Experience Required : 5 to 10 Years
Location : Bengaluru / Chennai (Remote Available)
Positions : 2
Notice Period : Immediate to 1 Month
Role Overview :
We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.
You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.
Mandatory Skills :
Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.
Key Skills & Requirements :
Backend (80% Focus) :
- Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
- Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
- Hands-on experience in building scalable, high-performance backend systems.
Frontend (20% Focus) :
- Proficiency in React or Angular
- Solid knowledge of HTML, CSS, JavaScript
Other Must-Haves :
- Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
- Ability to write clean, testable, and maintainable code.
- Excellent communication and client-facing skills.
Roles & Responsibilities :
- Tackle technically challenging and mission-critical problems.
- Collaborate with teams to design and implement pragmatic solutions.
- Build prototypes and showcase products to clients.
- Contribute to system design and architecture discussions.
- Engage with the broader tech community through talks and conferences.
Interview Process :
- Technical Round (Online Assessment)
- Technical Round with Client (Code Pairing)
- System Design & Architecture (Build from Scratch)
✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).
✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.
🚀 We’re Hiring: Senior Software Engineer – Backend | Remote | Full-time
Are you a backend engineering expert who thrives in high-growth startup environments?
Do you enjoy solving complex challenges with the latest technologies like Java 18+, Spring Boot 3+, and scalable microservices?
We’re looking for a Senior Software Engineer – Backend to help us build a world-class data science platform that powers cutting-edge AI solutions.
What You’ll Do:
🔹 Build and optimize scalable, secure backend systems
🔹 Collaborate with product owners & architects to shape business solutions
🔹 Deliver high-quality, production-ready code with best practices (unit testing, CI/CD, automation)
🔹 Work with modern tools like Kubernetes, Docker, NodeJS, React
🔹 Contribute to a high-performance engineering culture and drive innovation
What We’re Looking For:
✔️ 6+ years of experience with Java/Python, Spring Boot, REST APIs, microservices
✔️ Strong CS fundamentals, algorithms, and system design skills
✔️ Experience in secure web applications & scalable backend architectures
✔️ Knowledge of cloud (AWS/GCP/Azure), GitHub Actions, and Unix scripting
✔️ Startup mindset – fast learner, problem solver, impact-driven
🌍 Remote | High-growth environment | Global impact