50+ CI/CD Jobs in India


Key Responsibilities:
• Backend Development:
  - Design, develop, and maintain microservices and APIs using .NET Core, with a good understanding of the .NET Framework.
  - Implement RESTful APIs, ensuring high performance and security.
  - Optimize database queries and design schemas for SQL Server / Snowflake / MongoDB.
• Software Architecture & DevOps:
  - Design and implement scalable microservices architecture.
  - Work with Docker, Kubernetes, and CI/CD pipelines for deployment and automation.
  - Ensure best practices in security, scalability, and performance.
• Collaboration & Agile Development:
  - Work closely with UI/UX designers, backend engineers, and product managers.
  - Participate in Agile/Scrum ceremonies, code reviews, and knowledge-sharing sessions.
  - Write clean, maintainable, and well-documented code.
________________________________________
Required Skills & Qualifications:
• 8+ years of experience as a Full-Stack Developer.
• Strong experience in .NET Core, C#.
• Proficiency in React.js, JavaScript (ES6+), TypeScript.
• Experience with RESTful APIs, Microservices architecture.
• Knowledge of SQL / NoSQL databases (SQL Server, Snowflake, MongoDB).
• Experience with Git, CI/CD pipelines, Docker, and Kubernetes.
• Familiarity with Cloud services (Azure, AWS, or GCP) is a plus.
• Strong debugging and troubleshooting skills.
________________________________________
Nice-to-Have:
• Experience with GraphQL, gRPC, WebSockets.
• Exposure to serverless architecture and cloud-based solutions.
• Knowledge of authentication/authorization frameworks (OAuth, JWT, Identity Server).
• Experience with unit testing and integration testing.

Job Title: Senior Software Engineer - Backend
About the firm:
Sustainability lies at the core of Stantech AI. Our vision is to empower organizations to derive actionable insights—effectuating a smarter way of working. We operate on the premise that each client is unique and as such requires their own idiosyncratic solutions. Putting this principle into practice, we deliver tailor-made solutions to digitalize, optimize, and strategize fundamental processes underpinning client organizations. For more information, please refer to our website: www.stantech.ai
Job Description:
As a Senior Software Engineer at Stantech AI, you will play a pivotal role in designing, developing, and maintaining enterprise-grade backend services and APIs that cater to the unique needs of our clients. You will be a key member of our engineering team and will contribute to the success of projects by leveraging your expertise in Python, SQL, and modern DevOps practices.
Key Responsibilities:
- Design, develop, and maintain high-performance backend applications and RESTful APIs using the Python FastAPI framework.
- Optimize and maintain relational databases with SQL (data modeling, query optimization, and sharding) to ensure data integrity and scalability.
- Create, configure, and manage CI/CD pipelines using GitLab CI for automated build, test, and deployment workflows.
- Collaborate with cross-functional teams (data scientists, frontend engineers, DevOps) to gather requirements and deliver robust, scalable, and user-friendly solutions.
- Participate in architectural and technical decisions to drive innovation, ensure reliability, and improve system performance.
- Conduct code reviews, enforce best practices, and mentor junior engineers.
- Troubleshoot, diagnose, and resolve production issues in a timely manner.
- Stay up-to-date with industry trends, emerging technologies, and best practices.
- Bonus: Hands-on experience with server-level configuration and infrastructure—setting up load balancers, API gateways, and reverse proxies.
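For context on the responsibilities listed above, here is a minimal, illustrative FastAPI service of the shape this role describes. The endpoint names, the Order model, and the in-memory data are hypothetical placeholders, not Stantech AI's actual codebase; a real service would back onto the SQL databases and GitLab CI pipelines mentioned above.

```python
# Minimal illustrative FastAPI service; names and data are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-backend")

class Order(BaseModel):
    order_id: int
    status: str

# In-memory stand-in for the relational store a production service would use.
_ORDERS = {1: Order(order_id=1, status="shipped")}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

Such a service would typically be run with uvicorn and built, tested, and deployed through the GitLab CI workflows described above.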
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 3 years of professional experience in backend development, with strong expertise in Python and SQL.
- Proven track record building and maintaining CI/CD pipelines using GitLab CI.
- Familiarity with containerization and orchestration technologies: Docker, Kubernetes.
- Solid understanding of software development lifecycle (SDLC) best practices, design patterns, and version control (Git).
- Excellent problem-solving, debugging, and communication skills.
- Ability to work independently and collaboratively in a fast-paced environment.
- Plus: Experience with front-end technologies (HTML, CSS, JavaScript) and cloud platforms (AWS, GCP, Azure).
Financial Package:
Competitive salary in line with experience: ₹10–20 Lakhs per annum, contingent on qualifications and experience.

Role Purpose: Assist in application and API migration from legacy estate to Azure PaaS or containerized setup.
Key Skills:
- .NET/.NET Core application modernization
- Azure App Services, APIM, Functions
- Integration of CI/CD pipelines for deployments
- Familiarity with Azure SQL, Key Vault, Blob Storage
Experience Level:
- 5+ years in backend/.NET development
- 2+ years in cloud migration or app modernization projects
- Should understand cloud-native application patterns

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
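As a rough illustration of the Generative AI work described in the responsibilities above, the sketch below wraps an open-source Hugging Face model behind a simple Python function. The model choice (gpt2) and the function name are assumptions for demonstration only; a production system would use a fine-tuned or hosted LLM with proper prompt management and serving infrastructure.

```python
from transformers import pipeline

# Small open model used purely for illustration; swap in a fine-tuned or
# hosted LLM for real workloads.
_generator = pipeline("text-generation", model="gpt2")

def draft_reply(prompt: str, max_new_tokens: int = 60) -> str:
    """Generate a short continuation for the given prompt."""
    outputs = _generator(prompt, max_new_tokens=max_new_tokens, num_return_sequences=1)
    return outputs[0]["generated_text"]

if __name__ == "__main__":
    print(draft_reply("Summarise the customer's billing issue in one sentence:"))
```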
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
Job Overview:
We are seeking a skilled QA Tester with a strong background in Vulnerability Testing to ensure the security, functionality, and reliability of our applications. The ideal candidate will have experience in penetration testing, security testing methodologies, load testing, automation, and working with compliance standards.
Key Responsibilities:
- Develop and execute test cases, scripts, and security test plans for applications and APIs.
- Perform vulnerability assessments and penetration testing on web, mobile, and cloud-based applications.
- Identify security loopholes, conduct risk analysis, and provide actionable remediation recommendations.
- Collaborate with development and DevOps teams to ensure secure coding practices.
- Automate security testing and integrate it into CI/CD pipelines.
- Test applications against OWASP Top 10 vulnerabilities (e.g., SQL Injection, XSS, CSRF, SSRF).
- Utilize tools such as Burp Suite, OWASP ZAP, Metasploit, Kali Linux, Nessus, etc.
- Conduct API security testing and validate authentication & authorization mechanisms.
- Document security vulnerabilities and collaborate for timely remediation.
- Ensure compliance with industry standards like ISO 27001, GDPR, HIPAA, PCI-DSS, where applicable.
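As a small, hedged example of the kind of automated security check that can run inside a CI/CD pipeline (per the responsibilities above), the Python sketch below verifies that a target application returns common security headers and fails the build if any are missing. The target URL and header list are placeholders; real coverage would come from tools such as OWASP ZAP or Burp Suite.

```python
import requests

# Hypothetical target; replace with the application under test.
TARGET = "https://example.com"

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_security_headers(TARGET)
    if missing:
        print("Missing security headers:", ", ".join(missing))
        raise SystemExit(1)  # non-zero exit fails the CI stage
    print("All expected security headers present.")
```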
Required Skills & Qualifications:
- 3+ years of experience in Quality Assurance, with a strong focus on Security & Vulnerability Testing.
- In-depth knowledge of penetration testing tools and security frameworks.
- Experience with automated security testing in CI/CD environments (e.g., Jenkins, GitHub Actions, GitLab CI).
- Proficiency in manual and automated testing of web and mobile applications.
- Familiarity with scripting languages such as Python, Bash, or JavaScript for test automation.
- Experience working with cloud platforms like AWS, Azure, or GCP (preferred).
- Strong understanding of HTTP, APIs, and authentication protocols (OAuth, JWT, SAML).
- Good knowledge of network security, firewalls, and IDS/IPS systems.
- Certifications such as CEH, OSCP, CISSP, Security+ are a plus.

Quidcash is seeking a highly skilled and passionate Engineering Manager to lead, mentor, and grow a talented team of software engineers. You will be instrumental in shaping our technical direction, driving the development of our core products, and championing engineering excellence. This is a hands-on leadership role where you'll contribute to architectural decisions, foster a culture of innovation, and ensure your team is equipped to build scalable, robust, and intelligent systems.
If you're a leader who thrives on technical challenges, loves building high-performing teams, and is excited by the potential of AI/ML in fintech, we want to hear from you!
What You'll Do:
· Lead & Mentor: Manage, coach, and develop a team of software engineers and data scientists, fostering an inclusive, collaborative, and high-performance culture. Conduct regular 1:1s, performance reviews, and support career growth.
· Technical Leadership: Provide strong technical guidance on architecture, design, and implementation of complex systems, particularly in microservices, OOPS principles, and cloud-native applications.
· AI/ML Integration: Drive the strategy and execution for integrating AI/ML models and techniques into our products and platforms, working closely with data scientists and engineers.
· Engineering Best Practices: Establish, evangelize, and enforce best practices for software development, including code quality, testing (unit, integration, E2E), CI/CD, security, and documentation.
· DevOps Culture: Champion and implement DevOps principles to improve deployment frequency, system reliability, and operational efficiency. Oversee CI/CD pipelines and infrastructure-as-code practices.
· Roadmap & Execution: Collaborate with Product Management, Design, and other stakeholders to define the technical roadmap, translate product requirements into actionable engineering tasks, and ensure timely delivery of high-quality software.
· Architectural Vision: Contribute to and influence the long-term architectural vision for Quidcash platforms, ensuring scalability, resilience, and maintainability.
· Problem Solving: Dive deep into complex technical challenges, lead troubleshooting efforts, and make critical technical decisions.
· Recruitment & Team Building: Actively participate in recruiting, interviewing, and onboarding new engineering talent.
What You'll Bring (Must-Haves):
· Experience:
o Proven experience (6+ years) in software engineering, with a strong foundation in Object-Oriented Programming (OOP) using languages like Java, Python, C#, Go, or similar.
o Demonstrable experience (2+ years) in an engineering leadership or management role, directly managing a team of engineers.
· Technical Acumen:
o Deep understanding and practical experience with microservice architecture, including design patterns, deployment strategies (e.g., Kubernetes, Docker), and inter-service communication.
o Solid experience with cloud platforms (AWS, GCP, or Azure).
o Familiarity and practical experience with AI/ML concepts, tools, and their application in real-world products (e.g., machine learning pipelines, model deployment, MLOps).
o Proficiency in establishing and driving DevOps practices (CI/CD, monitoring, alerting, infrastructure automation).
· Leadership & Management:
o Excellent leadership, communication, and interpersonal skills with a proven ability to mentor and grow engineering talent.
o Experience in setting up and enforcing engineering best practices (code reviews, testing methodologies, agile processes).
o Strong project management skills, with experience in Agile/Scrum methodologies.
· Mindset:
o A proactive, problem-solving attitude with a passion for continuous improvement.
o Ability to thrive in a fast-paced, dynamic startup environment.
o Strong business acumen and ability to align technical strategy with company goals.
Nice-to-Haves:
· Experience in FinTech or financial services, particularly the lending industry.
· Hands-on experience with specific AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
· Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
· Contributions to open-source projects or a strong public technical presence.
· Advanced degree (M.S. or Ph.D.) in Computer Science, Engineering, or a related field.
Why Join Quidcash?
· Impact: Play a pivotal role in shaping a product that directly impacts the business growth of Indian SMEs.
· Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
· Growth: Opportunities for professional development and career advancement in a growing company.
· Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
· Benefits: Competitive salary, a comprehensive benefits package, and the chance to be part of the next fintech evolution.

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
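For illustration only, here is a minimal PySpark batch job of the type the skills above describe: read raw files, aggregate, and write a partitioned, query-friendly output. The paths, column names, and aggregation are hypothetical placeholders, not a specific client pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative paths and columns; a real pipeline would read from cloud
# storage and write to a governed warehouse/lakehouse table.
spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

orders = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/daily_revenue/"
)
spark.stop()
```

In practice a job like this would be scheduled and retried through an orchestrator such as Airflow, as noted above.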
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required
At WeAssemble, we connect global businesses with top-tier talent to build dedicated offshore teams. Our mission is to deliver exceptional services through innovation, collaboration, and transparency. We pride ourselves on a vibrant work culture and are constantly on the lookout for passionate professionals to join our journey.
Job Description:
We are looking for a highly skilled Automation Tester with 3 years of experience to join our dynamic team in Mumbai. The ideal candidate should be proactive, detail-oriented, and ready to hit the ground running. If you’re passionate about quality assurance and test automation, we’d love to meet you!
Key Responsibilities:
Design, develop, and execute automated test scripts using industry-standard tools and frameworks.
Collaborate with developers, business analysts, and other stakeholders to understand requirements and ensure quality.
Maintain and update automation test suites as per application changes.
Identify, record, document, and track bugs.
Ensure the highest quality of deliverables with minimal supervision.
Contribute to the continuous improvement of QA processes and automation strategies.
Skills & Qualifications:
Minimum 3 years of hands-on experience in automation testing.
Proficiency in automation tools such as Selenium, TestNG, JUnit, etc.
Solid knowledge of programming/scripting languages (Java, Python, etc.).
Familiarity with CI/CD tools like Jenkins, Git, etc.
Good understanding of software development lifecycle and agile methodologies.
Excellent analytical and problem-solving skills.
Strong communication and teamwork abilities.
Location: Mumbai (Work from Office)
Notice Period: Candidates who can join immediately or within 15 days will be preferred.
Job Title: Sr. DevOps Engineer
Location: Bengaluru, India (Hybrid work type)
Reports to: Sr. Engineering Manager
About Our Client :
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, including Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, particularly GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
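As a hedged illustration of the monitoring work mentioned above, the sketch below exposes a custom metric with the Python prometheus_client library so Prometheus can scrape it and Grafana can chart it. The metric name, port, and the simulated reading are assumptions for demonstration only.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric; in practice this would track a real service or
# infrastructure signal (queue depth, device connections, etc.).
QUEUE_DEPTH = Gauge("ingest_queue_depth", "Messages waiting in the ingest queue")

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(15)
```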
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
• You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.

Are you a UI whiz with a knack for crafting beautiful and intuitive interfaces? Do you love the challenge of building cross-platform apps with cutting-edge technologies? If so, we want you on our team!
About the Role:
We're seeking a talented and passionate Software Engineer with a deep understanding of JavaScript, React, and Flutter to join our fast-growing team. You'll play a key role in designing, developing, and implementing user interfaces for our web and mobile applications. You'll collaborate closely with designers, product managers, and backend engineers to bring our vision to life and create exceptional user experiences.
Responsibilities:
- Design and develop user interfaces (UI) using JavaScript, React, and Flutter
- Build reusable, maintainable, and performant UI components
- Implement responsive layouts that adapt seamlessly across different devices (web, mobile)
- Integrate UI components with backend APIs
- Write clean, well-documented, and testable code
- Collaborate with designers to translate design mockups into functional UIs
- Participate in code reviews and knowledge sharing
- Stay up-to-date on the latest UI development trends and technologies
Qualifications:
- 3+ years of experience as a software engineer with a focus on UI development
- Proficiency in JavaScript, including ES6+ features
- In-depth knowledge of React and experience building React applications
- Solid understanding of Flutter and experience building cross-platform mobile apps
- Experience with UI design principles (user experience, accessibility)
- Familiarity with CSS frameworks (e.g., Bootstrap, Material-UI) a plus
- Experience with state management libraries (e.g., Redux, MobX) a plus
- Experience with unit testing frameworks (e.g., Jest) a plus
- Excellent problem-solving and analytical skills
- Strong communication and collaboration skills
- Ability to work independently and as part of a team
Bonus Points:
- Experience with UI animation libraries (e.g., React Spring, Rive)
- Experience with continuous integration/delivery (CI/CD) pipelines
- Experience with accessibility best practices
- Experience with version control systems (e.g., Git)
Job Title : Senior System Administrator
Experience : 7 to 12 Years
Location : Bangalore (Whitefield/Domlur) or Coimbatore
Work Mode :
- First 3 Months : Work From Office (5 Days)
- Post-Probation : Hybrid (3 Days WFO)
- Shift : Rotational (Day & Night)
- Notice Period : Immediate to 30 Days
- Salary : Up to ₹24 LPA (including 8% variable), slightly negotiable
Role Overview :
Seeking a Senior System Administrator with strong experience in server administration, virtualization, automation, and hybrid infrastructure. The role involves managing Windows environments, scripting, cloud/on-prem operations, and ensuring 24x7 system availability.
Mandatory Skills :
Windows Server, Virtualization (Citrix/VMware/Nutanix/Hyper-V), Office 365, Intune, PowerShell, Terraform/Ansible, CI/CD, Hybrid Cloud (Azure), Monitoring, Backup, NOC, DCO.
Key Responsibilities :
- Manage physical/virtual Windows servers and core services (AD, DNS, DHCP).
- Automate infrastructure using Terraform/Ansible.
- Administer Office 365, Intune, and ensure compliance.
- Support hybrid on-prem + Azure environments.
- Handle monitoring, backups, disaster recovery, and incident response.
- Collaborate on DevOps pipelines and write automation scripts (PowerShell).
Nice to Have :
MCSA/MCSE/RHCE, Azure admin experience, team leadership background
Interview Rounds :
L1 – Technical (Platform)
L2 – Technical
L3 – Techno-Managerial
L4 – HR

Requirements:
• Overall 7 years of experience
• Strong knowledge of ASP.NET Web Forms, Windows services, C#, and SQL Server 2008.
• Hands-on experience with Azure DevOps, CI/CD pipelines, and DevOps-based deployments.
• Very strong fundamentals in .NET Framework 4.x and above.
• Strong understanding and application of C# and .NET enterprise patterns, and usage of the Microsoft Enterprise Library.
• Experience with Web API, WCF, and Microsoft reporting tools (RDL/SSRS).
• Basic knowledge of HTML, CSS3
• Good understanding of IIS configuration
• Good expertise in SQL design (normalization, index design, data modeling) and programming (stored procedures, functions), with development experience.
• Good debugging/troubleshooting skill on both front end and back end
• Experience of writing & optimizing queries that access/process millions of records
• Good communication skills – verbal/written
• Designing and implementing DevSecOps tests: SAST, DAST
• Supporting DevOps operations by integrating developed SAST and DAST tests into CI/CD pipeline builds
• Creating and managing CI/CD pipelines using Azure DevOps
• Continuous Integration & Continuous Deployment with Security mindset and focus.
• Leveraging DevSecOps principles and designs, and implementing security solutions based on them
• Performing DevSecOps maturity assessments
• Working as a security champion for application development teams.
Nice to Have:
• Exposure to Agile methodology
• Exposure to Banking domain
• Exposure to Build and Release/Deployment
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
Job Title: Java Engineering Manager/Lead
Experience range:- 10+ Years
Location:- Pune / Mumbai
Knowledge and Skills:
- Strong proficiency in Core Java, Spring Boot.
- Experience with RESTful APIs, microservices, and multithreading.
- Solid understanding of RDBMS (MySQL/PostgreSQL).
- Exposure to cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).
- Familiarity with CI/CD tools like Jenkins, GitLab, or GitHub Actions.
- Background in Fintech, particularly Digital Lending, Supply Chain Finance, or Banking products.
- Experience working in agile/scrum environments.
- At least 3 years of experience leading/managing a team of Java developers.
Key Responsibilities:
- Lead and mentor a team of Java developers, ensuring technical excellence and timely delivery.
- Actively participate in coding, code reviews, architecture decisions, and system design.
- Collaborate with cross-functional teams including Product Managers, QA, and DevOps.
- Maintain a strong hands-on presence in backend Java development and microservices architecture.
- Own the end-to-end lifecycle of features from requirement to deployment and post-release support

Backend Engineer - Python
Location
Bangalore, India
Experience Required
2-3 years minimum
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
• CS fundamentals are a must (CN, DBMS, OS, System Design, OOPS)
• Python Expertise: Advanced proficiency in Python with a deep understanding of frameworks like Django, FastAPI, or Flask
• Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
• API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
• Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
• Containerization: Proficiency with Docker and Kubernetes
• Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
• Version Control: Advanced Git workflows and collaboration
Experience Requirements
• Minimum 2-3 years of backend development experience
• Proven track record of working on enterprise-level applications
• Experience building scalable systems handling high traffic loads
• Background in microservices architecture and distributed systems
• Experience with CI/CD pipelines and DevOps practices
Responsibilities
• Design, develop, and maintain robust backend services and APIs
• Optimize application performance and scalability
• Collaborate with frontend teams and product managers
• Implement security best practices and data protection measures
• Write comprehensive tests and maintain code quality
• Participate in code reviews and architectural discussions
• Monitor system performance and troubleshoot production issues
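As a brief, illustrative sketch of one common performance technique relevant to the responsibilities above (a read-through Redis cache in front of a database query), consider the snippet below. The connection settings, key naming, and the stand-in database function are hypothetical.

```python
import json

import redis

# Hypothetical connection settings and key naming; adjust per deployment.
cache = redis.Redis(host="localhost", port=6379, db=0)

def load_profile_from_db(user_id: int) -> dict:
    # Stand-in for a real PostgreSQL/MySQL/MongoDB query.
    return {"id": user_id, "name": "example"}

def get_user_profile(user_id: int) -> dict:
    """Read-through cache: try Redis first, fall back to the database."""
    key = f"user:{user_id}:profile"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_db(user_id)
    cache.setex(key, 300, json.dumps(profile))  # expire after 5 minutes
    return profile
```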
Preferred Qualifications
• Knowledge of caching strategies (Redis, Memcached)
• Understanding of software architecture patterns
• Experience with Agile/Scrum methodologies
• Open source contributions or personal projects

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
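To make the retrieval step of the RAG pipeline described above concrete, here is a deliberately simplified Python sketch: embed documents and a query with the OpenAI embeddings API, rank by cosine similarity, and return the top match as chat context. The in-memory list stands in for a real vector store such as MongoDB Atlas Vector Search or Pinecone; the model choice and document contents are illustrative assumptions.

```python
import numpy as np
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

def embed(text: str) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)

# Toy "vector store": a real system would upsert these into a vector database.
documents = ["User prefers morning workouts.", "User is vegetarian."]
doc_vectors = [embed(d) for d in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [
        float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for v in doc_vectors
    ]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

if __name__ == "__main__":
    context = retrieve("What should I suggest for breakfast?")
    print("Context passed to the LLM:", context)
```

The retrieved context would then be injected into the chat-completion prompt that the Node.js/Express layer orchestrates.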
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands-on with parameter-efficient fine-tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.


- Position: Full Stack Developer – Web & Mobile | AI-Savvy & Independent
- Location: Hyderabad
- Experience: 0–10 years
- Notice Period: Immediate to 15 days preferred
- Employment Type: Full-Time
What We’re Looking For
We’re on the hunt for a smart, hands-on Full Stack Developer who can take charge of building and shipping both web and mobile apps — someone who thrives in a fast-moving, high-ownership environment.
If you're someone who loves solving real problems, codes with intention, and knows how to get things done (even better if you use tools like GitHub Copilot or Claude to do it faster), we’d love to meet you.
We care less about buzzwords and more about your ability to think independently, build with purpose, and own the full cycle of product development — from architecture to deployment.
What You’ll Be Doing
- Develop and maintain production-grade web and mobile applications
- Take complete ownership — from coding to deployment and testing
- Use AI tools to speed up development (Cursor, Copilot, Claude Code, etc.)
- Build clean, modular UIs using React and React Native
- Design and develop backend APIs using .NET Core
- Work with PostgreSQL and mobile-friendly local databases like SQLite
- Set up and manage CI/CD pipelines to automate deployments
- Implement offline-first mobile functionality (think sync logic, caching, etc.)
- Collaborate with cross-functional teams — or fly solo when needed
- Make smart tech decisions, and write code you’re proud of
You'll Fit Right In If You:
- Like working autonomously and owning your projects end-to-end
- Have real-world experience shipping both mobile and web apps
- Use AI tools to code faster and better, not just for fun
- Have deployed apps to the Play Store or production environments
- Have experience with version control (Git) and collaborative workflows
- Understand performance, architecture, and clean code principles
- Are comfortable explaining your choices and decisions in a team setting
Tech Stack We Care About
- Frontend: React, React Native
- Backend: .NET Core (or similar server-side experience)
- Databases: PostgreSQL, SQLite (for mobile offline storage)
- DevOps: GitHub Actions, GitLab CI, Jenkins
- Version Control: Git (with repo examples)
- AI Tools: GitHub Copilot, Claude Code, Cursor — show us how you use them!
Bonus Points If You:
- Have built or scaled apps for field teams or delivery agents
- Have experience with complex sync logic
- Can share a public GitHub repo or a case study of something you built
Tell Us About Yourself
To help us get to know you better, please share:
- Full Name:
- Current City & State:
- Current CTC (₹ in LPA):
- Expected CTC (₹ in LPA):
Job Title : Data Engineer – GCP + Spark + DBT
Location : Bengaluru (On-site at Client Location | 3 Days WFO)
Experience : 8 to 12 Years
Level : Associate Architect
Type : Full-time
Job Overview :
We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.
Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.
Key Responsibilities :
- Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark.
- Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
- Implement and maintain CI/CD for data engineering projects with Git-based version control.
- Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
- Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
- Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
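As a small illustration of the BigQuery-centric work described above, the sketch below runs an aggregation query with the google-cloud-bigquery client. The project, dataset, and column names are placeholders rather than the client's actual UDP schema, and credentials are assumed to come from the environment (application default credentials).

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured.
client = bigquery.Client()

QUERY = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
"""

def last_week_revenue() -> list[dict]:
    rows = client.query(QUERY).result()  # waits for the query job to finish
    return [{"order_date": row["order_date"], "revenue": row["revenue"]} for row in rows]

if __name__ == "__main__":
    for record in last_week_revenue():
        print(record)
```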
Must-Have Skills :
- Strong experience with DBT, Google Dataform, and BigQuery
- Hands-on expertise with PySpark/Spark SQL
- Proficient in GCP for data engineering workflows
- Solid knowledge of SQL optimization, Git, and CI/CD pipelines
- Agile team experience and strong problem-solving abilities
Nice-to-Have Skills :
- Familiarity with Databricks, Delta Lake, or Kafka
- Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
- Knowledge of MDM patterns, Terraform, or IaC is a plus


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows. This is a high-ownership role for an "all-rounder" who is passionate about designing scalable architectures, writing robust code, and ensuring seamless deployments and operations.
What You'll Be Doing
As a critical member of our small, dedicated team, you will take on a versatile role encompassing development, infrastructure, and operations.
Cloud & DevOps Ownership
- Architect and implement containerized services on AWS (S3, EC2, ECS, ECR, CodeBuild, Lambda, Fargate, RDS, CloudWatch) under secure IAM policies.
- Take ownership of CI/CD pipelines, optimizing and managing GitHub Actions workflows.
- Configure and manage microservice versioning and CI/CD deployments.
- Implement secrets rotation and IP-based request rate limiting for enhanced security.
- Configure auto-scaling instances and Kubernetes for high-workload microservices to ensure performance and cost efficiency.
- Hands-on experience with Docker and Kubernetes/EKS fundamentals.
Backend & API Design
- Design, build, and maintain scalable REST/OpenAPI services in Django (DRF), WebSocket implementations, and asynchronous microservices in FastAPI.
- Model relational data in PostgreSQL 17 and optimize with Redis for caching and pub/sub.
- Orchestrate background tasks using Celery or RQ with Redis Streams or Amazon SQS.
- Collaborate closely with the frontend team (React/React Native) to define and build robust APIs.
Testing & Observability
- Ensure code quality via comprehensive testing using Pytest, React Testing Library, and Playwright.
- Instrument applications with CloudWatch metrics, contributing to our observability strategy.
- Maintain a Git-centric development workflow, including branching strategies and pull-request discipline.
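For the background-task orchestration mentioned above, here is a minimal, illustrative Celery setup using a Redis broker. The broker URL, task name, and notification stub are assumptions suitable for local development, not Eazeebox's production configuration.

```python
from celery import Celery

# Local-development broker URL; production would point at managed Redis or SQS.
app = Celery("tasks", broker="redis://localhost:6379/0")

def push_notification(order_id: int) -> None:
    # Stand-in for the real notification integration.
    print(f"notified retailer for order {order_id}")

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def send_order_confirmation(self, order_id: int) -> None:
    """Send the confirmation outside the request/response cycle, with retries."""
    try:
        push_notification(order_id)
    except Exception as exc:
        raise self.retry(exc=exc)

# From Django/FastAPI code, enqueue with:
#   send_order_confirmation.delay(order_id=42)
```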
Qualifications & Skills
Must-Have
- Experience: 2-4 years of hands-on experience delivering production-level full-stack applications with a strong emphasis on backend and DevOps.
- Backend Expertise: Proficiency in Python, with strong command of Django or FastAPI, including async Python patterns and REST best practices.
- Database Skills: Strong SQL skills with PostgreSQL; practical experience with Redis for caching and messaging.
- Cloud & DevOps Mastery: Hands-on experience with Docker and Kubernetes/EKS fundamentals.
- AWS Proficiency: Experience deploying and managing services on AWS (EC2, S3, RDS, Lambda, ECS Fargate, ECR, SQS).
- CI/CD: Deep experience with GitHub Actions or similar platforms, including semantic-release, Blue-Green Deployments, and artifact signing.
- Automation: Fluency in Python/Bash or Go for automation scripts; comfort with YAML.
- Ownership Mindset: Entrepreneurial spirit, strong sense of ownership, and ability to deliver at scale.
- Communication: Excellent written and verbal communication skills; comfortable in async and distributed team environments.
Nice-to-Have
- Frontend Familiarity: Exposure to React with Redux Toolkit and React Native.
- Event Streaming: Experience with Kafka or Amazon EventBridge.
- Serverless: Knowledge of AWS Lambda, Step Functions, CloudFront Functions, or Cloudflare Workers.
- Observability: Familiarity with Datadog, Posthog, Prometheus/Grafana/Loki.
- Emerging Tech: Interest in GraphQL (Apollo Federation) or generative AI frameworks (Amazon Bedrock, LangChain) and AI/ML.
Key Responsibilities
- Architectural Leadership: Design and lead the technical strategy for migrating our platform from a monolithic to a microservices architecture.
- System Design: Translate product requirements into scalable, secure, and reliable system designs.
- Backend Development: Build and maintain core backend services using Python (Django/FastAPI).
- CI/CD & Deployment: Own and manage CI/CD pipelines for multiple services using GitHub Actions, AWS CodeBuild, and automated deployments.
- Infrastructure & Operations: Deploy production-grade microservices using Docker, Kubernetes, and AWS EKS.
- FinOps & Performance: Drive cloud cost optimization and implement auto-scaling for performance and cost-efficiency.
- Security & Observability: Implement security, monitoring, and compliance using tools like Prometheus, Grafana, Datadog, Posthog, and Loki to ensure 99.99% uptime.
- Collaboration: Work with product and development teams to align technical strategy with business growth plans.
We are looking for a proactive and detail-oriented AWS Project Manager / AWS Administrator to join our team. The ideal candidate will be responsible for configuring, monitoring, and managing AWS cloud services while optimizing usage, ensuring compliance, and driving automation across deployments.
Key Responsibilities:
1) Configure and manage AWS Cloud services including VPCs, URL proxies, Bastion Hosts, and C2S access points.
2) Monitor AWS resources and ensure high availability and performance.
3) Automate deployment processes to reduce manual efforts and enhance operational efficiency.
4) Perform daily cost monitoring and set up backups of EC2 source code to S3.
5) Manage AWS Elastic Beanstalk (EB), RDS, CloudFront, Load Balancer, and NAT Gateway traffic.
6) Implement security and compliance best practices using AWS Secret Manager, IAM, and organizational policies.
7) Perform system setup for CBT (Computer-Based Test) exams.
8) Utilize AWS Pricing Calculator for resource estimation and budgeting.
9) Implement RI (Reserved Instance) and Spot Instance strategies for cost savings.
10) Create and manage S3 bucket policies with controlled access, especially for specific websites.
11) Automate disaster recovery and backup strategies for RDS and critical infrastructure.
12) Configure and manage AWS environments using EB, RDS, and routing setups.
13) Create and maintain AWS CloudFormation templates for infrastructure as code.
14) Develop comprehensive documentation for all AWS-related activities and configurations.
15) Ensure adherence to AWS best practices in all deployments and services.
16) Handle account creation, policy management, and SSO configuration for organizational identity management.
17) Ability to configure and design AWS architecture.
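As an illustrative example of the daily cost monitoring listed above, the Python sketch below queries yesterday's unblended cost through the AWS Cost Explorer API via boto3. It assumes credentials and region are already configured; thresholds and alerting are left out.

```python
import datetime

import boto3

# Cost Explorer client; assumes AWS credentials are configured.
ce = boto3.client("ce")

def yesterday_unblended_cost() -> float:
    end = datetime.date.today()
    start = end - datetime.timedelta(days=1)
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
    )
    amount = response["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"]
    return float(amount)

if __name__ == "__main__":
    cost = yesterday_unblended_cost()
    print(f"Yesterday's unblended cost: ${cost:.2f}")
```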
Required Skills:
· Strong understanding of AWS services and architecture.
· Hands-on experience with:
1) Elastic Beanstalk (EB)
2) RDS
3) EC2
4) S3
5) CloudFront
6) Load Balancer & NAT Gateway
7) CloudFormation
8) IAM, SSO & Identity Management
· Proficient in AWS cost optimization techniques.
· Ability to automate regular tasks and create efficient workflows.
· Clear and structured documentation skills.
Preferred Qualifications:
AWS Certified Solutions Architect (required).
Must have 5+ years of experience in this field.
Previous experience in managed AWS services.
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
AccioJob is conducting a Walk-In Hiring Drive with B2B Automation Platform for the position of DevOps Engineer.
To apply, register and select your slot here: https://go.acciojob.com/aCZCSM
Required Skills: CI/CD Workflows, Docker, Jenkins, Cloud (AWS preferred)
Eligibility:
- Degree: BTech./BE, MCA
- Branch: Computer Science/CSE/Other CS related branch, IT, Electrical/Other electrical related branches, Communications
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Noida (Onsite)
- CTC: ₹4 LPA to ₹5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round, Technical Interviews Round 1, Technical Interviews Round 2, Tech + Managerial Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/aCZCSM
Or skip the hassle and apply instantly via our app
https://go.acciojob.com/PDvyDp
Job Role: Senior Cypress Testing Engineer
Location: Remote
Type: Full-time
Experience: 4+ years
Salary: Competitive, based on experience
Video Pre-Screen Interview Required... After submitting please also visit here to apply & start the initial video interview: https://app.xinterview.ai/direct_invite/1fedac2d-4cb7-4c33-a2b4-837c17c66fda
About Us
LendingWise is a fast-growing B2B SaaS platform serving the private lending industry with a robust CRM & LOS solution. Built on a custom PHP/MySQL stack with modern front-end technologies (HTML, JS, Bootstrap), we power 250+ customers across the U.S. and manage over $2.3M in ARR. We're scaling our engineering operations and looking for top-tier QA talent to join our distributed team.
What You'll Do
We’re looking for a Cypress Test Engineer to help us increase coverage of our end-to-end test suite across our core CRM & LOS platform. You will collaborate closely with developers, DevOps, and product managers to write automated tests, streamline quality checks, and contribute to our CI/CD workflows.
Key Responsibilities
Design, write, and maintain Cypress end-to-end test suites for core platform features.
Work with PHP developers to integrate testing into GitHub Actions-based CI/CD pipelines.
Assist with configuration and optimization of test environments using Docker containers.
Collaborate with DevOps and developers to integrate test stages into deepsource, PHPUnit, and other code quality tools.
Write test cases that cover user workflows across HTML/Bootstrap/JS interfaces.
Support debugging and analysis of failed tests, helping the team proactively identify regressions.
Recommend improvements to testing strategy and frameworks as the platform evolves.
Requirements
4+ years experience in automated testing with Cypress.
Strong understanding of end-to-end testing principles and frontend behavior validation.
Comfortable working with PHP, MySQL, and legacy codebases.
Familiarity with CI/CD tools like GitHub Actions, Docker, and PHPUnit.
Experience with deepsource or similar code quality and linting tools is a plus.
Strong English communication skills and the ability to work independently in a remote environment.
Experience with web technologies such as JavaScript, HTML, and Bootstrap.
Nice to Have
Prior experience working in CRM or fintech product team.
Exposure to testing multi-tenant or enterprise applications.
Why Join LendingWise?
Work on a mission-critical platform used daily by lenders, brokers, and banks.
Fully remote team with flexible working hours.
Opportunity to make a real impact on quality and release reliability.
Collaborative, agile culture with modern tools and engineering practices.
Job Title : Senior Data Engineer
Experience : 6 to 10 Years
Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)
Notice Period : Immediate to 30 days (Buyout option available)
About the Role :
We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.
This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.
Mandatory Skills :
Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.
Key Responsibilities :
- Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services.
- Build and optimize data architectures (including Medallion Architecture).
- Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
- Manage structured/unstructured data migration to Data Lake.
- Ensure CI/CD integration for data workflows and version control using Git.
- Identify and integrate data sources (internal/external) in line with business needs.
- Proactively highlight gaps and risks related to data compliance and integrity.
Required Skills :
- Azure Data Factory (ADF) – Mandatory
- Strong SQL and Data Modelling expertise.
- Hands-on with Azure Cloud Services and data architecture.
- Experience with CI/CD tools and version control (Git).
- Good understanding of Data Governance practices.
- Exposure to ETL/ELT pipelines and Data Lake migration.
- Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
- Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
- Strong problem-solving, communication, and collaboration skills.
Additional Info :
- Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
- Communication : Above-average verbal and written English skills.
Perks & Benefits :
- 5 Days work week
- Global exposure and leadership collaboration.
- Health insurance, employee-friendly policies, training and development.
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.
We’re looking for an Engineering Manager to guide our micro-service platform and mentor a fully remote backend team. You’ll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities:
Architecture & Delivery
• Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (Containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc.
• Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations.
• Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
Team Leadership & Growth
• Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones.
• Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture.
• Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
Stakeholder Collaboration
• Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions.
• Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
Technical Vision & Governance
• Own coding standards, architectural principles, and technology selection.
• Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies.
• Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
● 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
● Proven experience leading or mentoring engineers; direct people-management a plus
● Expert knowledge of AWS services and cloud-native design patterns
● Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
● Demonstrated success scaling systems to millions of users or billions of events
● Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
● Excellent communication and stakeholder-management skills in a remote-first environment
Nice-to-Have:
● Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
● Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
● Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
● Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Job Title: Engineering Manager (Java / Spring Boot, AWS) – Remote
Leadership Role
Location: Remote
Employment Type: Full-time
Job Opening: Cloud and Observability Engineer
📍 Location: Work From Office – Gurgaon (Sector 43)
🕒 Experience: 2+ Years
💼 Employment Type: Full-Time
Role Overview:
As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.
This is a work-from-office role requiring collaboration with global customers and internal stakeholders.
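As a rough, hedged illustration of the packaging and validation work described above, the snippet below checks that a PromQL alert expression returns data via Prometheus's HTTP query API. The Prometheus URL and the expression are assumptions for the example only.

```python
# Hedged sketch: validate that a packaged alert rule's PromQL expression returns data
# before shipping it as part of an extension package. URL and query are illustrative.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"   # illustrative
ALERT_EXPR = 'sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)'

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": ALERT_EXPR},
    timeout=10,
)
resp.raise_for_status()
payload = resp.json()

if payload["status"] == "success" and payload["data"]["result"]:
    print(f"Expression returned {len(payload['data']['result'])} series")
else:
    print("Expression returned no data; the packaged alert may never fire")
```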
Key Responsibilities:
- Extension Delivery:
- Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve monitoring experience.
- Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).
- Customer & Internal Support:
- Act as a technical advisor to both internal teams and external clients.
- Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.
- Observability Solutions:
- Design and implement optimized monitoring architectures.
- Migrate and package dashboards, alerts, and rules based on customer environments.
- Automation & Deployment:
- Use CI/CD tools and version control systems to package and deploy monitoring components.
- Continuously improve deployment workflows.
- Collaboration & Enablement:
- Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
- Deliver technical documentation and training for customers.
Requirements:
Professional Experience:
- Minimum 2 years in Systems Engineering or similar roles.
- Focus on monitoring, observability, and alerting tools.
- Cloud & Container Tech:
- Hands-on experience with AWS, Azure, or GCP.
- Experience with Kubernetes, EKS, GKE, or AKS.
- Cloud DevOps certifications (preferred).
Observability Tools:
- Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, Datadog, etc.).
- Strong understanding of alerting, dashboards, and infrastructure monitoring.
Scripting & Automation:
- Familiarity with CI/CD, deployment pipelines, and version control.
- Experience in packaging and managing observability assets.
- Technical Skills:
- Working knowledge of PromQL, Grafana, and related query languages.
- Willingness to learn Dataprime and Lucene syntax.
- Soft Skills:
- Excellent problem-solving and debugging abilities.
- Strong verbal and written communication in English.
- Ability to work across US and European time zones as needed.
Why Join Us?
- Opportunity to work on cutting-edge observability platforms.
- Collaborate with global teams and top-tier clients.
- Shape the future of cloud monitoring and performance optimization.
- Growth-oriented, learning-focused environment.


Job Title : Senior .NET Developer
Experience : 8+ Years
Location : Trivandrum / Kochi
Notice Period : Immediate
Working Hours : 12 PM – 9 PM IST (4-hour mandatory overlap with EST)
Job Summary :
We are hiring a Senior .NET Developer with strong hands-on experience in .NET Core (6/8+), C#, Azure Cloud Services, Azure DevOps, and SQL Server. This is a client-facing role for a US-based client, requiring excellent communication and coding skills, along with experience in cloud-based enterprise application development.
Mandatory Skills :
.NET Core 6/8+, C#, Entity Framework/Core, REST APIs, JavaScript, jQuery, MS SQL Server, Azure Cloud Services (Functions, Service Bus, Event Grid, Key Vault, SQL Azure), Azure DevOps (CI/CD), Unit Testing (XUnit/MSTest), Strong Communication Skills.
Key Responsibilities :
- Design, develop, and maintain scalable applications using .NET Core, C#, REST APIs, SQL Server
- Work with Azure Services: Functions, Durable Functions, Service Bus, Event Grid, Key Vault, Storage Queues, SQL Azure
- Implement and manage CI/CD pipelines using Azure DevOps
- Participate in Agile/Scrum ceremonies, collaborate with cross-functional teams
- Perform troubleshooting, debugging, and performance tuning
- Ensure high-quality code through unit testing and technical documentation
Primary Skills (Must-Have) :
- .NET Core 6/8+, C#, Entity Framework / EF Core
- REST APIs, JavaScript, jQuery
- SQL Server: Stored Procedures, Views, Functions
- Azure Cloud (2+ years): Functions, Service Bus, Event Grid, Blob Storage, SQL Azure, Monitoring
- Unit Testing (XUnit, MSTest), CI/CD (Classic/YAML pipelines)
- Strong knowledge of design patterns, architecture, and microservices
- Excellent communication and leadership skills
Secondary Skills (Nice to Have) :
- AngularJS/ReactJS
- Azure APIM, ADF, Logic Apps
- Azure Kubernetes Service (AKS)
- Application support & operational monitoring
Certifications (Preferred) :
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Relevant Azure/.NET/Cloud certifications

We are seeking an iOS Developer with a strong foundation in SwiftUI and relevant experience in the IoT domain. The candidate will be responsible for developing iOS applications that seamlessly integrate with IoT devices, ensuring optimal performance and user experience.
Skills:
· Strong hands-on experience with Swift and SwiftUI (mandatory)
· Practical experience working with IoT devices (BLE, Wi-Fi, MQTT)
· Solid understanding of MVVM architecture, dependency injection, and iOS performance optimization
· Experience with RESTful APIs, Git, debugging tools, and crash analytics
· Good communication and problem-solving skills
· Additional skills such as Combine, HomeKit, CI/CD pipelines, and automated testing are a plus
Senior Salesforce Developer
Experience - 7 to 8 years
Norms - PF & UAN mandatory
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
· 5–6 years of hands-on Salesforce development experience, with a proven track record of delivering robust solutions on the Salesforce platform.
· Strong expertise in Apex (classes, triggers, batch processes) and Lightning Web Components (LWC).
· Demonstrated experience with Sales Cloud and Experience Cloud implementations.
· Significant experience in integrating Salesforce with external systems using REST/SOAP APIs and middleware tools.
· Deep understanding of the Salesforce Security Model (Profiles, Permission Sets, Organization-Wide Defaults, Sharing Rules).
· Experience in configuring and optimizing Salesforce environments, including custom objects, fields, page layouts, flows, and process automation.
Key Responsibilities:
· Design, develop, and implement custom Salesforce solutions using Apex, LWC, and other Salesforce technologies.
· Integrate Salesforce with external applications and data sources.
· Customize and extend Salesforce applications, including process automation and UI enhancements.
· Configure and maintain security controls to ensure data protection and compliance.
· Collaborate with cross-functional teams to gather requirements and deliver scalable solutions.
· Conduct unit testing, code reviews, and performance optimizations.
· Stay updated on Salesforce releases and best practices.
Technical Skills:
· Apex & LWC: Advanced development skills; hands-on with triggers, batch jobs, LWC modules
· Integration: REST/SOAP APIs, middleware, data migration, Metadata/Bulk API
· Security Model: Profiles, Permission Sets, OWD, Sharing Rules
· Sales Cloud: Implementation, customization, automation
· Experience Cloud: Building and managing digital experiences
· Testing & Deployment: Unit testing, deployment tools (Salesforce DX, GitHub, CI/CD)
· Communication: Strong written and verbal communication skills
Preferred Qualifications:
· Bachelor’s degree in Computer Science, IT, or related field.
· Experience with Agile development methodologies.
· Familiarity with DevOps tools (e.g., Git, Copado, Gearset) and CI/CD pipelines.
· Strong problem-solving and analytical skills.
Certifications (Optional but Preferred):
· Salesforce Certified Platform Developer I
· Salesforce Certified Platform Developer II
· Salesforce Certified JavaScript Developer I
· Salesforce Certified Sales Cloud Consultant
· Salesforce Certified Experience Cloud Consultant

Role Overview:
As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.
Skip the wait and get noticed faster by completing our AI-powered screening. Click this link to start your quick interview. It only takes a few minutes and could be your shortcut to landing the job! - https://bit.ly/LT_Python
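For context, here is a minimal, hedged sketch of a FastAPI micro-service endpoint in the style this role describes; the event model, route, and scoring logic are illustrative only, not LearnTube's actual code.

```python
# Hedged sketch: a small FastAPI endpoint of the kind this role owns.
# Field names, the route, and the scoring rule are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QuizEvent(BaseModel):
    learner_id: str
    question_id: str
    answer: str

@app.post("/v1/quiz/score")
async def score_quiz(event: QuizEvent) -> dict:
    # Placeholder scoring; a real service would call the AI-tutor engine and
    # persist the event asynchronously (e.g., Postgres/SQS) to stay under the latency budget.
    correct = event.answer.strip().lower() == "42"
    return {"learner_id": event.learner_id, "question_id": event.question_id, "correct": correct}
```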
What You'll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionise how the world learns. As a Backend Engineer, you will be building the backend for an AI system and working directly on AI. Your roles and responsibilities will include:
- Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
- Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
- Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
What makes you a great fit?
Must-Haves:
- 2+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
- SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
Nice-to-Haves
- k8s at scale, Terraform,
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go / Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry
About Us:
At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders:
LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us?
At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
Senior Software Engineer – Java
Location: Pune (Hybrid – 3 days from office)
Experience: 8–15 Years
Domain: Information Technology (IT)
Joining: Immediate joiners only
Preference: Local candidates only (Pune-based)
Job Description:
We are looking for experienced and passionate Senior Java Engineers to join a high-performing development team. The role involves building and maintaining robust, scalable, and low-latency backend systems and microservices in a fast-paced, agile environment.
Key Responsibilities:
- Work within a high-velocity scrum team to deliver enterprise-grade software solutions.
- Architect and develop scalable end-to-end web applications and microservices.
- Collaborate with cross-functional teams to analyze requirements and deliver optimal technical solutions.
- Participate in code reviews, unit testing, and deployment.
- Mentor junior engineers while remaining hands-on with development tasks.
- Provide accurate estimates and support the team lead in facilitating development processes.
Mandatory Skills & Experience:
- 6–7+ years of enterprise-level Java development experience.
- Strong in Java 8 or higher (Java 11 preferred), including lambda expressions, Stream API, Completable Future.
- Minimum 4+ years working with Microservices, Spring Boot, and Hibernate.
- At least 3+ years of experience designing and developing RESTful APIs.
- Kafka – minimum 2 years’ hands-on experience in the current/most recent project.
- Solid experience with SQL.
- AWS – minimum 1.5 years of experience.
- Understanding of CI/CD pipelines and deployment processes.
- Exposure to asynchronous programming, multithreading, and performance tuning.
- Experience working in at least one Fintech domain project (mandatory).
Nice to Have:
- Exposure to Golang or Rust.
- Experience with any of the following tools: MongoDB, Jenkins, Sonar, Oracle DB, Drools, Adobe AEM, Elasticsearch/Solr/Algolia, Spark.
- Strong systems design and data modeling capabilities.
- Experience in payments or asset/wealth management domain.
- Familiarity with rules engines and CMS/search platforms.
Candidate Profile:
- Strong communication and client-facing skills.
- Proactive, self-driven, and collaborative mindset.
- Passionate about clean code and quality deliverables.
- Prior experience in building and deploying multiple products in production.
Note: Only candidates who are based in Pune and can join immediately will be considered.
Candidate should be proficient in:
Experience - 3+ years
- Plugin development
- Working with ACFs (Advanced Custom Fields) to build bespoke builds
- Knowledge of Roots Sage
- Proficiency in MVC coding
- Proficiency in working with Git and CI/CD tools
- Third-party API integration and development
- WooCommerce customization
Must be disciplined, accountable for their work, and able to ensure timely delivery.
Good communication skills are important.
Magento 2.0 or Shopify could be an additional skill set to be considered.
Should be able to manage a team.
Immediate joiner.

Supply Wisdom: Full Stack Developer
Location: Hybrid Position based in Bangalore
Reporting to: Tech Lead Manager
Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objective: We are seeking a skilled Full Stack Developer to design and build scalable software solutions. You will be part of a cross-functional team responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be proficient in both front-end and back-end technologies, development frameworks, and third-party libraries. We’re looking for a team player with strong problem-solving abilities, attention to visual design, and a focus on utility. Familiarity with Agile methodologies, including Scrum and Kanban, is essential.
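Purely as an illustration of the Django REST Framework work listed below, here is a minimal, hedged sketch of a serializer plus viewset; the Supplier model and its fields are hypothetical, not part of this posting.

```python
# Hedged sketch (illustrative names): a minimal DRF serializer and viewset,
# the style of REST API this role builds. Assumes a Django app with a Supplier model.
from rest_framework import permissions, serializers, viewsets

from .models import Supplier  # hypothetical model in the same Django app

class SupplierSerializer(serializers.ModelSerializer):
    class Meta:
        model = Supplier
        fields = ["id", "name", "risk_score", "updated_at"]

class SupplierViewSet(viewsets.ModelViewSet):
    """CRUD endpoints for suppliers; authentication enforced via DRF permissions."""
    queryset = Supplier.objects.all().order_by("-updated_at")
    serializer_class = SupplierSerializer
    permission_classes = [permissions.IsAuthenticated]
```

Registered on a DRF router, a viewset like this exposes list/retrieve/create/update/delete endpoints with minimal boilerplate, which is why DRF is called out in the requirements.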
Responsibilities
- Collaborate with the development team and product manager to ideate software solutions.
- Write effective and secure REST APIs.
- Integrate third-party libraries for product enhancement.
- Design and implement client-side and server-side architecture.
- Work with data scientists and analysts to enhance software using RPA and AI/ML techniques.
- Develop and manage well-functioning databases and applications.
- Ensure software responsiveness and efficiency through testing.
- Troubleshoot, debug, and upgrade software as needed.
- Implement security and data protection settings.
- Create features and applications with mobile-responsive design.
- Write clear, maintainable technical documentation.
- Build front-end applications with appealing, responsive visual design.
Requirements
- Degree in Computer Science (or related field) with 4+ years of hands-on experience in Python development, with strong expertise in the Django framework and Django REST Framework (DRF).
- Proven experience in designing and building RESTful APIs, with a solid understanding of API versioning, authentication (JWT/OAuth2), and best practices.
- Experience with relational databases such as PostgreSQL or MySQL; familiarity with query optimization and database migrations.
- Basic front-end development skills using HTML, CSS, and JavaScript; experience with any JavaScript framework (like React or Next.js) is a plus.
- Good understanding of Object-Oriented Programming (OOP) and design patterns in Python.
- Familiarity with Git and collaborative development workflows (e.g., GitHub, GitLab).
- Knowledge of Docker, CI/CD pipelines.
- Hands-on experience with AWS services, Nginx web server, RabbitMQ (or similar message brokers), event handling, and synchronization.
- Familiarity with Postgres, SSO implementation (desirable), and integration of third-party libraries.
- Experience with unit testing, debugging, and code reviews.
- Experience using tools like Jira and Confluence.
- Ability to work in Agile/Scrum teams with good communication and problem-solving skills.
Our Commitment to You:
We offer a competitive salary and generous benefits. In addition, we offer a vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you.
You can learn more at supplywisdom.com and on LinkedIn.
Job Title: Junior Scrum Master
Location: Delhi
Key Responsibilities:
- Facilitate daily stand-ups, sprint planning, sprint reviews, and retrospectives for the Scrum team.
- Assist the Product Owner in maintaining a well-groomed product backlog.
- Remove impediments to the Scrum team’s progress and protect the team from outside interruptions.
- Ensure that Scrum principles, practices, and theory are properly understood and enacted.
- Track and report key Agile metrics (e.g., burndown charts, velocity) to ensure transparency.
- Foster an environment of collaboration, continuous improvement, and self-organization within the team.
- Work closely with stakeholders and other teams to coordinate dependencies and deliverables.
- Coach team members in Agile best practices to maximize efficiency and quality.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 2 to 4 years of experience as a Scrum Master or in a related Agile role.
- Proven experience facilitating Agile ceremonies and using tools like Jira, Confluence, or Trello.
- Strong understanding of Agile frameworks (Scrum, Kanban) and the software development life cycle (SDLC).
- Excellent communication, facilitation, and conflict resolution skills.
- Ability to work with cross-functional teams and manage multiple priorities effectively.
Preferred (Nice to Have):
- Certified Scrum Master (CSM), Certified Scrum Practitioner (CSP), or equivalent certification.
- Familiarity with CI/CD pipelines and DevOps practices.
- Knowledge of Salesforce development processes or integrations (a plus).
- Experience working in an Agile/Scrum environment within a software development team.
Benefits:
- Flexible working hours.
- Certification reimbursement for relevant Agile/Scrum certifications.
- Career development support and learning opportunities.
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial and wholesale banking at the Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing and security model, which will allow us to architect and develop robust applications.
You will work closely with business and product teams to build applications that give end users an intuitive, clean, minimalist, easy-to-navigate experience.
You will develop systems by applying software development principles and clean-code practices so that they are scalable, secure, highly resilient, and low-latency.
You should be open to working in a start-up environment and be confident dealing with complex issues, keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of database concepts: SOQL, SOSL, and SQL
● Hands-on experience in API integration using SOAP, REST API, GraphQL
● Experience with ETL tools, data migration, and data governance
● Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, Bitbucket
● Should have worked with at least one programming language (Java, Python, C++) and have a good understanding of data structures
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to 40,000+ businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify, document, and collaborate with developers to resolve defects and issues.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

Position : Tech Lead – Fullstack Developer
Experience : 7 to 15 Years
Location : MG Road, Bengaluru (Hybrid – 3 Days in Office)
Notice Period : Immediate / Serving / 15 Days or Less
About the Opportunity :
We are hiring a Tech Lead – Fullstack Developer for a well-funded product startup building an enterprise-grade SaaS platform in the Cybersecurity domain.
The role involves designing and delivering scalable microservices and cloud-native applications in a high-performing, Agile engineering environment.
You'll work alongside industry veterans from billion-dollar digital firms, contributing to technical design, product architecture, and engineering best practices.
Mandatory Skills : Java, Spring Boot, ReactJS (or any modern JavaScript framework), RESTful APIs, PostgreSQL, Docker, Kubernetes, CI/CD, Hibernate/JPA, Multithreading, and Microservices architecture.
Role Highlights :
- Lead fullstack product development using Java (Spring Boot) and ReactJS (or similar frameworks).
- Design, develop, test, and deploy scalable microservices and RESTful APIs.
- Collaborate with Product, DevOps, and QA teams in a fast-paced Agile environment.
- Write modular, secure, and efficient code optimized for performance and maintainability.
- Mentor junior developers and influence architecture decisions across the team.
- Participate in all stages of the product lifecycle, from design to deployment.
- Create technical documentation, UML diagrams, and contribute to knowledge-sharing through blogs or whitepapers.
Key Skills Required :
- Strong expertise in Java (mandatory) and Spring Boot.
- Proficient in frontend development using ReactJS or similar frameworks.
- Hands-on experience building and consuming RESTful APIs.
- Solid knowledge of PostgreSQL, Hibernate/JPA, and transaction management.
- Familiarity with Docker, Kubernetes, and cloud platforms (Azure/GCP).
- Understanding of API Gateway, ACID properties, multithreading, and performance tuning.
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI) and Agile methodologies.
- Strong debugging and profiling skills for performance bottlenecks.
Nice to Have :
- Experience with data integration tools (e.g., Pentaho, Apache NiFi).
- Exposure to the Healthcare or Cybersecurity domain.
- Familiarity with OpenAI APIs or real-time analytics tools.
- Willingness to contribute to internal documentation, blog posts, or whitepapers.
Perks & Benefits :
- Opportunity to build a product from scratch.
- Flat hierarchy and direct access to leadership.
- Strong focus on learning, mentorship, and technical innovation.
- Collaborative startup culture with long-term growth opportunities.
Interview Process :
- Technical Round – Technical Assessment
- Technical Interview – Core Development
- Advanced Technical Interview – Design & Problem Solving
- Final Round – CTO Discussion
DevOps Engineer:
Tech stack: AWS, GitLab, Python, SNF
MrPropTek is on the lookout for a DevOps Engineer with a passion for cloud infrastructure, automation, and scalable systems. If you're ready to hit the ground running, we want you on our team ASAP!
Location: Mohali, Punjab (On-site)
Experience: 2+ Years
Skills We’re Looking For:
- Strong hands-on experience with AWS
- Expertise in GitLab CI/CD pipelines
- Python scripting proficiency
- Knowledge of SNF (Serverless and Functions) architecture
- Excellent communication and collaboration skills
- Immediate joiners preferred
At MrPropTek, we're redefining the future of property technology with innovative digital solutions. Join a team where your skills truly matter and your ideas shape what’s next.
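For flavour, a small, hedged Python-on-AWS sketch in the spirit of this role's serverless-and-functions stack: a Lambda handler that fans incoming events out to SQS. The queue name and event shape are assumptions, not part of the posting.

```python
# Hedged sketch: an AWS Lambda handler that forwards incoming records to an SQS
# queue for asynchronous processing. Queue name and payload fields are illustrative.
import json

import boto3

sqs = boto3.client("sqs")

def handler(event, context):
    """Queue each incoming record; the consumer service processes them later."""
    queue_url = sqs.get_queue_url(QueueName="property-events")["QueueUrl"]  # illustrative
    records = event.get("records", [])
    for record in records:
        sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(record))
    return {"statusCode": 200, "body": json.dumps({"queued": len(records)})}
```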
About the Company
We are hiring for a fast-growing, well-funded product startup backed by a leadership team with a proven track record of building billion-dollar digital businesses. The company is focused on delivering enterprise-grade SaaS products in the Cybersecurity domain for B2B markets. You’ll be part of a passionate and dynamic engineering team building innovative solutions using modern tech stacks.
Key Responsibilities
- Design and develop scalable microservices using Java and Spring Boot
- Build and manage robust RESTful APIs
- Collaborate with cross-functional teams in an Agile setup
- Lead and mentor junior engineers, driving technical excellence
- Contribute to architecture discussions and code reviews
- Work with PostgreSQL, implement data integrity and consistency
- Deploy and manage services on cloud platforms like GCP or Azure
- Utilize Docker/Kubernetes for containerization and orchestration
Must-Have Skills
- Strong backend experience with Java, Spring Boot, REST APIs
- Proficiency in frontend development with React.js
- Experience with PostgreSQL and database optimization
- Hands-on with cloud platforms (GCP or Azure)
- Familiarity with Docker and Kubernetes
- Strong understanding of:
- API Gateways
- Hibernate & JPA
- Transaction management & ACID properties
- Multi-threading and context switching
Good to Have
- Experience in Cybersecurity or Healthcare domain
- Exposure to CI/CD pipelines and DevOps practices
Job Overview
We are looking for a detail-oriented and skilled QA Engineer with expertise in Cypress to join our Quality Assurance team. In this role, you will be responsible for creating and maintaining automated test scripts to ensure the stability and performance of our web applications. You’ll work closely with developers, product managers, and other QA professionals to identify issues early and help deliver a high-quality user experience.
You should have a strong background in test automation, excellent analytical skills, and a passion for improving software quality through efficient testing practices.
Key Responsibilities
- Develop, maintain, and execute automated test cases using Cypress.
- Design robust test strategies and plans based on product requirements and user stories.
- Work with cross-functional teams to identify test requirements and ensure proper coverage.
- Perform regression, integration, smoke, and exploratory testing as needed.
- Report and track defects, and work with developers to resolve issues quickly.
- Collaborate in Agile/Scrum development cycles and contribute to sprint planning and reviews.
- Continuously improve testing tools, processes, and best practices.
- Optimize test scripts for performance, reliability, and maintainability.
Required Skills & Qualifications
- Hands-on experience with Cypress and JavaScript-based test automation.
- Strong understanding of QA methodologies, tools, and processes.
- Experience in testing web applications across multiple browsers and devices.
- Familiarity with REST APIs and tools like Postman or Swagger.
- Experience with version control systems like Git.
- Knowledge of CI/CD pipelines and integrating automated tests (e.g., GitHub Actions, Jenkins).
- Excellent analytical and problem-solving skills.
- Strong written and verbal communication.
Preferred Qualifications
- Experience with other automation tools (e.g., Selenium, Playwright) is a plus.
- Familiarity with performance testing or security testing.
- Background in Agile or Scrum methodologies.
- Basic understanding of DevOps practices.
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
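As a hedged example of the middleware automation this role covers, the snippet below uses boto3 to create a CloudWatch alarm on SQS queue depth; the queue name, SNS topic ARN, and threshold are illustrative assumptions only.

```python
# Hedged sketch: create/update a CloudWatch alarm that pages when an SQS queue
# backs up. Names, ARN, and threshold are illustrative, not from this posting.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-depth-high",                                   # illustrative
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],           # illustrative
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],       # illustrative
)
print("Queue-depth alarm created or updated")
```

In practice a definition like this would usually live in Terraform or CloudFormation rather than an ad-hoc script, which is why IaC tools are listed as core skills below.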
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
Knowledge Skills and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services.
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
Mail updated resume with current CTC to:
Email: etalenthire[at]gmail[dot]com
Satish; 88 O2 74 97 43
🔴 Profile: Senior Java Full Stack Developer
🔷 Experience: 6+Years
🔷 Location: Remote
🔷 Shift: Day Shift
(Only immediate joiners & candidates who have completed notice period)
✨ What we want:
✅ Java 8+ expertise
✅ React proficiency
✅ Jenkins (CI/CD pipeline management)
✅ Docker (Containerization)
✅ React (Advanced component development)
✅ CI/CD (Continuous Integration/Deployment)
✅ Spring Boot framework
✅ Microservices architecture
✅ Full stack development experience
✅ RESTful API development
✅ Database experience (SQL/NoSQL)
✅ Git version control & CI/CD
✅ Frontend-Backend integration

This is a technology-driven company specializing in cloud-based solutions.
Working Days: 6 days (5 days WFO, Saturday WFH)
Job Description:
We are seeking a skilled QA Tester with expertise in Vulnerability Testing to ensure the security, functionality, and reliability of our applications. The ideal candidate will have experience in penetration testing, security testing methodologies, automation, and compliance standards.
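For illustration, here is a tiny, hedged Python check of the kind that can run as a CI/CD security gate, verifying standard security headers on a staging URL. It is a smoke test only, not a penetration test, and the target URL and header list are assumptions.

```python
# Hedged sketch: a minimal security-header smoke test suitable for a CI/CD stage.
# The staging URL and the required header set are illustrative assumptions.
import requests

REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def missing_security_headers(url: str) -> list[str]:
    """Return the required headers that the response does not set (case-insensitive)."""
    response = requests.get(url, timeout=10)
    return [h for h in REQUIRED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = missing_security_headers("https://staging.example.com")  # illustrative
    # A non-empty list would typically fail the pipeline stage.
    print("Missing headers:", missing or "none")
```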
Key Responsibilities:
- Develop and execute test cases, scripts, and security test plans for applications and APIs.
- Perform vulnerability assessments and penetration testing on web, mobile, and cloud-based applications.
- Identify security loopholes, conduct risk analysis, and provide actionable recommendations.
- Work closely with development and DevOps teams to ensure secure coding practices.
- Automate security testing and integrate it into CI/CD pipelines.
- Test applications for OWASP Top 10 vulnerabilities, SQL injection, XSS, CSRF, SSRF, etc.
- Utilize security tools such as Burp Suite, OWASP ZAP, Metasploit, Kali Linux, Nessus, etc.
- Conduct API security testing and validate authentication & authorization mechanisms.
- Document security vulnerabilities and collaborate with teams for remediation.
- Ensure compliance with industry standards like ISO 27001, GDPR, HIPAA, PCI-DSS where applicable.
Required Skills & Qualifications:
- 3+ years of experience in Quality Assurance with a focus on Security & Vulnerability Testing.
- Strong knowledge of penetration testing tools and security frameworks.
- Experience with automated security testing in CI/CD (Jenkins, GitHub Actions, GitLab CI, etc.).
- Proficiency in manual and automated security testing of web and mobile applications.
- Familiarity with scripting languages like Python, Bash, or JavaScript for automation.
- Experience working with cloud platforms such as AWS, Azure, or GCP is a plus.
- Strong understanding of HTTP, APIs, authentication protocols (OAuth, JWT, SAML, etc.).
- Knowledge of network security, firewalls, and intrusion detection systems (IDS/IPS).
- Certifications like CEH, OSCP, CISSP, or Security+ are an added advantage.


Key Responsibilities:
● Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies.
● Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support.
● Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript.
● Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices.
● Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance.
● Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD.
● Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE.
● Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL.
● Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance.
● Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions.
● Handle schema conversion and data transformation tasks as part of migration and modernization efforts.
Required Skills & Experience:
● 5+ years of hands-on experience with C#, .NET Core, and .NET Framework.
● Proven experience in application modernization and cloud-native development.
● Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE.
● Expertise in implementing and managing CI/CD pipelines.
● Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations.
● Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing.
● Excellent problem-solving, communication,