50+ AWS (Amazon Web Services) Jobs in India
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
We're seeking an experienced Senior Backend Engineer to join our team. As a senior backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Code Quality: Ensure code quality through code reviews, adherence to best practices, and continuous improvement
● Mentorship: Guide and mentor team members, fostering growth and innovation
● Collaboration: Work closely with stakeholders to align technical goals with business objectives
● Problem-Solving: Analyze and resolve technical challenges promptly
● Innovation: Stay updated with the latest technology trends and integrate them into solutions
Requirements:
● 7+ years of experience building scalable and reliable backend systems
● Strong expertise in Node.js/NestJS, Express, and PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
We're seeking an experienced Backend Software Engineer to join our team. As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Participate in code reviews and contribute to the improvement of the codebase
● Debug and resolve issues in the existing codebase
● Develop and execute unit tests to ensure high code quality
● Work with DevOps engineers to ensure seamless deployment of software changes
● Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
● Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
● Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
● 3+ years of experience building scalable and reliable backend systems
● Strong expertise in Node.js/NestJS, Express, and PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
Brief Description:
NonStop io is seeking a proficient React.js developer to join our front-end development team. In this position, you'll mainly craft and integrate UI components using React.js, along with state-management libraries such as MobX and Redux.
Responsibilities:
● Developing and implementing UI components using React.js
● Collaborating with cross-functional teams to design and ship new features
● Building reusable components and front-end libraries for future use
● Translating designs and wireframes into high-quality code
● Optimizing components for maximum performance across various web browsers
● Troubleshooting and debugging issues to ensure smooth user experiences
● Participating in code reviews to maintain code quality and consistency
● Improving and optimizing the existing codebase
● Keeping up with industry trends
● Identifying issues with technologies and architecture and then implementing solutions
● Assist with ticket creation, refinement, and estimation
● Participate in sprint planning and ticket distribution for the frontend team
Qualifications & Skills:
● Proficiency in React.js and its core principles
● Strong JavaScript, TypeScript, HTML5, and CSS3 skills
● Experience with popular React.js state-management libraries and approaches (such as MobX, Redux, and the Context API)
● Familiarity with RESTful APIs and integration
● Knowledge of modern authorization mechanisms, such as JSON Web Tokens
● Understanding of front-end build tools and pipelines
● Excellent problem-solving and communication skills
● A strong attention to detail, and a passion for delivering high-quality code
● Expertise in designing scalable and efficient front-end architecture
● Adaptability to changing project requirements and priorities
● Experience with version control systems, particularly Git
● Experience working in Scrum and familiarity with Atlassian (Jira, Confluence, Bitbucket)
● Proven experience in leading teams
● Experience with diverse applications and architectural patterns
● A degree in computer science, software engineering, or a related field
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Key Responsibilities
AI Architecture & Solution Design
- Design end-to-end AI solution architectures, including:
- Generative AI and LLM-based systems
- Retrieval-Augmented Generation (RAG) pipelines
- Agentic and multi-agent workflows
- Define reference architectures and best practices for AI-enabled features within enterprise products.
- Ensure AI solutions integrate seamlessly with existing applications, data, and cloud architectures.
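For context, the retrieval step of the RAG pipelines mentioned above can be sketched in a few lines. This is a minimal illustration only: the tiny hand-written vectors stand in for real model embeddings, and a production system would use a vector store rather than a Python list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank document chunks by similarity to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]

# Toy 3-dimensional "embeddings" stand in for real embedding-model output.
corpus = [
    {"text": "refund policy", "embedding": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "embedding": [0.0, 0.9, 0.1]},
    {"text": "returns and refunds", "embedding": [0.8, 0.2, 0.1]},
]
context = retrieve([1.0, 0.0, 0.0], corpus)
# The retrieved chunks are then injected into the LLM prompt as grounding.
prompt = f"Answer using only this context: {context}"
```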
AI Integration & MCP Servers
- Design and implement Model Context Protocol (MCP) servers to securely expose tools, APIs, and data to AI agents.
- Define standards for tool interfaces, access control, auditing, and safety guardrails.
- Enable product teams to onboard AI tools and capabilities using reusable, scalable integration patterns.
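The tool-interface and guardrail standards above can be illustrated with a hypothetical tool registry. This sketch is not the real Model Context Protocol SDK; the names (`tool`, `invoke`, `audit_log`) are invented for illustration, showing allow-listing and auditing around tool calls.

```python
# Hypothetical MCP-style tool registry: tools are declared up front with
# role-based access control, and every invocation is recorded for auditing.
audit_log = []
TOOLS = {}

def tool(name, allowed_roles):
    """Decorator registering a callable as a tool with an access allow-list."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "allowed_roles": set(allowed_roles)}
        return fn
    return register

@tool("lookup_order", allowed_roles={"support_agent"})
def lookup_order(order_id: str) -> dict:
    # Stand-in for a real backend call the agent is allowed to make.
    return {"order_id": order_id, "status": "shipped"}

def invoke(name, role, **kwargs):
    """Guardrail: reject unknown tools and unauthorized roles, then audit."""
    entry = TOOLS.get(name)
    if entry is None:
        raise KeyError(f"unknown tool: {name}")
    if role not in entry["allowed_roles"]:
        raise PermissionError(f"role {role!r} may not call {name}")
    audit_log.append({"tool": name, "role": role, "args": kwargs})
    return entry["fn"](**kwargs)
```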
Agentic AI & Workflow Enablement
- Architect AI-driven workflows that support collaboration between humans and AI agents.
- Design AI-to-AI (A2A) and AI-to-system interaction patterns.
- Ensure agent behaviors are deterministic, explainable, and aligned with enterprise requirements.
Hands-On Development & Prototyping
- Build proofs-of-concept and production-ready implementations using Python and/or TypeScript.
- Rapidly validate ideas from ideation to deployment.
- Establish reusable frameworks, libraries, and CI/CD pipelines for AI development.
AI Governance, Quality & Safety
- Implement guardrails to minimize hallucinations, unsafe actions, and data leakage.
- Define evaluation and monitoring strategies for AI systems, including prompt regression and RAG accuracy checks.
- Ensure AI solutions comply with enterprise security, privacy, and governance standards.
Developer Enablement & Collaboration
- Partner with Product, Engineering, QE, Performance, and Security teams to deliver AI capabilities.
- Mentor teams on AI design patterns, tooling, and best practices.
- Contribute to internal AI communities through demos, documentation, and knowledge sharing.
Qualifications :
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Demonstrated expertise in cloud‑native system design, distributed architectures, and enterprise‑scale integrations.
- Proven ability to architect and implement AI-enabled systems, including integrating Large Language Models (LLMs) into production-grade software.
- Strong ownership of architectural decisions, technical direction, and solution delivery across complex, cross-functional initiatives.
- Hands-on experience applying security, observability, and automation best practices within enterprise environments.
- 6–10 years of experience in software architecture and distributed systems.
- 5+ years of experience building Generative AI or LLM-based solutions.
- Practical experience designing and implementing:
- Retrieval-Augmented Generation (RAG) architectures
- Agentic AI systems
- Tool-calling frameworks and AI integration layers
- Proficiency in Python and/or .NET, TypeScript, or Node.js.
- Experience working with major cloud platforms such as Azure, AWS, or Google Cloud Platform (GCP).
Preferred Qualifications
- Experience with OpenAI, Azure OpenAI, Anthropic, or similar LLM platforms.
- Familiarity with Model Context Protocol (MCP) or equivalent AI tool-integration frameworks.
- Experience applying AI engineering practices beyond prototyping, including evaluation, reliability, and scalability considerations.
- Ability to translate ambiguous business problems into clear technical architecture and execution plans.
- History of influencing technical standards and mentoring senior engineers or architects.
- Experience with vector databases, embeddings, and retrieval optimisation.
- Experience building AI-enabled developer tooling and CI/CD pipelines.
- Prior experience in enterprise SaaS environments.
Job Title: Data Engineer
About the Role
We are looking for a highly motivated Data Engineer to join our growing team and play a critical role in shaping the data foundation of different software platforms. This role sits at the intersection of data engineering, product, and business stakeholders, and is responsible for building reliable data pipelines, delivering actionable insights, and ensuring data quality across systems.
You will work closely with internal teams and external partners to translate business requirements into scalable data solutions, while maintaining high standards for data integrity, performance, and usability.
Key Responsibilities
Data Engineering & Architecture
- Design, build, and maintain scalable data pipelines and ETL/ELT processes
- Develop and optimize data models in PostgreSQL and cloud-native architectures
- Work within the AWS ecosystem (e.g., S3, Lambda, RDS, Glue, Redshift) to support data workflows
- Ensure efficient ingestion and processing of large-scale datasets
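An ETL process of the kind described above can be sketched minimally as extract, transform (with validation), and load stages. Plain Python structures stand in here for the real source (e.g., S3) and sink (e.g., Redshift); the record fields are illustrative.

```python
# Raw source records, as might arrive from an upstream partner feed.
RAW_EVENTS = [
    {"user_id": "u1", "amount": "19.99", "currency": "usd"},
    {"user_id": "u2", "amount": "bad-data", "currency": "usd"},  # fails validation
    {"user_id": "u1", "amount": "5.00", "currency": "usd"},
]

def extract():
    return list(RAW_EVENTS)

def transform(rows):
    """Coerce types; quarantine rows that fail validation instead of crashing."""
    clean, rejected = [], []
    for row in rows:
        try:
            clean.append({
                "user_id": row["user_id"],
                "amount_cents": int(round(float(row["amount"]) * 100)),
            })
        except (KeyError, ValueError):
            rejected.append(row)
    return clean, rejected

def load(rows):
    """Aggregate into a per-user rollup, as a reporting table might."""
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0) + row["amount_cents"]
    return totals

clean, rejected = transform(extract())
totals = load(clean)
```

Tracking the rejected rows separately is what makes the data-quality monitoring described below possible.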
Business & Partner Integration
- Collaborate directly with business stakeholders and external partners to gather requirements and deliver reporting solutions
- Translate ambiguous business needs into structured data models and dashboards
- Integrate with third-party APIs and other external data sources
Data Quality & Governance
- Implement robust data validation, monitoring, and QA processes
- Ensure consistency, accuracy, and reliability of data across the platform
- Troubleshoot and resolve data discrepancies proactively
Reporting & Analytics Enablement
- Build datasets and pipelines that power dashboards and reporting tools
- Support internal teams with ad hoc analysis and data requests
- Partner with product and engineering teams to embed data into the SaaS product experience
Performance & Scalability
- Optimize queries, pipelines, and storage for performance and cost efficiency
- Continuously improve system scalability as data volume and complexity grow
Required Qualifications
- 3–6+ years of experience in Data Engineering or a related role
- Strong proficiency in Python for data processing and scripting
- Advanced experience with PostgreSQL (query optimization, schema design)
- Hands-on experience with AWS data architecture (S3, RDS, Lambda, Glue, Redshift, etc.)
- Experience integrating with external APIs
- Solid understanding of ETL/ELT pipelines, data modeling, and warehousing concepts
- Experience working cross-functionally with business stakeholders
Preferred Qualifications
- Experience in AdTech, eCommerce, or SaaS platforms
- Familiarity with BI tools (e.g., Looker, Tableau, Power BI)
- Experience with workflow orchestration tools (e.g., Airflow)
- Understanding of data governance and compliance best practices
- Exposure to real-time or streaming data pipelines
What We’re Looking For
- Strong problem-solver who can operate in a fast-paced, ambiguous environment
- Ability to balance technical depth with business context
- Excellent communication skills — able to work directly with non-technical stakeholders
- Ownership mindset with a focus on execution and quality
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function-calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.NET, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
Description
Company is a fast-growing firm founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google Cloud and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced, innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work across a large spectrum of technology and tools, including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
- You will work to develop delivery estimates and an estimated project plan.
- You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track, and you should be able to unblock the team when things get stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Solid Kubernetes fundamentals. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab, etc
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communication skills
We are a fully remote company and offer competitive compensation and benefits.
We are looking for a highly skilled Full Stack Developer (MERN Stack) with 3–5 years of experience to join our growing team. You will have the opportunity to work on cutting-edge technology solutions, build products from scratch, and contribute to scalable systems handling large volumes of data.
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications
- Build responsive and high-performance user interfaces using modern frontend frameworks
- Develop robust backend services and APIs
- Ensure seamless system performance while handling large-scale data without slowdowns
- Collaborate with cross-functional teams (product, design, QA) to meet business goals
- Optimize applications for maximum speed, scalability, and reliability
- Participate in architecture discussions and contribute to technical decisions
Required Skills & Qualifications:
Frontend
- Strong experience in React.js
- Hands-on experience with Next.js (mandatory)
- Good understanding of UI/UX principles and responsive design
Backend
- Solid experience in Node.js
- Experience with Python or Java is a plus
- Strong knowledge of RESTful APIs and microservices architecture
Databases
- Strong experience with SQL (mandatory)
- Experience with MongoDB is a plus
Caching & Messaging
- Experience with at least one: Redis, Kafka, or Cassandra
Other Requirements
- Strong problem-solving and analytical skills
- Ability to work in a fast-paced, collaborative environment
- Good communication and stakeholder management skills
Good to Have:
- Cloud certifications (AWS / Azure / GCP)
- Experience working on high-scale or distributed systems
- Exposure to DevOps practices and CI/CD pipelines
Why Join Us:
- Opportunity to work on cutting-edge tech and greenfield projects
- Ownership and freedom to build solutions from scratch
- Collaborative and growth-focused work environment
About the Role
Pendo is looking for a Software Engineer to help build and scale the platform that powers our integrations with enterprise systems such as Salesforce, Slack, Segment, and other partner tools. This team develops the services, APIs, data pipelines, and user interfaces that enable customers to seamlessly connect Pendo into their product and data ecosystems.
In this role, you will primarily focus on building scalable backend systems while also contributing to the frontend experiences that allow customers to configure, manage, and monitor integrations. You’ll collaborate closely with product managers, designers, and infrastructure teams to deliver reliable, high-performance capabilities used by millions of users.
What You'll Do
- Design and build scalable backend services and APIs that power Pendo’s integrations platform.
- Develop and maintain distributed, event-driven data pipelines that process and sync high volumes of behavioral and product analytics data.
- Contribute to frontend applications that allow customers to configure, manage, and monitor integrations and data workflows.
- Lead technical initiatives from design through implementation, testing, and production rollout.
- Integrate with third-party APIs and enterprise platforms using technologies such as REST, webhooks, and OAuth.
- Collaborate with product, design, infrastructure, and partner teams to translate business needs into high-quality technical solutions.
- Use modern development workflows and AI-powered tools to improve developer productivity and streamline engineering processes.
- Participate in design reviews and promote best practices in testing, observability, performance, and system reliability.
- Contribute to improving platform scalability, availability, and operational excellence.
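Integrating with third-party webhooks, as described above, typically means verifying a signature on every inbound payload. A common pattern (details vary by provider; the header name and secret here are illustrative) is an HMAC over the raw body, compared in constant time:

```python
import hashlib
import hmac

# Shared secret agreed with the webhook sender (illustrative value).
SECRET = b"shared-webhook-secret"

def sign(body: bytes) -> str:
    """What the sender would put in, e.g., an X-Signature header."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature_header: str) -> bool:
    """Recompute the digest and compare in constant time to avoid timing leaks."""
    expected = sign(body)
    return hmac.compare_digest(expected, signature_header)

body = b'{"event": "user.created", "id": 42}'
assert verify(body, sign(body))                       # authentic payload accepted
assert not verify(b'{"event": "tampered"}', sign(body))  # altered body rejected
```

Note the digest is computed over the raw request bytes, not re-serialized JSON, since any re-encoding difference would break verification.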
What We're Looking For
- Experience building backend services, APIs, or distributed systems.
- Experience developing modern web applications using frameworks such as Vue, React, or Angular.
- Strong proficiency in at least one backend language such as Go, Java, Python, or C++.
- Experience working with cloud infrastructure such as AWS or GCP.
- Familiarity with distributed systems, event-driven architectures, or high-throughput data pipelines.
- Experience writing and maintaining unit, integration, and end-to-end tests.
- Strong collaboration and communication skills.
Nice to Have
- Experience building integration platforms or working with third-party APIs.
- Familiarity with authentication models such as OAuth and enterprise SaaS integrations.
- Experience working with analytics or behavioral event data.
- Experience leveraging AI-assisted development tools or working with modern AI workflows.
Technologies We Use
- Frontend: Vue, Vuex, React, Angular, Highcharts, Jest, Cypress
- Backend: Go, Java, Python, C++
- Cloud & Data: AWS, GCP, Redis, Pub/Sub, SQL/NoSQL
- AI / ML: GenAI, LLMs, LangChain, MLOps
Core Technical Skills
- Strong in Core Java, Java 8, and OOP
- Hands-on experience with Spring Boot, Spring MVC, Spring Data JPA
- Experience in Microservices Architecture & REST API development
- Good knowledge of SQL databases (MySQL / SQL Server / PostgreSQL)
- Experience with AWS services (Lambda, S3, DynamoDB, EC2)
- Familiarity with Kafka / Event-driven architecture (good to have)
- Knowledge of Spring Security, JWT, OAuth 2.0
- Experience with Docker, Jenkins, Git
🔧 Development Responsibilities
- Design and develop scalable REST APIs and microservices
- Work on backend systems handling large data and real-time processing
- Optimize database queries, performance, and indexing
- Collaborate with cross-functional teams for end-to-end delivery
🛠️ Support (L2) Responsibilities
- Handle production issues, bug fixing, and root cause analysis
- Provide post-deployment and migration support
- Troubleshoot performance issues, DB deadlocks, and API failures
- Work on incident resolution and system stability improvements
- Coordinate with L1/support teams (if applicable)
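Handling transient API failures, as in the L2 duties above, commonly means retrying with exponential backoff before escalating an incident. A minimal sketch (the flaky call below is a stand-in for a real upstream dependency):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Retry fn on exception, doubling the delay each attempt; re-raise at the end."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted retries: surface the failure for incident handling
            time.sleep(base_delay * (2 ** attempt))

# Simulated upstream that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return "ok"

result = with_retries(flaky)
```

In production this would be paired with jitter and a retry budget so callers do not synchronize their retries against an already struggling service.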
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge of systems management, systems monitoring, observability platforms, and performance management software, along with insight into integrated infrastructure platforms such as Cisco UCS, infrastructure providers such as Nutanix, VMware, EMC, and NetApp, and public cloud platforms such as Google Cloud and AWS, to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive back-end development experience in a client-server application environment, focused on systems management, systems monitoring, and performance management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep experience developing systems, network, and performance management software and/or solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 4+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
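The troubleshooting and automation duties above often come down to handling transient infrastructure failures gracefully. A minimal sketch of a retry-with-exponential-backoff helper follows; the function name, defaults, and the simulated flaky call are all illustrative, not from the posting:

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.01, exceptions=(Exception,)):
    """Call fn(), retrying on failure with exponential backoff between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Simulate a flaky infrastructure call that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky)
print(result)  # ok
```

In production the same pattern usually adds jitter and a bounded maximum delay, so concurrent retries do not synchronize against the recovering service.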
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
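A concrete flavour of the transformation logic and idempotent pipeline work described above is an upsert (merge) load step. The sketch below uses SQLite as a stand-in for a cloud warehouse like BigQuery or Snowflake; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database standing in for the warehouse target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")

def upsert(rows):
    # ON CONFLICT makes the load idempotent: re-running the same batch
    # updates existing keys instead of failing or duplicating rows.
    conn.executemany(
        """INSERT INTO dim_customer (id, name, updated_at) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET name = excluded.name,
                                         updated_at = excluded.updated_at""",
        rows,
    )

upsert([(1, "Asha", "2024-01-01"), (2, "Ravi", "2024-01-01")])
upsert([(1, "Asha K", "2024-02-01")])  # re-run with a changed record for id 1
rows = conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall()
print(rows)  # [(1, 'Asha K'), (2, 'Ravi')]
```

Warehouse dialects express the same idea with `MERGE` statements, and dbt incremental models generate this kind of SQL for you.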
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We are currently serving customers across India, Bhutan, and the Middle East, with expansion planned in US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming with frameworks like Django, FastAPI, or Flask, and Java frameworks like Spring, Hibernate, and Spring Boot
● Ability to debug and resolve technical issues arising during development or after deployment
● Experience in databases including MySQL and NoSQL
● Experience in designing, developing and maintaining high availability systems.
● Experience with the MVC pattern, Tomcat, Git, and Jira.
● Experience working with AWS cloud platform.
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Strong analytical and data driven approach to problem solving
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
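As a rough illustration of mapping clinical resources to analytical models, the sketch below flattens a simplified FHIR R4 Patient resource into one analytic row. The field paths (`name`, `gender`, `birthDate`) follow the public FHIR specification; the helper name and output columns are assumptions for illustration:

```python
def flatten_patient(resource: dict) -> dict:
    """Map a (simplified) FHIR R4 Patient resource to a flat analytics row."""
    # FHIR allows multiple names per patient; take the first entry defensively.
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "gender": resource.get("gender"),
        "birth_date": resource.get("birthDate"),
    }

patient = {
    "resourceType": "Patient",
    "id": "pt-001",
    "name": [{"family": "Sharma", "given": ["Anita", "K"]}],
    "gender": "female",
    "birthDate": "1987-04-12",
}
row = flatten_patient(patient)
print(row["family_name"], row["birth_date"])  # Sharma 1987-04-12
```

Real ingestion frameworks add validation against FHIR profiles and handle repeated elements (identifiers, addresses) explicitly, but the staging-layer shape is the same.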
The Stack You’ll Command
- Languages: Expert-level SQL (CTEs, window functions, tuning) and production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
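The set-based SQL style referenced above (CTEs plus window functions) might look like the following latest-record-per-patient query. SQLite is used here for a self-contained sketch (window functions require SQLite ≥ 3.25, bundled with recent Python builds); the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lab_results (patient_id TEXT, taken_at TEXT, value REAL);
INSERT INTO lab_results VALUES
  ('p1', '2024-01-01', 5.0),
  ('p1', '2024-03-01', 6.2),
  ('p2', '2024-02-15', 4.1);
""")

# CTE ranks each patient's results newest-first; the outer query keeps rank 1.
latest = conn.execute("""
WITH ranked AS (
  SELECT patient_id, taken_at, value,
         ROW_NUMBER() OVER (PARTITION BY patient_id ORDER BY taken_at DESC) AS rn
  FROM lab_results
)
SELECT patient_id, value FROM ranked WHERE rn = 1 ORDER BY patient_id
""").fetchall()
print(latest)  # [('p1', 6.2), ('p2', 4.1)]
```

The same pattern (a ranking CTE filtered to `rn = 1`) is a common building block for deduplication and point-in-time snapshots in warehouse SQL.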
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To move forward to the next stage, please fill out the Google form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
Who are we aka "About Us":
We are an early-stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker Street Fintech Pvt Ltd (Parent Company) might be the place for you. We have a flat, ownership-oriented culture, and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding edge team.
As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries.
What are we looking for a.k.a “The JD” :
We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.
Responsibilities:
- Proficient in writing complex SQL Queries
- Utilize Python for data manipulation, analysis, and visualisation, using libraries such as pandas, matplotlib, and psycopg.
- Perform database optimization, indexing, and query tuning to ensure high performance.
- Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
- Design, configure, and maintain PostgreSQL databases
- Set up and manage database clusters, replication, and backups for disaster recovery
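Index-driven query tuning, as mentioned above, can be demonstrated even with SQLite's `EXPLAIN QUERY PLAN` (a stand-in here for PostgreSQL's `EXPLAIN`/`ANALYZE`); the schema and names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, client TEXT, amount REAL)")

def plan(sql):
    # Each plan row's last column is a human-readable description of the step.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM trades WHERE client = 'acme'"
before = plan(query)  # without an index: a full table scan
conn.execute("CREATE INDEX idx_trades_client ON trades (client)")
after = plan(query)   # with the index: an index search on client
print(before)
print(after)
```

The habit of reading the plan before and after adding an index carries over directly to PostgreSQL, where `EXPLAIN (ANALYZE, BUFFERS)` also reports actual row counts and timing.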
Preferred Qualifications:
- Intermediate-level Excel skills for data analysis and reporting.
- Strong communication skills to present findings effectively and recommendations to both technical and non-technical stakeholders.
- Detail-oriented mindset with a commitment to data accuracy and quality.
*(Only Applicants who have finished their educational commitments are requested to apply)
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- Has worked with, or is looking to work with, an early-stage startup (preferably 0-1.5 years of experience).
- You are ready to be a part of a Zero To One Journey which implies that you shall be involved in building fintech products and process from the ground up.
- You are comfortable working in an unstructured environment with a small team, where you decide what your day looks like, take the initiative to pick up the right piece of work, own it, and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to, else we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- You want complete ownership for your role & be able to drive it the way you think is right.
- You can be a self-starter and take ownership of deliverables to develop a consensus with the team on approach and methods and deliver to them.
- Are looking to stick around for the long term and grow with the company.
Job Description & Specification:
Post Title: Data Engineer
Work Mode: Onsite (Kochi), UK time zone
Role Overview:
We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will have expertise in technologies such as Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics and data-driven decision-making processes.
Responsibilities:
- Design, develop, and implement scalable data pipelines and ETL processes using tools such as Stitch and dbt to ingest, transform, and load data from various sources into our data warehouse (Snowflake).
- Implement data modeling best practices and standards using dbt to create and manage data models for reporting and analytics.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Develop and maintain dashboards and visualizations in Metabase to enable self-service analytics and data exploration for internal teams.
- Build and optimize ETL processes to ensure data quality and integrity.
- Optimize data processing and storage solutions for performance, scalability, and reliability, leveraging cloud-based technologies.
- Implement monitoring and alerting systems to proactively identify and address data issues.
- Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and integrity of data.
- Manage and optimize databases (such as MongoDB) for performance and scalability.
- Develop and maintain documentation, best practices, and standards for data engineering processes and workflows.
- Stay up to date with emerging technologies and trends in data engineering, machine learning, and analytics, and evaluate their potential impact on data strategy and architecture.
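The data quality checks mentioned above can be sketched as a tiny rule-based validator; the rule set, column names, and sample rows are illustrative assumptions, not part of the posting:

```python
def run_quality_checks(rows, key, required):
    """Return a list of human-readable issues: duplicate keys and missing values."""
    issues = []
    keys = [r.get(key) for r in rows]
    if len(keys) != len(set(keys)):  # uniqueness check on the key column
        issues.append(f"duplicate values in key column '{key}'")
    for col in required:             # completeness check per required column
        missing = sum(1 for r in rows if r.get(col) in (None, ""))
        if missing:
            issues.append(f"{missing} row(s) missing required column '{col}'")
    return issues

rows = [
    {"order_id": 1, "email": "a@x.com"},
    {"order_id": 1, "email": ""},  # duplicate key AND missing email
]
problems = run_quality_checks(rows, key="order_id", required=["email"])
print(problems)
```

In practice the same checks are usually declared as tests in dbt (`unique`, `not_null`) or a framework like Great Expectations, and wired into pipeline alerting.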
Requirements:
- Bachelor's or Master's degree in Computer Science.
- Minimum of 4 years of experience working as a data engineer, with expertise in Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB.
- Strong programming skills in languages like Python, and experience with SQL and database technologies (e.g., PostgreSQL, MySQL, MongoDB).
- Hands-on experience with data integration tools (e.g., Stitch), data modeling tools (e.g., dbt), and BI platforms (e.g., Metabase).
- Experience with cloud platforms such as AWS.
- Strong understanding of data modeling concepts, database design, and data warehousing principles
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka) and cloud-based data platforms (e.g., AWS EMR, Azure Databricks, Google BigQuery).
- Familiarity with data integration tools, ETL processes, and workflow orchestration tools (e.g., Apache Airflow, Apache NiFi).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills with the ability to work effectively in a global team environment.
- Experience in the education or Edtech industry is a plus.
- Knowledge of Avo for schema management and versioning will be an added advantage.
- Familiarity with machine learning algorithms, data science workflows, and analytics tools (e.g., TensorFlow, PyTorch, scikit-learn, Tableau).
- Knowledge of distributed computing concepts and containerization technologies.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Certifications in cloud computing (e.g., AWS Certified Developer, Google Cloud Professional Data Engineer) or data engineering (e.g., Databricks Certified Associate Developer) are desirable.
Benefits:
- Competitive salary and bonus structure based on performance and achievement of goals.
- Comprehensive benefits package including medical insurance.
Join us in shaping the future of technology by applying your expertise as a Data Engineer. If you are passionate about driving innovation and delivering impactful solutions, we invite you to be part of our dynamic team. Apply now!!
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend )
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with full-time work experience (1 year to 2 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Advanced Software Architect
Position Responsibilities :
- Lead the architecture and development of AI-powered, distributed systems that meet enterprise-grade performance and security standards.
- Leverage AI tools for code generation, architectural design, and documentation to accelerate delivery and improve quality.
- Design, build, and maintain services using Python, Java, and Node.js, following clean-code and secure design principles.
- Develop agentic AI-based tools, domain-specific copilots, and developer productivity enhancements.
- Collaborate with cross-functional teams to define modular, scalable, and compliant architecture patterns.
- Conduct technical design reviews and produce detailed documentation, including system specifications, API docs, and architecture diagrams.
- Integrate AI solutions into CI/CD pipelines, ensuring observability, automated testing, and deployment standards are met.
- Implement robust monitoring and performance engineering practices to maintain high-quality deployments.
- Continuously evaluate emerging AI technologies and integrate them into development workflows for maximum impact.
- Champion best practices in security, automation, and performance optimization across the organization.
Qualifications :
- 8+ years in software engineering with full-stack or backend development in Python, Java, and/or Node.js.
- 3+ years with AI tools for development, prototyping, or documentation tasks.
- Experience with cloud-native development and containerized deployment (Docker, Kubernetes).
- Knowledge of AI integration patterns, vector stores, prompt engineering, and RAG pipelines.
- Ability to design software architecture using sequence diagrams, ERDs, data models, and threat models.
- Comfortable with Gen AI-first environments and working with remote Agile teams.
Preferred Qualifications
- Experience building AI copilots or developer tools using OpenAI/Claude SDKs, LangChain, or similar frameworks.
- Experience working in a fast-paced, AIDLC environment, with a strong understanding of CI/CD practices.
- Familiarity with GitHub Actions, Argo Workflows, Terraform, and monitoring/observability tools.
- Containerization and Orchestration: Proficiency in Docker and Kubernetes for containerization and orchestration.
- Cloud Platforms: Experience with cloud computing platforms such as AWS, Azure, or OCI Cloud.
We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.
This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.
Responsibilities
- Design, develop, and implement machine learning models and algorithms to solve complex business problems.
- Collaborate with data scientists to transition models from research and development into production-ready systems.
- Build and maintain scalable data pipelines for ML model training and inference using Databricks.
- Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
- Deploy and manage ML models in production environments on Azure, leveraging services such as:
- Azure Machine Learning
- Azure Kubernetes Service (AKS)
- Azure Functions
- Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
- Ensure the reliability, performance, and scalability of ML systems in production.
- Monitor model performance, detect model drift, and implement retraining strategies.
- Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
- Document model architecture, data flows, and operational procedures.
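The drift-monitoring step above can be illustrated with a simple heuristic: flag drift when the live feature mean moves more than a few training standard deviations away from the training mean. This is a generic z-score-style check, not any specific product's API; the threshold and sample values are illustrative:

```python
import statistics

def detect_drift(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean shifts > threshold training stdevs from the training mean."""
    mu = statistics.fmean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.fmean(live_values) - mu) / sigma
    return shift > threshold

train = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]   # feature values seen at training time
stable = [10.0, 10.3, 9.9]                   # live window, similar distribution
shifted = [14.0, 14.2, 13.8]                 # live window, clearly drifted

print(detect_drift(train, stable), detect_drift(train, shifted))  # False True
```

Production monitors typically use distribution-level tests (population stability index, Kolmogorov-Smirnov) per feature and route alerts into the retraining pipeline, but the trigger logic has this shape.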
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.
Experience
- Minimum 3+ years of professional experience as an ML Engineer or in a similar role.
Required Skills
- Strong proficiency in Python for data manipulation, machine learning, and scripting.
- Hands-on experience with machine learning frameworks, such as:
- Scikit-learn
- TensorFlow
- PyTorch
- Keras
- Demonstrated experience with MLflow for:
- Experiment tracking
- Model management
- Model deployment
- Proven experience working with Microsoft Azure cloud services, specifically:
- Azure Machine Learning
- Azure Databricks
- Related compute and storage services
- Solid experience with Databricks for:
- Data processing
- ETL pipelines
- ML model development
- Strong understanding of MLOps principles and practices, including:
- CI/CD for ML
- Model versioning
- Model monitoring
- Model retraining
- Experience with containerization and orchestration technologies, including:
- Docker
- Kubernetes (especially AKS)
- Familiarity with SQL and data warehousing concepts.
- Experience working with large datasets and distributed computing frameworks.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Nice-to-Have Skills
- Experience with other cloud platforms (AWS or GCP).
- Knowledge of big data technologies such as Apache Spark.
- Experience with Azure DevOps for CI/CD pipelines.
- Familiarity with real-time inference patterns and streaming data.
- Understanding of Responsible AI principles, including fairness, explainability, and privacy.
Certifications (Preferred)
- Microsoft Certified: Azure AI Engineer Associate
- Databricks Certified Machine Learning Associate (or higher)
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are seeking a highly skilled QA Automation Engineer with strong expertise in Java and Selenium to join our growing engineering team. The ideal candidate will play a key role in designing, developing, and maintaining scalable test automation frameworks while ensuring high product quality across releases.
Roles and Responsibilities:
● Design, develop, and maintain robust automation frameworks using Java and Selenium
● Build automated test scripts for web applications and integrate them into CI/CD pipelines
● Collaborate closely with developers, product managers, and business analysts to understand requirements and define effective test strategies
● Participate in sprint planning, requirement reviews, and technical discussions
● Perform root cause analysis for defects and work with engineering teams for resolution
● Improve automation coverage and reduce manual regression effort
● Ensure test environments, test data, and execution reports are maintained and documented
● Mentor junior QA engineers and promote best practices in automation
● Develop, execute, and maintain comprehensive test plans and test cases for manual and automated testing
● Perform functional, regression, performance, and security testing to ensure software quality
● Design and develop automated test scripts using tools such as Selenium, Appium, or similar frameworks
● Identify, document, and track software defects, working closely with development teams for resolution
● Ensure test coverage by working closely with developers, product managers, and other stakeholders
● Establish and maintain continuous integration (CI) and continuous deployment (CD) pipelines for test automation
● Conduct API testing using tools like Postman or RestAssured
● Collaborate with cross-functional teams to enhance the overall quality of the product
● Stay up to date with the latest industry trends and best practices in QA methodologies and automation frameworks
Requirements:
● 5 to 7 years of experience in QA automation
● Strong hands-on experience with Java and Selenium WebDriver
● Experience in building or enhancing automation frameworks from scratch
● Good understanding of TestNG or JUnit
● Experience with Maven or Gradle
● Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or similar
● Strong understanding of Agile Scrum methodology
● Experience with API testing tools such as Rest Assured or Postman is a plus
● Knowledge of version control systems like Git
● Strong analytical and problem-solving skills
● Strong understanding of software testing life cycle (STLC) and defect lifecycle management
● Relevant certifications in software testing (e.g., ISTQB) are desirable but not required
● Solid understanding of software testing principles, methodologies, and techniques
● Strong attention to detail and a commitment to delivering high-quality software
● Good communication and collaboration skills, with the ability to work effectively in a team environment
Good to Have:
● Experience with performance testing tools
● Exposure to cloud platforms such as AWS or Azure
● Knowledge of containerization tools like Docker
● Experience in BDD frameworks such as Cucumber.
Why Join Us?
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications are a strong advantage
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
🚨 We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you'd expect from Google, Microsoft, or Meta, except you get to build it from day 0 and scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Own and evolve the technical backbone of an AI-first enterprise platform.
You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.
🧩 What You’ll Do
- Architect large-scale distributed systems powering AI-driven workflows
- Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
- Redesign legacy systems into scalable, modular, AI-native architectures
- Drive system design excellence across teams (APIs, infra, observability, reliability)
- Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
- Mentor senior engineers and influence engineering culture/org standards
- Partner with product, data, and leadership on long-term technical strategy
🧠 What We’re Looking For
- Proven track record building high-scale backend or platform systems
- Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
- Strong exposure to data systems, infrastructure, and real-time architectures
- Experience or strong interest in LLMs, GenAI, or AI system design
- Exceptional system design, abstraction, and problem-solving ability
- High ownership mindset — you think in terms of systems, not tickets
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
- Ability to solve hard system problems (latency, scale, reliability)
- Comfort driving cross-team technical decisions, standards, and org-wide architecture
- Expertise in system design, scalability, and performance optimization
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Job Details
- Job Title: Director of Engineering
- Industry: SaaS
- Function: Information Technology
- Experience Required: 9-14 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, AWS, NodeJS, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes
Criteria
Candidates must have 9+ years of engineering experience, with 3–4 years in technical leadership
Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
Ability to design scalable architectures for high-performance systems.
Should have AI/ML deployment experience
Strong 3D graphics/WebGL/Three.js knowledge.
Candidates should be from SaaS/software/IT-services startups or scale-up companies only
Job Description
The Role:
Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.
This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.
What You’ll Own:
1. Technical Leadership & Architecture
● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.
● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
● Make decisions on stack, scalability patterns, architecture, and technical debt.
● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
● Lead architectural discussions, design reviews, and set engineering standards.
2. Hands-On Development
● Write production-grade code across frontend, backend, APIs, and cloud infra.
● Build critical features and core system components independently.
● Debug complex systems and optimize performance end-to-end.
● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
● Build scalable backend services for large-scale asset processing and real-time pipelines.
● Develop WebGL/Three.js rendering and AR workflows.
3. Team Building & Engineering Management
● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
● Establish engineering culture, values, and best practices.
● Build career frameworks, performance systems, and growth plans.
● Conduct 1:1s, mentor engineers, and drive continuous improvement.
● Set up processes for agile execution, deployments, and incident response.
4. Product & Cross-Functional Collaboration
● Work with the founder and product team on roadmap, feasibility, and prioritization.
● Translate product requirements into technical execution plans.
● Collaborate with design for UX quality and technical alignment.
● Support sales and customer success with integrations and technical discussions.
● Contribute technical inputs to product strategy and customer-facing initiatives.
5. Engineering Operations & Infrastructure
● Own CI/CD, testing frameworks, deployments, and automation.
● Create monitoring, logging, and alerting setups for reliability.
● Manage AWS infrastructure with a focus on cost and performance.
● Build internal tools, documentation, and developer workflows.
● Ensure enterprise-grade security, compliance, and reliability.
Tech Stack:
1. Frontend
React.js, Next.js, TypeScript, WebGL, Three.js
2. Backend
Node.js, Python, Express/FastAPI, REST, GraphQL
3. AI/ML
PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines
4. 3D & Graphics
Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization
5. Databases
PostgreSQL, MongoDB, Redis, vector databases
6. Cloud & Infra
AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes
CI/CD: GitHub Actions
Monitoring: Datadog, Sentry
What We’re Looking For:
1. Must-Haves
● 9+ years of engineering experience, with 3–4 years in technical leadership.
● Deep full-stack experience with strong system design fundamentals.
● Proven success building products from 0→1 in fast-paced environments.
● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
● Ability to design scalable architectures for high-performance systems.
● Strong people leadership with experience hiring and mentoring teams.
● Ready to code, review, design, and lead from the front.
● Startup mindset: fast execution, problem-solving, ownership.
2. Highly Desirable
● AI/ML deployment experience (CV, generative AI, 3D reconstruction).
● Strong 3D graphics/WebGL/Three.js knowledge.
● Experience with real-time systems, rendering optimizations, or large-scale pipelines.
● Background in B2B SaaS, XR, gaming, or immersive tech.
● Experience scaling engineering teams from 5 → 20+.
● Open-source contributions or technical content creation.
● Experience working closely with founders or executive leadership.
Why Company:
● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
● Build from day zero – architecture, team, and culture.
● Path to CTO as the company scales.
● High autonomy to drive technical decisions.
● Direct founder collaboration on product vision.
● High ownership, high-growth environment.
● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.
Location & Work Culture:
● Location: HSR Layout, Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: High-intensity, high-integrity, engineering-first
● Team: Young, ambitious, technically strong
Job Details
- Job Title: Full Stack Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-7 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, NodeJS, MongoDB, RESTful APIs, React.js
Criteria
Candidates should have at least 4 years of professional experience as a Full Stack Engineer
Hands-on experience with both React.js and Node.js
Solid understanding of MongoDB
Should have experience in RESTful APIs
Should be from a startup or scale-up company
Should have good experience in Typescript
Strong understanding of asynchronous programming patterns
Preferred candidates from SaaS/software/IT-services startups or scale-up companies
Job Description
The Role:
We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.
You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.
What You’ll Own
1. Full Stack Development
● Design, develop, test, and deploy robust and scalable web applications.
● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.
● Contribute to frontend feature development and integration.
● Participate in feature planning, estimation, and execution.
2. Backend & API Engineering
● Design and develop RESTful APIs and backend services.
● Implement asynchronous workflows and scalable microservice architectures.
● Ensure performance, reliability, and security of backend systems.
● Implement authentication, authorization, and data protection best practices.
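The signed-token pattern behind much of this authentication work can be sketched briefly. The example below is a deliberately simplified, hypothetical JWT-style token using only Node's built-in `crypto` module; it is not a description of any specific product's auth stack, and a production system should use a vetted library and load its secret from a secrets manager rather than a constant.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical secret for illustration only; never hardcode secrets in real code.
const SECRET = "demo-secret";

// Sign a payload as "<base64url(payload)>.<hmac>" -- a simplified JWT-like shape.
function sign(payload: object, secret: string = SECRET): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const mac = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${mac}`;
}

// Verify the signature and return the payload, or null if the token was tampered with.
function verify(token: string, secret: string = SECRET): object | null {
  const [body, mac] = token.split(".");
  if (!body || !mac) return null;
  const expected = createHmac("sha256", secret).update(body).digest("base64url");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // Length check first: timingSafeEqual throws on unequal lengths.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

Using `timingSafeEqual` instead of `===` for the signature comparison is the kind of data-protection detail this role is expected to get right by default.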
3. Database Design & Optimization
● Design and manage MongoDB schemas using Mongoose.
● Optimize queries and database performance for scale.
● Ensure data integrity and efficient data access patterns.
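One access-pattern optimization the bullets above allude to is cursor-based pagination: instead of `skip`/`limit` (which scans and discards documents), each page is fetched by the last seen `_id`, which MongoDB can serve from an index. The sketch below is illustrative only, modeled over an in-memory array rather than a live collection; the names are our own.

```typescript
// Mirrors the Mongo query pattern:
//   find({ _id: { $gt: cursor } }).sort({ _id: 1 }).limit(n)
interface Doc { _id: number; name: string }

function page(docs: Doc[], cursor: number | null, limit: number): { items: Doc[]; next: number | null } {
  const sorted = [...docs].sort((a, b) => a._id - b._id);
  const items = sorted.filter(d => cursor === null || d._id > cursor).slice(0, limit);
  // A short page signals the end of the collection.
  const next = items.length === limit ? items[items.length - 1]._id : null;
  return { items, next };
}
```

The cost of each page stays constant as the caller scrolls deeper, which is exactly the property `skip`-based pagination loses at scale.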
4. Frontend Collaboration & Integration
● Collaborate with frontend developers to integrate React components and APIs seamlessly.
● Ensure responsive, high-performing application behavior.
5. System Design & Scalability
● Contribute to system architecture and technical design discussions.
● Design scalable, maintainable, and future-ready solutions.
● Optimize applications for speed and scalability.
6. Product & Cross-Functional Collaboration
● Work closely with product and design teams to deliver high-quality features in rapid iterations.
● Participate in the full development lifecycle—from concept to deployment and maintenance.
7. Code Quality & Best Practices
● Write clean, testable, and maintainable code.
● Follow Git-based version control and code review best practices.
● Contribute to improving engineering standards and workflows.
What We’re Looking For
Must-Haves
● 4+ years of professional experience as a Full Stack Engineer or similar role.
● Strong proficiency in JavaScript and TypeScript.
● Hands-on experience with Node.js and Express.js.
● Solid understanding of MongoDB and Mongoose.
● Experience building and consuming RESTful APIs and microservices.
● Strong understanding of asynchronous programming patterns.
● Good grasp of system design principles and application architecture.
● Experience with Git and version control best practices.
● Bachelor’s degree in Computer Science, Engineering, or a related field.
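As a concrete instance of the asynchronous programming patterns listed above, here is a minimal concurrency limiter: fan out async work (API or DB calls) without overwhelming a downstream service. The name `mapLimit` and the implementation are our own sketch, not taken from any particular library.

```typescript
// Run fn over items with at most `limit` tasks in flight at once,
// preserving the order of results.
async function mapLimit<T, R>(items: T[], limit: number, fn: (x: T) => Promise<R>): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: single-threaded, no await between read and increment
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

A bare `Promise.all(items.map(fn))` starts everything at once; the worker-pool shape above is what interviewers usually mean by "understands async patterns beyond await-in-a-loop".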
Good-to-Have / Preferred
● Frontend development experience with React.js.
● Exposure to Three.js or similar 3D/visualization libraries.
● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).
● Knowledge of Docker and containerization workflows.
● Experience with testing frameworks (Jest, Mocha, etc.).
● Familiarity with CI/CD pipelines and automated deployments.
Tools You’ll Use
● Backend: Node.js, Express.js, TypeScript
● Frontend: React.js (preferred)
● Database: MongoDB, Mongoose
● Version Control: Git, GitHub / GitLab
● Cloud & DevOps: AWS / GCP / Azure, Docker
● Collaboration: Google Workspace, Notion, Slack
Key Metrics You’ll Own
● Code quality, performance, and scalability
● Timely delivery of features and releases
● System reliability and reduction in production issues
● Contribution to architectural improvements
Why company
● Work on impactful, product-driven tech platforms.
● High-ownership role with end-to-end engineering exposure.
● Opportunity to work with modern technologies and evolving architectures.
● Collaborative startup culture with strong learning and growth opportunities.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SaaS/software/IT-services startups or scale-up companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
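The RBAC item above can be made concrete with a tiny sketch. The roles and permission strings below are hypothetical placeholders; a real system would persist grants and likely layer in resource-level checks.

```typescript
// A request is allowed if any of the caller's roles grants the required permission.
type Role = "viewer" | "editor" | "admin";

const grants: Record<Role, Set<string>> = {
  viewer: new Set(["asset:read"]),
  editor: new Set(["asset:read", "asset:write"]),
  admin: new Set(["asset:read", "asset:write", "billing:manage"]),
};

function can(roles: Role[], permission: string): boolean {
  return roles.some(r => grants[r].has(permission));
}
```

Keeping the check to a single pure function makes it easy to enforce consistently across API routes and to unit-test exhaustively.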
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
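The rate-limiting bullet typically resolves to a token bucket: each client gets `capacity` tokens, refilled at a steady rate, and a request is rejected when the bucket is empty. This is an illustrative in-process sketch (a distributed deployment would back it with Redis or similar); the clock is injected so behavior is deterministic and testable.

```typescript
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private ratePerSec: number,
    private now: () => number = Date.now, // injectable clock, ms
  ) {
    this.tokens = capacity;
    this.last = now();
  }

  // Returns true if the request may proceed, consuming one token.
  allow(): boolean {
    const t = this.now();
    this.tokens = Math.min(this.capacity, this.tokens + ((t - this.last) / 1000) * this.ratePerSec);
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The same shape extends naturally to backpressure: instead of returning false, a caller can await until the bucket refills.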
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
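The A/B-testing bullet usually starts with deterministic bucketing: hash the user and experiment together so the same user always sees the same variant, with no assignment table to store. A minimal sketch using Node's built-in `crypto` (the function names are our own):

```typescript
import { createHash } from "node:crypto";

// Map (userId, experiment) deterministically into [0, 100).
function bucket(userId: string, experiment: string): number {
  const h = createHash("sha256").update(`${experiment}:${userId}`).digest();
  return h.readUInt32BE(0) % 100;
}

// Users below the treatment threshold get the treatment; the rest are control.
function variant(userId: string, experiment: string, treatmentPercent: number): "treatment" | "control" {
  return bucket(userId, experiment) < treatmentPercent ? "treatment" : "control";
}
```

Salting the hash with the experiment name keeps assignments independent across experiments, so users in one treatment group are not systematically over-represented in another.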
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
StarApps Studio is a product-driven SaaS company building Shopify apps that power thousands of online stores. We’ve developed 6 highly-rated Shopify apps (averaging 4.9★) used by 30,000+ Shopify merchants worldwide, including over 1,000 Shopify Plus stores. In just a few years, our bootstrapped team grew from $5.5M to $10M in Annual Recurring Revenue (ARR) by obsessing over quality and merchant success. We’re a tight-knit, 20-person team based in Baner, Pune, on a mission to help e-commerce brands create world-class shopping experiences.
Role Overview
We are looking for a Full Stack Developer who will own features end-to-end with an emphasis on backend excellence. In this role, you will design and optimize complex data models and API architectures in Ruby on Rails, implement robust background job queues (e.g. delayed_job) for heavy workloads, and perform rigorous performance tuning to ensure our systems scale. On the frontend, you'll build and integrate React components to deliver complete, user-friendly features. This is a role for someone who loves tackling deep technical challenges in the backend while also crafting intuitive user interfaces – an opportunity to leverage your backend expertise while driving full-stack product ownership.
Key Responsibilities
- Architect & Optimize Backend: Design scalable database schemas and efficient data models. Develop high-performance RESTful APIs and services in Ruby on Rails, ensuring clean, maintainable code and great performance.
- Backend API Development: Design, implement, and maintain robust backend services and RESTful APIs in Ruby on Rails to support new features and internal tools.
- End-to-End Performance Tuning: Optimize application performance across the stack – from minimizing frontend load times to improving backend query efficiency – for our high-traffic, data-intensive apps.
- Collaboration & Agile Delivery: Work closely with designers, product managers, and QA to translate requirements into technical solutions. Participate in sprint planning, code reviews, and daily deployments to ship features continuously and reliably.
- Quality & Maintenance: Write clean, maintainable code with appropriate test coverage (unit and integration tests) to ensure reliability. Monitor, debug, and resolve issues in production, and continually refactor and improve existing code for stability and performance.
What We’re Looking For (Requirements)
- 4–8 Years Experience: Proven experience as a software developer in a product company (experience in e-commerce or SaaS is highly preferred). You have built real products used by actual customers at scale.
- Ruby on Rails Expertise: Strong command of Ruby on Rails. Experience designing RESTful APIs, working with MVC architecture, and using Rails best practices. You should understand how to structure large Rails applications for maintainability.
- Backend Proficiency: Comfortable building server-side applications and APIs with Ruby on Rails. You can implement business logic, integrate with databases, and create RESTful endpoints (bonus if you’ve worked with GraphQL or other backend frameworks).
- Database Skills: Proficiency with PostgreSQL (or similar RDBMS). Capable of writing complex SQL queries, optimizing queries/indexes, and designing efficient relational schemas. Familiarity with Redis or caching strategies is a plus.
- Front-End Proficiency: Comfortable building user interfaces with React and modern JavaScript/TypeScript. Able to implement frontend components that consume APIs and provide a smooth user experience.
- System Design & Quality: Solid understanding of web application architecture, performance tuning, and scalability concerns. Experience with profiling, benchmarking, and optimizing web applications. Commitment to writing clean, maintainable code with proper tests.
- Product Mindset: You care about the why behind the features. You are comfortable digging into requirements, questioning assumptions, and ensuring that we build solutions that truly solve merchant problems.
- Adaptability & Collaboration: Excellent problem-solving skills, communication, and ability to work in a fast-paced, collaborative environment. You are a self-starter who can take ownership of tasks and drive them to completion, but also know when to ask for help.
Tech Stack
- Frontend: React, TypeScript/JavaScript, HTML5, CSS3 (Tailwind/Bootstrap), modern build tools (Webpack, Babel).
- Backend: Ruby on Rails (REST APIs, background jobs), some services in Python.
- Database: PostgreSQL.
- Cloud & DevOps: Amazon Web Services (EC2, S3, RDS, CloudFront), Docker, CI/CD for daily deployments.
- Tools: Git (GitHub), Agile issue tracking (JIRA/Trello), and a keen use of automated testing.
(Don’t worry if you aren’t familiar with every item – we value willingness to learn. This is our current stack, and we continually adopt new technologies that improve our products.)
Why Join Us
- High Impact & Ownership: Your work will directly enhance the shopping experience of 50M+ shoppers daily. At StarApps, developers deploy code daily and see the immediate impact on thousands of merchants – you’ll own projects end-to-end and build features that matter.
- Fast-Growing, Profitable Startup: Join a bootstrapped, profitable company on an exciting growth trajectory (from $4M to $10M ARR). There’s no bureaucracy here – just a passionate team obsessed with product quality and merchant happiness. You’ll be part of our core team as we scale, with ample opportunities to grow into leadership roles.
- Cutting-Edge Tech & Challenges: Work with modern technologies (React, Rails, AWS) on performance-intensive applications. Tackle complex challenges in scaling, optimization, and UX for a global user base – continuously sharpen your skills in a supportive, learning-focused environment.
- Collaborative Culture: We are a small 25-person team that operates like a close-knit family. You’ll work side by side with experienced founders and a talented team that values innovation, humility, and continuous improvement. Our culture is open, empathetic, and growth-oriented – every voice is heard, and every team member plays a crucial role in our success.
- Growth & Benefits: We invest in our team’s growth. Expect a competitive salary, performance bonuses, and whatever tools you need to do your best work. We sponsor professional development (courses, conferences, books) and encourage knowledge-sharing. You’ll enjoy a flexible leave policy, team off-sites, and the excitement of building a global product from our new office in Baner, Pune.
About Us
Euphoric Thought Technologies is a fast-growing technology company focused on delivering scalable, high-performance digital solutions. We are looking for a skilled Backend Developer to join our dynamic team and contribute to building robust and efficient systems.
Key Responsibilities
Design, develop, and maintain scalable backend services and APIs
Write clean, maintainable, and efficient code
Collaborate with frontend developers, DevOps, and product teams
Optimize applications for maximum speed and scalability
Troubleshoot, debug, and upgrade existing systems
Implement security and data protection best practices
Participate in code reviews and technical discussions
Required Skills & Qualifications
4–5 years of hands-on experience in backend development
Strong proficiency in at least one backend language, such as Java (including Core Java fundamentals)
Experience with frameworks like Spring Boot, Django, Express.js, etc.
Good understanding of RESTful APIs and Microservices architecture
Strong experience with databases (MySQL, PostgreSQL, MongoDB)
Familiarity with version control systems (Git)
Experience with cloud platforms (AWS/Azure/GCP) is a plus
Knowledge of Docker, Kubernetes, CI/CD pipelines is an added advantage
Strong problem-solving and analytical skills
Department
Product & Technology
Location
On-site | Prabhat Road, Pune
Experience
3-5 Years in a Data Engineering or Analytics Role
Domain
Fintech / Wealth Management — non-negotiable
Compensation
11-12 LPA Fixed + Performance Bonus
Growth
Title upgrade + salary revision at 12–18 months for strong performers
Why this role is different from most Data Engineer postings
You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.
About Cambridge Wealth
Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals, and have received multiple awards from major Mutual Fund houses and BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.
What You Will Be Doing
This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.
We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.
Key Responsibilities:
Data Engineering & Pipelines
- Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
- Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
- Write advanced SQL — window functions, stored procedures, query optimization, index design.
- Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
- Monitor AWS RDS workloads and troubleshoot performance issues proactively.
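To make the "advanced SQL" bullet concrete, here is a minimal sketch of a window-function query of the kind described, run via Python's built-in sqlite3 for portability. The table and column names (txns, client, amount) are invented for illustration; a production pipeline would run equivalent SQL against PostgreSQL.

```python
import sqlite3

# Hypothetical schema: running invested amount per client.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (client TEXT, txn_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [
        ("A", "2024-01-01", 1000.0),
        ("A", "2024-02-01", 500.0),
        ("B", "2024-01-15", 2000.0),
        ("A", "2024-03-01", -300.0),  # a redemption
    ],
)

# Window function: cumulative amount per client, ordered by date.
rows = conn.execute(
    """
    SELECT client,
           txn_date,
           SUM(amount) OVER (
               PARTITION BY client
               ORDER BY txn_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS running_total
    FROM txns
    ORDER BY client, txn_date
    """
).fetchall()

for client, txn_date, running_total in rows:
    print(client, txn_date, running_total)
```

The same PARTITION BY / ORDER BY pattern underlies most per-client analytics (running balances, rolling returns, rank-within-category).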
Financial Analytics & Modelling
- Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
- Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
- Create materialized views and derived tables optimized for dashboards and internal reporting tools.
- Analyse client transaction history to surface patterns in investment behaviour and financial discipline.
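As a toy illustration of the benchmarking idea above, the snippet below compares a portfolio's holding-period return against an index level. All figures are made up; real frameworks would use NAV histories and category averages rather than two point values.

```python
def period_return(start_value: float, end_value: float) -> float:
    """Simple holding-period return, e.g. 0.085 == +8.5%."""
    return (end_value - start_value) / start_value

# Hypothetical client portfolio vs. a Nifty 50-style benchmark level.
portfolio_ret = period_return(1_000_000, 1_085_000)  # +8.5%
benchmark_ret = period_return(21_000, 22_260)        # +6.0%

# Active return: outperformance relative to the index.
active_return = portfolio_ret - benchmark_ret
print(f"portfolio {portfolio_ret:.2%}, benchmark {benchmark_ret:.2%}, "
      f"active {active_return:+.2%}")
```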
Applied ML & AI-Driven Development
- Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
- Implement classification or regression models to support financial pattern detection.
- Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
- Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
Data Quality & Governance
- Own data integrity end-to-end in a live, high-stakes financial environment.
- Build and maintain validation and cleaning protocols across all financial datasets.
- Maintain Excel models, Power Query workflows, and structured reporting outputs.
Collaboration & Junior Mentorship
- Work directly with Product, Investment Research, and Wealth Advisory teams.
- Translate open-ended business questions into structured queries and measurable outputs.
- Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
- Present findings clearly to non-technical stakeholders — no jargon, just clarity.
Skills — What We Need vs. What Helps
Must-Haves:
- SQL & PostgreSQL (window functions, stored procedures, optimization)
- Python — Pandas, NumPy for data processing and automation
- ML fundamentals — classification or regression (Scikit-learn)
- AWS RDS or equivalent cloud database experience
- Financial domain knowledge — mutual funds, SIPs, portfolio concepts
- Python data visualization — Matplotlib, Seaborn, or Plotly
Strong Advantage:
- Excel — Power Query, advanced modelling
- Materialized views, query planning, index optimization
- Experience with BI/dashboard tools
Good to Have:
- NoSQL databases
- Prior fintech or wealth management startup experience
Financial Domain — Non-Negotiable
This is a wealth management platform. You must come in with a working understanding of:
- Mutual fund structures, scheme types, and NAV-based transactions
- Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
- Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
- How HNI/NRI clients interact with financial products differently from retail investors
You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.
The Culture Fit — Read This Carefully
We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:
- Has worked at a small startup before and is used to wearing multiple hats
- Finds broken or slow data systems genuinely irritating and fixes them without being asked
- Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
- Is comfortable saying 'I don't know but I'll find out' and follows through independently
- Wants visibility and ownership, not just a well-defined job description
- Is looking for a role where strong performance is directly visible and rewarded
Growth Path — What Happens If You Perform
This is not a vague 'growth opportunity' pitch.
If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.
Preferred Background
- 2–4 years in a data engineering or analytics role at a startup or small Fintech
- Experience in a live product environment where data errors have real consequences
- Exposure to portfolio analytics, investment research, or wealth management platforms
- Has mentored or reviewed code for at least one junior team member
Hiring Process
We respect your time. The process is direct and moves fast.
- Screening Questions — 5 minutes online
- Online Challenge — MCQs (Data, SQL, AWS, etc.) plus one applied ML or analytics problem, along with a communication-skills and personality assessment (focused, not trick questions)
- People Round — 30-minute video call, culture and communication
- Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
- Founder's Interview — 1 hour in person, growth conversation and mutual fit
- Offer & Background Verification
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science, Computer Engineering, or a related field.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
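The RAG responsibility above can be sketched end to end in a few lines. This toy version stands in a bag-of-words "embedding" and cosine similarity for a real embedding model and vector database, and stubs out the LLM call; the document strings and query are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' — a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical knowledge-base snippets.
docs = [
    "widgets can be embedded with a single script tag",
    "the chrome extension injects a sidebar into support pages",
    "token usage is cached per conversation to control cost",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Retrieval-augmented prompt: fetched context is prepended before the
# question, then handed to the (stubbed) LLM call.
question = "how are widgets embedded"
context = retrieve(question)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

A production pipeline swaps `embed` for a hosted embedding API, `docs` for a vector store, and the final `print` for an actual chat-completion request, but the retrieve-then-augment shape is the same.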
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
Job Title: Software Developer
Location: Remote
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science, Computer Engineering, or a related field.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Description
Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
Position Responsibilities:
- Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
- Design and develop scalable, high-performance solutions using cloud technologies and containerization.
- Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
- Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
- Ensure software designs comply with specifications and security best practices.
- Recommend changes to improve application architecture, maintainability, and performance.
- Develop and optimize database queries using T-SQL.
- Prepare and produce software component releases.
- Develop and execute unit, integration, and performance tests.
- Support formal testing cycles and resolve test defects.
AI-Specific Responsibilities:
- Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
- Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
- Implement AI-based security measures to proactively detect and mitigate potential threats.
- Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
- Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Highly proficient in ASP.NET Core (C#) and full-stack development.
- Experience developing REST APIs.
- Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
- Strong database experience, particularly with T-SQL and relational database design.
- Advanced understanding of object-oriented programming (OOP) and SOLID principles.
- Experience with security best practices in web and API development.
- Knowledge of Agile SCRUM methodology and experience in collaborative environments.
- Experience with Test-Driven Development (TDD).
- Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
- Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
- High commitment to continuous learning, innovation, and improvement.
AI-Specific Qualifications:
- Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
- Knowledge of AI-based security protocols and threat detection systems.
- Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
- Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Backend Engineer III – Senior Python Developer (LLM & AI)
Location: Gurgaon, India (Hybrid)
Positions: 1
Experience: 6 to 9 Years
About the Role
We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.
Key Responsibilities
- Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
- Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
- Write clean, modular, efficient, and testable code following industry best practices and coding standards
- Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
- Debug, profile, and optimize applications to improve performance, reliability, and scalability
- Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
- Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
- Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
- Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
- Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes
Required Skills & Experience
- 6 to 9 years of strong hands-on experience in Python development
- Solid understanding of Python software design, architecture patterns, and testing best practices
- Proven experience working on AI, Machine Learning, or LLM-based projects
- Strong experience in building and consuming RESTful APIs and microservices architectures
- Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
- Strong debugging, performance profiling, and optimization skills
- Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
- Working knowledge of Docker and Kubernetes is a strong plus
- Excellent analytical, problem-solving, and communication skills
- Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members
Education & Certifications
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
- AWS or other relevant cloud certifications are preferred but not mandatory
Why Join Us?
- Work on cutting-edge AI and LLM platforms
- Collaborate with top-tier engineering and data science teams
- Opportunity to influence system architecture and technical direction
- Competitive compensation and career growth opportunities
Job Title: Principal Architect / Scalability Lead (AWS)
📍 Location: Gurgaon (Hybrid)
🕒 Employment Type: Full-Time
Role Overview
We are seeking a Principal Architect / Scalability Lead with deep expertise in AWS and large-scale distributed systems to architect and scale cloud-native products from MVP to enterprise scale.
This role demands a senior technical leader who has proven experience designing systems that handle high throughput, large concurrent workloads, and enterprise-grade reliability, while ensuring exceptional end-user experience.
You will work closely with Product, Data Engineering, AI/ML, and Backend teams to define architecture standards, scalability roadmaps, and engineering best practices.
Key Responsibilities
🏗 Architecture & Scalability Leadership
- Architect highly scalable, resilient, and high-performance cloud-native systems on AWS.
- Design distributed systems capable of supporting 100K+ concurrent users.
- Lead architecture evolution from MVP to enterprise-grade deployment.
- Translate business and consumer requirements into robust technical architecture.
- Drive scalability planning, capacity modeling, and performance engineering.
🔄 End-to-End Ownership
- Own full SDLC visibility from discovery and design to release, monitoring, and optimization.
- Establish best practices for:
- Microservices architecture
- Distributed systems design
- Observability & monitoring
- DevSecOps & CI/CD
- Ensure system uptime, fault tolerance, and cost efficiency.
☁ AWS Cloud & Infrastructure
- Design and implement scalable systems using AWS services.
- Lead containerization and orchestration using Docker and Kubernetes (EKS).
- Architect secure, automated CI/CD pipelines.
- Drive cloud cost optimization and infrastructure efficiency.
📈 Performance & Reliability Engineering
- Define and enforce SLAs, SLOs, and reliability metrics.
- Lead performance testing, load testing, and scalability validation.
- Implement monitoring, alerting, and observability frameworks.
- Design fault-tolerant and highly available systems.
🧠 Backend, Data & AI Collaboration
- Provide architectural guidance for:
- Backend services using Node.js and Python
- Frontend platforms using React / Next.js
- Data platforms using Snowflake
- Collaborate with Data Engineering and AI/ML teams on data-intensive and AI-driven systems.
- Design architectures supporting asynchronous processing, caching, and event-driven workflows.
👥 Leadership & Governance
- Mentor senior engineers and guide architecture best practices.
- Lead architecture governance and design reviews.
- Influence senior stakeholders with data-driven technical decisions.
- Drive cross-functional alignment across Product, Engineering, Data, and AI teams.
Required Qualifications
- 8–15 years of experience in software engineering.
- Proven experience scaling distributed systems handling 100K+ users or high-throughput workloads.
- Deep hands-on expertise in AWS cloud architecture.
- Strong experience with Docker, Kubernetes, and container orchestration.
- Expertise in microservices, caching strategies, asynchronous processing, and distributed systems.
- Strong understanding of performance engineering and reliability frameworks.
- Experience building enterprise-grade systems for large-scale organizations.
Preferred Skills
- Experience with event-driven architectures (Kafka, SQS, SNS, etc.).
- Knowledge of database scalability and data warehousing (Snowflake).
- Exposure to Data Engineering and AI/ML platforms.
- Strong stakeholder communication and strategic thinking skills.
Senior Quality Engineer – AI Products
Full-time
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
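As a sketch of the kind of pipeline-validation automation the scripting requirement above describes (the log format and field names `id`/`status` are assumptions for illustration, not a real schema):

```python
import json

def validate_records(lines):
    """Check that each JSON log line carries the fields a pipeline stage should emit."""
    errors = []
    for lineno, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            errors.append(f"line {lineno}: not valid JSON")
            continue
        # Required fields and allowed status values are illustrative.
        for field in ("id", "status"):
            if field not in record:
                errors.append(f"line {lineno}: missing field '{field}'")
        if record.get("status") not in (None, "ok", "failed"):
            errors.append(f"line {lineno}: unexpected status {record.get('status')!r}")
    return errors

sample = [
    '{"id": 1, "status": "ok"}',
    '{"id": 2}',
    'not json',
]
print(validate_records(sample))
```

A script like this slots into CI after a pipeline run to fail the build on malformed output instead of letting bad records flow downstream.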
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes.
Job Summary
We are looking for an experienced Java Full Stack Developer with strong expertise in Java, React.js, and AWS to design, develop, and maintain scalable web applications. The ideal candidate should have experience building high-performance applications and working across both front-end and back-end technologies.
Key Responsibilities
- Develop and maintain full-stack web applications using Java and React.js
- Design and build RESTful APIs and microservices using Java frameworks
- Develop responsive and interactive frontend interfaces using React.js
- Work with AWS services for deployment, scalability, and infrastructure
- Collaborate with cross-functional teams including product managers, designers, and QA
- Write clean, maintainable, and efficient code following best practices
- Participate in code reviews, testing, debugging, and performance optimization
- Implement CI/CD pipelines and cloud-based solutions
Required Skills
- Strong experience in Java (Spring Boot / Spring Framework)
- Good knowledge of React.js, JavaScript, HTML, CSS
- Experience building REST APIs and microservices architecture
- Hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.)
- Familiarity with Git, CI/CD pipelines, and Agile development
- Experience with database technologies (MySQL, PostgreSQL, or MongoDB)
Preferred Skills
- Experience with Docker / Kubernetes
- Knowledge of serverless architecture
- Experience working in cloud-native environments
- Understanding of system design and scalable architecture
About the Role
We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.
Key Responsibilities
•Design and develop backend services using Django and Python.
•Architect and implement microservices-based solutions for scalability and maintainability.
•Work with PostgreSQL and Redis for efficient data storage and caching.
•Build and maintain RESTful APIs and ensure robust API design principles.
•Implement system design best practices for high availability and fault tolerance.
•Containerize applications using Docker and manage deployments with Kubernetes.
•Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.
•Apply security best practices to protect data and application integrity.
•Collaborate with frontend, QA, and DevOps teams for seamless delivery.
•Mentor junior developers and conduct code reviews to maintain quality standards.
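One common fault-tolerance building block behind the high-availability responsibility above is retry with exponential backoff. A minimal, framework-free sketch (the function names and retry parameters are illustrative assumptions):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff.

    attempts and base_delay are illustrative defaults, not prescribed values.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# A fake dependency that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))
```

In a real Django service this wrapper (or a library equivalent) would guard calls to external dependencies such as Redis or a downstream microservice, usually combined with a circuit breaker so persistent failures fail fast.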
Required Skills & Expertise
•Django/Python – Advanced proficiency in backend development.
•Microservices Architecture – Strong understanding of distributed systems.
•PostgreSQL & Redis – Expertise in relational and in-memory databases.
•Docker/Kubernetes – Hands-on experience with containerization and orchestration.
•API Design & System Design – Ability to design scalable and secure systems.
•Cloud (AWS/Azure) – Practical experience with cloud services and deployments.
•Security Best Practices – Knowledge of authentication, authorization, and data protection.
Preferred Qualifications
•Experience with CI/CD pipelines and DevOps practices.
•Familiarity with message queues (e.g., RabbitMQ, Kafka).
•Exposure to monitoring tools (Prometheus, Grafana).
What We Offer
•Competitive salary and benefits.
•Opportunity to work on cutting-edge backend technologies.
•Collaborative and growth-oriented work environment.
About TVARIT
TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT stands among the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.
Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities
· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.
· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
· Utilize Docker and Kubernetes for scalable data processing.
· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
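The pre-processing steps named above (deduplication, normalization/scaling) can be sketched in plain Python — field names are illustrative assumptions, and in production this logic would run as PySpark transformations on Databricks rather than in-memory:

```python
def preprocess(readings):
    """Deduplicate sensor readings by (sensor_id, timestamp) and min-max scale values."""
    seen = set()
    deduped = []
    for r in readings:
        key = (r["sensor_id"], r["timestamp"])  # dedup key is an assumption
        if key not in seen:
            seen.add(key)
            deduped.append(r)
    # Min-max normalization to [0, 1].
    values = [r["value"] for r in deduped]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant columns
    for r in deduped:
        r["value_scaled"] = (r["value"] - lo) / span
    return deduped

rows = [
    {"sensor_id": "s1", "timestamp": 1, "value": 10.0},
    {"sensor_id": "s1", "timestamp": 1, "value": 10.0},  # duplicate event
    {"sensor_id": "s1", "timestamp": 2, "value": 30.0},
]
print(preprocess(rows))
```

In PySpark the same intent maps to `dropDuplicates(["sensor_id", "timestamp"])` followed by a windowed or aggregated min/max for scaling.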
Desired Skills and Qualifications
· Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
· Proficiency in PySpark, Azure Databricks, Python, Apache Spark, etc.
· 2 years of team handling experience.
· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience in containerization (Docker, Kubernetes).
· Strong analytical and problem-solving skills with attention to detail.
· Good to have: MLOps and DevOps experience, including model lifecycle management.
· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
About Us:
REConnect Energy's GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficiency in Python, with expertise in data engineering and machine learning deployment
● Experience with databases, including MySQL and NoSQL
● Experience in developing and maintaining critical and high availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with AWS cloud platform.
● Strong analytical and data driven approach to problem solving
Description:
Experience in backend development with a strong focus on Java, microservices, and Java migration
Java microservices back-end development, with Angular UI experience
Java Spring Boot microservices, with basic knowledge of databases
Hands-on experience with Spring Boot, Web Services, microservices, SOAP, and REST
Drive analysis of business requirements, functional requirements, and technical specification documents to design and develop technical solutions that meet business needs
Assess opportunities for application and process improvement and obtain broader buy-in across global stakeholders
Proactively share and report risks, issues, challenges, blockers, and upcoming tasks with stakeholders and the team
Strong JavaScript (ES6+), HTML5, and CSS3 fundamentals; familiarity with matrix operations and nested data structures (arrays, objects, maps); strong debugging and optimization skills
Knowledge of a TDD framework
Good to have: GraphQL knowledge
Job Title : Senior DevOps Engineer (Only Mumbai Candidates)
Experience : 5+ Years
Location : Mumbai (On-site)
Notice Period : Immediate to 15 Days
Interview Process : 1 Internal Round + 1 Client Round
Mandatory Skills :
Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.
Role Overview :
We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.
The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.
Key Responsibilities :
- Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
- Deploy and manage microservices on Kubernetes clusters.
- Build and maintain Infrastructure as Code using Terraform and Helm.
- Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
- Implement GitOps workflows using ArgoCD or FluxCD.
- Ensure secure, scalable, and reliable DevOps architecture.
- Implement monitoring and logging using Prometheus, Grafana, or ELK.
Good to Have :
- Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+ Years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0–1 month (serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.
Work Mode: 5 days in office
Notice: Max 30 days
*1 final round will be in-person
Responsibilities
● Own and champion the development process of our web-based applications, including: SDLC, coding standards, code reviews, check-ins and builds, issue tracking, bug triages, incident management, and testing.
● Build and maintain a high-performing software development team including hiring, training, and onboarding.
● Identify opportunities to eliminate non-value add activities to enable our developers to do what they love best—developing! No pointless meetings, no unnecessary interruptions, no random changes of course, no new problems from on high dumped in their lap each month.
● Identify growth opportunities for team members to continue to learn and develop in a supportive environment.
● Provide an engaging and challenging landscape for career growth.
● Provide leadership, mentorship, and motivation to the engineering team to sustain high levels of productivity and morale.
● Collaborate with Product Management on product requirements.
● Champion and advocate for the engineering team to the rest of the organization.
● Create a positive culture of fairness, quality, and accountability while challenging the status quo and bringing new ideas to light.
● Participate as a member of company’s Engineering Leadership team to build a high performing organization across multiple locations.
Requirements
● 12+ years of software development experience, 2+ years of development leadership experience.
● Demonstrated technical leadership and people management skills.
● Experience with agile development processes.
● Hands-on experience in driving/leading technical efforts in cloud-based applications.
● Proven track record of driving quality within a team, with a commitment to automated testing.
● Strong communication skills with the ability to effectively influence product at different levels of abstraction and communicate to both technical and non-technical audiences.
● Excellent coding skills to provide guidance and craftsmanship for our engineers.
● Technical acumen to exercise sound judgment, making optimal short-term decisions without sacrificing long-term technology goals.
● Demonstrated critical analysis skills to drive continuous improvement of technology, process, and productivity.
Technical Experience
We are looking for someone who has experience working in environments that utilize some of the following technologies:
● AWS & Azure
● Typescript
● Node.js
● React.js
● Material UI
● Jira
● GitHub
● CI/CD
● SQL (MySQL, PostgreSQL, SQL Server)
● MongoDB
Location: Bangalore
Experience required: 7-10 years.
Key skills: .NET Core, ASP.NET, Microsoft Azure, MVC, AWS
"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."
We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.
Engineering Culture at Pace Wisdom:
We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.
Responsibilities:
• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.
• Own delivery and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.
• Advocate for and mentor teams to follow best practices around documentation, unit testing, code reviews, etc.
• Comply with security policies and processes.
Qualifications:
• 7-10 years of professional experience in developing applications using .NET framework, .NET Core, Azure Services, Entity Framework
• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.
• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)
• Exposure to developing SPA on React, Angular or VueJS
• Experience with microservices and messaging systems (RabbitMQ/Kafka)
• Proven ability to lead and mentor development teams.
• Effective communication and interpersonal skills.
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market Implementations.
Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business.