50+ AWS (Amazon Web Services) Jobs in India

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing the DevOps practice and promoting automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS/fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product.
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside and learn from best-in-class talent
* Competitive compensation + ESOPs
Excellent Opportunity: Lead Java Full Stack (React + AWS + DynamoDB) - Wissen Technology, Whitefield, Bengaluru
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
For more details:
Website: www.wissen.com
Wissen Thought Leadership: https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Job Description:
Requirements: Lead (Java + React + AWS + DynamoDB)
- Bachelor’s degree in computer science or related field.
- 7-12 years of experience in software development.
- Hands-on experience working in an AWS cloud environment and with DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills
Amita Soni
Senior Consultant-Talent Acquisition-Wissen Technology, Pune
Key Responsibilities
· Develop and maintain full-stack applications using MongoDB, Express.js, React (Next.js), and Node.js
· Build performant and scalable frontend applications using Next.js
· Write clean, type-safe, and maintainable code using TypeScript
· Design and implement robust RESTful APIs
· Optimize applications for performance, SEO, and scalability
· Collaborate with design, product, and QA teams
· Debug, troubleshoot, and enhance existing systems
· Utilize AI tools to improve development efficiency and workflows
Required Skills
· Strong proficiency in JavaScript (ES6+) and TypeScript
· Hands-on experience with:
  - React.js & Next.js (SSR, SSG, routing, performance optimization)
  - Node.js & Express.js
  - MongoDB
· Strong understanding of API development and integration
· Experience with Git version control
· Knowledge of responsive design and cross-browser compatibility
· Experience building production-grade applications
AI Tool Expertise (Mandatory)
· Experience using tools like ChatGPT, GitHub Copilot, Cursor or similar
· Ability to:
  - Generate, optimize, and debug code using AI tools
  - Improve development speed and productivity
· Strong ability to review and validate AI-generated code
· Basic understanding of prompt engineering for development tasks
Lead Data Engineer
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
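The pipeline responsibilities above center on batch ingestion from heterogeneous sources into a warehouse. A minimal extract-transform-load sketch makes the shape concrete; here `sqlite3` stands in for BigQuery/Snowflake/Redshift, and the table, columns, and sample data are invented for illustration.

```python
import csv, io, sqlite3

# Toy batch ETL step: extract raw CSV, clean it, load it into a warehouse table.
RAW_CSV = """order_id,amount,currency
1,100.50,USD
2,80.00,usd
3,,USD
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for r in rows:
        if not r["amount"]:                      # drop incomplete records
            continue
        clean.append((int(r["order_id"]),
                      float(r["amount"]),
                      r["currency"].upper()))    # normalise currency codes
    return clean

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INT, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In an ELT variant the raw rows would land first and the cleaning would run as SQL inside the warehouse; the Batch-vs-Streaming decision the role describes is essentially whether this step runs on a schedule or per event.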
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend).
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with 1 to 2 years of full-time work experience in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
We are looking for a proactive and detail-oriented L1 Application Support Engineer to support end users and monitor applications in a cloud-based environment.
This is a hybrid role involving:
- End-user IT support
- Application monitoring & troubleshooting
- Basic cloud (AWS) operations
The ideal candidate should have exposure to production/application support, monitoring tools, and incident handling.
🎯 Key Responsibilities
🧑💼 End-User Support
- Provide first-level support via calls, emails, and ticketing tools
- Troubleshoot issues related to Windows OS (10/11) and Microsoft Office 365 (Outlook, Teams, OneDrive)
- Manage user accounts, password resets, and access provisioning using Active Directory
- Support onboarding/offboarding (laptop setup, access configuration)
📊 Application Monitoring & Support
- Monitor applications and infrastructure using tools like Amazon CloudWatch, Grafana, and Prometheus
- Respond to alerts and perform initial troubleshooting
- Analyze application logs and identify issues
- Perform basic validation of APIs and application availability
- Assist in deployment validation and smoke testing
⚙️ Technical Troubleshooting
- Perform basic network troubleshooting (DNS, VPN, connectivity issues)
- Use Linux/Unix commands for log analysis and debugging
- Identify issues and escalate to L2/L3 teams with proper analysis
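The log-analysis and escalation duties above boil down to quickly spotting which component is failing before handing off to L2/L3. A small triage sketch shows the idea; the log format and service names here are invented for illustration.

```python
from collections import Counter

# Minimal L1 log-triage sketch: count ERROR lines per component so an
# escalation can point at the noisiest service with evidence attached.
SAMPLE_LOG = """\
2024-05-01T10:00:01 INFO  auth-service request ok
2024-05-01T10:00:02 ERROR payment-service timeout calling gateway
2024-05-01T10:00:03 ERROR payment-service timeout calling gateway
2024-05-01T10:00:04 WARN  auth-service slow response
2024-05-01T10:00:05 ERROR auth-service token validation failed
"""

def error_counts(log_text):
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "ERROR":
            counts[parts[2]] += 1       # parts[2] is the component name
    return counts

counts = error_counts(SAMPLE_LOG)
worst, n = counts.most_common(1)[0]
```

On a real box the same scan is usually a `grep`/`awk` one-liner over files in `/var/log`; the point is attaching a per-component count to the ticket rather than escalating "errors in the logs".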
🎫 Incident & Ticket Management
- Log, track, and update tickets in ServiceNow or Jira
- Ensure SLA adherence and timely resolution
- Take ownership of issues until closure or escalation
- Collaborate with DevOps, infrastructure, and application teams
Overview
We are looking for a highly skilled Lead Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python and Power BI.
Skills: Python, Databricks, SQL
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimization.
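The first two bullets pair warehouse design with writing and optimizing reporting SQL. A compact sketch of that pattern, run from Python as the automation layer the role describes, is shown below; `sqlite3` stands in for the warehouse, and the table, columns, and figures are invented.

```python
import sqlite3

# Reporting-query pattern: aggregate fact rows by a dimension, with an
# index supporting the date filter (the usual first optimization step).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, sale_date TEXT, amount REAL);
CREATE INDEX idx_sales_date ON sales (sale_date);
INSERT INTO sales VALUES
  ('north', '2024-01-05', 120.0),
  ('north', '2024-01-20', 90.0),
  ('south', '2024-01-11', 200.0),
  ('south', '2024-02-02', 50.0);
""")
rows = conn.execute("""
    SELECT region, ROUND(SUM(amount), 2) AS total
    FROM sales
    WHERE sale_date BETWEEN '2024-01-01' AND '2024-01-31'
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
```

In a production warehouse the index would instead be a partitioning or clustering key on `sale_date`, but the query shape and the "filter on an indexed column, then aggregate" reasoning carry over.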
Required Skills & Experience:
- Must have a minimum of 6-10 years of experience in Data Warehousing & Analytics.
- Must have strong experience in Databricks
- Strong proficiency in writing complex SQL queries with deep understanding of query optimization, stored procedures, and indexing.
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Hands-on experience with reporting tools like Power BI or Tableau.
- Ability to work independently and collaborate with teams across different time zones.
Job Role: Sr. Full Stack Developer
Experience: Minimum 6 years
Location: Bangalore
Company Profile- https://www.wissen.com/
Domain
Fintech, Banking, Capital Markets, Investment Banking
Job Summary
We are looking for a highly experienced Senior Full Stack Engineer with strong hands-on expertise in Java, Spring Boot, AWS, React, and DynamoDB. The ideal candidate will have a strong background in building secure, scalable, high-performance applications for financial services, with experience in regulated environments such as banking, capital markets, or investment banking.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Java and Spring Boot.
- Build responsive and reusable user interfaces using React.
- Design and optimize data models and access patterns in DynamoDB.
- Develop RESTful APIs and integrate them with front-end and downstream systems.
- Work on microservices-based architecture and cloud-native application design.
- Collaborate with product managers, business analysts, architects, QA, and DevOps teams to deliver business-critical solutions.
- Ensure application security, performance, reliability, and maintainability.
- Participate in code reviews, architecture reviews, and design discussions.
- Troubleshoot production issues and support enhancements in live environments.
- Follow SDLC, Agile, and DevOps best practices in a fast-paced financial services environment.
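The DynamoDB bullet above ("design and optimize data models and access patterns") is the crux of working with that database: keys are designed around queries, not the other way around. The role's stack is Java, but the idea is language-agnostic; a short Python/boto3 sketch follows, where the table shape, key names, and identifiers are all invented, and only the pure key helpers are exercised (no AWS call is made).

```python
# Single-table access-pattern sketch for DynamoDB: a composite sort key
# lets one query answer "all trades for an account in a given month".

def account_pk(account_id):
    return f"ACCOUNT#{account_id}"

def trade_sk(trade_date, trade_id):
    # Sort key orders trades chronologically within an account partition.
    return f"TRADE#{trade_date}#{trade_id}"

def trades_for_month(table, account_id, month):
    """Query one access pattern: all trades for an account in a month."""
    from boto3.dynamodb.conditions import Key
    return table.query(
        KeyConditionExpression=Key("pk").eq(account_pk(account_id))
        & Key("sk").begins_with(f"TRADE#{month}")
    )["Items"]

# Example key shapes (no AWS connection needed to see the design):
pk = account_pk("A-42")
sk = trade_sk("2024-03-15", "T-901")
```

In the Java stack the same condition maps onto the AWS SDK's DynamoDB query API with an equivalent key-condition expression; the design work (choosing the partition and sort key to match the access pattern) is identical.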
Required Skills
- 8+ years of experience in software development.
- Hands-on experience working with React, the AWS cloud environment, and DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, IT, or a related discipline.
- Proven experience delivering enterprise-grade applications in regulated financial environments.
Role Overview:
As a Database Administrator, you will be responsible for the full lifecycle management of our MySQL or PostgreSQL database systems. This includes installation, configuration, performance tuning, security implementation, backup and recovery, and proactive monitoring. You will ensure the reliability, availability, and security of our database infrastructure, supporting our internal operations and client projects.
Job Responsibilities
- Installation and configuration of database software across diverse operating systems.
- Designing efficient physical database models derived from logical designs and application specifications, along with configuring database servers according to best practices and workload requirements.
- Establishing and implementing robust backup and recovery strategies tailored to data volatility and application availability needs.
- Implementing comprehensive security measures at the OS, database, and network levels to ensure authorized data access and maintain a rigorous security infrastructure with auditing capabilities for compliance.
- Fine-tuning hardware/VM resources for optimal database performance.
- Proactive monitoring of the database environment, including performance optimization through adjustments to data structures, SQL, application logic, or the DBMS subsystem.
- Configuration and implementation of database replication technologies (e.g., Master-Slave, Master-Master, Log Shipping, Mirroring, Always On).
- Automation of routine DBA tasks utilizing scripting languages such as Shell, PowerShell, Python, or Go.
- Proficiency in writing general SQL queries.
- Setting up comprehensive monitoring solutions for databases (OS and database levels) using custom scripts or third-party monitoring tools.
- Basic Cloud platform knowledge (AWS/GCP/Azure)
Qualification
Experienced DBA (4-8 years) with deep expertise in MySQL or PostgreSQL architecture, configuration, and management.
Proficient in SQL, backup/recovery, security implementation, performance tuning, and replication for both systems.
Skilled in scripting (e.g., Shell, Python) and Linux, possessing strong problem-solving, communication, and teamwork abilities with a proactive approach.
A relevant Bachelor's degree in Computer Science, Information Technology, or a related field.
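The automation and backup/recovery duties listed above often meet in one script: produce a dated dump and prune old ones per a retention policy. A hedged sketch is below; the database name, paths, and 7-day retention are invented, and the `pg_dump` command is constructed but deliberately not executed here.

```python
import datetime, shlex

# Sketch of automating a routine DBA task: build a dated pg_dump command
# and decide which old backups fall outside the retention window.
RETENTION_DAYS = 7

def dump_command(db, backup_dir, today):
    fname = f"{backup_dir}/{db}-{today:%Y%m%d}.sql.gz"
    cmd = f"pg_dump {shlex.quote(db)} | gzip > {shlex.quote(fname)}"
    return cmd, fname

def expired(backup_dates, today):
    cutoff = today - datetime.timedelta(days=RETENTION_DAYS)
    return [d for d in backup_dates if d < cutoff]

today = datetime.date(2024, 6, 10)
cmd, fname = dump_command("appdb", "/var/backups", today)
old = expired([datetime.date(2024, 6, 1), datetime.date(2024, 6, 8)], today)
```

A cron entry would run the command and delete the `expired` files; the MySQL equivalent swaps in `mysqldump` with the same filename/retention logic.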
We identify better ways of doing things.
Solver? Absolutely. But not the usual kind. We are searching for the architects of the audacious & the pioneers of the possible. If you are the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you are speaking our language.
➔ Improver. Solver. Futurist.
➔ Great sense of humor.
➔ ‘Possible. It is.’ Mindset.
➔ Compassionate collaborator. Bold experimenter. Tireless iterator.
➔ Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
➔ Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way—and then identifies 3 better ways to do it—we’d love to chat with you.
Director - Data engineering
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
1. Delivery & Tactical Rigor
- Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
- Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
- Execution & Technical Resolution
- Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
- Quality Enforcement
- Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
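The quality-oversight bullet above calls for automated checks on data quality and observability. The kind of gate a DataOps pipeline runs before publishing a table can be sketched in a few lines; the thresholds, field names, and sample rows below are invented for illustration.

```python
import datetime

# Toy data-quality gate: completeness and freshness checks of the sort a
# pipeline would evaluate (and alert on) before promoting a dataset.

def completeness(rows, field):
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

def is_fresh(latest_ts, now, max_age_hours=24):
    return (now - latest_ts) <= datetime.timedelta(hours=max_age_hours)

rows = [
    {"id": 1, "email": "a@example.com", "loaded_at": datetime.datetime(2024, 6, 10, 9)},
    {"id": 2, "email": "",              "loaded_at": datetime.datetime(2024, 6, 10, 10)},
    {"id": 3, "email": "c@example.com", "loaded_at": datetime.datetime(2024, 6, 10, 11)},
]
now = datetime.datetime(2024, 6, 10, 12)
email_ok = completeness(rows, "email") >= 0.9   # fails: only 2 of 3 filled
fresh_ok = is_fresh(max(r["loaded_at"] for r in rows), now)
```

Observability tooling (e.g., tests wired into CI/CD) generalizes exactly this: declared expectations per column, evaluated on every load, with failures blocking promotion rather than surfacing downstream.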
2. Strategic Growth & Practice Scaling
- Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
- Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
- Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.
3. Leadership & Unit Management
- Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
- Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
- Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.
Welcome to Searce
The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.
We don’t do traditional.
As an engineering-led consultancy, we are dedicated to relentlessly improving the real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.
Functional Skills
1. Delivery Management & Operational Excellence
- Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
- Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
- SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
- Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.
2. Architectural Implementation & Technical Oversight
- Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
- Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
- Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
- DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
3. Unit Management & Commercial Execution
- Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
- Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
- Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
- Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.
Tech Superpowers
- Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
- End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
- Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
- Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
- AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
- Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
- Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
- Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.
Experience & Relevance
- Executive Experience: Minimum 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director -level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
- Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
- Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
- Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
- Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
- Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Solutions Architect - Data Engineering
Modern tech solutions advisory & 'futurify' consulting as a Searce lead fds (‘forward deployed solver’), architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.
If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.
You're not a sales lead. You're the transformation driver.
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
- Improver. Solver. Futurist.
- Great sense of humor.
- ‘Possible. It is.’ Mindset.
- Compassionate collaborator. Bold experimenter. Tireless iterator.
- Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
- Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way— and then identifies 3 better ways to do it — we’d love to chat with you.
Your Responsibilities
what you will wake up to solve.
You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customer's entire data intelligence ecosystem.
As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake, Databricks, etc. and core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions—you'll build them, break them, and perfect them.
- Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
- Design Core Data Engineering: Master data modeling, architecting high-performance data ingestion pipelines and ensuring data quality and governance throughout the data lifecycle.
- Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
- Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
- Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
- Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.
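The data-quality and governance responsibility above can be sketched as a small declarative rule gate run after ingestion. This is purely illustrative: the `Rule` structure, column names, and checks below are assumptions, not a prescribed framework.

```python
# Minimal sketch of a declarative data-quality gate of the kind a data
# platform might run after ingestion. All names and rules are illustrative.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    column: str
    check: Callable[[Any], bool]
    description: str

def validate(rows: list[dict], rules: list[Rule]) -> dict:
    """Apply each rule to every row; collect per-rule failure counts."""
    failures = {r.description: 0 for r in rules}
    for row in rows:
        for r in rules:
            if not r.check(row.get(r.column)):
                failures[r.description] += 1
    return {"rows": len(rows), "failures": failures}

rules = [
    Rule("order_id", lambda v: isinstance(v, str) and v != "", "order_id non-empty"),
    Rule("amount", lambda v: isinstance(v, (int, float)) and v >= 0, "amount >= 0"),
]

rows = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": "", "amount": -5},
]
report = validate(rows, rules)
# report -> {'rows': 2, 'failures': {'order_id non-empty': 1, 'amount >= 0': 1}}
```

In practice a gate like this would feed lineage and alerting tooling rather than return a plain dict, but the shape—rules as data, checks applied uniformly, failures counted per rule—is what "data quality and governance throughout the data lifecycle" tends to mean operationally.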
Functional Skills:
- Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
- Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
- Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
- Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
- Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
- Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
- Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.
Tech Superpowers:
- Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
- Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
- AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
- Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.
Experience & Relevance:
- Architectural Leadership (8+ Years): You bring extensive experience (8+ years) specifically in a Solutions Architect role, focused on data engineering and platform building.
- Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
- Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
- Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data warehouse and GCP BigQuery for analytical workloads.
- Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
- AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.
Don’t Just Send a Resume. Send a Statement.
So, if you are passionate about tech, the future, and what you read above (we really are!), apply here and experience the ‘Art of the Possible’.
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
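As one concrete illustration of the S3 data-lake responsibility above, here is a minimal sketch of Hive-style partition-key construction; the bucket layout, table name, and prefix are hypothetical.

```python
# Illustrative sketch: building Hive-style partition keys for a Parquet
# data lake on S3, in the key=value layout Glue and Athena expect.
# Table and prefix names are placeholders.
from datetime import date

def partition_key(table: str, event_date: date, part: int) -> str:
    """Return an object key like raw/orders/dt=2024-05-01/part-00003.parquet."""
    return f"raw/{table}/dt={event_date.isoformat()}/part-{part:05d}.parquet"

key = partition_key("orders", date(2024, 5, 1), 3)
# key -> 'raw/orders/dt=2024-05-01/part-00003.parquet'
```

Laying keys out as `dt=YYYY-MM-DD` prefixes is what lets Athena and Glue prune partitions on date filters instead of scanning the whole table.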
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- 4–7 years in Data Engineering (3+ years on AWS)
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Required Skills
- 8+ years of DevOps / Cloud Engineering experience
- Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
- Expertise in Kubernetes (deployment, scaling, cluster management)
- Strong experience in PostgreSQL and AWS RDS administration
- Proficiency in Terraform for infrastructure automation
- Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
- Strong knowledge of Java (mandatory) and application deployment lifecycle
- Experience with Docker and containerization
- Solid understanding of networking, security, and system architecture
- Strong troubleshooting and problem-solving skills

A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
Responsibilities:
- Lead architecture, technical decisions, and ensure code quality, scalability, and performance
- Develop backend systems using Python & SQL; build APIs and optimize databases
- Work with frontend (React/Angular) and API-driven architectures
- Integrate AI/ML models and support analytics/LLM-based solutions
- Manage cloud deployments (Azure/AWS) and implement CI/CD practices
- Ensure system reliability, monitoring, and production readiness
- Mentor team members, conduct reviews, and collaborate with cross-functional teams
Key Responsibilities:
- Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
- Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
- Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
- Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
- Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
- Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
- Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
- Contribute to the development of technical documentation and training materials.
Required Skillset:
- Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
- Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
- Experience in designing and developing scalable, high-performance, and secure software solutions.
- Strong understanding of software development methodologies, including Agile and Waterfall.
- Excellent communication, interpersonal, and problem-solving skills.
- Ability to work effectively in a fast-paced, dynamic environment.
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Brikito — Lead Full-Stack Developer
Job Description
About Brikito
Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.
This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.
The Role
Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)
Location: India (remote OK; occasional visits to the Chennai office, with an overseas office planned in Singapore or Dubai)
Type: Full-time
Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff)
Start Date: May 2026
Reports to: Founder/CEO
What You Will Do
Months 1–3: Build the MVP
- Own all technical decisions — architecture, tech stack, database design, hosting
- Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
- Set up CI/CD pipeline, staging, and production environments
- Integrate payment gateway (Razorpay for India)
- Build both web and mobile-responsive interfaces
- Ship the MVP within 12 weeks
Months 3–6: Iterate and Scale
- Onboard beta users and fix bugs based on real usage
- Build features based on customer feedback (not assumptions)
- Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
- Hire and manage 1–2 junior developers as the team grows
- Set up monitoring, error tracking, and basic analytics
Months 6–12: Lead the Technical Team
- Grow the engineering team to 4–6 people
- Establish code review processes, documentation standards, and sprint rhythms
- Own the technical roadmap alongside the founder
- Participate in investor conversations as the technical co-founder (if CTO-level)
- Make build-vs-buy decisions for new features
Required Skills
Must Have
- 7+ years of professional software development experience
- Strong proficiency in React or Next.js (frontend)
- Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
- PostgreSQL or MySQL — database design, query optimisation, migrations
- REST API design — clean, well-documented APIs
- Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
- Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
- Git — clean branching, PR-based workflow
- Has shipped at least one product that real users used — not just academic or internal tools
- Comfortable working independently — no one will tell you what to do step by step
Strongly Preferred
- Previous experience at a startup (Series A or earlier)
- Experience building SaaS or B2B products
- Experience with mobile development (React Native or Flutter)
- Experience integrating payment gateways (Razorpay, Stripe)
- Experience with third-party API integrations (OpenAI, Twilio, etc.)
- Understanding of CI/CD pipelines (GitHub Actions, Docker)
- Basic understanding of construction, real estate, or field operations (not required, but a plus)
Nice to Have
- Experience with TypeScript
- Experience with real-time features (WebSockets, push notifications)
- Familiarity with Figma (to translate wireframes into UI)
- Experience hiring and mentoring junior developers
- Open source contributions or a personal project portfolio
What We Are NOT Looking For
- Someone who needs detailed specifications for every task — we move fast and figure things out together
- Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
- Someone who optimises for perfect code over shipping — we ship first, refactor later
- Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it
What You Get
- Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
- Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
- Direct impact — every line of code you write will be used by real customers within weeks
- Technical freedom — you choose the stack, the tools, the architecture
- A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
- Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company
How to Apply
Send the following:
- A short note (5–10 lines) on why this role interests you and what you'd bring
- Your LinkedIn profile or resume
- One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
- Your availability — when can you start?
We will respond within 48 hours. The process is:
- 30-minute video call with the founder
- Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
- Final conversation about role, equity, and start date
- Offer within 1 week of first call
Questions?
DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/
This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.
WHAT YOU'LL WORK ON
- Build and scale backend services using Node.js & Express
- Architect and optimize MongoDB schemas for performance
- Contribute to frontend features with Next.js & React
- Debug production issues, optimize API latency & CI/CD pipelines
- Integrate MathJax (LaTeX rendering) & VdoCipher (secure video)
WHAT WE'RE LOOKING FOR
- Strong DSA fundamentals — logical thinking over competitive coding
- Deep JavaScript/TypeScript knowledge: Closures, Promises, Event Loop
- 1–2 original projects (no To-Do apps or tutorial clones)
- Ability to independently pick up Docker, Redis, or AWS
- Ownership mindset — ensure it works in production, not just locally
BONUS POINTS
- Docker / containerization basics
- Real-world AWS experimentation (EC2, S3, Lambda)
- Active GitHub profile: open-source contributions or unique projects
AI Usage Policy:
We encourage AI tools (Cursor, Copilot, GPT-4) as force multipliers — but you must own your code, explain trade-offs, and debug without solely relying on AI.
HOW TO APPLY
- Share your GitHub profile link
- Include live demo links to your best, most original projects
We value what you've built far more than what's on your resume.
WHAT YOU'LL WORK ON
- Design and implement scalable APIs and microservices using Node.js & Express
- Manage deployments via GitHub Actions and CodeDeploy; work with Docker & AWS
- Optimize MongoDB queries and use Redis caching for high-concurrency traffic
- Bridge Figma designs to backend logic using Next.js and Tailwind CSS
- Maintain monitoring with Nginx & PM2 to ensure 99.9% uptime
WHAT WE'RE LOOKING FOR
- 1+ year of professional experience building and maintaining production applications
- Deep Node.js knowledge: async programming, RESTful API architecture
- MongoDB mastery: schema design, indexing strategies, complex aggregation pipelines
- Hands-on AWS (EC2/S3 minimum) and practical CI/CD pipeline experience
- Proven ability to take a feature from PRD / Figma to stable production deployment
WHAT WILL MAKE YOU STAND OUT
- Experience maintaining apps with high concurrent user counts
- Comfortable with Nginx configs and Dockerfiles
- Hands-on with payment gateway integration (Cashfree) and webhook handling
- Obsession with maintainable, well-documented, DRY code
AI Usage Policy:
AI tools (Cursor, Copilot, GPT-4) are force multipliers — use them. But you must own your code, reason through architectural trade-offs, and debug without relying solely on AI.
HOW TO APPLY:
- Tell us about the most complex bug you've solved or a backend system you built from scratch
- Share your GitHub profile
- Include at least two live project links showcasing your best work
Your code will directly impact the learning outcomes of thousands of students.
Experience: 1–3 Years
Qualification: B.Tech (Computer Science / IT or related field)
Shift Timing: 5:00 PM – 2:00 AM (Late Evening Shift)
Location: Hyderabad
Job Summary
We are seeking a proactive and detail-oriented Application Support Engineer with 1–3 years of experience in Linux/Windows environments, application servers, and monitoring tools. The candidate will be responsible for ensuring the stability, performance, and availability of applications, along with providing L2/L3 support in a fast-paced production environment.
Key Responsibilities :
- Provide application support and incident management for production systems.
- Monitor system performance using hardware/software monitoring and trending tools.
- Troubleshoot issues in Linux and Windows environments.
- Manage and support Apache and Tomcat servers.
- Analyze logs and debug application/system issues.
- Work on SQL/Oracle databases for query execution, troubleshooting, and performance tuning.
- Handle deployments and support CI/CD pipelines using tools like Docker and Jenkins.
- Ensure SLA adherence and timely resolution of incidents and service requests.
- Coordinate with development, infrastructure, and database teams for issue resolution.
- Maintain documentation for incidents, processes, and knowledge base articles.
- Support SaaS applications hosted in data center environments.
Required Skills :
Strong knowledge of Linux and Windows OS administration
Experience with Apache and Tomcat servers
Hands-on experience with monitoring and alerting tools
Good understanding of log analysis and troubleshooting techniques
Working knowledge of SQL / Oracle databases
Familiarity with Docker and Jenkins (CI/CD pipelines)
Understanding of ITIL processes (Incident, Problem, Change Management)
Knowledge of SaaS applications and data center operations.
Preferred Skills :
Experience with automation/scripting (Shell, Python, etc.)
Exposure to cloud platforms (AWS/Azure/GCP) is a plus
Basic networking knowledge
Soft Skills :
Strong analytical and problem-solving abilities
Good communication skills
Ability to work in night shifts and handle production support
Team player with a proactive attitude
About the Role
Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.
We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.
This is not a feature delivery role. This is a system ownership role.
You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.
What You’ll Own
- Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
- Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
- Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale.
- Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
- Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
- Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
- Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
- Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
- Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
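A repayment waterfall of the kind named above can be sketched as a deterministic allocation function. The priority order (fees, then interest, then principal) and bucket names are assumptions for illustration, not Qiro's actual logic.

```python
# Hedged sketch of a deterministic repayment waterfall: a payment is
# allocated to fees, then interest, then principal, in strict priority
# order. Bucket names and ordering are illustrative only.
def apply_waterfall(payment: float, due: dict[str, float]) -> dict[str, float]:
    """Allocate `payment` across obligations in priority order."""
    order = ["fees", "interest", "principal"]
    allocated = {}
    remaining = payment
    for bucket in order:
        amount = min(remaining, due.get(bucket, 0.0))
        allocated[bucket] = amount
        remaining -= amount
    allocated["unapplied"] = remaining  # any excess stays visible and traceable
    return allocated

alloc = apply_waterfall(1000.0, {"fees": 50.0, "interest": 200.0, "principal": 900.0})
# alloc -> {'fees': 50.0, 'interest': 200.0, 'principal': 750.0, 'unapplied': 0.0}
```

The point of the shape is determinism: given the same payment and the same dues, the allocation is identical every time, which is what makes financial state transitions auditable.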
Who You Are
- 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
- Strong backend architecture experience in Python and/or TypeScript.
- Comfortable designing distributed systems and financial workflows.
- Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
- Strong understanding of API design, state management, and data modeling.
- Able to navigate ambiguity and build 0→1 infrastructure.
- Hands-on builder who leads by writing production-grade code.
We Value
- Experience with underwriting engines or policy-driven decision systems.
- Exposure to smart contracts and blockchain integrations.
- Familiarity with PostgreSQL and event-driven architectures.
- Experience in early-stage or high-growth startups.
- Strong product thinking and ability to translate complex financial logic into scalable systems.
Why Join Qiro
- Lead the architecture of a programmable credit infrastructure platform.
- Join the founding technical leadership team.
- High autonomy and ownership — your decisions shape the company.
- In-office collaboration in Bangalore for speed and iteration.
- Competitive compensation and meaningful equity.
Our Culture
We operate with:
- First-principles thinking
- Technical craftsmanship
- High ownership
- Fast execution with long-term architectural discipline
Key Responsibilities:
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
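The pipeline shape these responsibilities describe (preprocess, train, evaluate) can be sketched without any ML framework. A nearest-centroid toy model stands in for a real estimator here; production work would use scikit-learn, TensorFlow, or PyTorch.

```python
# Dependency-free sketch of an ML pipeline's shape: preprocess -> fit ->
# evaluate. The nearest-centroid "model" is a stand-in for illustration.
def standardize(xs: list[float]) -> list[float]:
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5 or 1.0
    return [(x - mean) / std for x in xs]

def fit_centroids(X: list[float], y: list[int]) -> dict[int, float]:
    """One centroid per class: the mean of that class's features."""
    return {
        label: sum(x for x, lab in zip(X, y) if lab == label)
        / sum(1 for lab in y if lab == label)
        for label in set(y)
    }

def predict(centroids: dict[int, float], x: float) -> int:
    return min(centroids, key=lambda lab: abs(centroids[lab] - x))

X = standardize([1.0, 1.2, 0.8, 5.0, 5.2, 4.8])  # preprocessing step
y = [0, 0, 0, 1, 1, 1]
model = fit_centroids(X, y)                       # training step
accuracy = sum(predict(model, x) == lab for x, lab in zip(X, y)) / len(y)
# accuracy -> 1.0 on this toy, well-separated data
```

Real pipelines add the pieces the posting lists around this skeleton: feature engineering before `fit`, held-out evaluation instead of training-set accuracy, and MLOps tooling (MLflow, Airflow, DVC) for versioning and monitoring.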
Required Skills:
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building, fine-tuning, and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning, Generative AI, or Voice AI is an added advantage.
Educational Qualification:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or IT; candidates from IIT/NIT colleges strongly preferred
Key Skills:
• Hands-on experience with AWS services such as EC2, S3, Lambda, API Gateway, RDS, or DynamoDB ☁️
• Basic understanding of AI/ML concepts and experience with Python-based ML libraries (NumPy, Pandas, Scikit-learn, etc.) 🤖
• Experience in Python / Node.js / Java for backend development 💻
• Understanding of REST APIs and microservices architecture
• Familiarity with Git, CI/CD pipelines, and DevOps fundamentals
• Knowledge of Docker / containerization (preferred) 🐳
• Basic understanding of cloud security, IAM roles, and policies 🔐
• Experience in using AI tools (e.g., ChatGPT, GitHub Copilot, or similar tools) for development, debugging, documentation, and productivity in day-to-day tasks ⚡
Roles & Responsibilities:
• Develop and maintain cloud-based applications on AWS ☁️
• Build and integrate APIs and backend services
• Assist in deploying, monitoring, and managing applications on AWS infrastructure
• Work with the team to integrate AI/ML models or AI-powered services into applications 🤖
• Utilize AI tools for coding assistance, debugging, automation, and improving development efficiency
• Optimize applications for performance, scalability, and reliability
• Collaborate with cross-functional teams for design, development, and deployment
• Troubleshoot and resolve cloud or application-related issues
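On the IAM roles-and-policies point above, a minimal least-privilege policy document might look like the following sketch; the bucket and prefix names are placeholders.

```python
# Illustrative sketch: generating a least-privilege IAM policy document
# granting read-only access to a single S3 prefix. Bucket and prefix
# names are placeholders, not a real environment.
import json

def s3_readonly_policy(bucket: str, prefix: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Read objects only under the given prefix.
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
            },
            {
                # Listing is a bucket-level action; scope it with a prefix condition.
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }

policy = s3_readonly_policy("example-app-data", "reports")
document = json.dumps(policy, indent=2)  # ready to attach to a role
```

Note the split the policy relies on: `s3:GetObject` targets object ARNs (`bucket/key`), while `s3:ListBucket` targets the bucket ARN itself, so the listing grant needs the `s3:prefix` condition to stay least-privilege.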
AWS Certification is mandatory
Education Qualification:
B.Tech/M.Tech from CSE/IT/AI/ML/ECE
Key Requirements / Skills
- 6+ years of overall experience in software development with strong expertise in building scalable web applications.
- 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
- Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
- Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
- Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
- AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
- Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
- Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
- Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
- Strong communication skills to effectively interact with technical and non-technical stakeholders.
- Experience working in high-compliance environments such as healthcare systems is a plus.
Education Qualifications:
- B.Tech/M.Tech in CSE/IT/AI/ML from a good university
EasySLR is pioneering the future of systematic literature reviews through AI and innovative technologies. Our platform, recognized by industry leaders and academic communities alike, redefines the way researchers conduct reviews, making the process faster, smarter, and more intuitive. We've been at the forefront of AI-driven research, presenting at major conferences and setting new standards in evidence synthesis. If you are a visionary leader with a passion for technology and a drive to make a significant impact, we want you to join our mission to transform the research landscape.
Responsibilities :
- Lead and mentor a team of talented engineers, fostering a culture of innovation, collaboration, and continuous learning.
- Architect and oversee the development of a scalable, high-performance platform that integrates cutting-edge AI technologies and industry best practices.
- Drive the engineering strategy, ensuring alignment with our product vision and business goals.
- Collaborate closely with cross-functional teams, including product, design, and AI experts, to deliver a world-class product experience.
- Ensure the robustness, security, and scalability of our infrastructure, leveraging your deep expertise in cloud computing and full-stack development.
- Stay ahead of emerging technologies, incorporating the latest advancements into our platform and maintaining our competitive edge.
- Cultivate a high-performing engineering team through effective hiring, coaching, and professional development opportunities.
Requirements :
- 4+ years of experience in software engineering, with a proven track record of leading high-performing engineering teams.
- Expertise in full-stack development, with hands-on experience in Python, Node.js, and frameworks like Next.js.
- Extensive experience with cloud platforms, particularly AWS, and familiarity with tools like AWS Lambda, AWS CDK, and containerization technologies.
- Strong background in designing and scaling complex, distributed systems with a focus on performance and security.
- Experience in AI/ML-driven product development is a significant plus.
- Exceptional problem-solving skills, with a strategic mindset and the ability to make data-driven decisions.
- Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
What We Offer :
- The opportunity to lead a cutting-edge platform at the intersection of AI and systematic literature reviews.
- Competitive compensation and a clear path to executive leadership.
- A vibrant, inclusive work culture that values diversity, innovation, and work-life balance.
- The chance to make a meaningful impact in a fast-growing, AI-first SaaS company shaping the future of research.
Ready to lead the engineering efforts that will drive the next generation of AI-driven systematic reviews? Join us at EasySLR and be part of a team that's revolutionizing the research process. Apply now and embark on an exciting journey at the forefront of technology and innovation.
Key Responsibilities
- Frontend Development: Designing and building responsive, interactive user interfaces using React.js, HTML5, CSS3, and modern JavaScript (ES6+).
- Backend Development: Developing robust, scalable server-side applications and microservices using Java and the Spring Boot framework.
- API Integration: Creating and consuming RESTful APIs to ensure seamless communication between the React frontend and the Java backend.
- Database Management: Designing and optimizing database schemas and queries using SQL (e.g., MySQL, PostgreSQL, Oracle) or NoSQL (e.g., MongoDB) databases.
- State Management: Managing application state in React using tools like Redux, Hooks, or Context API.
- Testing & Quality Assurance: Writing unit and integration tests using frameworks such as JUnit for backend and Jest or React Testing Library for frontend.
- DevOps & Deployment: Collaborating on CI/CD pipelines and using containerization tools like Docker and Kubernetes for application deployment.
Required Skills & Qualifications
- Core Technical Skills:
- Deep proficiency in Java (8+) and the Spring ecosystem (Spring Boot, Spring Security, Spring Data JPA).
- Expertise in React.js workflows, component-based architecture, and hooks.
- Strong understanding of Microservices architecture and cloud platforms (AWS, Azure, or GCP).
- Experience & Education:
- Typically requires a Bachelor's degree in Computer Science or a related field.
- Proven experience (often 1–5+ years depending on seniority) in full-stack development.
- Tools: Version control systems like Git, build tools like Maven or Gradle, and Agile project management tools like Jira.
Typical Salary Ranges (India)
- Freshers: ₹3.8 Lakh to ₹12 Lakh per year.
- Experienced (5+ years): ₹18 Lakh to ₹30 Lakh+ per year.
- Average (General): Approximately ₹29 Lakh per year for high-demand specialized roles.
Job Title: Full Stack Engineer (Django + Next.js)
We’re looking for a Full Stack Engineer with strong backend fundamentals and solid frontend experience to build scalable web products and APIs.
Must-Have
• Django + DRF (2+ years): Models, serializers, services, API views, migrations, query optimization (select_related / prefetch_related), transaction.atomic, custom managers
• Next.js + React (2+ years): App Router, SSR, client components, dynamic imports, useQuery, responsive UIs with Tailwind
• REST APIs: Auth, permissions, pagination, error handling, CORS, JWT flows
• PostgreSQL: Schema design, indexes, constraints, JSON fields, raw SQL when needed
• Celery / async tasks: Retry logic, idempotency, task chaining
• Git: Clean commits, branching, PR workflow
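The Celery bullet above names retry logic and idempotency. A minimal plain-Python sketch of both patterns (no Celery dependency; names like `charge_once` are illustrative, not from the posting):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Retry a callable with exponential backoff, the behaviour Celery's
    retry/backoff options give a task."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

_processed = set()  # stands in for a Redis/DB deduplication store

def charge_once(payment_id, charges):
    """Idempotent task body: a redelivered message with the same
    payment_id must not charge twice."""
    if payment_id in _processed:
        return "skipped"
    charges.append(payment_id)
    _processed.add(payment_id)
    return "charged"
```

The key design point: retries make duplicate delivery inevitable, so the task body (not the broker) is responsible for deduplicating on a stable key.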
Good to Have
• AI / LLM integrations
• AWS S3 and presigned uploads
• Multi-tenancy
• WebRTC / MediaRecorder
• Docker
• Testing with pytest / Django TestCase / factory_boy
We’re looking for someone who can independently own features end-to-end and write clean, scalable code.
Job Summary
We are seeking a skilled Python Platform Developer to join our engineering team. You will be responsible for building, optimizing, and maintaining the core backend infrastructure and internal platforms that power our applications. The ideal candidate will build scalable API architectures, enhance data security, and implement automation to improve developer productivity.
Key Responsibilities
- Platform Development: Design, develop, and maintain robust and scalable backend services, API frameworks, and shared libraries using Python.
- Infrastructure Automation: Build and maintain tools for infrastructure automation using technologies such as AWS (Lambda, EC2, S3), Docker, and Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Performance Optimization: Improve system performance, deliver low-latency API interactions, and optimize data storage solutions.
- CI/CD Optimization: Develop, maintain, and improve automated testing and continuous integration/continuous deployment (CI/CD) pipelines.
- Collaboration: Work closely with product engineers, DevOps, and frontend developers to define requirements and deliver reliable infrastructure solutions.
- Security & Monitoring: Implement strong security protocols and monitoring solutions (e.g., Prometheus, Datadog) to ensure platform reliability.
Required Skills and Qualifications
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 3–5+ years of experience in software development with a heavy focus on Python.
- Core Python: Deep understanding of Python 3.x, object-oriented programming (OOP), and asynchronous programming (e.g., asyncio).
- Frameworks: Hands-on experience with web frameworks like FastAPI, Django, or Flask.
- Cloud Platforms: Experience with AWS or GCP services.
- Tools: Proficient with Git, Docker, and CI/CD pipelines.
- Database: Strong knowledge of SQL and database management.
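The asynchronous-programming requirement above centres on `asyncio`. A minimal sketch of the core idea, concurrent awaiting, using `asyncio.sleep` as a stand-in for real non-blocking I/O such as an HTTP call:

```python
import asyncio

async def fetch(name, delay):
    # stand-in for a non-blocking I/O call (e.g. an HTTP request)
    await asyncio.sleep(delay)
    return name

async def gather_all():
    # both "requests" run concurrently; total time is roughly
    # max(delays), not their sum
    return await asyncio.gather(fetch("users", 0.02), fetch("orders", 0.01))

results = asyncio.run(gather_all())
```

`asyncio.gather` preserves argument order, so `results` is `["users", "orders"]` even though the second coroutine finishes first.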
Preferred Skills
- Experience with serverless architectures.
- Knowledge of Kubernetes.
- Experience in a DevOps or Site Reliability Engineering (SRE) role.
DevOps Engineer
Location: Bangalore office
About Peliqan
Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.
The Role
Own and evolve the infrastructure powering Peliqan's multi-tenant data platform. You'll manage Kubernetes clusters, cloud resources, CI/CD pipelines, and monitoring — keeping everything reliable, secure, and scalable. You'll be the go-to person for infrastructure support across the engineering team.
Responsibilities
Manage and optimise Kubernetes clusters running production workloads — data pipelines, APIs, and customer-facing services.
Maintain Docker-based local development environments for the engineering team.
Administer cloud infrastructure on AWS and Google Cloud (compute, storage, networking, managed databases).
Build and maintain CI/CD pipelines for automated testing, building, and deploying across staging and production.
Set up and manage monitoring, alerting, and logging for platform health and incident response.
Manage release processes — deployments, rollbacks, and release strategies.
Maintain infrastructure-as-code using Helm charts.
Support security hardening and compliance efforts (SOC 2, GDPR).
Requirements
3+ years in a DevOps, SRE, or Infrastructure Engineering role.
Strong hands-on experience with Kubernetes and Helm charts.
Deep familiarity with Docker for containerisation and local dev workflows.
Production experience with AWS and/or Google Cloud.
Proficiency in Python and Bash scripting for automation and tooling.
Solid grasp of DevOps principles: infrastructure-as-code, GitOps, observability, continuous delivery.
Experience with CI/CD platforms (GitHub Actions, GitLab CI, or similar).
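The scripting-for-tooling requirement above is the kind of small glue code monitoring and alerting pipelines are built from. A stdlib-only sketch (the log format and names are illustrative, not Peliqan's):

```python
from collections import Counter

def error_summary(log_lines):
    """Count log levels from structured 'LEVEL message' lines,
    e.g. to decide whether an alert threshold has been crossed."""
    levels = Counter()
    for line in log_lines:
        level, _, _ = line.partition(" ")
        levels[level] += 1
    return levels

logs = [
    "INFO pipeline started",
    "ERROR connector timeout",
    "ERROR retry exhausted",
    "INFO pipeline finished",
]
summary = error_summary(logs)
```

In practice the input would be tailed from a file or shipped by a log agent, and the counts fed to an alerting rule (e.g. page if `ERROR` exceeds a threshold per window).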
Nice to Have
- Experience supporting multi-tenant SaaS platforms or data infrastructure at scale.
- Knowledge of PostgreSQL, MySQL, or cloud-managed database administration.
- Exposure to security compliance frameworks (SOC 2, ISO 27001, GDPR).

Job Title: Consultant – Enterprise Application Development
Location: Bengaluru (Hybrid / On-site)
Engagement: Full-Time
Experience: 10 – 15 years preferred
About Us: Introducing VTT, a comprehensive mobility service provider catering to diverse multinational sectors like IT/ITES, KPO/BPO, Financial, Pharma, and more across Indian cities. Our “Managed Mobility Program” includes Fleet Management, Technology, Resource Management, Car Rentals, Logistics, and Special Services (Ambulance and PWD vehicles). Trusted by Fortune companies such as Cisco, Morgan Stanley, Wells Fargo, Google, PWC, and others, we pride ourselves on leveraging expertise and cutting-edge technology for safe, efficient, and uninterrupted service delivery. With a commitment to excellence, we ensure best-in-class standards for all our clients. The trip to school is now timely, comfortable, and secure: our well-maintained fleet enriches your child’s commute, keeping students punctual and safe thanks to GPS tracking paired with well-trained drivers. Our routes are carefully planned, our drivers attentive, and everything hassle-free.
Role Overview
We are looking for a seasoned Consultant with comprehensive expertise in enterprise-level application development across backend, frontend, mobile, DevOps, and cloud. The role demands a strong architectural mindset combined with hands-on execution. The Consultant will also play a critical role in understanding the current system architecture end-to-end, driving technical improvements, building the tech team foundation, and establishing structured technical documentation.
Key Responsibilities
• Understand the complete architecture of the existing systems, including web, mobile, backend services, and cloud environment.
• Provide hands-on leadership across backend, frontend, mobile, DevOps, and cloud infrastructure.
• Architect and optimize enterprise-grade applications for scalability, security, performance, and reliability.
• Conduct technical due diligence on current systems and propose improvements or refactoring plans.
• Build the foundation for the internal engineering team including hiring support, role definitions, and best-practice processes.
• Drive engineering workflows including coding standards, branching strategy, CI/CD, monitoring, and release management.
• Create comprehensive technical documentation covering system architecture, API specs, deployment playbooks, and SOPs.
• Review code and provide mentorship to engineering resources.
• Coordinate with product and business teams to translate requirements into technical design and actionable development roadmap.
• Troubleshoot and resolve deep-stack issues during development or production.
Technical Expertise Required
Backend
• Java / Spring Boot
• Node.js
• Microservices architecture
• REST / GraphQL
Frontend
• React.js
• Responsive UI, component-based architecture, state management
Mobile
• Flutter
• React Native
Cloud & DevOps
• AWS (ECS / EKS / EC2 / RDS / Lambda / S3 / IAM / CloudWatch etc.)
• CI/CD pipelines (GitHub Actions / Jenkins / GitLab CI or equivalent)
• Docker / Kubernetes
• Infrastructure-as-code (Terraform / CloudFormation)
Database
• MongoDB
• Knowledge of PostgreSQL / MySQL is an added advantage
Professional Attributes
• Strong architectural thinking with the ability to simplify complex systems.
• Excellent communication and stakeholder management skills.
• Ability to work independently without constant supervision.
• Capability to mentor, lead, and build an engineering team from scratch.
• Process-driven mindset with a focus on best practices and documentation.
Deliverables
• Architectural understanding and documentation of current systems.
• Recommendations and implementation plan for system upgrades or restructuring.
• Establishment of core engineering processes and standards.
• Hiring support and technical evaluation of developers.
Job Title : Golang Backend Developer
Experience : 3+ Years
Location : Bangalore (Work From Office)
Notice Period : Immediate to 15 Days (Strict)
🚀 About the Role :
We are looking for a Backend Developer with strong Golang expertise to build scalable, high-performance systems. You will play a key role in designing microservices, handling concurrent workloads, and developing robust backend architectures for production-scale applications.
🔥 Mandatory Skills :
Strong hands-on experience in Golang, Microservices Architecture, REST APIs, Concurrency (goroutines & channels), PostgreSQL/MySQL, Redis, Messaging Systems (Kafka/RabbitMQ/SQS), AWS/GCP, Docker & Kubernetes, and CI/CD pipelines.
🛠️ Key Responsibilities :
- Design, develop, and maintain scalable backend services using Golang.
- Build high-performance REST APIs and microservices.
- Develop concurrent and distributed systems using goroutines and channels.
- Implement event-driven and asynchronous architectures.
- Optimize system performance, latency, and database efficiency.
- Integrate messaging systems and caching layers for scalability.
- Collaborate with cross-functional teams for end-to-end delivery.
- Ensure high code quality, testing, and system reliability.
- Monitor, debug, and enhance production systems.
Required Skills & Qualifications :
- Strong hands-on experience in Golang (must-have).
- Solid understanding of Concurrency in Go (goroutines, channels, worker pools).
- Experience with Microservices Architecture.
- Strong knowledge of RESTful API development.
- Proficiency in Databases : PostgreSQL / MySQL / MongoDB.
- Hands-on experience with Redis (caching).
- Experience with Messaging Systems: Kafka / RabbitMQ / SQS.
- Hands-on experience with AWS or GCP.
- Experience with Docker & Kubernetes.
- Familiarity with CI/CD pipelines (GitHub Actions, Jenkins, etc.).
- Strong understanding of Data Structures, Algorithms, and System Design.
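The concurrency requirement above centres on Go's worker-pool pattern: goroutines ranging over a channel until it closes. Since new examples here are kept in one language, this is a language-neutral sketch of the same pattern using Python's `queue.Queue` and threads (a sentinel value plays the role of a closed channel):

```python
import queue
import threading

def worker(jobs, results):
    # like a goroutine doing `for n := range jobs`
    while True:
        n = jobs.get()
        if n is None:          # sentinel = channel closed
            break
        results.put(n * n)

jobs, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(3)]
for t in threads:
    t.start()
for n in range(5):
    jobs.put(n)
for _ in threads:
    jobs.put(None)             # one sentinel per worker
for t in threads:
    t.join()
squares = sorted(results.get() for _ in range(5))
```

In Go the sentinels disappear: `close(jobs)` ends every worker's `range` loop, and a `sync.WaitGroup` replaces the joins.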
Good to Have :
- Experience with gRPC-based microservices.
- Familiarity with monitoring tools like Prometheus, Grafana.
- Exposure to high-scale distributed systems.
- Experience with event-driven architectures.
- Knowledge of security practices (JWT, OAuth2, RBAC).
What We’re Looking For :
- Strong problem-solving and debugging skills.
- Ownership mindset with end-to-end feature delivery.
- Ability to write clean, efficient, and maintainable code.
- Comfortable working in a fast-paced, high-growth environment.
You can also register yourself on the below platform to proceed further:
Job Description - Manager Sales
Minimum 15 years of experience.
Should have experience selling the Cloud IT SaaS product portfolio that Savex deals with.
Team management experience, including leading a cloud business and its teams.
Sales manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45 years
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested, reply with your CV and the details below:
Total experience -
Current CTC -
Expected CTC -
Notice period -
Current location -
Qualification -
Total experience in channel sales -
Which Cloud IT products have you sold?
What annual revenue have you generated through sales?
Role: Sr PostgreSQL Database Administrator (DBA)
Location: Hyderabad, India
Job description
We are looking for a highly capable PostgreSQL DBA who can take complete ownership of database operations in a high-volume, mission-critical environment. This role requires deep expertise in PostgreSQL internals, strong troubleshooting ability, and hands-on experience managing large-scale databases in cloud environments like AWS or Azure.
Responsibilities:
- Manage and maintain PostgreSQL databases across production, staging, and development environments
- Perform and manage logical and physical backups, restores, and recovery strategies
- Execute database upgrades (minor and major versions) with minimal downtime
- Design and manage replication setups, including streaming replication and point-in-time recovery (PITR)
- Troubleshoot performance issues including locks, blocking, long-running queries, and system load bottlenecks
- Tune and manage PostgreSQL configuration parameters and leverage system catalogs for deep analysis
- Perform routine maintenance tasks such as vacuuming, indexing, and database housekeeping
- Deploy and manage PostgreSQL in cloud environments like AWS (RDS/EC2) or Azure
- Implement and maintain high availability (HA) solutions using tools like Repmgr, Pgpool, or EFM
- Manage and optimize multi-terabyte databases ensuring performance and scalability
- Monitor database health and implement proactive measures to prevent outages
- Collaborate with engineering, DevOps, and product teams to support application performance
- Document processes, configurations, and recovery procedures
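The indexing and query-troubleshooting responsibilities above come down to one habit: check whether the planner actually uses an index before and after you create it. A stdlib sketch using SQLite's `EXPLAIN QUERY PLAN` as a stand-in for PostgreSQL's `EXPLAIN` (table and index names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, v TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(100)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT v FROM t WHERE id = 42")   # full table scan
con.execute("CREATE INDEX t_id ON t(id)")
after = plan("SELECT v FROM t WHERE id = 42")    # index search
```

On PostgreSQL the equivalent workflow is `EXPLAIN (ANALYZE)` plus the system catalogs (`pg_stat_activity`, `pg_stat_user_indexes`) to find queries that scan when they should seek.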
Requirements:
- Strong knowledge of PostgreSQL architecture and internals
- Hands-on experience with backup/restore strategies, replication, and disaster recovery
- Proven experience in performance tuning and troubleshooting production issues
- Experience managing large-scale (multi-TB) PostgreSQL databases
- Solid understanding of PostgreSQL system catalogs and configuration tuning
- Experience with cloud-based PostgreSQL deployments (AWS or Azure)
- Hands-on experience with high availability and clustering tools (Repmgr / Pgpool / EFM)
- Strong Linux/Unix administration and scripting skills
- Ability to work independently and quickly learn new technologies
- Experience supporting high-volume, mission-critical production environments
- Strong communication and collaboration skills across cross-functional teams
Software Engineer – EdTech (PHP)
Experience: 3+ Years
Work Mode: Permanent Work From Home
Role Summary
We are seeking a highly skilled software developer with strong experience in EdTech platforms and education ERP systems. The ideal candidate will have expertise in core PHP/Laravel and database technologies, with hands-on experience in building and scaling education-focused modules such as LMS, online examination systems, admissions, and fee management.
This role focuses on developing scalable, secure, and high-performance solutions for schools, colleges, and online learning platforms.
Key Responsibilities
- Design, develop, and maintain Education ERP and EdTech platform modules.
- Build and enhance systems for LMS (Learning Management System), online exams, admissions, fee management, HR, and finance.
- Develop and optimize REST APIs/GraphQL services for seamless integration with web and mobile platforms.
- Ensure high performance, scalability, and security for large-scale student and institutional data.
- Work closely with product, QA, and implementation teams to deliver EdTech features.
- Conduct code reviews, maintain coding standards, and mentor junior developers.
- Continuously improve platform capabilities based on EdTech trends and user needs.
Required Skills & Qualifications
- Strong expertise in Core PHP (Laravel Framework).
- Solid experience with MySQL, MongoDB, PostgreSQL (database design & optimization).
- Understanding of EdTech workflows like student lifecycle, course management, and assessments.
- Frontend basics: JavaScript, jQuery, HTML, CSS (React/Vue is a plus).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email services).
- Familiarity with Git/GitHub, Docker, and CI/CD pipelines.
- Knowledge of cloud platforms (AWS, Azure, GCP) is an advantage.
- Minimum 3+ years of development experience, with at least 2 years in education ERP/EdTech systems.
Preferred Experience
- Prior experience working in EdTech companies or education ERP platforms.
- Deep understanding of LMS, online examination systems, admissions, fees, HR, and finance modules.
- Experience handling high-traffic educational platforms (e.g., exam portals, live classes, student dashboards).
- Exposure to scalable architecture for large student/user bases.
About the Company
At Redpin we simplify life's most important payments. Buying a new property overseas can be a stressful time, especially when it comes to moving your money. Through our Currencies Direct and TorFX brands we've been helping people do just that for over 25 years. With recent investment, we're now on a mission to build a new range of digital products and services that will make moving money internationally for real estate purchases even easier.
We’re on a mission to become the solution for Real Estate payments everywhere. To do this, we are transitioning our business from a horizontal FX platform to a verticalized, embedded software company, as we look to the future and Redpin 2.0.
About the Role
At Redpin, we’re passionate about building software that solves problems. We count on our site reliability engineers (SREs) to empower users with a rich feature set, high availability, and stellar performance level to pursue their missions. As we expand customer deployments, we’re seeking an experienced SRE to deliver insights from massive-scale data in real time. Specifically, we’re searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for every interaction.
What You'll Do
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Build software and systems to manage platform infrastructure and applications
- Improve reliability, quality, and time-to-market of our suite of software solutions.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
- Provide primary operational support and engineering for multiple large-scale distributed software applications.
- Design, implement, and maintain highly available and scalable infrastructure and systems on AWS.
- Gather and analyze metrics from operating systems as well as applications to assist in performance tuning and fault finding.
- Partner with development teams to improve services through rigorous testing and release procedures.
- Participate in system design consulting, platform management, and capacity planning.
- Create sustainable systems and services through automation and uplifts.
- Balance feature development speed and reliability with well-defined service-level objectives
What You’ll Need
- Bachelor’s degree in Computer Science, Software Engineering, or a related field (Master's degree preferred).
- 4+ years of experience as a Site Reliability Engineer or in a similar role.
- Strong knowledge of system architecture, infrastructure design, and best practices.
- Proficiency in scripting and automation using languages like Python, Bash, or similar technologies.
- Experience with cloud platforms such as AWS, including infrastructure provisioning and management.
- Strong understanding of networking principles and protocols.
- Experience supporting applications built on Java, Spring Boot, Hibernate JPA, Python, React, and .NET.
- Knowledge of API gateway solutions like Kong and Layer 7.
- Experience working with databases such as Elasticsearch, SQL Server, and PostgreSQL.
- Familiarity with messaging systems like MQ, ActiveMQ, and Kafka.
- Proficiency in managing servers such as Tomcat, JBoss, Apache, NGINX, and IIS.
- Experience with containerization using EKS (Elastic Kubernetes Service).
- Knowledge of CI/CD processes and tools like Jenkins, Artifactory, and Ansible.
- Proficiency in monitoring tools such as Coralogix, CloudWatch, Zabbix, Grafana, and Prometheus.
- Strong problem-solving and troubleshooting skills with the ability to analyse and resolve complex technical issues.
- Excellent communication and collaboration skills to work effectively in a team environment.
- Strong attention to detail and ability to prioritize and manage multiple tasks simultaneously.
- Self-motivated and able to work independently with minimal supervision.
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law
Hiring SOC Investigation Specialist on behalf of high-growth technology and enterprise partners building next-generation SOC automation and AI-driven investigation systems. This role is ideal for experienced SOC analysts who can apply real-world investigative judgment to review, validate, and construct high-quality security investigations across SIEM, endpoint, cloud, and identity environments.
Responsibilities
- Review, monitor, and evaluate SOC alerts and investigation outputs based on predefined scenarios and criteria.
- Distinguish true positives from false positives by validating investigative evidence and alert context.
- Perform end-to-end security investigations when required, including log analysis, entity pivoting, timeline reconstruction, and evidence correlation.
- Assess the correctness, completeness, and quality of SOC investigations produced by automated or human workflows.
- Apply consistent investigative judgment while recognizing that multiple valid investigation paths may exist for the same alert.
- Make clear binary determinations (e.g., ACCEPT / PASS) while also producing detailed ground-truth investigations when required.
- Use Splunk extensively to pivot across logs, entities, and timelines, including reading and reasoning about SPL queries.
- Maintain clear and accurate documentation of investigative steps, assumptions, evidence, and conclusions.
- Collaborate with program leads and other expert annotators to uphold high-quality investigation and annotation standards.
- Mentor or support other analysts where applicable, particularly in long-term or lead annotator roles.
Requirements
- 3+ years of hands-on experience as a SOC analyst in a production SOC environment (Tier 2 or above strongly preferred).
- Strong understanding of alert triage, incident investigation workflows, and evidence-based decision-making under time constraints.
- Mandatory hands-on experience with Splunk, including:
- Conducting investigations using Splunk
- Reading, understanding, and reasoning about SPL queries
- Pivoting between logs, entities, and timelines
- Proven ability to evaluate SOC investigations and determine whether conclusions are valid, incomplete, or incorrect.
- Strong investigative judgment and comfort making decisive evaluations.
- Fluent English (written and spoken) with strong documentation and communication skills.
Nice to Have
- Experience with Endpoint Detection & Response (EDR) tools such as CrowdStrike Falcon, Microsoft Defender for Endpoint, or SentinelOne.
- Experience analyzing cloud security logs and signals:
- AWS (CloudTrail, GuardDuty)
- Azure (Activity Log, Defender for Cloud)
- GCP (Cloud Audit Logs)
- Familiarity with Identity & Access Management platforms such as Okta Identity Cloud or Microsoft Entra ID (Azure AD).
- Experience with email security tools like Proofpoint or Mimecast.
- SOC leadership or mentoring experience.
- Basic scripting experience (Python or similar).
- Security certifications (optional): GCIA, GCIH, GCED, Splunk certifications, Security+, CCNA, or cloud security certifications.
About the Role
As an SDET II, you'll own significant parts of our test infrastructure and drive quality strategy across the engineering team. You'll design testing approaches for complex features, mentor junior engineers, and make architectural decisions that impact how we approach automation at scale.
What You'll Do
- Architect and implement test frameworks and infrastructure
- Design testing strategies for new features and platform capabilities
- Mentor SDET I engineers and conduct technical code reviews
- Refactor and optimize existing test suites for maintainability and performance
- Make architectural decisions about test design patterns and abstractions
- Build and manage AWS-based test environments and infrastructure
- Integrate testing earlier in the development lifecycle through cross-team collaboration
- Optimize CI/CD pipeline performance and test execution times
- Develop custom tooling and reporting to surface quality insights
Technical Requirements
Core Skills:
- Advanced TypeScript expertise: generics, decorators, advanced typing patterns, type inference
- Deep understanding of asynchronous programming, concurrency, and race condition prevention
- Strong software design principles with domain-driven design (DDD) approach
- Extensive experience with Playwright including deep knowledge of fixtures architecture
- Expert-level Git, GitHub, and distributed version control workflows
- Layered architecture design: understanding PCOM (Page Component Object Model) and POM patterns
- Object-oriented design in test frameworks—building scalable abstractions over linear scripts
- API testing and orchestration (REST/GraphQL integration with UI workflows)
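The POM requirement above is about building abstractions over linear scripts: selectors and flows live in a page object, and tests call intent-level methods. Playwright work here would normally be TypeScript; this is a framework-free Python sketch with a fake page object (all names are illustrative):

```python
class FakePage:
    """Stands in for a real browser Page in this sketch."""
    def __init__(self):
        self.filled, self.clicked = {}, []

    def fill(self, selector, value):
        self.filled[selector] = value

    def click(self, selector):
        self.clicked.append(selector)

class LoginPage:
    """Page Object: selectors and the login flow are encapsulated here,
    so tests never touch raw selectors."""
    USER, PASSWORD, SUBMIT = "#user", "#password", "#submit"

    def __init__(self, page):
        self.page = page

    def login(self, user, password):
        self.page.fill(self.USER, user)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)

page = FakePage()
LoginPage(page).login("alice", "s3cret")
```

If a selector changes, only `LoginPage` changes; in Playwright the page object would typically be provided to tests through a fixture.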
Infrastructure & DevOps:
- AWS: EC2 configuration, CloudWatch log analysis, debugging cloud environments
- Terraform for infrastructure as code (plus)
- Docker: containerization, docker-compose, image management
- CI/CD debugging: analyzing pipeline failures, optimizing execution
- Advanced reporting: Allure configuration, Playwright HTML reports, custom reporting solutions
Additional Experience:
- Test infrastructure development and framework architecture
- Design patterns implementation (Factory, Builder, Facade, Composite)
- Performance optimization at scale
- Familiarity with the npm ecosystem and package management (good to have)
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
We're seeking an experienced Senior Backend Engineer to join our team. As a senior backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Code Quality: Ensure code quality through code reviews, adherence to best practices, and continuous improvement
● Mentorship: Guide and mentor team members, fostering growth and innovation
● Collaboration: Work closely with stakeholders to align technical goals with business objectives
● Problem-Solving: Analyze and resolve technical challenges promptly
● Innovation: Stay updated with the latest technology trends and integrate them into solutions
Requirements:
● At least 7+ years of experience building scalable and reliable backend systems
● Strong expertise in NodeJS/NestJS, Express, PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
We're seeking an experienced Backend Software Engineer to join our team. As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Participate in code reviews and contribute to the improvement of the codebase
● Debug and resolve issues in the existing codebase
● Develop and execute unit tests to ensure high code quality
● Work with DevOps engineers to ensure seamless deployment of software changes
● Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
● Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
● Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
● 3+ years of experience building scalable and reliable backend systems
● Strong expertise in NodeJS/NestJS, Express, PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
Brief Description:
NonStop io is seeking a proficient React.js developer to join our front-end development team. In this position, you'll mainly craft and integrate UI components using React.js, along with state management libraries such as MobX and Redux.
Responsibilities:
● Developing and implementing UI components using React.js
● Collaborating with cross-functional teams to design and ship new features
● Building reusable components and front-end libraries for future use
● Translating designs and wireframes into high-quality code
● Optimizing components for maximum performance across various web browsers
● Troubleshooting and debugging issues to ensure smooth user experiences
● Participating in code reviews to maintain code quality and consistency
● Improving and optimizing the existing codebase
● Keeping up with industry trends
● Identifying issues with technologies and architecture and then implementing solutions
● Assist with ticket creation, refinement, and estimation
● Participate in sprint planning and ticket distribution for frontend team
Qualifications & Skills:
● Proficiency in React.js and its core principles
● Strong JavaScript, TypeScript, HTML5, and CSS3 skills
● Experience with popular React.js state management libraries and approaches (such as Mobx, Redux, and Context API)
● Familiarity with RESTful APIs and integration
● Knowledge of modern authorization mechanisms, such as JSON Web Tokens
● Understanding of front-end build tools and pipelines
● Excellent problem-solving and communication skills
● A strong attention to detail, and a passion for delivering high-quality code
● Expertise in designing scalable and efficient front-end architecture
● Adaptability to changing project requirements and priorities
● Experience with version control systems, particularly Git
● Experience working in Scrum and familiarity with Atlassian (Jira, Confluence, Bitbucket)
● Proven experience in leading teams
● Experience with diverse applications and architectural patterns
● A degree in computer science, software engineering, or a related field
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
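Token-based authorization like the JSON Web Tokens mentioned in the qualifications above can be sketched with only the standard library. The secret and claims below are made-up examples; production code would reach for a maintained library such as PyJWT rather than hand-rolling this:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a minimal HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Recompute the signature and compare in constant time before trusting claims."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret")["sub"])  # user-42
```

The key point for frontend integration is that the client only stores and forwards the token; validation always happens server-side.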
Key Responsibilities
AI Architecture & Solution Design
- Design end-to-end AI solution architectures, including:
- Generative AI and LLM-based systems
- Retrieval-Augmented Generation (RAG) pipelines
- Agentic and multi-agent workflows
- Define reference architectures and best practices for AI-enabled features within enterprise products.
- Ensure AI solutions integrate seamlessly with existing applications, data, and cloud architectures.
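The RAG pipelines mentioned above reduce to embedding a query, retrieving the nearest documents, and grounding the prompt in them. A minimal sketch with hand-made toy vectors; a real system would call an embedding model and a vector store, and the document names here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for a real embedding model's output.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# Ground the LLM prompt in the retrieved context instead of the model's memory.
context = retrieve([0.85, 0.15, 0.05])
prompt = f"Answer using only this context: {context}"
print(context)  # ['refund policy']
```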
AI Integration & MCP Servers
- Design and implement Model Context Protocol (MCP) servers to securely expose tools, APIs, and data to AI agents.
- Define standards for tool interfaces, access control, auditing, and safety guardrails.
- Enable product teams to onboard AI tools and capabilities using reusable, scalable integration patterns.
Agentic AI & Workflow Enablement
- Architect AI-driven workflows that support collaboration between humans and AI agents.
- Design AI-to-AI (A2A) and AI-to-system interaction patterns.
- Ensure agent behaviors are deterministic, explainable, and aligned with enterprise requirements.
Hands-On Development & Prototyping
- Build proofs-of-concept and production-ready implementations using Python and/or TypeScript.
- Rapidly validate ideas from ideation to deployment.
- Establish reusable frameworks, libraries, and CI/CD pipelines for AI development.
AI Governance, Quality & Safety
- Implement guardrails to minimize hallucinations, unsafe actions, and data leakage.
- Define evaluation and monitoring strategies for AI systems, including prompt regression and RAG accuracy checks.
- Ensure AI solutions comply with enterprise security, privacy, and governance standards.
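A prompt-regression check like the one described above can be as simple as asserting that answers still contain required facts after a prompt or model change. This is a sketch with a hypothetical case list and a stubbed model call; a real harness would call the deployed LLM:

```python
# Each case pairs a prompt with facts the answer must contain (invented examples).
CASES = [
    {"prompt": "What is our refund window?", "must_contain": ["30 days"]},
    {"prompt": "Which regions do we ship to?", "must_contain": ["EU", "US"]},
]

def model(prompt: str) -> str:
    """Stand-in for the real LLM client; returns canned answers."""
    answers = {
        "What is our refund window?": "Refunds are accepted within 30 days.",
        "Which regions do we ship to?": "We ship to the US and EU.",
    }
    return answers[prompt]

def run_regression(cases, model_fn):
    """Return (prompt, missing facts) pairs; an empty list means the suite passed."""
    failures = []
    for case in cases:
        answer = model_fn(case["prompt"])
        missing = [fact for fact in case["must_contain"] if fact not in answer]
        if missing:
            failures.append((case["prompt"], missing))
    return failures

print(run_regression(CASES, model))  # []
```

Gating deploys on an empty failure list gives a cheap first line of defence against prompt drift; RAG accuracy checks extend the same idea to retrieved context.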
Developer Enablement & Collaboration
- Partner with Product, Engineering, QE, Performance, and Security teams to deliver AI capabilities.
- Mentor teams on AI design patterns, tooling, and best practices.
- Contribute to internal AI communities through demos, documentation, and knowledge sharing.
Qualifications :
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Demonstrated expertise in cloud‑native system design, distributed architectures, and enterprise‑scale integrations.
- Proven ability to architect and implement AI-enabled systems, including integrating Large Language Models (LLMs) into production-grade software.
- Strong ownership of architectural decisions, technical direction, and solution delivery across complex, cross-functional initiatives.
- Hands-on experience applying security, observability, and automation best practices within enterprise environments.
- 6–10 years of experience in software architecture and distributed systems.
- 5+ years of experience building Generative AI or LLM-based solutions.
- Practical experience designing and implementing:
- Retrieval-Augmented Generation (RAG) architectures
- Agentic AI systems
- Tool-calling frameworks and AI integration layers
- Proficiency in Python and/or .Net/TypeScript/Node.js.
- Experience working with major cloud platforms such as Azure, AWS, or Google Cloud Platform (GCP).
Preferred Qualifications
- Experience with OpenAI, Azure OpenAI, Anthropic, or similar LLM platforms.
- Familiarity with Model Context Protocol (MCP) or equivalent AI tool-integration frameworks.
- Experience applying AI engineering practices beyond prototyping, including evaluation, reliability, and scalability considerations.
- Ability to translate ambiguous business problems into clear technical architecture and execution plans.
- History of influencing technical standards and mentoring senior engineers or architects.
- Experience with vector databases, embeddings, and retrieval optimisation.
- Experience building AI-enabled developer tooling and CI/CD pipelines.
- Prior experience in enterprise SaaS environments.
Job Title: Data Engineer
About the Role
We are looking for a highly motivated Data Engineer to join our growing team and play a critical role in shaping the data foundation of different software platforms. This role sits at the intersection of data engineering, product, and business stakeholders, and is responsible for building reliable data pipelines, delivering actionable insights, and ensuring data quality across systems.
You will work closely with internal teams and external partners to translate business requirements into scalable data solutions, while maintaining high standards for data integrity, performance, and usability.
Key Responsibilities
Data Engineering & Architecture
- Design, build, and maintain scalable data pipelines and ETL/ELT processes
- Develop and optimize data models in PostgreSQL and cloud-native architectures
- Work within the AWS ecosystem (e.g., S3, Lambda, RDS, Glue, Redshift) to support data workflows
- Ensure efficient ingestion and processing of large-scale datasets
Business & Partner Integration
- Collaborate directly with business stakeholders and external partners to gather requirements and deliver reporting solutions
- Translate ambiguous business needs into structured data models and dashboards
- Integrate with third-party APIs and other external data sources
Data Quality & Governance
- Implement robust data validation, monitoring, and QA processes
- Ensure consistency, accuracy, and reliability of data across the platform
- Troubleshoot and resolve data discrepancies proactively
Reporting & Analytics Enablement
- Build datasets and pipelines that power dashboards and reporting tools
- Support internal teams with ad hoc analysis and data requests
- Partner with product and engineering teams to embed data into the SaaS product experience
Performance & Scalability
- Optimize queries, pipelines, and storage for performance and cost efficiency
- Continuously improve system scalability as data volume and complexity grow
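The ETL/ELT responsibilities above follow a common extract-transform-load shape. Here is a minimal sketch using sqlite3 as an in-memory stand-in for PostgreSQL or Redshift; the event fields and validation rules are illustrative, not a real schema:

```python
import json
import sqlite3

def extract(raw_lines):
    """Parse newline-delimited JSON events (stand-in for reading files from S3)."""
    return [json.loads(line) for line in raw_lines]

def transform(events):
    """Drop invalid events and normalise field names and units."""
    return [
        {"user_id": e["uid"], "amount_cents": int(round(e["amt"] * 100))}
        for e in events
        if e.get("uid") and e.get("amt", 0) > 0
    ]

def load(rows, conn):
    """Write the cleaned rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (user_id TEXT, amount_cents INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (:user_id, :amount_cents)", rows)

conn = sqlite3.connect(":memory:")
raw = ['{"uid": "u1", "amt": 9.99}', '{"uid": "", "amt": 5}']  # second row is invalid
load(transform(extract(raw)), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

In production the same three stages would typically be separate tasks in an orchestrator such as Airflow, so each can be retried and monitored independently.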
Required Qualifications
- 3–6+ years of experience in Data Engineering or a related role
- Strong proficiency in Python for data processing and scripting
- Advanced experience with PostgreSQL (query optimization, schema design)
- Hands-on experience with AWS data architecture (S3, RDS, Lambda, Glue, Redshift, etc.)
- Experience integrating with external APIs
- Solid understanding of ETL/ELT pipelines, data modeling, and warehousing concepts
- Experience working cross-functionally with business stakeholders
Preferred Qualifications
- Experience in AdTech, eCommerce, or SaaS platforms
- Familiarity with BI tools (e.g., Looker, Tableau, Power BI)
- Experience with workflow orchestration tools (e.g., Airflow)
- Understanding of data governance and compliance best practices
- Exposure to real-time or streaming data pipelines
What We’re Looking For
- Strong problem-solver who can operate in a fast-paced, ambiguous environment
- Ability to balance technical depth with business context
- Excellent communication skills, able to work directly with non-technical stakeholders
- Ownership mindset with a focus on execution and quality
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and an async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
Description
Company is a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work with a broad spectrum of technologies and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
- You will work to develop delivery estimates and an estimated project plan.
- You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track, and you should be able to unblock the team when things get stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab, etc
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communication skills
We are a fully remote company and offer competitive compensation and benefits.
We are looking for a highly skilled Full Stack Developer (MERN Stack) with 3–5 years of experience to join our growing team. You will have the opportunity to work on cutting-edge technology solutions, build products from scratch, and contribute to scalable systems handling large volumes of data.
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications
- Build responsive and high-performance user interfaces using modern frontend frameworks
- Develop robust backend services and APIs
- Ensure seamless system performance while handling large-scale data without slowdowns
- Collaborate with cross-functional teams (product, design, QA) to meet business goals
- Optimize applications for maximum speed, scalability, and reliability
- Participate in architecture discussions and contribute to technical decisions
Required Skills & Qualifications:
Frontend
- Strong experience in React.js
- Hands-on experience with Next.js (mandatory)
- Good understanding of UI/UX principles and responsive design
Backend
- Solid experience in Node.js
- Experience with Python or Java is a plus
- Strong knowledge of RESTful APIs and microservices architecture
Databases
- Strong experience with SQL (mandatory)
- Experience with MongoDB is a plus
Caching & Messaging
- Experience with at least one: Redis, Kafka, or Cassandra
Other Requirements
- Strong problem-solving and analytical skills
- Ability to work in a fast-paced, collaborative environment
- Good communication and stakeholder management skills
Good to Have:
- Cloud certifications (AWS / Azure / GCP)
- Experience working on high-scale or distributed systems
- Exposure to DevOps practices and CI/CD pipelines
Why Join Us:
- Opportunity to work on cutting-edge tech and greenfield projects
- Ownership and freedom to build solutions from scratch
- Collaborative and growth-focused work environment
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- 6+ years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 4+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
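The queueing systems listed as nice-to-haves above all share the same producer/consumer shape: producers enqueue messages, workers consume and acknowledge them. A minimal sketch using the standard-library queue as a stand-in for SQS, Kafka, or RabbitMQ:

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    """Consume messages until a None sentinel arrives, acknowledging each one."""
    while True:
        msg = q.get()
        if msg is None:       # sentinel: shut the worker down
            q.task_done()
            break
        results.append(msg.upper())  # stand-in for real message processing
        q.task_done()                # acknowledge, like deleting an SQS message

t = threading.Thread(target=worker)
t.start()
for job in ["build", "deploy"]:  # producer side
    q.put(job)
q.put(None)
q.join()   # block until every message has been acknowledged
t.join()
print(results)  # ['BUILD', 'DEPLOY']
```

Managed brokers add durability, redelivery, and fan-out on top of this pattern, but the acknowledge-after-processing discipline shown here carries over directly.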
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
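The Dataform/dbt transformation logic mentioned above is, at its core, SQL SELECTs materialized as derived tables in the warehouse. A sketch using sqlite3 in place of BigQuery; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for BigQuery or another warehouse
conn.executescript("""
    CREATE TABLE raw_orders (user_id TEXT, amount REAL, status TEXT);
    INSERT INTO raw_orders VALUES
        ('u1', 10.0, 'paid'),
        ('u1', 5.0, 'paid'),
        ('u2', 7.0, 'refunded');

    -- A dbt-style "model": a derived table built from a SELECT over the raw layer
    CREATE TABLE fct_revenue AS
        SELECT user_id, SUM(amount) AS revenue
        FROM raw_orders
        WHERE status = 'paid'
        GROUP BY user_id;
""")
print(conn.execute("SELECT user_id, revenue FROM fct_revenue").fetchall())  # [('u1', 15.0)]
```

Tools like dbt and Dataform manage the dependency graph, testing, and scheduling around such models; the transformation itself stays plain SQL.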
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these interview dates. Only immediate joiners will be considered.