50+ Startup Jobs in Hyderabad | Startup Job openings in Hyderabad
Apply to 50+ Startup Jobs in Hyderabad on CutShort.io.
Software Engineer (Backend) – Kotlin & React
About Us
We are a high-agency startup building elegant technological solutions to real-world problems.
Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.
We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.
Role Overview
As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.
This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.
Key Responsibilities
System Development & Architecture
- Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
- Architect systems that are robust, high-performance, and production-ready.
- Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.
Full Stack Development
- Build fast, maintainable front-end applications using React.
- Ensure seamless integration between frontend systems and backend services.
Cloud Infrastructure
- Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
- Implement scalable deployment pipelines, monitoring, and infrastructure optimization.
Product & Technical Collaboration
- Work closely with founders and product stakeholders to translate business problems into technical solutions.
- Contribute actively to product and engineering roadmap decisions.
Performance Optimization
- Continuously improve system performance, scalability, and reliability.
- Implement efficient algorithms and system optimizations to gain a technical advantage.
Engineering Excellence
- Write clean, well-tested, and maintainable code.
- Maintain strong engineering standards across the codebase.
Required Skills & Qualifications
We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.
Core Requirements
- Strong computer science fundamentals (Data Structures, Algorithms, System Design).
- Experience with Kotlin or JVM languages such as Java or Scala.
- Experience building modern React applications.
- Hands-on experience with cloud platforms (AWS / Azure / GCP).
- Experience designing and deploying scalable distributed systems.
- Strong problem-solving and analytical thinking.
Preferred / Bonus Skills
- Experience with Google Maps APIs or geospatial integrations.
- Prior startup experience.
- Contributions to open-source projects.
- Personal side projects demonstrating strong engineering ability.
Ideal Candidate
You will thrive in this role if you:
- Take ownership of problems, not just tasks.
- Are comfortable working in high-ambiguity environments.
- Have a builder mindset and enjoy creating systems from scratch.
- Learn quickly and execute with speed and precision.
This Role May Not Be For You If
- You prefer strict task assignments and detailed specifications before starting work.
- You want to focus only on coding tickets without product involvement.
- You prefer large teams with multiple layers of management.
Why Join Us
- Build 0 → 1 products with massive ownership.
- Work in a flat organization with no unnecessary hierarchy.
- Collaborate directly with founders and core product builders.
- Your contributions will have immediate and visible impact.
- Flexible remote work environment.
- Opportunity to shape the technology, culture, and future of the company.
If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.
Strong Enterprise Data Modeller profile (Modern Data Platforms)
Mandatory (Experience 1) – Must have 7+ years of experience in Data Modeling or Enterprise Data Architecture, with strong hands-on expertise in designing conceptual, logical, and physical data models for enterprise data platforms.
Mandatory (Experience 2) – Must have strong hands-on experience with enterprise data modeling tools such as Erwin, ER/Studio, PowerDesigner, SQLDBM, or similar.
Mandatory (Experience 3) – Must have a deep understanding of dimensional modeling (Kimball/Inmon methodologies), normalization techniques, and schema design for modern data warehouse environments.
Mandatory (Experience 4) – Proven experience designing data models for modern data platforms such as Snowflake, Databricks, Redshift, Dremio, or similar cloud data warehouse / lakehouse systems.
Mandatory (Experience 5) – Must have strong SQL expertise and schema design skills, with the ability to validate data model implementations and collaborate closely with data engineering teams.
Mandatory (Education) – Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related technical field.
Mandatory (Note) – Total experience should not exceed 14 years.
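To make the dimensional-modeling requirement concrete, here is a minimal star-schema sketch: one fact table keyed to two dimension tables, with a typical aggregate query over a dimension attribute. The retail tables and column names are hypothetical examples, not taken from this role; SQLite is used only because it ships with Python.

```python
import sqlite3

# Minimal star-schema sketch (hypothetical retail example): one fact table
# joined to dimension tables via surrogate keys, per Kimball-style modeling.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key    INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240115
    full_date   TEXT NOT NULL,
    month       INTEGER NOT NULL,
    year        INTEGER NOT NULL
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    sku         TEXT NOT NULL,
    category    TEXT NOT NULL
);
CREATE TABLE fact_sales (
    date_key    INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity    INTEGER NOT NULL,
    amount      REAL NOT NULL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 1, 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'SKU-001', 'Widgets')")
conn.execute("INSERT INTO fact_sales VALUES (20240115, 1, 3, 29.97)")

# A typical analytical query: aggregate the fact table by a dimension attribute.
row = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category
""").fetchone()
print(row)  # ('Widgets', 29.97)
```

The same conceptual/logical model would be captured in a tool like Erwin or ER/Studio before being generated into platform-specific physical DDL.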
The AI Data Engineer will be responsible for designing, building, and operating scalable data pipelines and curated data assets that power machine learning, generative AI, and intelligent automation solutions in an SLA-driven managed services environment. This role focuses on data ingestion, transformation, governance, and operational reliability across cloud and hybrid environments, enabling use cases such as knowledge retrieval (RAG), conversational AI, predictive analytics, and AI-assisted service management. The ideal candidate combines strong data engineering fundamentals with an understanding of AI workload requirements, including quality, lineage, privacy, and performance.
Key Responsibilities
•Design, build, and operate production-grade data pipelines that support AI/ML and generative AI workloads in managed services environments
•Develop curated, analytics-ready datasets and data products to enable model training, grounding, feature generation, and AI search/retrieval
•Implement data ingestion patterns for structured and unstructured sources (APIs, databases, files, event streams, documents)
•Build and maintain transformation workflows with strong testing and validation
•Enable Retrieval-Augmented Generation (RAG) by preparing document corpora, chunking strategies, metadata enrichment, and vector indexing patterns
•Integrate data pipelines with application services
•Support ITSM and enterprise workflow data needs, including ServiceNow data integration, CMDB/incident data quality improvements, and automation enablement
•Implement observability for data pipelines (monitoring, alerting, SLAs/SLOs) and perform root cause analysis for pipeline failures or data quality incidents
•Apply data governance and security best practices
•Collaborate with ML Engineers, DevOps/SRE, and solution architects to operationalize end-to-end AI solutions
•Contribute to reusable patterns, templates, and standards within the Bell Techlogix AI Center of Excellence
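The RAG-preparation responsibility above (chunking and metadata enrichment) can be sketched in a few lines. This is an illustrative, framework-free example, not a specific product's API; the `doc_id`/offset fields are assumed metadata used for provenance and citation.

```python
# Hedged sketch of one common RAG preparation step: splitting a document into
# overlapping character windows and attaching metadata so each chunk can be
# vector-indexed and traced back to its source document.
def chunk_document(text, doc_id, chunk_size=200, overlap=50):
    """Split text into overlapping windows with provenance metadata."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        piece = text[start:start + chunk_size]
        if not piece:
            break
        chunks.append({
            "doc_id": doc_id,          # provenance for grounding/citation
            "chunk_index": i,
            "start_offset": start,
            "text": piece,
        })
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_document("A" * 450, doc_id="kb-001")
print(len(chunks), chunks[1]["start_offset"])  # 3 150
```

Production pipelines typically chunk by tokens or semantic boundaries rather than raw characters, but the overlap-plus-metadata pattern is the same.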
Required Qualifications
•Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
•5+ years of experience in data engineering, analytics engineering, or platform data operations
•Strong proficiency in SQL and Python; experience with data modeling and dimensional concepts
•Hands-on experience with Azure data services (e.g., Data Factory, Synapse, Databricks, Storage, Key Vault) or equivalent cloud tooling
•Experience building reliable pipelines with scheduling, dependency management, and automated testing/validation
•Experience supporting production data platforms with incident management, troubleshooting, and root cause analysis
•Understanding of data security, privacy, and governance principles in enterprise environments
Preferred Qualifications
•Experience enabling AI/ML workloads: feature engineering, training data preparation, and integration with Azure Machine Learning
•Experience with unstructured data processing for generative AI
•Familiarity with vector databases or vector search and RAG patterns
•Experience with event streaming and messaging
•Familiarity with ServiceNow data model and integration patterns (Table API, export, CMDB/ITSM reporting)
•Relevant certifications (Microsoft Azure Data Engineer, Azure AI Engineer, Databricks)
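The vector-search familiarity listed above boils down to one core operation: ranking stored embeddings by similarity to a query vector. A toy sketch, with tiny hand-made vectors and made-up chunk IDs (real systems use a vector database with approximate-nearest-neighbor indexes):

```python
import math

# Toy top-k cosine-similarity search over an in-memory "index" of
# (chunk_id, vector) pairs. Vectors and IDs are illustrative only.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    scored = [(chunk_id, cosine(query, vec)) for chunk_id, vec in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

index = [
    ("reset-password", [0.9, 0.1, 0.0]),
    ("vpn-setup",      [0.1, 0.9, 0.1]),
    ("printer-jam",    [0.1, 0.0, 0.9]),
]
results = top_k([1.0, 0.0, 0.1], index)
print([chunk_id for chunk_id, _ in results])  # ['reset-password', 'printer-jam']
```

In a RAG flow, the returned chunk IDs are resolved back to their text and metadata and passed to the model as grounding context.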
Data Engineer
Overview
We are seeking skilled Data Engineers to join our Data & Digital Twin Foundation team. You will design, build, and maintain data pipelines that power digital twin platforms, real-time operational systems, and AI/ML workloads. Working closely with data architects, simulation engineers, and ML teams, you will transform raw operational data into high-quality, governed datasets that drive intelligent decision-making.
Our core data platform stack includes:
Data Platform & Lakehouse
- Databricks as the single point of truth for all data
- Real-time data pipelines implemented using Kafka for data ingestion
- Databricks SQL for analytical queries
- Unity Catalog for metadata management and governance
- Teradata for data warehousing and business intelligence
Stream & Event Processing
- Apache Kafka for real-time event ingestion
- Structured Streaming for continuous data processing
- Delta Live Tables for declarative, quality-enforced pipelines
Data Quality
- Delta Live Tables expectations for data validation
- Data profiling and anomaly detection
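The Delta Live Tables expectations mentioned above declare per-row quality rules on a table and report how often each rule fails. DLT applies these via decorators on Spark tables; the plain-Python sketch below only mirrors the drop-and-count semantics so the behavior is easy to see (the sensor fields and thresholds are invented examples).

```python
# Conceptual stand-in for declarative data-quality expectations:
# rows failing any named predicate are dropped, and failures are tallied
# per rule so quality can be monitored over time.
def apply_expectations(rows, expectations):
    """expectations: dict of rule name -> predicate over a row."""
    kept, metrics = [], {name: 0 for name in expectations}
    for row in rows:
        failed = [name for name, pred in expectations.items() if not pred(row)]
        for name in failed:
            metrics[name] += 1
        if not failed:
            kept.append(row)
    return kept, metrics

rows = [
    {"sensor_id": "s1", "temp_c": 21.5},
    {"sensor_id": None, "temp_c": 19.0},   # fails the not-null rule
    {"sensor_id": "s3", "temp_c": -80.0},  # fails the range rule
]
kept, metrics = apply_expectations(rows, {
    "sensor_id_not_null": lambda r: r["sensor_id"] is not None,
    "temp_in_range": lambda r: -40.0 <= r["temp_c"] <= 60.0,
})
print(len(kept), metrics)  # 1 {'sensor_id_not_null': 1, 'temp_in_range': 1}
```

DLT also supports warn-only and fail-pipeline variants of the same idea; drop-on-violation is just the middle option.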
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks, PySpark, and Delta Lake
- Build real-time and batch ingestion pipelines from diverse operational systems using high-performance Kafka streams.
- Implement data transformations that serve digital twin platforms and operational analytics
- Integrate Kafka event streams with Databricks for real-time operational state updates
- Implement data quality checks using Delta Live Tables expectations
- Ensure data governance compliance through Unity Catalog (lineage, access control, metadata)
- Optimize pipeline performance, reliability, and cost efficiency
- Write clean, well-documented, and testable code following engineering best practices
- Collaborate with ML engineers to deliver feature-engineered datasets
- Participate in code reviews, knowledge sharing, and continuous improvement initiatives
- Support production data systems through monitoring, troubleshooting, and incident resolution.
- Build business data warehouse solutions using Teradata for business intelligence.
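The "real-time operational state updates" responsibility above is, at its core, folding a keyed event stream into a latest-state view. A hedged sketch with an in-memory event list standing in for a Kafka topic (asset/field names are illustrative):

```python
# Fold events into a per-asset latest-state view, last-write-wins by
# timestamp, so late or replayed events cannot regress the state.
def update_state(state, event):
    key = event["asset_id"]
    current = state.get(key)
    if current is None or event["ts"] >= current["ts"]:
        state[key] = {"ts": event["ts"], "status": event["status"]}
    return state

events = [
    {"asset_id": "pump-1", "ts": 1, "status": "RUNNING"},
    {"asset_id": "pump-2", "ts": 2, "status": "IDLE"},
    {"asset_id": "pump-1", "ts": 3, "status": "FAULT"},
    {"asset_id": "pump-1", "ts": 2, "status": "RUNNING"},  # late event, ignored
]
state = {}
for e in events:
    update_state(state, e)
print(state["pump-1"]["status"], state["pump-2"]["status"])  # FAULT IDLE
```

In the stack described here, the same fold would typically run as a Structured Streaming job writing a Delta table keyed by asset ID.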
Preferred Qualifications
- 7+ years of hands-on data engineering experience
- Track record of building and maintaining production-grade data pipelines
- Experience with Delta Live Tables for declarative pipeline development
- Experience working in agile, cross-functional teams
- Familiarity with time-series data patterns and operational data modelling
Highly Desirable
- Experience building data pipelines for digital twin or simulation platforms
- Familiarity with operational state modeling for real-time systems
- Exposure to physics-informed or time-series ML feature engineering
- Experience working with distributed, multidisciplinary teams
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
The DevOps Engineer will play a critical role in operationalizing artificial intelligence across Bell Techlogix client environments. This role focuses on building and supporting cloud infrastructure, CI/CD pipelines, and automation frameworks that power AI and machine learning workloads. The ideal candidate has experience supporting AI platforms such as Azure AI, Azure Machine Learning, Azure OpenAI, and ServiceNow or conversational AI platforms, and understands the operational requirements of production AI systems, including reliability, scalability, and security.
Key Responsibilities
•Design, build, and operate cloud infrastructure and platform services that support AI and machine learning workloads in production, SLA-driven managed services environments
•Implement CI/CD and MLOps pipelines to enable automated training, testing, deployment, and rollback of AI and ML models
•Develop and maintain Infrastructure as Code to provision AI-ready environments consistently across dev/test/prod
•Support AI platform operations including monitoring model health, pipeline execution, compute utilization, and data dependencies
•Partner with Machine Learning Engineers and Data Engineers to standardize deployment patterns for AI services and LLM-based solutions
•Enable secure and scalable AI integrations using APIs, messaging, and event-driven architectures
•Implement observability solutions for AI platforms, including logging, metrics, alerting, and drift detection integrations
•Troubleshoot AI platform incidents, perform root cause analysis, and implement remediation to improve reliability and automation coverage
•Apply security best practices for AI environments including secrets management, identity and access controls, network isolation, and policy enforcement
•Support AI-driven automation use cases across platforms such as Microsoft Copilot, ServiceNow, and conversational AI tools
•Collaborate with service desk, security, and architecture teams to continuously improve AI service delivery and operational maturity
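The deploy/rollback gate that the MLOps responsibilities above describe can be reduced to one rule: promote a new model only if it beats the currently deployed one. The registry class and metric names below are assumptions for illustration, not a specific platform's API.

```python
# Minimal promote-or-reject gate of the kind an MLOps pipeline enforces
# between evaluation and deployment, with an audit trail of decisions.
class ModelRegistry:
    def __init__(self):
        self.deployed = None          # (version, metric) of the live model
        self.history = []             # audit trail of gate decisions

    def promote_if_better(self, version, metric, min_gain=0.0):
        if self.deployed is None or metric > self.deployed[1] + min_gain:
            self.history.append(("deploy", version))
            self.deployed = (version, metric)
            return True
        self.history.append(("reject", version))  # current model stays live
        return False

registry = ModelRegistry()
registry.promote_if_better("v1", metric=0.81)
registry.promote_if_better("v2", metric=0.79)   # worse: rejected, v1 stays live
registry.promote_if_better("v3", metric=0.85)
print(registry.deployed[0])  # v3
```

In Azure DevOps or GitHub Actions this check would run as a pipeline stage after automated evaluation, with `min_gain` guarding against promoting on noise.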
Required Qualifications
•Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience
•5+ years of experience in DevOps, cloud engineering, or platform operations, with exposure to AI or data workloads
•Hands-on experience with Microsoft Azure, including compute, networking, storage, and monitoring services
•Experience building CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools
•Working knowledge of Infrastructure as Code (Terraform and/or Bicep/ARM)
•Scripting experience using PowerShell and/or Python
•Experience supporting production platforms with incident management, change control, and root cause analysis
•Understanding of cloud security fundamentals and enterprise governance requirements
Preferred Qualifications
•Experience with Azure Machine Learning, Azure AI Services, Azure OpenAI, or MLOps frameworks
•Exposure to containerization and orchestration technologies (Docker, Kubernetes, AKS)
•Experience supporting data pipelines or feature stores used by machine learning systems
•Familiarity with ServiceNow, AI-driven ITSM workflows, or automation platforms
•Experience with observability tools
•Knowledge of Responsible AI, data governance, and compliance considerations for AI systems
•Relevant certifications (Microsoft Azure Administrator, Azure DevOps Engineer, Azure AI Engineer)
The Machine Learning Engineer will play a critical role in supporting Bell Techlogix clients by building, operating, and optimizing AI solutions in a managed services environment. This role focuses on delivering reliable, secure, and scalable AI capabilities across Microsoft AI platforms, Kore.ai conversational AI, and ServiceNow, while also supporting broader AI initiatives and the AI Center of Excellence.
Key Responsibilities
•Design, deploy, and support machine learning and AI solutions in production, SLA-driven managed services environments
•Provide operational support for AI platforms including incident response, troubleshooting, and root cause analysis
•Monitor AI and ML model performance, data quality, and drift; implement retraining and optimization strategies
•Build and maintain MLOps pipelines supporting model training, validation, deployment, and rollback
•Develop and support AI workloads using Microsoft Azure AI, Azure Machine Learning, Azure OpenAI, and Copilot extensibility
•Design, train, and optimize virtual assistants for enterprise workflows
•Implement and support AI capabilities including Predictive Intelligence, Virtual Agent, and AI Search
•Collaborate with service desk, engineering, security, and platform teams to drive automation and continuous service improvement
•Act as a technical escalation point for AI-related client issues and enhancement requests
•Contribute to AI innovation initiatives, proofs of concept, and reusable solution patterns within Bell Techlogix
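One concrete drift signal behind the monitoring responsibility above is the Population Stability Index (PSI), which compares a feature's training-time distribution against live traffic over fixed bins. The bins and thresholds below (~0.1 watch, ~0.25 act) are common rules of thumb, not a standard, and the distributions are invented examples.

```python
import math

# PSI between an expected (training-time) and actual (live) binned
# distribution: sum of (actual - expected) * ln(actual / expected).
def psi(expected_fracs, actual_fracs, eps=1e-6):
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_fracs, actual_fracs)
    )

baseline = [0.25, 0.50, 0.25]       # bin fractions at training time
stable   = [0.24, 0.51, 0.25]       # live traffic, essentially unchanged
shifted  = [0.05, 0.35, 0.60]       # live traffic, heavily shifted

print(psi(baseline, stable) < 0.1)    # True: no meaningful drift
print(psi(baseline, shifted) > 0.25)  # True: significant drift, retrain/review
```

A monitoring job would compute this per feature on a schedule and raise an alert (or trigger retraining) when the index crosses the chosen threshold.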
Required Qualifications
•Bachelor’s degree in Computer Science, Data Science, Machine Learning, or equivalent practical experience
•5+ years of experience in machine learning engineering, AI development, or applied data science
•Strong proficiency in Python, SQL, and API-based integrations
•Hands-on experience supporting machine learning models in production environments
•Experience working in managed services, consulting, or enterprise IT environments
•Strong understanding of cloud platforms (Microsoft Azure preferred)
Preferred Qualifications
•Experience with Azure Machine Learning, Azure AI Services, or Azure OpenAI
•Hands-on experience with Kore.ai XO Platform or enterprise conversational AI
•Experience implementing or supporting ServiceNow AI/ML, Predictive Intelligence, or Virtual Agent
•Familiarity with MLOps, CI/CD pipelines, Infrastructure as Code (Terraform, Bicep, ARM)
•Knowledge of Responsible AI, data governance, and enterprise security practices
•Relevant certifications (Microsoft, ServiceNow, Kore.ai)
Job Summary:
We are looking for a skilled and motivated .NET Full Stack Developer with strong expertise in .NET Core, React, and Microservices architecture. The ideal candidate will be responsible for designing, developing, and maintaining scalable, high-performance applications while collaborating with cross-functional teams.
Key Responsibilities:
- Design, develop, and maintain web applications using .NET Core / ASP.NET Core and React.js
- Build and implement microservices-based architecture for scalable systems
- Develop and consume RESTful APIs
- Collaborate with UI/UX designers to implement responsive and user-friendly interfaces
- Ensure code quality through unit testing, code reviews, and best practices
- Work with databases such as SQL Server / NoSQL databases
- Optimize applications for maximum speed and scalability
- Participate in Agile ceremonies like sprint planning, stand-ups, and retrospectives
- Troubleshoot and debug production issues
Required Skills:
- Strong experience in C#, .NET Core, ASP.NET Core
- Hands-on experience with React.js, JavaScript, HTML, CSS
- Solid understanding of Microservices architecture
- Experience in building and consuming REST APIs
- Knowledge of Entity Framework / ORM tools
- Experience with SQL Server / PostgreSQL / MongoDB
- Familiarity with Git / version control systems
- Understanding of design patterns and clean architecture
Strong Databricks / AWS Data Architect profile
Mandatory (Experience 1) – Must have minimum 5+ years of experience in Data Architecture / Data Engineering, with exposure in enterprise-scale data platform modernization initiatives
Mandatory (Experience 2) – Must have minimum 3+ years of deep hands-on experience in Databricks-based lakehouse architecture on AWS, including large-scale data platform implementations
Mandatory (Experience 3) – Strong expertise in Databricks ecosystem including Delta Lake, Databricks SQL, Unity Catalog, Delta Live Tables, and MLflow with focus on performance optimization and security
Mandatory (Experience 4) – Strong experience with AWS data services including S3, Glue, EMR, Lambda, Redshift, Athena, Lake Formation, and DMS, with strong understanding of cloud-native architecture patterns
Mandatory (Experience 5) – Proven experience designing and implementing Medallion (Bronze/Silver/Gold) architecture, scalable data models (Dimensional/Data Vault), and enterprise lakehouse platforms supporting batch and real-time processing
Mandatory (Experience 6) – Must have hands-on experience building scalable ingestion frameworks including batch, streaming, and CDC pipelines using tools like Kafka, Kinesis, Spark, or similar technologies
Mandatory (Skill 1) – Proven experience implementing CI/CD pipelines for data platforms, including infrastructure as code, automated deployments, and environment management
Mandatory (Skill 2) – Hands-on experience enabling data platforms for AI/ML and Generative AI use cases, including feature stores, vector storage, and secure data access patterns
Mandatory (Skill 3) – Experience with orchestration tools such as Apache Airflow or MWAA and designing integration layers for analytics, BI, and AI consumption
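The Medallion (Bronze/Silver/Gold) architecture named above can be illustrated end to end in plain Python; in the platforms this role targets, each layer would be a Delta table rather than a list of dicts, and the order records below are invented.

```python
# Conceptual Medallion flow: bronze keeps raw records as ingested,
# silver validates/casts/deduplicates on the business key, gold holds
# a business-level aggregate ready for BI and AI consumption.
bronze = [  # raw ingested events, duplicates and bad rows included
    {"order_id": 1, "amount": "100.0", "country": "IN"},
    {"order_id": 1, "amount": "100.0", "country": "IN"},   # duplicate
    {"order_id": 2, "amount": None,    "country": "IN"},   # invalid amount
    {"order_id": 3, "amount": "50.5",  "country": "US"},
]

# Silver: validate, cast types, deduplicate on order_id.
silver = {}
for row in bronze:
    if row["amount"] is not None:
        silver[row["order_id"]] = {**row, "amount": float(row["amount"])}
silver = list(silver.values())

# Gold: aggregate for downstream consumers.
gold = {}
for row in silver:
    gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]

print(len(silver), gold)  # 2 {'IN': 100.0, 'US': 50.5}
```

The value of the layering is operational: bad or duplicate data is quarantined at silver without losing the raw bronze history needed for replay and audit.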
Hiring: Techno-Commercial Sales Manager (Healthcare Technology)
We are looking for a strategic, results-driven Techno-Commercial Sales Manager to lead business development and drive the adoption of our healthcare solutions. If you have a deep understanding of hospital ecosystems and a proven ability to close complex software deals, let’s talk.
The Role
As an individual contributor, you will be responsible for identifying and acquiring new clients—including providers and diagnostic centers—while managing high-level relationships with clinicians and hospital administrators.
Key Responsibilities
- Customer Acquisition: Proactively identify and onboard new providers and diagnostic centers.
- Strategic Growth: Create detailed account plans focused on key stakeholders (CIOs, CXOs, and Administrators).
- Relationship Management: Strengthen ties with existing clinical and technology partners to ensure long-term growth.
- Industry Presence: Represent the brand at health-tech platforms, seminars, and industry meetings.
What You Bring
- Experience: 5–8 years in Business Development or Account Management within Healthcare IT (HIS, EMR, or RIS).
- Network: An established network of contacts among hospital promoters and CXOs in the Indian healthcare industry.
- Domain Expertise: Strong understanding of hospital workflows, clinical software constructs, and healthcare compliance.
- Sales Acumen: Excellent negotiation skills and the ability to manage cross-functional stakeholders across IT, Finance, and Legal.
- Mobility: Ability to travel extensively within the assigned region or pan-India.
Qualifications
- Bachelor’s/Master’s degree in Business Administration, Marketing, or a related field.
- Strong collaborative spirit to work with internal technical and implementation teams.
📍 Locations: Chennai | Hyderabad
Apply at: https://forms.gle/14wKUWYsVkQhdAJL8
Work Mode: WFO (5 days)
Location: Hyderabad (Onsite)
Experience: 7+ years
- K8s Hands-on experience
- Linux Troubleshooting Skills
- Experience on OnPrem Servers and Management
- Helm
- Docker
- Ingress and Ingress Controllers
- Networking Basics
- Proficient Communication
Must-Have Skills:
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security & compliance knowledge for regulated industries.
- Preferred: Experience with GKE, RKE, Rook-Ceph, and certifications like CKA, CKAD.
Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.
What You’ll Do
- Architect & automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines & cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate & guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.
Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation & infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.
Job Description (JD)
Job Title: Graphic Designer & WordPress Developer
Experience: 3–5+ Years
Location: Gachibowli, Hyderabad
Company: Bridgesoft Solutions
About the Role
We are looking for a creative and detail-oriented Digital Designer & WordPress Developer who can blend design thinking with technical execution. The ideal candidate will be responsible for building engaging websites and creating compelling digital assets that strengthen our brand presence across platforms.
Key Responsibilities
- Design and develop responsive WordPress websites
- Create website wireframes, mock-ups, and UI designs
- Develop social media creatives, flyers, and marketing assets
- Design and edit short-form videos (Reels) & website videos
- Create brochures, pitch decks, and PPT presentations
- Ensure consistency in branding across all digital assets
- Optimize websites for performance, SEO, and user experience
- Collaborate with marketing and product teams for campaigns
- Stay updated with latest design trends and tools
Required Skills
Technical Skills
- Strong experience in WordPress (themes, plugins, customization)
- Knowledge of HTML, CSS, basic JavaScript
- Experience with page builders (Elementor, etc.)
- Basic understanding of SEO & website performance optimization
Design Skills
- Proficiency in Adobe Creative Suite (Photoshop, Illustrator, Premiere Pro/After Effects) or equivalent tools (Figma, Canva, etc.)
- Strong skills in UI/UX design & wireframing
- Experience in video editing and motion graphics (preferred)
Soft Skills
- Creative thinking and attention to detail
- Strong communication and collaboration skills
- Ability to manage multiple projects and deadlines
Preferred Qualifications
- Experience working in a digital agency or product company
- Understanding of branding and marketing design
- Basic knowledge of conversion-focused design (landing pages, CTAs)
Job Types: Full-time, Permanent
Benefits:
- Flexible schedule
- Health insurance
- Leave encashment
- Life insurance
- Paid sick time
- Provident Fund
Ability to commute/relocate:
- Gachibowli, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
Job Summary
We are looking for a strong QAD Developer to support a US-based client from our Applix offshore delivery center. The role requires a self-driven engineer who can independently handle QAD-related customizations, enhancements, implementation support, troubleshooting, and ongoing production support.
The ideal candidate should be comfortable working directly with functional stakeholders, understanding business requirements, converting them into technical solutions, and supporting deployment and stabilization activities with minimal supervision.
Shift:
- Second shift / US overlap
- Regular working hours will extend up to 11:30 PM IST on certain business days.
Required Skills
- Strong hands-on experience in QAD ERP development and customization
- Good understanding of QAD technical architecture
- Experience in custom development, reports, forms, interfaces, and enhancements
- Good understanding of manufacturing/business process flows in ERP environments
- Ability to troubleshoot production issues independently
- Strong SQL knowledge for data analysis, backend troubleshooting, and query handling
- Experience supporting implementations, rollouts, or enhancement projects
- Good communication skills and ability to interact with US-based teams
Preferred Skills
- Experience in manufacturing industry environments
- Exposure to integrations, EDI, or external system interfaces
- Experience supporting QAD implementations or upgrades
- Familiarity with change management, release processes, and production support practices
Key Traits
- Self-sufficient and proactive
- Able to work with minimal supervision
- Strong ownership mindset
- Comfortable in a client-facing offshore support model
- Able to handle second-shift working hours consistently
- Excellent verbal and written communication skills, with the ability to clearly explain technical issues, progress, risks, and dependencies to US-based client teams
- Proactive ownership mindset, with the ability to independently drive QAD customizations, issue resolution, and implementation tasks from analysis through closure with minimal supervision
Key Responsibilities
- Develop, customize, and support QAD ERP solutions based on business requirements
- Handle QAD-related enhancements, bug fixes, and implementation activities
- Work on forms, reports, custom programs, interfaces, and data handling within the QAD environment
- Analyze functional requirements and convert them into technical design and development tasks
- Support issue investigation, root cause analysis, and defect resolution in production and non-production environments
- Collaborate with client stakeholders, functional teams, and internal delivery teams during requirement clarification, development, testing, and deployment
- Perform unit testing and support SIT/UAT cycles
- Assist in data migration, configuration support, and deployment activities as needed
- Maintain proper technical documentation for customizations, fixes, and implementation changes
- Work independently during offshore support hours and provide timely progress and issue updates
Job Summary
We are looking for a strong SQL Developer to support a US-based client from our Applix offshore delivery center. This role requires a self-sufficient engineer who can independently manage SQL development, database troubleshooting, data fixes, query optimization, backend support for application changes, and support customizations and implementations tied to business needs.
The ideal candidate should be comfortable working closely with application teams and business stakeholders to understand data flows, support development needs, and resolve production issues with minimal supervision.
Shift:
- Second shift / US overlap
- Regular working hours will extend up to 11:30 PM IST on certain business days.
Required Skills
- Strong hands-on experience in SQL development
- Strong experience with stored procedures, views, functions, joins, indexing, and performance tuning
- Good experience in data analysis, troubleshooting, and backend support
- Ability to write efficient, scalable, and maintainable SQL code
- Experience supporting production issues and implementing fixes independently
- Good understanding of database design principles and data integrity
- Ability to work with application teams on customization and implementation needs
- Strong communication and problem-solving skills
Preferred Skills
- Experience supporting ERP applications, preferably manufacturing-related systems
- Experience with data migration, ETL, reporting, or interface support
- Exposure to QAD or similar ERP environments
- Experience in a client-facing offshore support model
Key Traits
- Self-sufficient and dependable
- Strong analytical mindset
- Able to independently own issues from analysis to closure
- Comfortable working in extended overlap with US teams
- Able to manage priorities with minimal supervision
- Excellent verbal and written communication skills, with the ability to clearly document findings, explain data/database issues, and provide timely updates to US-based client teams
- Strong ownership and proactive follow-through, with the ability to independently analyze, troubleshoot, optimize, and close SQL/data-related issues without constant direction
Key Responsibilities
- Develop, maintain, and optimize SQL queries, stored procedures, functions, views, and backend database objects
- Support application customizations and implementations through database development and data-level troubleshooting
- Analyze and resolve production issues related to data, performance, and SQL logic
- Perform query tuning and performance optimization for existing and new database objects
- Support data extraction, transformation, validation, and migration activities
- Work closely with QAD/application teams to support enhancements, integrations, and issue resolution
- Assist in deployment, testing, and stabilization of new changes
- Perform root cause analysis for database and data-related issues
- Maintain technical documentation for database changes, fixes, and support activities
- Provide reliable offshore support during second shift with timely communication and status updates
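Query tuning of the kind described above often comes down to giving the planner a usable index so a predicate becomes an index search instead of a full table scan. A minimal, hypothetical sketch, using Python's stdlib sqlite3 as a stand-in for the client's production RDBMS (the orders table and its columns are invented for illustration):

```python
import sqlite3

# Illustrative only: sqlite3 stands in for the client's RDBMS; the
# "orders" table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders (customer_id, status) VALUES (?, ?)",
                 [(i % 100, "OPEN" if i % 3 else "CLOSED") for i in range(1000)])

# Before indexing, the planner falls back to a full table scan.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan[0][3])  # e.g. "SCAN orders"

# An index on the filtered column turns the same predicate into an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same SCAN-versus-SEARCH reading of an execution plan carries over to SQL Server or any other production engine, even though the plan output format differs.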
We are looking for a highly skilled site reliability engineer to manage and scale our on-premise payments infrastructure. You will work in an on-site environment spanning virtual machines and containerized workloads on bare metal, ensuring high availability, security, and performance for mission-critical systems.
Key Responsibilities
● Operate and optimize virtualized environments (VMs) and containerized workloads (Docker on bare metal)
● Manage and scale middleware systems like:
o Nginx (traffic routing, reverse proxy, load balancing)
o Redis (caching, HA setup)
o Kafka (streaming, partitioning, fault tolerance)
● Build and maintain CI/CD pipelines using Jenkins
● Manage infrastructure and application configurations using Git-based version control
● Ensure high availability, resilience, and performance tuning across systems
● Work on Linux system administration (RHEL/CentOS/Ubuntu)
● Implement and maintain automation frameworks using:
o Ansible
o Shell scripting
● Manage and troubleshoot networking components:
o TCP/IP, DNS, Load balancing
o Firewalls, WAF policies
o Akamai
● Handle security and compliance requirements
● Maintain accurate inventory and asset management systems
● Participate in incident response, RCA, and system reliability improvements
● Collaborate with application, security, and DevOps teams
Required Skills & Qualifications
Core Infrastructure
● Strong hands-on experience with Linux system administration
● Experience managing on-prem data center environments
● Solid understanding of:
o Virtualization (VMware / KVM or similar)
o Bare metal provisioning
Containers & Middleware
● Experience running Docker in production (non-Kubernetes setups preferred)
● Strong operational knowledge of:
o Nginx
o Redis
o Kafka
o RDBMS
o Java
Observability, Alerting & Reliability
● Design and manage observability platforms:
o Elastic Stack (ELK)
o Grafana / Prometheus stack
● Build and maintain:
o Metrics, logs, and tracing pipelines
o Dashboards for system health and business KPIs
● Develop intelligent alerting strategies:
o Reduce noise (alert fatigue)
o Improve signal quality
● Build correlation mechanisms / alert aggregation systems to:
o Reduce MTTD (Mean Time to Detect)
o Reduce MTTR (Mean Time to Recover)
● Drive proactive monitoring and anomaly detection
● Lead incident response, debugging, and RCA with data-driven insights
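The alert-aggregation idea above can be sketched in a few lines: collapse repeated alerts that share a fingerprint within a time window, so on-call engineers see one enriched alert instead of a storm. A hypothetical Python sketch; the field names and windowing rule are assumptions, not tied to any particular alerting product:

```python
from collections import defaultdict

def aggregate_alerts(alerts, window_s=300):
    """Collapse repeated alerts sharing a fingerprint (service + check)
    within a time window into one alert carrying a count.

    Hypothetical sketch: field names and the 5-minute window are
    illustrative assumptions, not a real alerting product's schema."""
    buckets = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        key = (a["service"], a["check"])
        group = buckets[key]
        if group and a["ts"] - group[-1]["first_ts"] < window_s:
            group[-1]["count"] += 1  # same fingerprint, inside the window
        else:
            group.append({"service": a["service"], "check": a["check"],
                          "first_ts": a["ts"], "count": 1})
    return [g for groups in buckets.values() for g in groups]

raw = [{"service": "nginx", "check": "5xx_rate", "ts": t} for t in (0, 10, 20, 400)]
print(aggregate_alerts(raw))  # two aggregated alerts: counts 3 and 1
```

Collapsing four raw firings into two actionable alerts is exactly the noise reduction that improves MTTD: the on-call page carries a count and a first-seen timestamp instead of four identical notifications.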
CI/CD & Version Control
● Hands-on experience with:
o Git (branching strategies, code reviews, infra-as-code workflows)
o Jenkins (pipeline creation, build automation, deployment orchestration)
Networking & Security
● Good understanding of:
o Networking fundamentals (L3/L4 concepts)
o Firewalls and WAF (rule tuning, debugging)
● Experience handling secure production environments
Automation
● Hands-on experience with:
o Ansible and Shell scripting (bash)
Operations
● Experience with:
o Monitoring, alerting, and logging systems
o Incident management & RCA
o Capacity planning
Preferred Qualifications (Good to Have)
● Experience in UPI / Payments domain
● Understanding of:
o High TPS systems
o Low latency architecture
● Exposure to:
o Ceph / SAN / storage systems
o HA/DR design patterns
● Knowledge of observability stacks (Prometheus, ELK, etc.)
● Experience working in regulated environments (PCI-DSS, RBI guidelines)
About Us:
We are hiring for Zeromoblt (https://zeromoblt.com/), a pre-seed funded, high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.
Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.
What You'll Do
- Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
- Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
- Craft scalable React UIs that power ops dashboards and parent-facing apps.
- Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
- Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
- Shape the engineering roadmap: propose, prioritise, and ship features with founders.
- Mentor juniors while executing solo on high-impact bets—no layers, just results.
We're Looking For
- 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
- Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
- Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
- Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
- Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.
This Role Is Not For You If…
- You need structured roadmaps, PM hand-holding, or big-tech process.
- Comfort > impact: stable salary over equity upside and chaos.
- You've never worn all hats (dev, ops, product) in a resource-constrained environment.
Why Join Us
- Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
- Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
- Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
- Hungry to Leap? Apply now!
We are looking for a skilled .NET Full Stack Developer to design, develop, and maintain scalable web applications using modern Microsoft technologies and front-end frameworks. The ideal candidate should have strong backend expertise along with hands-on experience in building responsive UI.
🔧 Key Responsibilities
- Develop and maintain applications using .NET Core / ASP.NET MVC
- Build scalable microservices architecture
- Design and develop RESTful APIs
- Develop responsive UI using Angular / React
- Work with cross-functional teams in Agile/Scrum environment
- Write clean, maintainable, and testable code
- Perform code reviews and optimize application performance
- Troubleshoot and enhance legacy systems
🛠 Technical Skills
🔹 Backend:
- C#, .NET Core, ASP.NET MVC
- Web API, REST, SOAP
- Microservices Architecture
🔹 Frontend:
- Angular / AngularJS / React
- HTML5, CSS3, JavaScript, TypeScript
- jQuery (optional)
🔹 Database:
- SQL Server (2016/2019)
- MySQL / PostgreSQL
- Entity Framework / LINQ / ADO.NET
🔹 Tools & Technologies:
- Git / GitHub / Bitbucket
- Azure DevOps / CI-CD pipelines
- Docker (optional)
- RabbitMQ / Kafka (good to have)
🎯 Preferred Skills
- Experience with Cloud (Azure preferred)
- Knowledge of Unit Testing (NUnit, xUnit)
- Familiarity with TDD practices
- Experience in legacy system migration
- Strong debugging and problem-solving skills
🧠 Soft Skills
- Strong communication skills
- Team collaboration
- Analytical thinking
- Ownership mindset
🎁 Nice to Have
- Experience in Domain-driven design (DDD)
- Exposure to DevOps practices
- Knowledge of security best practices
About Superclaims
Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.
Role: Python Backend Developer
We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.
What you'll do
- Build and maintain scalable backend systems and APIs
- Develop intelligent data extraction pipelines using AI/ML
- Design and implement agentic workflows with LangGraph
- Design efficient database schemas and optimize queries in PostgreSQL
- Integrate and work with LLMs (OpenAI, Gemini, or similar)
- Collaborate with product, frontend, and data teams to deliver end-to-end features
- Write clean, tested, and well-documented code
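An agentic workflow of the kind described is, at its core, a state graph whose nodes can loop back on failure. The sketch below is plain Python only; it deliberately does not use the real LangGraph API, and the node names, state shape, and stubbed extraction step are all hypothetical:

```python
# Minimal state-graph sketch of an agentic extraction workflow.
# NOT the LangGraph API: node names, state shape, and the stubbed
# "LLM" extraction are hypothetical, for illustration only.

def extract(state):
    # In production this would call an LLM to pull fields from a claim document.
    state["fields"] = {"claim_id": "C-123", "amount": state["raw"].count("$") * 100}
    return "validate"

def validate(state):
    state["valid"] = state["fields"]["amount"] > 0
    # Loop back to extraction on failure, otherwise finish.
    return "done" if state["valid"] else "extract"

NODES = {"extract": extract, "validate": validate}

def run_graph(state, entry="extract", max_steps=10):
    node = entry
    for _ in range(max_steps):  # guard against infinite retry loops
        if node == "done":
            return state
        node = NODES[node](state)
    raise RuntimeError("graph did not converge")

result = run_graph({"raw": "Invoice total: $ $ $"})
print(result["fields"])  # {'claim_id': 'C-123', 'amount': 300}
```

Frameworks like LangGraph add typed state, checkpointing, and conditional edges on top of this basic loop, but the mental model of nodes mutating shared state and routing to the next node is the same.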
Must-have skills
- Strong proficiency in Python and a modern web framework (FastAPI or similar)
- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)
- Solid understanding of RESTful API design and best practices
- Hands-on experience or strong familiarity with LangGraph
- Experience working with LLMs (OpenAI, Gemini, or similar providers)
- Comfort with Git/version control and collaborative development workflows
Nice-to-have skills
- Experience with Docker and containerized deployments
- Knowledge of Redis for caching or background tasks
- Exposure to cloud platforms (GCP, AWS, or Azure)
- Experience with vector databases and retrieval-augmented generation
- Basic prompt engineering skills
- Experience with object storage (S3/MinIO)
What we're looking for
- 1+ years of Python backend development experience (open to exceptional freshers)
- Fast learner with genuine curiosity about AI/ML and automation
- Prior startup experience preferred
- Ownership mindset, bias for action, and comfort with ambiguity
- Ready to relocate to Hyderabad (work location)
How to apply
Please share:
- Your resume
- GitHub/Portfolio link
- A brief note on why you're interested in AI-powered automation and Superclaims
Must-Have Skills:
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security & compliance knowledge for regulated industries.
- Preferred: Experience with GKE, RKE, Rook-Ceph and certifications like CKA, CKAD.
Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.
What You’ll Do
- Architect & automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines & cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate & guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.
Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation & infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.
✅ Mandatory Skills
- Strong programming experience in C++ (C++11/14/17)
- Hands-on experience with Kubernetes (K8s), including application-level understanding
- Experience with StatefulSets & DaemonSets
- Good understanding of Linux systems
- Experience in multithreading and concurrency
- Strong problem-solving and debugging skills
⭐ Good to Have
- Experience in Microservices architecture
- Knowledge of Docker / containerization
- Basic knowledge of Python (for scripting/automation)
- Exposure to Distributed Systems
- Familiarity with CI/CD pipelines
- Experience with cloud platforms (AWS / Azure / GCP)
Lead API Engineer - IND
Engineering - Hyderabad, Telangana
About Gradera
At Gradera, we’re defining a new category of enterprise transformation called Software-Orchestrated Services™ (SoS™) — a governed blend of human expertise and digital intelligence that transforms how enterprises operate.
Our mission is to build intelligent digital workers that augment teams and automate work across the value chain, helping organizations become more efficient, agile, and resilient.
We don’t believe in one-size-fits-all solutions. Every engagement is tailored to the unique needs of each enterprise — grounded in governance, security, and reliability. By aligning technology with strategy, we empower our clients to achieve measurable outcomes and lead with confidence in a rapidly evolving digital landscape.
Lead – API Engineer
Overview
Key Responsibilities
•Provide technical leadership, mentorship, and guidance to the API engineering team
•Lead the design, implementation, and evolution of API architecture and backend services
•Champion API-first development, ensuring clear contracts and comprehensive documentation (OpenAPI/Swagger)
•Lead the implementation of APIs conforming to all NFRs, including security, scalability, and performance
•Ensure robust authentication, authorization, and data protection for all endpoints
•Drive adoption of best practices in API design, versioning, and integration
•Optimize API performance for low-latency, high-frequency interactions
•Oversee observability, monitoring, and alerting for all APIs and backend services
•Collaborate closely with product, UI, and runtime teams to deliver integrated solutions
•Collaborate with the SDET to ensure comprehensive test coverage and effective test data creation
•Lead the adoption of modern API tools, frameworks, and best practices
•Ensure engineering rigor, code quality, and documentation standards are met
•Facilitate clear communication, knowledge sharing, and effective documentation within the team
•Support team growth through coaching, feedback, and skills development
Core Qualities & Skills
•Proven experience leading API or backend engineering teams and delivering complex API projects
•Deep expertise in API architecture, design, and implementation (RESTful, GraphQL, gRPC etc.)
•Strong programming skills in relevant backend languages (e.g., Node.js, Python, Java, Go, etc.)
•Experience with API security, authentication, and authorization (OAuth, JWT, RBAC, PKCE, fine-grained access control with ReBAC)
•Experience with API documentation and standards (OpenAPI/Swagger, Use Case, JSON Schemas, etc.)
•Familiarity with data serialization with Protobuf
•Experience with at least one API gateway is a must
•Strong understanding of performance optimization, scalability, and reliability for APIs
•Experience with observability, monitoring, and troubleshooting for backend services
•Strong collaboration and alignment skills across disciplines
•Willingness to learn, share knowledge, and adapt to evolving technologies
•System design skills and awareness of technical debt and tradeoffs
•Excellent communication, documentation, and stakeholder management abilities
•Comfort with ambiguity, discovery, and rapid change
•Commitment to engineering excellence, security, and responsible practices
Preferred Qualifications
•7+ years of hands-on API or backend engineering experience, with 2+ years in a technical leadership role
•Track record of architecting and delivering scalable, reliable API systems
•Experience with modern development practices (CI/CD, automated testing, code reviews)
•Demonstrated ability to mentor and grow engineers
•Experience working in cross-functional, agile teams
Highly Desirable
•Experience with API gateways (e.g., Envoy, Kong, Apigee) and service mesh patterns (e.g., Istio)
•Experience with event-driven and streaming architectures (pub/sub, callbacks, Kafka, etc.)
•Experience with cloud-native infrastructure and automated provisioning (e.g., Terraform)
•Experience building developer portals and self-service onboarding for APIs
•Experience with API Gateways (Apache APISIX, KrakenD, …)
•Ability to perform API Domain Modelling with a strong product orientation, aligning technical design with user and business needs
•Experience thriving in fast-paced, ambiguous environments and balancing rapid delivery with technical excellence
•Experience leading or working with distributed, multidisciplinary teams
Success Metrics
•API response time (median and p95)
•Uptime and reliability of critical services
•Test coverage for API and backend logic
•Developer velocity and time from API design to production
•Time to First Prototype (TTFP)
•Integration lead time for API consumers
•Technical debt reduction and architectural alignment
•Team growth, engagement, and retention
•Stakeholder satisfaction and cross-team collaboration
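The first metric above, median and p95 API response time, can be computed directly from a latency sample with the standard library. A small sketch; the sample data is invented:

```python
import statistics

def latency_summary(samples_ms):
    """Median (p50) and p95 response time from raw latency samples.

    statistics.quantiles with n=100 returns 99 cut points; index 94
    is the 95th percentile."""
    q = statistics.quantiles(samples_ms, n=100)
    return {"p50": statistics.median(samples_ms), "p95": q[94]}

samples = list(range(1, 101))  # 1..100 ms, hypothetical latencies
print(latency_summary(samples))
```

Tracking p95 alongside the median matters because the median hides tail latency: a service can report a healthy p50 while a meaningful fraction of callers see slow responses.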
We are hiring an L1 IT Support Engineer with 2–3 years of experience in desktop/helpdesk support to provide first-level technical assistance across end-user systems, cloud, and enterprise IT environments.
Key Responsibilities
- Troubleshoot Windows OS and Office 365 issues (Outlook, Teams, OneDrive)
- Manage Active Directory tasks: password resets, access/user management
- Install/configure laptops, desktops, printers, and software
- Perform basic network troubleshooting (Wi-Fi, VPN, DNS, DHCP)
- Support AWS CloudWatch alerts and basic Linux troubleshooting
- Handle patching, RCA, documentation, and SOP updates
- Manage tickets in ServiceNow/Jira and meet SLA timelines
- Support onboarding/offboarding and escalate complex issues to L2
Required Skills
- 2–3 years in IT Support / Helpdesk / Desktop Support
- Strong in Windows 10/11, Office 365, Active Directory
- Basic exposure to AWS / CloudWatch and Linux/Unix
- Familiarity with ServiceNow/Jira, ITIL/SLA processes
- Knowledge of SIP/VoIP basics is a plus
- Strong communication and troubleshooting skills
AuxoAI is seeking a techno-functional Developer with strong expertise in enterprise platforms such as NetSuite and/or HubSpot, combined with hands-on experience in AI engineering. The ideal candidate will bridge business processes and technical implementation, leveraging domain knowledge and AI capabilities to design intelligent, scalable solutions.
This role offers an exciting opportunity to work at the intersection of business systems and AI, building automation, insights, and intelligent workflows that enhance client operations and decision-making.
Additionally, you are required to stay up to date on AI trends, enterprise platform innovations, and best practices, contributing to continuous learning and capability building within the team.
Location - Mumbai/Bangalore/Hyderabad/Gurgaon (Hybrid - 3 Days a week in Office)
Responsibilities:
• Design and implement AI-powered solutions integrated with NetSuite and/or HubSpot to optimize business workflows and decision-making.
• Collaborate with business stakeholders to understand functional requirements across domains such as CRM, ERP, sales, marketing, and finance.
• Develop and customize NetSuite (SuiteScript, SuiteTalk) and/or HubSpot (Workflows, APIs, Custom Objects) solutions aligned with business needs.
• Build intelligent automation using AI/ML models, LLMs, and rule-based systems to enhance platform capabilities.
• Integrate enterprise systems with data platforms and AI services to enable end-to-end data-driven workflows.
• Translate business processes into scalable technical architectures and reusable components.
• Ensure data quality, governance, and security across integrated systems.
• Troubleshoot and optimize system performance, integrations, and AI pipelines.
• Act as a techno-functional advisor to clients, identifying opportunities for AI-driven transformation and platform optimization.
• Contribute to solution accelerators, reusable frameworks, and internal knowledge sharing.
Requirements:
• Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
• 3–7 years of experience in a techno-functional role involving NetSuite and/or HubSpot implementations.
• Strong domain knowledge in CRM (HubSpot) and/or ERP (NetSuite) processes such as sales, marketing automation, finance, and operations.
• Hands-on experience with platform customization:
NetSuite (SuiteScript, SuiteFlow, SuiteTalk APIs)
HubSpot (HubL, Workflows, APIs, CRM customization)
• Experience in AI, including working with LLMs, prompt engineering, and AI application development.
• Strong programming skills in Python (preferred).
• Experience with REST APIs, system integrations, and middleware platforms.
• Familiarity with cloud platforms (AWS preferred) and data pipelines is a plus.
• Understanding of data models, ETL processes, and analytics workflows.
• Strong problem-solving skills with the ability to bridge business and technical perspectives.
• Excellent communication and stakeholder management skills.
AuxoAI is seeking skilled and experienced Senior AI Engineers to join our dynamic team. The ideal candidate will have 5+ years of prior experience in software engineering. This role involves collaborating with cross-functional teams to drive innovation and deliver impactful AI-driven products. It is responsible for implementing our strategic direction on AI, intelligent automation, and data-powered operations, and will guide the implementation of AI solutions across various projects, with an eye on AI governance.
Location - Mumbai/Bangalore/Hyderabad/Gurgaon (Hybrid - 3 Days a week in Office)
Responsibilities:
· AI/ML Solution Development: Design, develop, and deploy AI/ML technology stacks from concept to production and deployment
· Technical Leadership: Provide technical leadership and mentorship to junior engineers, guiding them in best practices and advanced techniques.
· Develop and optimize Generative AI workflows, including prompt engineering, fine-tuning, RAG and LLM-based applications.
· Work with Large Language Models (LLMs) such as Claude, Llama, Mistral, and GPT, ensuring efficient adaptation for various use cases.
· Design and implement AI-driven automation using agentic AI systems and orchestration frameworks like Autogen, LangGraph, and CrewAI.
· Leverage cloud AI infrastructure (AWS, Azure, GCP) for scalable deployment and performance tuning.
· Collaborate with cross-functional teams to deliver AI-driven solutions.
· Front-ending customer discussions, customer engagement, and success stories
· Collaborate with stakeholders to gather requirements and translate them into technical specifications
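The RAG workflows mentioned above hinge on one retrieval step: rank candidate documents by embedding similarity and prepend the best match to the prompt. A toy sketch with hand-made two-dimensional "embeddings"; a real system would call an embedding model and a vector store, and the document names here are invented:

```python
import math

# Toy retrieval step of a RAG pipeline. The "embeddings" are hand-made
# 2-D vectors and the documents are hypothetical; a real pipeline would
# use an embedding model and a vector database.

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = {
    "refund policy": [0.9, 0.1],
    "shipping times": [0.1, 0.9],
}
query_vec = [0.8, 0.2]  # stubbed embedding of the user's question

# Retrieve the most similar document and ground the prompt with it.
best = max(docs, key=lambda d: cosine(docs[d], query_vec))
prompt = f"Context: {best}\n\nQuestion: ..."
print(best)  # refund policy
```

Everything after retrieval (prompt assembly, the LLM call, answer post-processing) layers on top of this ranking step, which is why embedding quality usually dominates RAG accuracy.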
Requirements
· Bachelor’s in Computer Science, Engineering, or a related field
· Overall 5+ years’ experience in software engineering and 2+ years of experience in AI/ML, with expertise in Generative AI and LLMs.
· Experience with AWS Bedrock or Azure OpenAI studio or similar enterprise AI environments
· Strong proficiency in Python and experience with AI/ML frameworks like PyTorch and TensorFlow
· Experience with containerization (e.g., Docker, Kubernetes), version control systems (e.g., Git) and software development methodologies (e.g., Agile, Scrum)
· Knowledge of advanced prompt engineering techniques
· Experience in AI workflow automation and model orchestration
· Hands-on experience with API development using Flask or Django
Note: Given the urgency of the role, we are currently prioritizing candidates who can join immediately or within 2-3 weeks.
Job Description:
We are looking for skilled Design for Test (DFT) Engineers with 3–8 years of experience to join our growing team. The ideal candidate should have strong expertise in DFT architecture, implementation, and validation for complex SoCs.
Key Responsibilities:
- Develop and implement DFT architectures (Scan, MBIST, LBIST, Boundary Scan)
- Work on scan insertion, ATPG pattern generation, and fault coverage analysis
- Collaborate with design and verification teams for testability improvements
- Perform DFT verification and debug of test patterns
- Handle silicon bring-up, debug, and yield improvement activities
- Ensure high-quality test coverage and optimize test time
Required Skills:
- Strong hands-on experience in Scan, ATPG, MBIST, LBIST
- Good knowledge of DFT tools like Tessent / Modus / Encounter Test
- Experience in JTAG / Boundary Scan (IEEE 1149.1)
- Understanding of SoC architecture and digital design concepts
- Scripting knowledge in Perl / Python / Tcl
- Experience in debugging silicon issues is a plus
Business Development Executive (Inside Sales / SDR)
Location: Hyderabad (Onsite)
Experience: 1+ year in Inside Sales, SDR, or similar B2B sales role
Department: Sales
The Opportunity
We’re looking for a driven, confident, and high-energy Business Development Executive to join our inside sales team. You’ll be at the front line of our growth — identifying high-potential prospects, engaging senior decision-makers, and creating qualified sales opportunities for our enterprise sales team.
At WINIT, we believe the future of sales is AI-powered. This means you won’t just work hard — you’ll work smart, using tools like ChatGPT, AI-driven research, and automation platforms to boost your productivity, personalize outreach, and uncover opportunities faster than ever before.
If you thrive in a fast-paced environment, love building relationships over the phone and online, and are excited about selling technology that’s transforming industries, this role is for you.
What You’ll Do
● Own your prospecting game — research target accounts, map decision-makers, and identify opportunities.
● Engage with C-level executives & senior leaders through calls, emails, and LinkedIn outreach to spark interest in WINIT’s solutions.
● Use AI tools (ChatGPT, DeepResearch, LinkedIn Sales Navigator AI insights, etc.) to craft compelling outreach, generate insights, and accelerate lead qualification.
● Qualify leads against clear criteria to ensure strong pipeline quality for the sales team.
● Leverage multiple lead generation channels (LinkedIn, outbound email, industry databases, events) to meet and exceed targets.
● Collaborate with sales and marketing to refine messaging, target the right segments, and increase conversion rates.
● Maintain accurate and up-to-date lead data in CRM tools.
● Stay on top of industry trends, AI advancements, and competitive insights to strengthen your pitch.
What You Bring
● 1–2 years of experience in B2B inside sales, SDR, or lead generation role (preferably in tech/SaaS/enterprise software).
● Excellent business communication skills — fluent in spoken and written English.
● Strong research and account mapping skills.
● Confidence to engage senior decision-makers across industries.
● Comfortable with high-volume outreach (calls, emails, LinkedIn messages).
● Target-oriented, self-motivated, and driven by performance metrics.
● Familiarity with AI-powered productivity tools like ChatGPT, DeepResearch, and automation platforms.
● Knowledge of CRM tools (HubSpot, Salesforce, or similar) is a plus.
Why WINIT
● Work on market-leading AI-powered products trusted by top global brands.
● Clear career growth path from SDR to Enterprise Sales roles.
● A culture that values innovation, ownership, and collaboration.
● Competitive salary + performance-based incentives.
● Work with a passionate, high-performing team in a collaborative environment.
If you’re ready to accelerate your sales career, master the art of AI-powered selling, and be part of a company that’s transforming how enterprises sell and distribute — we’d love to hear from you.
Role Description
We are looking for an Interior Sales Lead with a strong background in the interior design industry who brings hands-on experience in client handling, team management, and driving sales closures. The role includes meeting with clients, understanding their interior design and renovation needs, creating tailored proposals, and ensuring seamless collaboration with designers and contractors. Additionally, the role involves contributing to the training and development of the sales team and achieving sales targets.
Job Description:
- Proven experience of 4–6 years in interior sales
- The ideal candidate should be willing to work as an individual contributor
- Strong exposure in Interior sales, client relationship management, and team handling.
- Experience in leading a team and driving performance metrics.
- Excellent communication, negotiation, and stakeholder management skills.
- Ability to meet sales targets while maintaining customer satisfaction.
External Skills And Expertise
- Bachelor's Qualification: B-Arch or Bachelors in Interior Design | If any other, then with relevant industry exposure. (MANDATORY)
- 4+ years of relevant work-experience in Interior Sales

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience in building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product
* You have extensive experience with DevOps environment and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
About the Role
We are looking for a Senior Backend Engineer to join our core engineering team and help build high-throughput, low-latency services that power real-time trading at scale.
What We’re Looking For
You are someone who has built backend systems in a regulated, high-stakes environment — ideally fintech, brokerage, payments, or banking. You think in terms of system reliability, data correctness, and operational excellence. You take ownership of services end-to-end: from design and implementation through deployment, monitoring, and incident response. You communicate clearly, make pragmatic trade-offs, and hold yourself and your peers to a high engineering bar.
What You’ll Do
• Design, build, and own backend microservices for a real-time trading platform — from API contracts through to production observability.
• Work with databases, caches, and event-driven architectures to ensure high availability and data consistency across distributed systems.
• Build integrations with third-party financial services — clearing, settlement, identity verification, and payment rails.
• Define and enforce engineering standards — code reviews, testing strategies, API design conventions, and incident response processes.
• Collaborate with product, design, and cross-functional teams to translate business requirements into well-scoped technical deliverables.
• Participate in on-call rotations and own production reliability for the services you build.
• Mentor junior engineers and contribute to a culture of technical excellence and continuous improvement.
Must-Have
• 5+ years backend engineering experience with Golang and Java in production.
• Strong experience with PostgreSQL, Redis, and event-driven messaging (Kafka, NATS, or RabbitMQ).
• Experience building and maintaining REST/gRPC APIs at scale with proper error handling, rate limiting, and versioning.
• Understanding of financial systems — ledgers, reconciliation, order lifecycle, or payment processing.
• Experience with microservices architecture, API gateways, and service-to-service communication patterns.
• Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and cloud infrastructure (AWS or GCP).
• Strong debugging and incident-response skills in distributed systems.
Nice-to-Have
• Prior experience at a brokerage, wealth-tech, neo-bank, or payments company.
• Experience with clearing broker integrations or introducing broker models.
• Knowledge of compliance and regulatory requirements for cross-border financial products.
• Experience with search infrastructure (Typesense, Elasticsearch).
• Background in performance engineering — profiling, load testing, and latency optimization.
Tech Stack
Languages: Golang (primary), Java; Python (analytics/scripting)
Databases: PostgreSQL, Redis, Typesense
Messaging: Kafka, NATS, SSE / WebSocket
Infrastructure: Docker, Kubernetes, AWS/GCP, Terraform
Integrations: Clearing broker APIs, KYC providers, payment gateways
Observability: Datadog / Grafana, PagerDuty, structured logging (ELK)
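The "ledgers, reconciliation" requirement above comes down to one invariant: in a double-entry ledger, every transfer posts a debit and a matching credit, so the book always nets to zero. A minimal sketch of that idea (Python is used only for illustration; the role's stack is Golang/Java, and all class and method names here are hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Entry:
    account: str
    amount_cents: int  # positive = debit, negative = credit


class Ledger:
    """Toy double-entry ledger: every transfer nets to zero by construction."""

    def __init__(self):
        self.entries: list[Entry] = []

    def post_transfer(self, debit_account: str, credit_account: str, amount_cents: int) -> None:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        # Both legs are appended together, so the ledger never holds a half-posted transfer.
        self.entries.append(Entry(debit_account, amount_cents))
        self.entries.append(Entry(credit_account, -amount_cents))

    def balance(self, account: str) -> int:
        return sum(e.amount_cents for e in self.entries if e.account == account)

    def is_balanced(self) -> bool:
        # The core reconciliation check: total debits and credits cancel out.
        return sum(e.amount_cents for e in self.entries) == 0
```

Reconciliation against an external statement is then a comparison of `balance(...)` per account, with any non-zero grand total flagging data corruption.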
About the Role
We are looking for a Senior Android Engineer with deep React Native expertise to build and own the trading experience on Android — real-time data, interactive charting, fluid interactions, and rock-solid performance.
What We’re Looking For
You have shipped production React Native apps that handle real-time data streams, complex UI states, and performance-sensitive rendering. You understand the Android platform deeply — native modules, bridge performance, and platform-specific behaviour. Ideally, you have worked on trading, fintech, or data-intensive mobile products. You take ownership of your features end-to-end, care about code quality, and are comfortable driving technical decisions independently.
What You’ll Do
• Own the mobile trading experience on Android — architecture, performance, and end-to-end quality.
• Build and optimize real-time data rendering — WebSocket lifecycle management, efficient list rendering, and minimal re-renders for streaming data.
• Integrate complex WebView-based components with bidirectional JavaScript bridge communication.
• Collaborate with product and design to deliver polished, intuitive interfaces for a financial product where trust and clarity are paramount.
• Define mobile engineering standards — component architecture, state management patterns, testing strategy, and performance benchmarks.
• Drive code reviews, mentor team members, and champion best practices across the mobile team.
• Participate in stabilization and release cycles — profiling, device matrix testing, and regression analysis.
Must-Have
• 4+ years React Native experience with production apps on Google Play Store.
• Strong TypeScript skills and deep understanding of React Native internals — bridge, native modules, and performance optimization.
• Experience with real-time data handling — WebSockets, efficient UI updates for streaming data, and state synchronization.
• Experience integrating WebView-based components with JS bridge communication.
• Proficiency with state management at scale (Redux, Zustand, or MobX).
• Experience with push notifications (FCM), deep linking, and complex navigation patterns.
• Strong debugging skills — Flipper, React DevTools, native crash analysis, and performance profiling.
Nice-to-Have
• Prior experience building trading, brokerage, or fintech mobile apps.
• Experience with charting library integration in mobile apps.
• Knowledge of server-driven UI (SDUI) patterns.
• Experience with feature flags, A/B testing frameworks, and app modularization.
• Familiarity with native Android development (Kotlin) for bridge modules.
Tech Stack
Framework: React Native
Language: TypeScript / JavaScript
State Management: Redux / Zustand
Real-time: WebSocket, SSE
Charting: Charting library via WebView bridge
Testing: Jest, Detox (E2E), device matrix testing (Android 10+)
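Two of the bullets above hide well-known patterns: "WebSocket lifecycle management" usually means reconnecting with capped exponential backoff, and "minimal re-renders for streaming data" usually means conflating ticks so the UI paints at most one update per symbol per frame. Both patterns are language-agnostic; they are sketched here in Python purely for brevity (the role uses TypeScript), and all names are hypothetical:

```python
def backoff_delays(base_ms: int = 500, cap_ms: int = 30_000, attempts: int = 8) -> list[int]:
    """Exponential backoff schedule for socket reconnects: base * 2^n, capped at cap_ms."""
    return [min(base_ms * (2 ** n), cap_ms) for n in range(attempts)]


def conflate(ticks: list[tuple[str, float]]) -> dict[str, float]:
    """Keep only the latest tick per symbol, so a render pass draws each symbol once."""
    latest: dict[str, float] = {}
    for symbol, price in ticks:
        latest[symbol] = price
    return latest
```

In the app itself, the conflated map would be flushed to state once per animation frame rather than on every incoming message.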
About the Role
We are looking for a Senior Web Engineer to architect and build the entire web experience using Next.js and React — a consumer-facing trading app with real-time data, SEO-optimised public pages, and internal admin tooling.
What We’re Looking For
You have built production web applications from zero to launch using Next.js. You are comfortable across the full web stack — SSR/SSG, real-time WebSocket handling, responsive design, and performance optimization. Ideally, you have worked on trading platforms, dashboards, or data-rich fintech products. You take ownership of your codebase, care about developer experience, and can work independently to deliver high-quality, production-ready code.
What You’ll Do
• Architect and build the Next.js web platform from scratch — project structure, CI/CD, component library, and design system.
• Build real-time, data-intensive interfaces — live price rendering, interactive charting, and responsive dashboards.
• Build SEO-optimised content pages and public-facing discovery surfaces.
• Build internal admin and operations tooling — dashboards, data tables, configuration interfaces, and monitoring views.
• Define frontend engineering standards — component architecture, testing strategy, accessibility, and performance budgets.
• Own cross-browser compatibility, Core Web Vitals, and responsive design across devices.
• Drive code reviews, mentor team members, and champion best practices across the web team.
Must-Have
• 4+ years frontend engineering with React in production; 2+ years with Next.js specifically.
• Strong TypeScript skills and experience with SSR, SSG, and incremental static regeneration.
• Experience building real-time features with WebSockets in web applications.
• Experience with Tailwind CSS or utility-first CSS frameworks.
• Experience building both consumer-facing and internal admin/dashboard UIs.
• Understanding of SEO best practices — structured data, meta tags, Core Web Vitals.
• Familiarity with CI/CD, CDN deployment, and web performance optimization.
Nice-to-Have
• Prior experience building trading platforms, brokerage dashboards, or fintech web apps.
• Experience with charting library embeds (TradingView, Lightweight Charts, Highcharts).
• Experience building complex admin portals with advanced table views, filters, and real-time updates.
• Familiarity with search integration (Typesense, Algolia).
• Experience with accessibility standards (WCAG 2.1) and internationalization (i18n).
Tech Stack
Framework: Next.js 14+, React 18+
Language: TypeScript
Styling: Tailwind CSS, Headless UI / Radix
Real-time: WebSocket, SWR / React Query
Charting: Charting library embed, Recharts / D3
Admin: TanStack Table, React Hook Form
Deployment: Vercel / AWS CloudFront, CDN, Edge functions
Testing: Jest, Playwright (E2E), Lighthouse CI
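The "structured data" part of the SEO bullet typically means emitting schema.org JSON-LD on public pages. A minimal sketch of building such a snippet (Python for brevity; a Next.js page would render the string inside a `<script type="application/ld+json">` tag, and the function and field values here are hypothetical):

```python
import json


def product_jsonld(name: str, url: str, price: str, currency: str = "INR") -> str:
    """Build a schema.org Product JSON-LD snippet for an SEO-optimised page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "url": url,
        "offers": {"@type": "Offer", "price": price, "priceCurrency": currency},
    }
    # Serialize once, server-side, so crawlers see the markup without running JS.
    return json.dumps(data)
```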
About the Role
We are looking for a Senior iOS Engineer to build and own the trading experience on iOS in Swift — real-time data via Combine, interactive charting, smooth animations, and the premium feel expected of a financial application.
What We’re Looking For
You have shipped native Swift apps that handle real-time data, complex reactive pipelines, and performance-sensitive UI. You know UIKit and SwiftUI deeply and can bridge between them confidently. Ideally, you have worked on trading, fintech, or data-intensive iOS products. You take ownership of your domain, drive technical decisions, and hold yourself and your team to a high quality bar.
What You’ll Do
• Own the mobile trading experience on iOS — module architecture, navigation, performance, and end-to-end quality.
• Build and optimize real-time data rendering using Combine — WebSocket streams, subscription management, and efficient SwiftUI/UIKit binding.
• Integrate complex WKWebView-based components with JavaScript–Swift message passing.
• Collaborate with product and design to deliver polished, trust-building interfaces for a financial product.
• Define iOS engineering standards — architecture patterns, testing strategy, accessibility, and performance benchmarks.
• Drive code reviews, mentor team members, and champion best practices across the iOS team.
• Participate in stabilization and release cycles — Instruments profiling, device matrix testing, and regression analysis.
Must-Have
• 4+ years native iOS development with Swift in production apps on the App Store.
• Strong Combine experience for reactive programming, data streams, and async coordination.
• Experience with both SwiftUI and UIKit — ability to compose views and bridge between the two.
• Experience with real-time data rendering and WebSocket integration on iOS.
• Experience integrating WKWebView with JavaScript bridge (WKScriptMessageHandler).
• Understanding of iOS architecture patterns — MVVM, Coordinator, Clean Architecture.
• Experience with push notifications (APNs), deep linking, and Universal Links.
Nice-to-Have
• Prior experience building trading, brokerage, or fintech iOS apps.
• Experience with charting library integration on iOS.
• Experience building custom UI components — animated charts, calendar views, card-based layouts.
• Knowledge of server-driven UI (SDUI) patterns for dynamic rendering.
• Familiarity with accessibility best practices (VoiceOver, Dynamic Type).
Tech Stack
Language: Swift 5.9+
UI: SwiftUI + UIKit (hybrid)
Reactive: Combine, async/await
Real-time: URLSessionWebSocketTask / Starscream, Combine streams
Charting: Charting library via WKWebView bridge
Testing: XCTest, XCUITest (E2E), Instruments profiling
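One concrete ingredient of "efficient SwiftUI/UIKit binding" for streaming data is throttling: let at most one value through per interval so the view hierarchy is not invalidated on every tick. The sketch below is a leading-edge simplification of what Combine's throttle operator does, written in Python for brevity with simulated millisecond timestamps; all names are hypothetical:

```python
def throttle(events: list[tuple[int, str]], interval_ms: int) -> list[tuple[int, str]]:
    """Pass (timestamp_ms, value) events through, emitting at most one per interval."""
    out: list[tuple[int, str]] = []
    next_allowed = 0
    for ts, value in events:
        if ts >= next_allowed:
            out.append((ts, value))
            next_allowed = ts + interval_ms  # gate further emissions for one interval
    return out
```

Combine's real `throttle(for:scheduler:latest:)` additionally buffers the latest value and emits it at the interval boundary; this sketch only shows the rate-limiting core.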
Job Description: Salesforce Intern
Company: SmartBridge Educational Services Pvt. Ltd.
Job Type: Internship
Location: Hyderabad / Noida.
Duration: 3–6 Months
Role Overview
We are seeking enthusiastic and motivated Salesforce Interns who are passionate about building a career in the Salesforce ecosystem. This internship program is designed to provide structured learning, hands-on development experience, and real-time project exposure under expert mentorship, preparing candidates for entry-level Salesforce Developer roles.
Key Responsibilities
1. Participate in structured training sessions on Salesforce fundamentals, including Apex, Lightning Web Components (LWC), and SOQL.
2. Complete assigned learning modules, exercises, and Trailhead tasks to build core platform knowledge.
3. Assist in developing and customizing Salesforce applications using Apex, Triggers, LWC, and automation tools.
4. Support real-time project implementations and contribute to business use cases under guidance.
5. Work on mini-projects and capstone projects, translating basic business requirements into technical solutions.
6. Debug, test, and troubleshoot application issues to ensure functional and optimized solutions.
7. Collaborate with trainers, mentors, and peers; actively participate in code reviews and knowledge-sharing sessions.
8. Complete regular assessments and coding challenges, demonstrating continuous progress through deliverables.
9. Possess a basic understanding of programming (Java/OOP), web technologies (HTML, CSS, JavaScript), and databases (SQL).
10. Demonstrate strong learning ability, communication skills, problem-solving mindset, and the ability to work effectively in a team environment.
What You Will Gain
• Hands-on experience with Salesforce development tools and technologies
• Exposure to real-world project scenarios and use cases
• Mentorship from experienced industry professionals
• Preparation for Salesforce Developer roles and certifications
We are looking for a Staff Engineer - PHP to join one of our engineering teams at our office in Hyderabad.
What would you do?
- Design, build, and maintain backend systems and APIs from requirements to production.
- Own feature development, bug fixes, and performance optimizations.
- Ensure code quality, security, testing, and production readiness.
- Collaborate with frontend, product, and QA teams for smooth delivery.
- Diagnose and resolve production issues and drive long-term fixes.
- Contribute to technical discussions and continuously improve engineering practices.
Who Should Apply?
- 4–6 years of hands-on experience in backend development using PHP.
- Strong proficiency with Laravel or similar PHP frameworks, following OOP, MVC, and design patterns.
- Solid experience in RESTful API development and third-party integrations.
- Strong understanding of SQL databases (MySQL/PostgreSQL); NoSQL exposure is a plus.
- Comfortable with Git-based workflows and collaborative development.
- Working knowledge of HTML, CSS, and JavaScript fundamentals.
- Experience with performance optimization, security best practices, and debugging.
- Nice to have: exposure to Docker, CI/CD pipelines, cloud platforms, and automated testing.
About the Role
At CAW Studios, we are building the future with agentic AI systems, RAG pipelines, and intelligent automation. From autonomous AI agents at KnackLabs to developer productivity tools at CodeKnack, we ship production-ready AI products that solve real problems for enterprises and startups alike.
This is your chance to work on cutting-edge GenAI, LLM fine-tuning, and agent frameworks—and see your code power products used in the real world. If you’re excited about experimenting, shipping fast, and solving complex AI challenges hands-on, you’ll love it here.
Who should apply
Engineer with 2 to 4 years of full-time experience building high-scale software systems, with a proven track record of deploying complex Generative AI products to production.
Role Overview
We are hiring an AI/ML Engineer II (SE2) to own the architectural implementation and deployment of production-grade agentic AI systems. This role requires a hybrid of traditional engineering rigour (OOP, SOLID, high-concurrency) and advanced AI specialization to build the next generation of intelligent tools.
Responsibilities
● Independently design modular and maintainable multi-agent AI systems aligned with SOLID principles
● Build high-concurrency, async FastAPI backends for complex AI workloads with enterprise stability
● Architect sophisticated agentic workflows using LangGraph with a focus on state persistence and error-recovery
● Design and optimize RAG pipelines involving advanced chunking, hybrid search, and re-ranking
● Take ownership of containerization and cloud deployment for observable, cost-efficient AI services
● Collaborate on reusable AI components and internal frameworks to enhance team engineering velocity
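For the "advanced chunking" responsibility above, the usual baseline is a fixed-size sliding window with overlap; semantic or structural chunking strategies are then measured against it. A minimal sketch, with hypothetical names and default sizes:

```python
def chunk_text(text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Fixed-size sliding-window chunking with overlap: the RAG-pipeline baseline."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Each chunk starts `step` characters after the previous one, so adjacent
    # chunks share `overlap` characters and no sentence is lost at a boundary.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Production pipelines typically chunk on token counts rather than characters, and split at sentence or heading boundaries before falling back to this fixed window.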
Expectations
● Deep obsession with automation, DevOps, OOP, and SOLID principles
● Advanced experience deploying RAG or agent-based systems with LangGraph orchestration
● Expert-level mastery of async Python, system thinking, and building scalable backends
● High ownership and a "production-first" mindset for end-to-end system reliability
● Hands-on experience across multiple AI modalities (Vision, Audio, Text) and their architectures
Key Responsibilities:
- Work with distributed systems and implement asynchronous programming patterns
- Design and develop scalable backend applications using Python
- Build and integrate applications leveraging LLMs or traditional Machine Learning techniques
- Develop and maintain microservices-based architectures
- Work with databases and caching systems to optimize application performance
- Participate in code reviews and maintain high code quality standards
- Write clean, maintainable, and well-documented code following best practices
Required Skills:
- 3+ years of relevant experience
- Strong understanding of distributed systems and asynchronous programming in Python
- Experience building scalable applications using LLMs or traditional ML techniques
- Hands-on experience with databases, caching mechanisms, and microservices architecture
- Good problem-solving and debugging skills
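A common asynchronous-programming pattern behind the requirements above is bounded concurrency: fan work out with asyncio.gather while a semaphore caps how many coroutines run at once, so a burst of requests cannot exhaust connections. A minimal sketch, with hypothetical names:

```python
import asyncio


async def gather_bounded(coros, limit: int = 5):
    """Run coroutines concurrently, but never more than `limit` at a time."""
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:  # a slot must be free before the coroutine starts
            return await coro

    # gather preserves input order in its result list.
    return await asyncio.gather(*(guarded(c) for c in coros))
```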
Marketing Assistant @Knacklabs
Hyderabad | Full-time | 0-1 years
You don't need an AI degree. You need to be obsessed.
We build AI agents that run inside enterprises: the kind that actually work, not the kind that demo well and break in production.
GraphRAG, Voice Agents, autonomous workflows. The real stuff.
Now we need someone to tell that story.
The Role: You'll be our first marketing hire.
No "assist the senior marketing manager" energy. There is no senior marketing manager. It's you, a Canva account, ChatGPT, and a mandate to make KnackLabs impossible to ignore.
What you'll actually do:
• Run meetups & events: organize AI/tech community events in Hyderabad. Real humans, real conversations, real brand.
• Own LinkedIn outreach: craft lead messages that don't read like spam. Build campaigns that open doors to enterprise decision-makers.
• Create content that educates: LinkedIn posts, Instagram reels, carousels. Not "AI is transforming the world" fluff. Actual insight. Stuff engineers respect.
• Build our brand from zero to "oh yeah, the AI agents company." That's the job.
Who you are:
• Fresh out of college (or close). No experience required. Hunger required.
• You live on social media and understand what makes people stop scrolling
• You write "people actually read it" well, not just "good grammar" well
• You already use AI tools daily: ChatGPT, Claude, Midjourney, whatever. If you haven't touched them, this isn't the role
• You're not waiting for instructions. You're three ideas ahead
You'll love this if:
• You notice good branding instantly
• You want to work close to real AI builders, not just post recycled trends
• You want your first job to actually be interesting
Why this is a cheat code:
Most freshers learn about AI from courses.
You'll learn it by marketing it: sitting with engineers building agents, attending enterprise demos, writing about tech that's 18 months ahead of the market.
✨ Join us to build, break, and innovate AI systems that redefine what’s possible.
Job Overview
We are seeking an experienced Coupa Implementation and Configuration Consultant to lead and support end-to-end implementations of the Coupa Business Spend Management (BSM) platform, including Supplier Information Management (SIM).
The ideal candidate will possess strong Procure-to-Pay (P2P), Source-to-Contract (S2C), and financial process expertise, along with hands-on Coupa configuration and ERP integration experience.
This role requires close collaboration with Finance, Procurement, IT teams, and executive stakeholders to deliver scalable, compliant, and optimized spend management solutions.
Key Responsibilities
Implementation & Roll-Out
- Lead full lifecycle implementation of Coupa BSM modules.
- Drive Business Process Design workshops and requirement gathering.
- Manage global or multi-entity roll-outs.
- Conduct SIT, UAT, and go-live support.
Configuration & Technical Expertise
- Configure Procurement, Sourcing, Contracts, Catalogues, Invoicing, and Expenses modules.
- Manage Supplier Information Management (SIM) and onboarding workflows.
- Configure PR, PO, Receipt, and Invoicing lifecycle.
- Implement approval workflows, compliance controls, and security configurations.
- Handle advanced system configurations and policy enforcement.
Integration & Technical
- Lead API-based integrations between Coupa and ERP systems (SAP / Oracle / Workday, etc.).
- Support data migration, reconciliation, and validation.
- Ensure system performance and compliance alignment.
Reporting & Governance
- Enable spend visibility through dashboards and analytics.
- Support audit controls and procurement governance frameworks.
Required Skills
- Strong hands-on experience in Coupa BSM implementation.
- Expertise in P2P and S2C processes.
- Experience in Supplier Information Management (SIM).
- ERP integration exposure (API-based preferred).
- Business process design and documentation capability.
- Experience in enterprise or multi-country roll-outs.
- Strong stakeholder management skills.
- Coupa certification is mandatory.
Lead Software Development Engineer in Test (SDET) – UI, focused on driving UI test automation strategy, framework design, and quality leadership for modern web applications. The role is hands-on and cross-functional, emphasizing scalable, maintainable, and reliable Playwright-based UI automation and a strong culture of quality across the SDLC.
Role Overview
- Acts as a technical leader and mentor for the UI SDET/QA automation team.
- Owns the UI automation vision and framework architecture, primarily using Playwright with TypeScript.
- Partners closely with Product, Engineering, UX, and DevOps to ensure high-quality frontend delivery.
- Champions continuous improvement, engineering excellence, and rapid feedback.
Core Technology Stack
- Playwright – primary UI automation framework
- TypeScript – main automation language
- Jest – unit testing and utilities
- Docker & Kubernetes – containerized test environments
- GitHub Actions – CI/CD integration
- Karate – API and E2E testing support
Key Responsibilities
Technical Leadership & Strategy
- Lead the design, implementation, and evolution of UI automation frameworks.
- Enforce best practices such as Page Object Model (POM), data-driven testing, and atomic test design.
- Reduce flakiness, improve execution speed, and ensure meaningful assertions.
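The Page Object Model called out above keeps selectors and actions in one class, so tests read at the level of intent rather than raw locators. A minimal, hedged sketch of the pattern (shown in Python for brevity; the team's framework is Playwright with TypeScript, and all selectors and names here are hypothetical). The `page` argument is anything exposing `fill`/`click`, which is what makes the class testable with a stub:

```python
class LoginPage:
    """Page Object Model: selectors and actions live here, tests stay intent-level."""

    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, page):
        # A Playwright Page, or any object with fill(selector, value) / click(selector).
        self.page = page

    def login(self, user: str, password: str) -> None:
        self.page.fill(self.USERNAME, user)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)
```

Selector changes then touch exactly one class instead of every test, which is the flakiness and maintenance win the responsibilities above describe.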
Quality & Process Ownership
- Own and execute comprehensive UI test plans aligned with BDD practices.
- Establish and maintain robust regression testing.
- Drive root cause analysis and continuous improvement for UI defects.
- Incorporate feedback from test outcomes and production issues to improve coverage.
Culture & Collaboration
- Promote a TDD mindset and shared ownership of test automation among engineers.
- Champion a culture of quality, learning, and continuous improvement.
- Support team growth through coaching, feedback, and skills development.
- Ensure strong documentation, code quality, and knowledge sharing.
Core Skills & Qualities
- Strong expertise in Playwright, POM, and UI automation best practices.
- Advanced knowledge of TypeScript, waits, locators, and test stability techniques.
- Experience with CI/CD pipelines, test reporting, and analytics.
- Strong collaboration, communication, and stakeholder management skills.
- Awareness of technical debt, system design trade-offs, and test strategy balance.
- Comfort with ambiguity, fast change, and evolving technologies.
Preferred & Highly Desirable Experience
Preferred
- 5+ years of UI SDET/QA/software engineering experience.
- 2+ years in a technical leadership role.
- Proven success delivering scalable and reliable UI automation systems.
- Experience mentoring engineers in agile, cross-functional teams.
Highly Desirable
- Testing non-deterministic systems, including AI/ML or GenAI-driven UIs.
- Using AI to accelerate testing and SDLC processes.
- Experience with Docker/Kubernetes for test environments.
- Understanding of the Test Pyramid and balancing UI, integration, and unit tests.
- Experience in distributed or multidisciplinary teams.
We are seeking a high-caliber Firmware Lead to join our Engineering team at Gradera. In this role, you will be the technical anchor for the firmware squad, responsible for translating high-level architectural visions into robust, executable low-level designs (LLD). You will lead the design and development of firmware solutions on NXP-based hardware platforms, ensuring seamless real-time data acquisition and integration with cloud-based Machine Learning (ML) platforms. We are looking for a seasoned expert who can work independently without any supervision, taking full ownership of the firmware lifecycle from hardware abstraction to cloud-edge synchronization.
Our Core Tech Stack
Embedded & OS
- NXP SoCs/MCUs: i.MX, LPC, and Kinetis series.
- Yocto Project: Custom layers, recipes, BitBake, and kernel configuration for Linux.
- RTOS Platforms: Deterministic performance, task scheduling, and interrupt handling.
Development & Integration
- Languages: Mandatory proficiency in C/C++ and C# (.NET on embedded targets/IoT).
- Communication: MQTT, WebSockets, CAN, UART, SPI, and I2C.
- Cloud & ML: Azure IoT Hub, AWS IoT Core, and data streaming via Kafka or Kinesis.
Infrastructure & Security
- Security: Secure boot, encryption, and device authentication.
- DevOps: Containerization (Docker) and CI/CD for firmware.
Key Responsibilities
- Architectural Ownership: Convert high-level blueprints into detailed technical designs for NXP-based systems, ensuring optimal performance across hardware and software layers.
- Autonomous Execution: Lead the end-to-end development of firmware modules, making critical technical decisions and resolving complex blockers without supervision.
- ML Pipeline Leadership: Collaborate with Data Engineering and ML teams to architect streaming and batch ingestion pipelines, ensuring data is correctly structured for ML training.
- Cloud-Edge Synchronization: Design secure and reliable transmission protocols for device-to-cloud communication, focusing on edge-to-cloud integration.
- Standards Enforcement: Act as the guardian of engineering excellence, implementing security best practices (secure boot, TLS) and ensuring high code quality.
- Technical Mentoring: Act as a technical beacon for the squad, conducting rigorous code reviews and mentoring senior engineers in Yocto Linux and RTOS concepts.
- Strategic Troubleshooting: Lead the debugging of critical firmware issues across hardware and software layers, including OTA update implementations.
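The device-to-cloud transmission work described above usually starts with a fixed binary frame layout plus an integrity check. A deliberately simplified sketch (Python for illustration only; real firmware in this role would use C/C++ and a proper CRC rather than an additive checksum, and the field layout here is hypothetical):

```python
import struct


def pack_frame(device_id: int, reading: float, seq: int) -> bytes:
    """Pack a telemetry frame: little-endian payload plus a 1-byte additive checksum."""
    payload = struct.pack("<HfI", device_id, reading, seq)  # uint16, float32, uint32
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])


def unpack_frame(frame: bytes):
    """Verify the checksum and return (device_id, reading, seq)."""
    payload, checksum = frame[:-1], frame[-1]
    if sum(payload) & 0xFF != checksum:
        raise ValueError("checksum mismatch")
    return struct.unpack("<HfI", payload)
```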
Preferred Qualifications
- 8 to 10 years of professional experience in embedded firmware development.
- Proven ability to work independently and lead technical squads in a fast-paced environment.
- Expert-level mastery of the Yocto Project and RTOS constraints.
- Deep proficiency in C/C++ and C# for embedded systems.
- Demonstrated track record of delivering low-level designs for edge-to-cloud ML systems.
Highly Desirable
- Industry Experience: Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is highly regarded.
- Experience with Edge AI / TinyML and industrial protocols (Modbus, OPC-UA).
- Knowledge of Cybersecurity standards for secure device provisioning.
Experience: 1–3 Years
Qualification: B.Tech (Computer Science / IT or related field)
Shift Timing: 5:00 PM – 2:00 AM (Late Evening Shift)
Location: Hyderabad
Job Summary
We are seeking a proactive and detail-oriented Application Support Engineer with 1–3 years of experience in Linux/Windows environments, application servers, and monitoring tools. The candidate will be responsible for ensuring the stability, performance, and availability of applications, along with providing L2/L3 support in a fast-paced production environment.
Key Responsibilities :
- Provide application support and incident management for production systems.
- Monitor system performance using hardware/software monitoring and trending tools.
- Troubleshoot issues in Linux and Windows environments.
- Manage and support Apache and Tomcat servers.
- Analyze logs and debug application/system issues.
- Work on SQL/Oracle databases for query execution, troubleshooting, and performance tuning.
- Handle deployments and support CI/CD pipelines using tools like Docker and Jenkins.
- Ensure SLA adherence and timely resolution of incidents and service requests.
- Coordinate with development, infrastructure, and database teams for issue resolution.
- Maintain documentation for incidents, processes, and knowledge base articles.
- Support SaaS applications hosted in data center environments.
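A first pass at the "analyze logs" responsibility above is often just counting lines per severity to spot a spike. A minimal sketch, assuming a hypothetical "date time LEVEL message" log layout; all names are illustrative:

```python
import re
from collections import Counter

# Matches lines like "2024-05-01 10:00:01 ERROR db connection timeout".
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>ERROR|WARN|INFO)\b")


def level_counts(lines):
    """First-pass log triage: count lines per severity level."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts
```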
Required Skills :
Strong knowledge of Linux and Windows OS administration
Experience with Apache and Tomcat servers
Hands-on experience with monitoring and alerting tools
Good understanding of log analysis and troubleshooting techniques
Working knowledge of SQL / Oracle databases
Familiarity with Docker and Jenkins (CI/CD pipelines)
Understanding of ITIL processes (Incident, Problem, Change Management)
Knowledge of SaaS applications and data center operations.
Preferred Skills :
Experience with automation/scripting (Shell, Python, etc.)
Exposure to cloud platforms (AWS/Azure/GCP) is a plus
Basic networking knowledge
Soft Skills :
Strong analytical and problem-solving abilities
Good communication skills
Ability to work in night shifts and handle production support
Team player with a proactive attitude
Key Responsibilities:
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
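To make the "end-to-end ML pipelines" bullet above concrete, here is the smallest possible train/predict loop: a nearest-centroid classifier in pure Python. Real work in this role would use Scikit-learn, TensorFlow, or PyTorch; this is only a sketch of the preprocess-train-evaluate shape, with hypothetical names:

```python
from statistics import mean


def train_centroids(samples):
    """'Training': compute the mean feature vector per class label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(col) for col in zip(*rows)] for label, rows in by_label.items()}


def predict(centroids, features):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))
```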
Required Skills:
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building, fine-tuning, and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning, Generative AI, or Voice AI is an added advantage.
Educational Qualification:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or IT; candidates from IIT/NIT colleges strongly preferred
Key Skills:
• Hands-on experience with AWS services such as EC2, S3, Lambda, API Gateway, RDS, or DynamoDB ☁️
• Basic understanding of AI/ML concepts and experience with Python-based ML libraries (NumPy, Pandas, Scikit-learn, etc.) 🤖
• Experience in Python / Node.js / Java for backend development 💻
• Understanding of REST APIs and microservices architecture
• Familiarity with Git, CI/CD pipelines, and DevOps fundamentals
• Knowledge of Docker / containerization (preferred) 🐳
• Basic understanding of cloud security, IAM roles, and policies 🔐
• Experience in using AI tools (e.g., ChatGPT, GitHub Copilot, or similar tools) for development, debugging, documentation, and productivity in day-to-day tasks ⚡
Roles & Responsibilities:
• Develop and maintain cloud-based applications on AWS ☁️
• Build and integrate APIs and backend services
• Assist in deploying, monitoring, and managing applications on AWS infrastructure
• Work with the team to integrate AI/ML models or AI-powered services into applications 🤖
• Utilize AI tools for coding assistance, debugging, automation, and improving development efficiency
• Optimize applications for performance, scalability, and reliability
• Collaborate with cross-functional teams for design, development, and deployment
• Troubleshoot and resolve cloud or application-related issues
AWS Certification is mandatory
Education Qualification:
B.Tech/M.Tech in CSE/IT/AI/ML/ECE
Key Requirements / Skills
- 6+ years of overall experience in software development with strong expertise in building scalable web applications.
- 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
- Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
- Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
- Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
- AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
- Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
- Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
- Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
- Strong communication skills to effectively interact with technical and non-technical stakeholders.
- Experience working in high-compliance environments such as healthcare systems is a plus.
Education Qualifications:
- B.Tech/M.Tech in CSE/IT/AI/ML from a good university
About the Company
TurboHire is redefining enterprise hiring through cutting-edge AI and intelligent automation. Our platform enables large organizations to hire smarter, faster, and at scale. As a rapidly growing HR Tech company, we offer an environment driven by innovation, ownership, and high-impact problem solving.
Job Overview
At TurboHire, we’re looking for a Senior Full Stack Developer who wants to build products that truly scale. This isn’t just another development role — it’s an opportunity to architect and deliver enterprise-grade AI products that solve complex, real-world challenges.
You will work across the stack with strong ownership and technical influence, solving large-scale engineering problems while collaborating with a sharp, driven, and fast-moving team. If you are passionate about clean architecture, performance, scalability, and building products that matter, this role offers the platform to do exactly that — while growing with a company that is scaling rapidly.
Key Responsibilities
●Architect and build high-impact, enterprise-grade AI-driven products
●Collaborate with designers and engineers to develop scalable web applications
●Write clean, efficient, and production-ready code
●Take end-to-end ownership of features from design to deployment and maintenance
●Ensure adherence to software engineering best practices for performance and scalability
●Leverage modern tools, frameworks, and third-party services to accelerate delivery
Eligibility Criteria
●Bachelor’s degree in Engineering, MCA, or equivalent
●Strong foundation in Object-Oriented Programming
●Experience working in Agile development environments
●Familiarity with version control systems, build tools, and web frameworks
●Proficiency in Git
●Knowledge of C# or Java is an added advantage
●1-3 years of relevant work experience
●Tech stack (backend): ASP.NET, Java, or C++
●Tech stack (frontend): React, Angular, Knockout, or similar
●RESTful APIs
●Database: SQL Server
Role & Responsibilities
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
Key Responsibilities-
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
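The governance and cost-optimization duties above typically start with tag compliance. As a minimal sketch, assuming the instance records have the shape returned by boto3's `ec2.describe_instances()` (here supplied inline so the code runs without AWS credentials, and with illustrative tag keys), a Python audit could look like:

```python
# Required tag keys are an assumption for this sketch; real policies vary.
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def missing_tags(instance):
    """Return the required tag keys absent from one instance record."""
    present = {t["Key"] for t in instance.get("Tags", [])}
    return REQUIRED_TAGS - present

def non_compliant(instances):
    """Map instance id -> sorted missing tag keys, for instances failing policy."""
    report = {}
    for inst in instances:
        gaps = missing_tags(inst)
        if gaps:
            report[inst["InstanceId"]] = sorted(gaps)
    return report

if __name__ == "__main__":
    # Inline sample standing in for boto3 describe_instances() output
    fleet = [
        {"InstanceId": "i-0aaa", "Tags": [{"Key": "Owner", "Value": "data"},
                                          {"Key": "CostCenter", "Value": "42"},
                                          {"Key": "Environment", "Value": "prod"}]},
        {"InstanceId": "i-0bbb", "Tags": [{"Key": "Owner", "Value": "web"}]},
    ]
    print(non_compliant(fleet))
```

In practice a script like this runs on a schedule (Lambda or Systems Manager) and feeds the report into the tagging and cost-allocation workflow.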
Ideal Candidate
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): 8+ years of hands-on experience in AWS Cloud Engineering / Cloud Operations / cloud-native environments, with deep expertise in production cloud setups.
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform and Ansible
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Individual Role): The candidate will work independently and be fully responsible for managing and delivering all aspects of AWS cloud operations and engineering.
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-8 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
• Design, develop, and maintain data pipelines using Databricks (PySpark / Spark SQL)
• Build and manage data pipelines across Bronze, Silver, and Gold layers using Delta Lake
• Implement ETL/ELT workflows for batch and near real-time processing
• Work with Databricks Workflows for orchestration and job scheduling
• Leverage Unity Catalog for data governance, access control, and metadata management
• Optimize Spark jobs, cluster configurations, and cost efficiency
• Collaborate with business and analytics teams to translate requirements into scalable data models
• Integrate data from multiple sources (APIs, databases, cloud storage)
• Ensure data quality, validation, and observability across pipelines
• Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
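The Bronze/Silver/Gold responsibilities above follow the medallion pattern: the silver layer validates and deduplicates raw bronze records. As a plain-Python sketch of that step (in a real Databricks job this would be PySpark over Delta tables, e.g. `dropDuplicates` or `MERGE INTO`; pure Python and invented field names like `order_id` are used here only so the logic is runnable anywhere):

```python
def to_silver(bronze_rows):
    """Bronze -> silver: drop malformed rows, keep the latest record per key."""
    # Validation: reject rows with a null business key or a negative amount
    valid = [r for r in bronze_rows
             if r.get("order_id") is not None and r.get("amount", 0) >= 0]
    # Deduplication: keep only the most recently ingested row per order_id
    latest = {}
    for row in valid:
        key = row["order_id"]
        if key not in latest or row["ingested_at"] > latest[key]["ingested_at"]:
            latest[key] = row
    return sorted(latest.values(), key=lambda r: r["order_id"])
```

For example, two versions of the same order collapse to the newer one, while null-key and negative-amount rows are filtered out before they reach the silver table.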
Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines
• Hands-on experience with Databricks platform and ecosystem
• Strong proficiency in Python (PySpark) and SQL
• Experience working with Delta Lake (ACID transactions, time travel, schema evolution)
• Good understanding of data warehousing concepts and dimensional modeling
• Familiarity with Unity Catalog (data governance, RBAC, lineage basics)
• Understanding of Spark performance tuning and optimization techniques
• Experience with cloud platforms (AWS / Azure / GCP)
• Working knowledge of Git and CI/CD practices
• Familiarity with implementing CI/CD pipelines or other orchestration tools is a plus.
Key Responsibilities:
Manage end-to-end US IT recruitment operations
Monitor submissions, interviews, and placements
Coordinate with recruiters, account managers, and clients
Build and maintain vendor/client relationships
Track KPIs, team performance, and delivery metrics
Requirements:
5+ years of experience in US IT staffing
Strong understanding of W2, C2C, and 1099 hiring models
Proven team handling experience
Excellent communication and leadership skills
Role: Sr PostgreSQL Database Administrator (DBA)
Location: Hyderabad, India
Job description
We are looking for a highly capable PostgreSQL DBA who can take complete ownership of database operations in a high-volume, mission-critical environment. This role requires deep expertise in PostgreSQL internals, strong troubleshooting ability, and hands-on experience managing large-scale databases in cloud environments like AWS or Azure.
Responsibilities:
- Manage and maintain PostgreSQL databases across production, staging, and development environments
- Perform and manage logical and physical backups, restores, and recovery strategies
- Execute database upgrades (minor and major versions) with minimal downtime
- Design and manage replication setups, including streaming replication and point-in-time recovery (PITR)
- Troubleshoot performance issues including locks, blocking, long-running queries, and system load bottlenecks
- Tune and manage PostgreSQL configuration parameters and leverage system catalogs for deep analysis
- Perform routine maintenance tasks such as vacuuming, indexing, and database housekeeping
- Deploy and manage PostgreSQL in cloud environments like AWS (RDS/EC2) or Azure
- Implement and maintain high availability (HA) solutions using tools like Repmgr, Pgpool, or EFM
- Manage and optimize multi-terabyte databases ensuring performance and scalability
- Monitor database health and implement proactive measures to prevent outages
- Collaborate with engineering, DevOps, and product teams to support application performance
- Document processes, configurations, and recovery procedures
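Several of the monitoring duties above boil down to watching `pg_stat_activity` for long-running or blocking sessions. As a minimal sketch (in production the rows would come from a query like `SELECT pid, state, query_start FROM pg_stat_activity WHERE state <> 'idle'` via a driver such as psycopg2; here sample rows are supplied inline so the sketch runs without a cluster, and the 5-minute threshold is an illustrative assumption):

```python
from datetime import datetime, timedelta

def long_running(rows, now, threshold=timedelta(minutes=5)):
    """Return pids of non-idle sessions whose query has run past the threshold."""
    return [r["pid"] for r in rows
            if r["state"] != "idle" and now - r["query_start"] > threshold]

if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0)
    sessions = [
        {"pid": 101, "state": "active", "query_start": now - timedelta(minutes=10)},
        {"pid": 102, "state": "active", "query_start": now - timedelta(minutes=1)},
        {"pid": 103, "state": "idle", "query_start": now - timedelta(hours=2)},
    ]
    print(long_running(sessions, now))
```

A check like this, run on a schedule, is one proactive measure for catching runaway queries before they become outages; flagged pids can then be investigated or terminated.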
Requirements:
- Strong knowledge of PostgreSQL architecture and internals
- Hands-on experience with backup/restore strategies, replication, and disaster recovery
- Proven experience in performance tuning and troubleshooting production issues
- Experience managing large-scale (multi-TB) PostgreSQL databases
- Solid understanding of PostgreSQL system catalogs and configuration tuning
- Experience with cloud-based PostgreSQL deployments (AWS or Azure)
- Hands-on experience with high availability and clustering tools (Repmgr / Pgpool / EFM)
- Strong Linux/Unix administration and scripting skills
- Ability to work independently and quickly learn new technologies
- Experience supporting high-volume, mission-critical production environments
- Strong communication and collaboration skills across cross-functional teams
About the Company
At Redpin, we simplify life's most important payments. Buying a new property overseas can be a stressful time, especially when it comes to moving your money. Through our Currencies Direct and TorFX brands we've been helping people do just that for over 25 years. With recent investment, we're now on a mission to build a new range of digital products and services that will make moving money internationally for real-estate purchases even easier.
We’re on a mission to become the solution for Real Estate payments everywhere. To do this, we are transitioning our business from a horizontal FX platform to a verticalized, embedded software company, as we look to the future and Redpin 2.0.
About the Role
At Redpin, we’re passionate about building software that solves problems. We count on our site reliability engineers (SREs) to empower users with a rich feature set, high availability, and stellar performance level to pursue their missions. As we expand customer deployments, we’re seeking an experienced SRE to deliver insights from massive-scale data in real time. Specifically, we’re searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for every interaction.
What You'll Do
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Build software and systems to manage platform infrastructure and applications
- Improve reliability, quality, and time-to-market of our suite of software solutions.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
- Provide primary operational support and engineering for multiple large-scale distributed software applications.
- Design, implement, and maintain highly available and scalable infrastructure and systems on AWS.
- Gather and analyze metrics from operating systems as well as applications to assist in performance tuning and fault finding.
- Partner with development teams to improve services through rigorous testing and release procedures.
- Participate in system design consulting, platform management, and capacity planning.
- Create sustainable systems and services through automation and uplifts.
- Balance feature development speed and reliability with well-defined service-level objectives
What You’ll Need
- Bachelor’s degree in Computer Science, Software Engineering, or a related field (Master's degree preferred).
- 4+ years of experience as a Site Reliability Engineer or in a similar role.
- Strong knowledge of system architecture, infrastructure design, and best practices.
- Proficiency in scripting and automation using languages like Python, Bash, or similar technologies.
- Experience with cloud platforms such as AWS, including infrastructure provisioning and management.
- Strong understanding of networking principles and protocols.
- Experience supporting applications built with Java, Spring Boot, Hibernate JPA, Python, React, and .NET.
- Knowledge of API gateway solutions like Kong and Layer 7.
- Experience working with databases such as Elasticsearch, SQL Server, and PostgreSQL.
- Familiarity with messaging systems like MQ, ActiveMQ, and Kafka.
- Proficiency in managing servers such as Tomcat, JBoss, Apache, NGINX, and IIS.
- Experience with containerization using EKS (Elastic Kubernetes Service).
- Knowledge of CI/CD processes and tools like Jenkins, Artifactory, and Ansible.
- Proficiency in monitoring tools such as Coralogix, CloudWatch, Zabbix, Grafana, and Prometheus.
- Strong problem-solving and troubleshooting skills with the ability to analyse and resolve complex technical issues.
- Excellent communication and collaboration skills to work effectively in a team environment.
- Strong attention to detail and ability to prioritize and manage multiple tasks simultaneously.
- Self-motivated and able to work independently with minimal supervision.
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Technical Lead – Full Stack
Overview
We are seeking a high-caliber Tech Lead – Full Stack to join our Digital Twin Platform and Simulation team. In this role, you will translate high-level architectural visions into robust, executable low-level designs (LLD). As the technical anchor for the squad, you will provide deep technical guidance, maintain rigorous standards, and enable the team to deliver scalable interfaces and services that power our digital twin ecosystem.
Our Core Tech Stack
Front-End Engineering
- React + TypeScript (with Next.js), ShadCN/UI, Tailwind CSS for building complex, state-heavy interactive dashboards.
- JavaScript (ES6+) and TypeScript for type-safe management of simulation data
- State management (Redux/Zustand) optimized for high-frequency data updates
- Experience with ESRI ArcGIS maps in the UI is a big plus.
Back-End & Microservices
- Java and Spring Boot for building high-scale, resilient microservices
- REST APIs for seamless communication between services and front-end consumers
- Microservices architecture and system integration patterns
- Experience designing, building, and integrating RESTful or GraphQL APIs, Protobuf with gRPC and gRPC-Web
Engineering Excellence
- GitHub for version control and rigorous Code Reviews
- CI/CD pipelines, Docker, and Kubernetes for cloud-native deployment
Key Responsibilities
- Low-Level Design (LLD): Convert high-level architectural blueprints into detailed technical designs, including class diagrams, sequence diagrams, and API specifications.
- Technical Mentoring: Lead and coach the engineering team through pair programming, technical 1-on-1s, and hands-on guidance to elevate overall team competency.
- Standards Enforcement: Ensure all code adheres to the defined engineering standards, SOLID principles, and design patterns established by the Architects.
- Code Quality & Review: Conduct comprehensive code reviews to maintain high quality, ensuring the team delivers clean, testable, and maintainable code.
- Technical Anchoring: Serve as the "go-to" expert for the squad to resolve complex technical blockers and provide clarity on implementation details.
- Hands-on Development: Direct implementation of critical and complex modules, setting the benchmark for performance and reliability.
- System Integration: Oversee the technical execution of integrations between full-stack applications and core data layers (Databricks/Neo4j).
- Delivery Governance: Ensure the squad meets sprint objectives by maintaining a high standard of execution and managing technical debt effectively.
Preferred Qualifications
- 8 to 10 years of professional experience in full-stack software development.
- Proven track record in a Tech Lead capacity, with strong experience in creating Low-Level Designs.
- Expert-level proficiency in Java, Spring Boot, and React.
- Deep understanding of Microservices architecture and RESTful API design.
- Familiarity with ShadCN/UI, along with Material UI, Storybook, and/or similar tools
- Demonstrated ability to mentor engineering teams and drive technical excellence in an Agile environment.
- Experience working in the India tech region, preferably within high-growth product engineering teams.
Highly Desirable
- Experience building platforms for Digital Twin, IoT, or Simulation environments.
- Familiarity with visualizing complex networks or real-time operational data.
- Knowledge of performance tuning for both front-end rendering and back-end processing.
- Experience leading teams in an Agile/Scrum environment.
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus.
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time