50+ Startup Jobs in Hyderabad | Startup Job openings in Hyderabad
🚨 Hiring Alert 🚨
📌 Job Role: System Integration Engineer (SW Integration)
📍 Location: Bangalore
💼 Experience: Minimum 6 Years
🎓 Qualification: Bachelor’s in Electrical / Computer Engineering
We are looking for an experienced System Integration Engineer to contribute to system integration and testing activities for next-generation vehicle controller & software platforms.
If you have strong expertise in vehicle networks, E/E architecture, and automotive system testing, this opportunity is for you.
🔹 Essential Skills
✅ Strong knowledge of E/E Architecture & Vehicle Networks
✅ Solid understanding of LIN (ISO 17987-1) & CAN (ISO 11898-1)
✅ Basic knowledge of Ethernet (OA TC8) & SOME/IP protocol
✅ Understanding of automotive testing standards (ISTQB preferred)
✅ Agile test tools: IBM DOORS Next Generation, IBM RTC, Jira
✅ Programming knowledge: Python, CAPL, C, C++
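To give a flavor of the Python-plus-CAN work this role touches, here is a minimal sketch (not from the posting; the frame values are made up) that unpacks a raw Linux SocketCAN `can_frame` into its identifier, extended-frame flag, and payload:

```python
import struct

# Linux SocketCAN classic can_frame layout:
# u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes (16 bytes total).
CAN_FRAME_FMT = "<IB3x8s"

CAN_EFF_FLAG = 0x80000000  # set when the frame uses an extended 29-bit ID
CAN_EFF_MASK = 0x1FFFFFFF  # mask for extended identifiers
CAN_SFF_MASK = 0x7FF       # mask for standard 11-bit identifiers

def parse_can_frame(raw: bytes):
    """Return (identifier, is_extended, payload) from a 16-byte SocketCAN frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    extended = bool(can_id & CAN_EFF_FLAG)
    ident = can_id & (CAN_EFF_MASK if extended else CAN_SFF_MASK)
    return ident, extended, data[:dlc]

# Example: a standard-ID frame 0x123 carrying two data bytes.
frame = struct.pack(CAN_FRAME_FMT, 0x123, 2, bytes([0x11, 0x22]) + bytes(6))
ident, extended, payload = parse_can_frame(frame)  # 0x123, False, b'\x11\x22'
```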
🔹 Desired Skills
⭐ Experience in CI/CD (GitHub, Artifactory, Conan, TeamCity)
⭐ Exposure to Linux, AUTOSAR, Android, QNX environments
⭐ Model-driven testing (Mockups, PoCs)
⭐ Cybersecurity knowledge – MACSec (IEEE 802.1AE)
⭐ Experience in electrical architecture industrialization
🔹 Experience Required
✔ Minimum 6 years in Systems Engineering / Vehicle Network Development
✔ Experience in sub-system or vehicle-level industrialization
📩 Interested candidates can share their updated CV
📌 Subject Line: Application for System Integration Engineer
#Hiring #SystemIntegration #SWIntegration
#AutomotiveSoftware #VehicleNetworks
#EEArchitecture #CAN #LIN #SOMEIP
#AUTOSAR #EmbeddedSystems #AutomotiveEngineering
#CI_CD #Python #CAPL #BangaloreJobs
#AutomotiveJobs #TechHiring #EngineeringCareers
#VehicleController #DistributedSystems
Role Overview
We are hiring a Principal Datacenter Backend Developer to architect and build highly scalable, reliable backend platforms for modern data centers. This role owns control-plane and data-plane services powering orchestration, monitoring, automation, and operational intelligence across large-scale on-prem, hybrid, and cloud data center environments.
This is a hands-on principal IC role with strong architectural ownership and technical leadership responsibilities.
Key Responsibilities
- Own end-to-end backend architecture for datacenter platforms (orchestration, monitoring, DCIM, automation).
- Design and build high-availability distributed systems at scale.
- Develop backend services using Java (Spring Boot / Micronaut / Quarkus) and/or Python (FastAPI / Flask / Django).
- Build microservices for resource orchestration, telemetry ingestion, capacity and asset management.
- Design REST/gRPC APIs and event-driven systems.
- Drive performance optimization, scalability, and reliability best practices.
- Embed SRE principles, observability, and security-by-design.
- Mentor senior engineers and influence technical roadmap decisions.
Required Skills
- Strong hands-on experience in Java and/or Python.
- Deep understanding of distributed systems and microservices.
- Experience with Kubernetes, Docker, CI/CD, and cloud-native deployments.
- Databases: PostgreSQL/MySQL, NoSQL, time-series data.
- Messaging systems: Kafka / Pulsar / RabbitMQ.
- Observability tools: Prometheus, Grafana, ELK/OpenSearch.
- Secure backend design (OAuth2, RBAC, audit logging).
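As a toy illustration of the RBAC-style authorization and audit logging listed above, here is a minimal sketch (role names and permissions are hypothetical, not part of this role's actual design):

```python
# Toy role-based access control: map each role to a permission set,
# check actions against it, and record every decision for auditing.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "operator": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def audit(role: str, action: str) -> str:
    """Produce an audit-log line for every authorization decision."""
    verdict = "ALLOW" if is_allowed(role, action) else "DENY"
    return f"{verdict} role={role} action={action}"
```

In a real service these checks would sit behind OAuth2 token validation, with the role claims taken from the verified token rather than passed in directly.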
Nice to Have
- Experience with DCIM, NMS, or infrastructure automation platforms.
- Exposure to hyperscale or colocation data centers.
- AI/ML-based monitoring or capacity planning experience.
Why Join
- Architect mission-critical platforms for large-scale data centers.
- High-impact principal role with deep technical ownership.
- Work on complex, real-world distributed systems problems.
Title: Team Lead – Software Development
(Lead a team of developers to deliver applications in line with product strategy and growth)
Experience: 8 – 10 years
Department: Information Technology
Classification: Full-Time
Location: Hybrid in Hyderabad, India (3 days onsite and 2 days remote)
Job Description:
Looking for a full-time Software Development Team Lead to lead our high-performing Information Technology team. This person will play a key role in Clarity’s business by overseeing a development team, focusing on existing systems and long-term growth. This person will serve as the technical leader, able to discuss data structures, new technologies, and methods of achieving system goals. This person will be crucial in facilitating collaboration among team members and providing mentoring.
Reporting to the Director, Software Development, this person will be responsible for the day-to-day operations of their team and will be the first point of escalation and technical contact for the team.
Job Responsibilities:
Manages all activities of their software development team and sets goals for each team member to ensure timely project delivery.
Performs code reviews and writes code if needed.
Collaborates with the Information Technology department and business management team to establish priorities for the team’s plan and manage team performance.
Provides guidance on project requirements, developer processes, and end-user documentation.
Supports an excellent customer experience by being proactive in assessing escalations and working with the team to respond appropriately.
Uses technical expertise to contribute towards building best-in-class products. Analyzes business needs and develops a mix of internal and external software systems that work well together.
Using Clarity platforms, writes, reviews, and revises product requirements and specifications. Analyzes software requirements, implements design plans, and reviews unit tests. Participates in other areas of the software development process.
Required Skills:
A Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related discipline.
Excellent written and verbal communication skills.
Experience with .NET Framework, web applications, Windows applications, and web services.
Experience in developing and maintaining applications using C# .NET Core, ASP.NET MVC, and Entity Framework.
Experience in building responsive front ends using React.js, Angular.js, HTML5, CSS3, and JavaScript.
Experience in creating and managing databases, stored procedures, and complex queries with SQL Server.
Experience with Azure cloud infrastructure.
8+ years of experience in designing and coding software in the above technology stack.
3+ years of managing a team within a development organization.
3+ years of experience in Agile methodologies.
Preferred Skills:
Experience in Python, WordPress, PHP.
Experience in using Azure DevOps.
Experience working with Salesforce or any other comparable ticketing system.
Experience in insurance/consumer benefits/file processing (EDI).
Title: Sr. Database Developer
Experience: 6 – 8 years
Department: Information Technology
Classification: Full-Time
Location: Hybrid in Hyderabad, India (3 days onsite and 2 days remote)
Job Description:
We are seeking a highly skilled Senior Database Developer to lead the design, development, and optimization of enterprise-grade databases using Microsoft SQL Server. The ideal candidate will also have hands-on experience with MS Excel and MS Access, particularly in building data models, reports, and automation tools that support business operations.
Job Responsibilities:
Design, develop, and maintain complex SQL Server databases, stored procedures, views, and functions.
Optimize database performance through indexing, query tuning, and capacity planning.
Develop and maintain Excel-based tools and reports using advanced formulas, pivot tables, and VBA macros.
Modernize legacy MS Access applications and migrate data to SQL Server where appropriate.
Collaborate with application developers to support integrations and data pipelines.
Implement data governance, security, and backup strategies.
Document database architecture, workflows, and technical specifications.
Provide mentorship to junior developers and contribute to code reviews and best practices.
Required Skills:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
5 – 8 years of experience in database development using SQL Server.
Proficiency in T-SQL, SSIS, SSRS, and SQL Server Management Studio (SSMS).
Strong experience with MS Excel (including VBA) and MS Access.
Familiarity with data migration, ETL processes, and job scheduling.
Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
Experience with cloud platforms like Azure SQL or AWS RDS.
Knowledge of Power BI, SharePoint, or other reporting tools.
Experience with MySQL.
Exposure to Agile methodologies and DevOps practices.
Exposure to insurance, consumer benefits, or COBRA domains.
Job Role: Teamcenter Admin
• Teamcenter and CAD (NX) Configuration Management
• Advanced debugging and root-cause analysis beyond L2
• Code fixes and minor defect remediation
• AWS knowledge, which is foundational to our Teamcenter architecture
• Experience supporting weekend and holiday code deployments
• Operational administration (break/fix, ticket escalation handling, problem management)
• Support for project activities
• Deployment and code release support
• Hypercare support following deployment, which is expected to onboard 1,000+ additional users
• Hands-on experience leading the R2R function for a US-based company (restaurant accounting).
• Should have experience in R365, US GAAP accounting, and financial statement preparation.
• Should be able to lead client meetings and business review calls.
• Work with US counterparts in driving key process initiatives.
• Manage and publish daily, weekly, and monthly performance scorecards.
• Manage and own the process SLAs agreed with the client.
• Able to interpret financial statements to help the executive team make decisions.
• Conduct monthly, quarterly, and annual one-on-ones with the team and perform year-end appraisals and performance management.
• Should be an individual contributor.
• Managing client experience.
• Team management skills.
• Subject matter expert.
B.Com graduate with 7–8 years of work experience, including 3–5 years of team and client management experience.
• Strong analytical and problem-solving skills.
• Proactive, takes initiative, self-motivated, team player.
• Strong stakeholder management and interpersonal skills.
• Extensive understanding of financial trends both within the company and in general market patterns.
• Business acumen, analytical approach, and understanding of general business development and operations.
• Prior experience in a similar BPO/shared-service function of an MNC.
• Should have experience in US accounting.
• Should have excellent communication skills and client interaction experience.
• Maintain the general ledger and post transactions in R365.
• Strong willingness to learn and grow.
• Should be willing to work in US shifts.
• Can join immediately.
• Should work from the office.
Hiring for Salesforce Project Manager
Exp : 8 - 13 yrs
Work Location : Hyderabad Hybrid
Edu : BE/B.Tech
Work Timings : 11 AM - 8 PM
Skills :
Proven experience managing Salesforce implementation or enhancement projects.
Strong understanding of the Salesforce platform, preferably Sales Cloud and/or Service Cloud.
Knowledge of Salesforce development lifecycle, integration concepts, APIs, security model, and release management.
Excellent communication, presentation, and stakeholder management skills. Strong analytical, problem-solving, and decision-making abilities.
Experience leading cross-functional teams in complex project environments.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
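The large-file step above can be sketched in Python: before migrating, scan the checked-out Perforce tree for files over GitHub's 100 MB limit so they can be routed to Git LFS. This is a minimal illustration, not a production migration script; the function name and threshold handling are our own:

```python
import os

GITHUB_FILE_LIMIT = 100 * 1024 * 1024  # GitHub rejects pushes of files over 100 MB without LFS

def find_lfs_candidates(root: str, limit: int = GITHUB_FILE_LIMIT):
    """Walk a checked-out tree and list (path, size) for files that must move to Git LFS."""
    candidates = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                candidates.append((path, size))
    # Largest offenders first, so they can be triaged before the migration run.
    return sorted(candidates, key=lambda item: item[1], reverse=True)
```

The extensions of the reported files would then be handed to `git lfs track` before importing history into the new repository.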
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
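The reflection-optimization work above can be illustrated with a toy Python sketch of the underlying idea: a reflection is a pre-aggregated copy of a dataset, so repeated queries read the small summary instead of re-scanning raw rows. The data and names here are hypothetical, and real Dremio reflections are far richer (dimension/measure definitions, refresh policies, automatic substitution):

```python
from collections import defaultdict

# Hypothetical fact rows: (region, sales_amount)
rows = [("east", 10), ("west", 5), ("east", 7), ("west", 3)]

def total_without_reflection(region: str) -> int:
    """Every query re-scans the raw rows."""
    return sum(amount for r, amount in rows if r == region)

# Build the "reflection" once: a pre-aggregated summary keyed by the dimension.
reflection = defaultdict(int)
for region, amount in rows:
    reflection[region] += amount

def total_with_reflection(region: str) -> int:
    """Queries are answered from the summary with no raw scan."""
    return reflection[region]
```

Both functions return the same answer; the second does constant work per query, which is the trade the query planner makes when it substitutes a matching reflection.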
Ideal Candidate
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Review Criteria:
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Role Summary
We are a growing technology services company working on product engineering and growth marketing projects. We are looking for an Operations Manager who can bring structure, accountability, and execution discipline across teams.
Key Responsibilities
- Drive day-to-day operational execution across tech and marketing teams
- Track project timelines, team bandwidth, and delivery commitments
- Identify bottlenecks and ensure timely resolution
- Improve internal processes for better predictability
- Coordinate between leadership, delivery teams, and clients
- Ensure consistent status reporting and follow-ups
Requirements
- 2–5 years of experience in operations / project coordination / delivery
- Experience in startup, agency, or services-based companies preferred
- Strong execution mindset
- Comfortable handling multiple parallel projects
- Familiar with tools like Jira, ClickUp, Asana, or similar
Location & Compensation
Hyderabad (On-site / Hybrid). Compensation aligned to experience.
Role Summary
We are hiring a hands-on IT Project Manager to manage end-to-end execution of software development projects.
Key Responsibilities
Own delivery timelines for web and product development projects
Break requirements into actionable tasks and milestones
Coordinate with developers, designers, and QA
Manage scope, risks, and dependencies
Conduct client status calls and ensure expectation alignment
Ensure projects are delivered on time and within scope
Requirements
2–5 years of IT project management experience
Experience handling client-facing software projects
Strong understanding of SDLC
Experience with tools like Jira, ClickUp, Asana, etc.
Strong communication and stakeholder management skills
Location & Compensation
Hyderabad (On-site preferred). Compensation as per experience
Job Title: AAA Implementation Engineer
Location: REMOTE
Duration: 12+ months
The AAA Implementation Engineer is responsible for delivering technical implementation services that support the evolution and ongoing maintenance of the AAA infrastructure. This includes involvement in a variety of projects, system upgrades, service and feature enhancements, as well as remediation and break-fix activities. All work must adhere to the organization’s current architectural standards, technology roadmaps, governance, and change management policies. While the primary focus is on implementation and validation engineering, the role also requires a strong understanding of design engineering, AAA policies, security posture, protocols, and cluster deployment and maintenance. The AAA Implementation Engineer will collaborate directly with both internal and external stakeholders—including Architecture, Product Engineering, Design/Implementation Engineering, Change Management, Service and Product Management, Finance, Business Management, and Operations teams—as well as various levels of senior management.
Key Responsibilities:
• Attend project meetings as needed.
• Create change documents and changes.
• Schedule changes with the assigned engineer.
• Ensure change documents are peer reviewed and approved.
• Represent change records (CRQs) on various calls.
• Represent changes at the weekly regional pre-CAB.
• Socialize changes with peer teams regionally so they can represent changes in region.
• Coordinate with other teams for change knowledge transfer.
• Follow up with testers / testing coordination for all changes.
• Ensure peer reviews are attached to CRQs and chase approvals.
• Attend various change review calls, including the weekly internal AAA change call.
• Review test plans and results; able to assist in driving to root cause.
• Collaborate with other internal/external bank teams such as Operations, Engineering, and requestors on core design requirements/standards and risk assessment.
• Leverage designated tools and resources to create NCDs that will drive implementation during a pre-approved change window as necessary.
• Ensure initiatives/changes are well defined with success criteria, ownership, and realistic but firm schedules.
• Rehearse changes in the lab.
• Work weekends to implement changes; low-risk changes can be performed during the week.
• Ensure no unmitigated risks are associated with the change.
• Ensure changes are user-acceptance tested and authentication logs are successful after implementation.
• Build, update, and send change communication templates for weekend changes.
• Work with release managers to create changes.
• Update the schedule as changes are completed and new work orders are added.
• Coordinate with vendors during changes if devices need to be swapped or any type of local onsite datacenter support is needed.
• Create work orders and other requests to engage Blackbox and device, firewall, and IP services updates.
• Validate changes by working with users as part of user acceptance testing, creating and implementing test plans (automated and manual), and verifying logs and test results.
Preferred Experience and Attributes
• Strong subject matter expertise across various enterprise identity authentication technologies ranging from AAA (RADIUS/TACACS), 802.1X technologies (Wired/Wireless), RSA and token-based systems.
• Experience with Aruba ClearPass Policy Server or Cisco Identity Services Engine (ISE) is required.
• Experience with Network Access Control (NAC) 802.1X for Wired and Wireless networks is required.
• Experience working with SSL Certificate Authorities and certificate management.
• Strong experience and detailed technical knowledge in security engineering, system and network security, authentication and authorization protocols, cryptography, application security, load balancing.
• Experience with tools such as Splunk, Excel, ideally experience in automation.
• Expert understanding of network protocols: TCP/IP, HTTP, HTTPS, SSL/TLS, 802.1X, etc.
• Experience with testing and change validation, root cause analysis, risk mitigation, security assessments, and analysis of security threats, trends, and architecture is preferred.
• Experience with Remote Access (VPN posture) is preferred.
• Experience with Secure Cloud Analytics (Stealthwatch) is preferred.
• Experience with project management and ITSM.
• Experience with Change Management and CAB processes and procedures.
• Focused on execution, delivery, and commitment to dates. Ability to work in a high-paced environment. Can manage risk - is a good decision maker. Understands the big picture; ability to relate to the firm’s strategy and actions and how they support our business results.
• Leadership: be a self-starter, self-directed and show initiative.
• Demonstrates ownership: Is accountable and influential/can hold others accountable (professionally).
• Strong written and verbal communications skills. Ability to communicate and influence upward as well as laterally.
• Organized and detail oriented.
• Familiarity with working in regulated and/or large global enterprises is a plus
Requirements:
• Bachelor’s degree in engineering, computer science, business, finance, or a related field/technical training. Postgraduate degree is a plus.
• Must have strong analytical skills.
• Minimum of 8–12 years’ experience in a technical role supporting network projects/programs.
• Experience with: ClearPass, Stealthwatch, ISE, AAA, Splunk, load balancing, captive portals, NA3RC, automation, network configuration, certificates, and cluster build, upgrade, and configuration.
• Working knowledge of Excel and MS Project
• Financial services (Insurance, Banking, Investment banking), is a plus.
• Ability to be nimble and flexible; prioritize workload, proactively address issues, and consistently adapt to shifting deadlines.
• Ability to work weekends (as needed) for migration work
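Much of the validation work above comes down to confirming that RADIUS exchanges still behave as expected after a change. As an illustrative sketch (the shared secret and packet fields below are invented, not from any real deployment), this is how a RADIUS Response Authenticator is derived per RFC 2865: an MD5 digest over the response header, the request authenticator, the attributes, and the shared secret:

```python
import hashlib
import os

def response_authenticator(code, ident, length, request_auth, attrs, secret):
    """RFC 2865: MD5(Code + ID + Length + RequestAuthenticator + Attributes + Secret)."""
    header = bytes([code, ident]) + length.to_bytes(2, "big")
    return hashlib.md5(header + request_auth + attrs + secret).digest()

# Hypothetical Access-Accept (code 2, no attributes, so Length = 20)
# answering a request that carried a random 16-byte authenticator.
request_auth = os.urandom(16)
auth = response_authenticator(2, 1, 20, request_auth, b"", b"shared-secret")
```

Recomputing this digest and comparing it with the one in a captured response is a quick post-implementation check that both NAS and server still agree on the shared secret.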
Role Summary
We are looking for a Brand Manager – Digital & Technology Accounts who will own client communication, ensure delivery visibility, and coordinate seamlessly across marketing and tech teams.
Key Responsibilities
- Act as the primary point of contact for assigned client accounts
- Conduct regular status calls and share structured progress updates
- Work closely with tech, design, and marketing teams
- Track tasks and ensure timely delivery
- Maintain documentation of requirements and changes
- Identify upsell opportunities in collaboration with leadership
Requirements
- 2+ years of experience in digital agency / marketing client servicing
- Strong communication and stakeholder management skills
- Experience handling multiple client accounts
- Ability to coordinate across creative, marketing, and tech teams
- Strong follow-up discipline and attention to detail
Location & Compensation
Hyderabad (On-site / Hybrid). Compensation based on experience and fit.
Experience with AWS RDS (PostgreSQL) and related services like ECS/Fargate, Lambda, S3, Route53.
Proven ability to execute zero or low-downtime schema changes and large-table migrations.
Familiarity with infrastructure-as-code tools (Terraform, CloudFormation) and CI/CD pipelines.
Solid knowledge of security practices — RLS, RBAC, secret management, and encryption standards.
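The zero/low-downtime requirement above usually comes down to expand–contract migrations plus batched backfills, so that no single statement holds locks on a large table for long. A minimal sketch of the batching logic (the table size, batch size, and column are made up; in production each range would become one short `UPDATE ... WHERE id BETWEEN lo AND hi`):

```python
def batch_ranges(min_id, max_id, batch_size):
    """Yield inclusive (lo, hi) id ranges covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield lo, hi
        lo = hi + 1

# Simulated backfill of a newly added nullable column: each range stands in
# for one small transaction, keeping per-statement lock time bounded.
rows = {i: None for i in range(1, 10_001)}
for lo, hi in batch_ranges(1, 10_000, batch_size=1_000):
    for i in range(lo, hi + 1):
        rows[i] = "backfilled"   # stand-in for the per-range UPDATE
```

The contract step (adding the NOT NULL constraint, dropping the old column) only happens after every batch has completed and been verified.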

Global digital transformation solutions provider.
JOB DETAILS:
* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5-8 years
* Location: Hyderabad
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
- Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
- Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
Big Data Processing
- Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
- Ensure fault-tolerant, scalable, and high-performance data processing systems.
Cloud Infrastructure Development
- Build and manage scalable, cloud-native data infrastructure on AWS.
- Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.
Real-Time & Batch Data Integration
- Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
- Ensure consistency, data quality, and a unified view across multiple data sources and formats.
Data Analysis & Insights
- Partner with business teams and data scientists to understand data requirements.
- Perform in-depth data analysis to identify trends, patterns, and anomalies.
- Deliver high-quality datasets and present actionable insights to stakeholders.
CI/CD & Automation
- Implement and maintain CI/CD pipelines using Jenkins or similar tools.
- Automate testing, deployment, and monitoring to ensure smooth production releases.
Data Security & Compliance
- Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
- Implement data governance practices ensuring data integrity, security, and traceability.
Troubleshooting & Performance Tuning
- Identify and resolve performance bottlenecks in data pipelines.
- Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.
Collaboration & Cross-Functional Work
- Work closely with engineers, data scientists, product managers, and business stakeholders.
- Participate in agile ceremonies, sprint planning, and architectural discussions.
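The "unified view across multiple data sources" responsibility above often reduces to an upsert that keeps the latest record per key while tolerating late-arriving events. A pure-Python sketch of that reduction (keys, timestamps, and values are illustrative; in a real pipeline the same logic would be a Spark aggregation or a compacted Kafka topic):

```python
def latest_by_key(events):
    """Collapse a stream of (key, ts, value) records to the latest value per key."""
    state = {}
    for key, ts, value in events:
        if key not in state or ts > state[key][0]:
            state[key] = (ts, value)
    return {k: v for k, (_, v) in state.items()}

events = [
    ("user-1", 100, "created"),
    ("user-2", 105, "created"),
    ("user-1", 120, "updated"),
    ("user-1", 110, "late"),   # late-arriving, older than ts=120: ignored
]
```

Because only the newest timestamp wins, replaying or re-ingesting the same events is idempotent, which is what makes the pipeline fault-tolerant to retries.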
Skills & Qualifications
Mandatory (Must-Have) Skills
- AWS Expertise
- Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
- Strong understanding of cloud-native data architectures.
- Big Data Technologies
- Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
- Experience with Apache Spark and Apache Kafka in production environments.
- Data Frameworks
- Strong knowledge of Spark DataFrames and Datasets.
- ETL Pipeline Development
- Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
- Database Modeling & Data Warehousing
- Expertise in designing scalable data models for OLAP and OLTP systems.
- Data Analysis & Insights
- Ability to perform complex data analysis and extract actionable business insights.
- Strong analytical and problem-solving skills with a data-driven mindset.
- CI/CD & Automation
- Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
- Familiarity with automated testing and deployment workflows.
Good-to-Have (Preferred) Skills
- Knowledge of Java for data processing applications.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
- Familiarity with data governance frameworks and compliance tooling.
- Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Exposure to cost optimization strategies for large-scale cloud data platforms.
Skills: Big Data, Scala Spark, Apache Spark, ETL pipeline development
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Note: If a candidate is a short joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer.
F2F Interview: 14th Feb 2026
3 days in office, Hybrid model.
We are seeking a seasoned Senior Developer to join our team. The ideal candidate is a C# expert who doesn't just write code but understands how to orchestrate complex business processes using the Microsoft ecosystem. You will be responsible for building scalable backend services, optimizing SQL databases, and leveraging Azure and Power Automate to deliver end-to-end automation solutions.
Responsibilities:
- Design and maintain robust, high-performance applications using C# and .NET Core.
- Write complex SQL queries, stored procedures, and optimize database schemas for performance and security.
- Deploy and manage cloud resources within Azure (App Services, Functions, Logic Apps).
- Design enterprise-level automated workflows using Microsoft Power Automate, including custom connectors to bridge the gap between Power Platform and legacy APIs.
- Provide technical mentorship, conduct code reviews, and ensure best practices in the Software Development Life Cycle (SDLC).
Technical Skills:
- C# / .NET: 8+ years of deep expertise in ASP.NET MVC, Web API, and Entity Framework.
- Database: Advanced proficiency in SQL Server
- Azure: Hands-on experience with Azure cloud architecture and integration services.
- Power Automate: Proven experience building complex flows, handling error logic, and integrating Power Automate with custom-coded environments.
- DevOps: Familiarity with CI/CD pipelines (Azure DevOps or GitHub Actions).
Company Description: Bits in Glass - India
- Industry Leader:
- Bits in Glass (BIG) has been in business for more than 20 years. In 2021, Bits in Glass joined hands with Crochet Technologies, forming a larger organization under the Bits in Glass brand to better serve customers across the globe.
- Offices across three locations in India: Pune, Hyderabad & Chandigarh.
- Specialized Pega partner since 2017, delivering Pega solutions with deep industry expertise and experience.
- Proudly ranked among the top 30 Pega partners, Bits In Glass has been one of the very few sponsors of the annual PegaWorld event.
- Elite Appian partner since 2008, delivering Appian solutions with deep industry expertise and experience.
- Operating in the United States, Canada, United Kingdom, and India.
- Dedicated global Pega CoE to support our customers and internal dev teams.
- Specializes in Databricks, AI, and cloud-based data engineering to help companies transition from manual to automated workflows.
- Employee Benefits:
- Career Growth: Opportunities for career advancement and professional development.
- Challenging Projects: Work on innovative, cutting-edge projects that make a global impact.
- Global Exposure: Collaborate with international teams and clients to broaden your professional network.
- Flexible Work Arrangements: Support for work-life balance through flexible working conditions.
- Comprehensive Benefits: Competitive compensation packages and comprehensive benefits including health insurance, and paid time off.
- Learning Opportunities: Great opportunity to upskill yourself and work on new technologies like AI-enabled Pega solutions, data engineering, integration, cloud migration, etc.
- Company Culture:
- Collaborative Environment: Emphasizes teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Values diversity and fosters an inclusive environment where all ideas are respected.
- Continuous Learning: Encourages professional development through ongoing learning opportunities and certifications.
- Core Values:
- Integrity: Commitment to ethical practices and transparency in all business dealings.
- Excellence: Strive for the highest standards in everything we do.
- Client-Centric Approach: Focus on delivering the best solutions tailored to client needs.
Role Overview
We are hiring for Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
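Frameworks like STRIDE are typically applied per element type in the data-flow diagram. A toy sketch of that mapping (it follows the classic STRIDE-per-element table from Microsoft's SDL guidance; the element names and the helper are illustrative, not a complete methodology):

```python
# Which STRIDE categories apply to each DFD element type
# (classic STRIDE-per-element table).
STRIDE = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process":         {"Spoofing", "Tampering", "Repudiation",
                        "Information disclosure", "Denial of service",
                        "Elevation of privilege"},
    "data_store":      {"Tampering", "Repudiation",
                        "Information disclosure", "Denial of service"},
    "data_flow":       {"Tampering", "Information disclosure",
                        "Denial of service"},
}

def threats_for(elements):
    """Union of STRIDE categories to examine for a set of DFD element types."""
    out = set()
    for e in elements:
        out |= STRIDE[e]
    return out
```

In a session, each element crossing a trust boundary gets walked through its applicable categories, which is how the "map attack paths" work above becomes systematic rather than ad hoc.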
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
Customer Support & Quality Assurance Executive
Location: Hyderabad (Onsite)
Experience: 1–3 years in QA, tech support, or similar role
Department: Product & Customer Success
The Opportunity
At WINIT, we don’t just build products — we deliver experiences our customers love. As a Customer Support & Quality Assurance Executive, you’ll play a dual role: ensuring our solutions meet the highest quality standards and being the friendly, capable voice that helps customers get the most out of our technology.
This is a perfect role if you enjoy solving problems, improving processes, and making customers feel supported — while also working hands-on with cutting-edge enterprise software. You’ll also have the opportunity to use AI tools like ChatGPT, AI-powered testing assistants, and automation platforms to work smarter, resolve queries faster, and improve efficiency across both QA and support.
What You’ll Do
Quality Assurance (QA)
● Review and analyze product specifications and user requirements to ensure complete understanding.
● Design, execute, and maintain test cases for web and mobile applications.
● Log, track, and manage bugs; work closely with developers to ensure timely fixes.
● Conduct regression, functional, and usability testing to ensure every release is rock-solid.
● Use AI-powered testing tools to generate test scenarios, identify edge cases, and speed up validation.
Customer Support
● Provide timely, professional assistance via email, chat, or phone to global customers.
● Manage and support multiple customers simultaneously, prioritizing effectively.
● Use AI-driven knowledge bases and tools to quickly resolve common queries.
● Document and escalate complex issues to the right teams for resolution.
● Help onboard new customers by guiding them through key features and best practices.
● Collect feedback, identify recurring pain points, and share insights with the product team.
What You Bring
● Any bachelor’s degree — we value skills and attitude over specific majors.
● 1–3 years of experience in QA, customer support, or a similar technical/customer-facing role (SaaS/B2B tech experience preferred).
● Excellent English communication skills (verbal & written).
● Ability to handle multiple customers and tickets simultaneously while staying organized.
● Strong understanding of QA processes and familiarity with bug tracking tools (JIRA, TestRail, etc.).
● Experience with support platforms like Zendesk, Freshdesk, or Intercom is a plus.
● Familiarity with AI productivity tools for testing, ticket triage, and customer communications.
● A proactive, problem-solving mindset and the ability to manage multiple priorities.
Why WINIT
● Be part of a global leader in AI-powered Sales & Distribution solutions.
● Work in a role that blends technical expertise with customer interaction — no two days are the same.
● Learn and apply the latest AI tools to improve your efficiency and impact.
● Collaborate with talented teams in a culture that values innovation and continuous improvement.
● Competitive salary + growth opportunities within QA, Customer Success, or Product teams.
If you’re ready to combine your eye for quality with your passion for helping multiple customers succeed — and do it in an AI-first environment — we’d love to meet you.
Job Title: Android Developer (Vibe Coder)
Location: Hyderabad, India
Employment Type: Full-time
Experience: 2+ years
Department: Technology / Mobile Development
About the Role
We are looking for a passionate Android Developer (Vibe Coder) to build sleek, high-performing, and scalable mobile applications. The ideal candidate should be well-versed in Android development, comfortable using AI-assisted IDEs such as Cursor, Copilot, Claude Code, or similar environments, and eager to deliver clean and efficient code. You’ll collaborate with cross-functional teams to design, develop, and optimize Android applications that provide seamless user experiences.
Key Responsibilities
- Design, develop, and maintain robust Android applications.
- Write clean, reusable, and efficient Java code using AI-based tools (Cursor, Claude Code, GitHub Copilot, etc.).
- Build dynamic and responsive UI layouts using XML, optimized for various screen sizes and resolutions.
- Work with SQLite and Room Database for local data storage and management, including writing optimized SQL queries.
- Integrate SOAP and RESTful APIs for backend connectivity.
- Ensure app performance through efficient multi-threading, memory management, and performance tuning.
- Implement cloud messaging, push notifications, and MVVM architectural patterns.
- Collaborate with designers, backend developers, and QA teams to deliver feature-rich applications.
- Conduct thorough testing, debugging, and code reviews to ensure app quality and stability.
- Stay updated with the latest Android technologies, AI coding tools, and Google guidelines.
Requirements & Qualifications
- Minimum 2 years of hands-on experience in Android application development.
- Strong programming, debugging, and analytical skills.
- Expertise in Android SDK, Android Studio, and Material Design principles.
- Proficiency in XML, SQLite, and Room Database.
- Solid understanding of Object-Oriented Programming (OOP) and Data Structures.
- Experience integrating SOAP and RESTful APIs.
- Familiarity with third-party libraries, tools, and open-source frameworks in the Android ecosystem.
- Exposure to cloud messaging APIs, push notifications, and modern architectures like MVVM.
- Excellent communication and collaboration skills with a proactive mindset.
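The local-storage work above (SQLite/Room with optimized queries) is language-agnostic in principle. As a sketch in Python's built-in sqlite3 module, here is the indexed-lookup pattern a Room DAO would express on Android (the table, column names, and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the app's local DB file
con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, chat_id INT, body TEXT)")
con.execute("CREATE INDEX idx_chat ON messages(chat_id)")  # makes the lookup below an index scan
con.executemany("INSERT INTO messages(chat_id, body) VALUES (?, ?)",
                [(1, "hi"), (2, "yo"), (1, "hello")])
rows = con.execute(
    "SELECT body FROM messages WHERE chat_id = ? ORDER BY id", (1,)).fetchall()
```

The same two decisions — index the filter column, parameterize the query — are what "writing optimized SQL queries" for Room mostly comes down to.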
Job Description
As a graphic designer, you should have
- Strong ideation and visualization skills.
- Sound layout sense.
- Should be able to work with the complete Adobe suite, including Illustrator, Photoshop, and InDesign, as well as Canva.
- A never-ending urge to learn.
- Should be talented and skilled to bring creative ideas to life either digitally or in print.
- Work from a verbal brief or from a rough scribble.
Responsibilities:
- Prepare creatives for social media platforms like Facebook, Instagram, Twitter, LinkedIn, etc.
- Analyze marketing challenges and create designs that meet measurable business goals and requirements.
- Create visual stories that transform information, messages and concepts into high-quality assets that educate, inspire, and accurately depict and promote the brand, keeping in mind the visual language and brand tone.
- Create landing pages and web page designs.
Required skills:
- Experience in a small/mid sized ad agency or a digital team.
- 2–5 years of experience in social media graphic design (Twitter / Instagram / LinkedIn, etc.)
- A solid understanding of composition, typography, colors, iconography & design in general.
- Design communication materials for all internal and external communication.
- Should be passionate about advertising; a degree/diploma in commercial art from an art institute is a plus.
- An out-of-the-box thinker with a positive attitude and good interpersonal skills.
- Well versed in graphic and layout software like Adobe InDesign, Adobe Photoshop, Illustrator, CorelDRAW, and Canva.
- Good aesthetic sense and creative ability.
- Basic knowledge of web design.
Roles & Responsibilities:
- Design creative visuals for social media, websites, email campaigns, banners, brochures, and marketing materials.
- Create engaging graphics aligned with brand guidelines and company standards.
- Assist in developing branding elements such as logos, templates, and presentations.
- Collaborate with the marketing and content team to understand project requirements and deliver creative concepts.
- Edit images, retouch photos, and enhance visual content as needed.
- Work on multiple projects simultaneously while meeting deadlines.
- Revise designs based on feedback from internal stakeholders.
- Stay updated with the latest design trends, tools, and industry best practices.
- Ensure consistency in designs across all digital and print platforms.
Required Skills (Must & Should Have):
- Proficiency in design tools such as Adobe Photoshop, Illustrator, InDesign, Canva, Figma (any relevant tools).
- Basic understanding of typography, color theory, layout, and composition principles.
- Knowledge of social media design formats and digital marketing creatives.
- Basic video editing skills (Premiere Pro / After Effects) – added advantage.
- Strong attention to detail and creativity.
- Ability to manage time effectively and handle multiple tasks.
Communication Skills (Mandatory):
- Must have strong verbal and written communication skills.
- Ability to clearly understand design briefs and explain creative ideas confidently.
- Good coordination skills to work effectively with cross-functional teams.
- Open to feedback and able to communicate revisions professionally.
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3–5 years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Experience: 3–5 years
Notice: Immediate to 15 days
Responsibilities :
Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
Implement data governance and security best practices to ensure compliance and data integrity.
Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications :
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3–5 years of prior experience in data engineering, with a focus on designing and building data pipelines.
Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
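The data-governance and integrity responsibilities above usually start with cheap structural checks before a batch lands in the warehouse. A minimal pure-Python sketch of such a check (the key name and sample batch are illustrative; at scale the same checks would run inside the pipeline itself, e.g. in a Glue or Spark job):

```python
def quality_report(rows, key):
    """Count rows, duplicate key values, and null keys in a batch of records."""
    seen, dupes, nulls = set(), 0, 0
    for r in rows:
        k = r.get(key)
        if k is None:
            nulls += 1
        elif k in seen:
            dupes += 1
        else:
            seen.add(k)
    return {"rows": len(rows), "duplicate_keys": dupes, "null_keys": nulls}

batch = [{"id": 1}, {"id": 1}, {"id": None}, {"id": 2}]
report = quality_report(batch, "id")
```

Failing the load (or routing offenders to a quarantine table) when these counters exceed a threshold is a common first step toward the "proactive monitoring" the role calls for.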
Marketing Intern (B2B Pipeline & Lead Generation)
Company: VRT Management Group
Location: Santosh Nagar, Hyderabad (Onsite)
Type: Full-time Internship
Duration: 3 Months
Perks: Performance-based PPO + LOR, Completion Certificate
Reporting: Work directly with the CEO
About VRT Management Group
VRT Management Group is an entrepreneurial consulting company founded in 2008 in the USA. Our mission is to empower small and medium-scale business leaders across the United States through high-impact programs such as EGA, EGOS, and Entrepreneurial Edge.
With our growing Hyderabad operations, we’re building a high-performing marketing team to scale our impact.
Role Overview
As a Marketing Intern, you’ll help build and run our B2B lead generation engine—supporting campaigns that drive Leads → Calls Booked → Show-ups. This is a hands-on role where you’ll learn real growth marketing by executing LinkedIn and email campaigns, tracking weekly funnel metrics, improving conversions, and supporting content distribution.
If you enjoy marketing + numbers + execution, and want direct exposure to how a CEO scales pipeline, this role is for you.
Key Responsibilities
- Support monthly pipeline targets (Leads → Calls Booked → Show-ups).
- Assist in executing B2B lead generation campaigns (LinkedIn, Email, Webinars, Partnerships).
- Support performance marketing execution (LinkedIn & Email focus; Meta/YouTube/Instagram is a plus).
- Help manage the content distribution calendar across platforms.
- Assist in setting up retargeting + lead nurture flows.
- Track funnel metrics weekly and identify areas to improve conversion rates.
- Draft/assist with marketing copy (ads, emails, landing page sections, LinkedIn posts).
- Coordinate with designers/editors using creative briefs.
- Help prepare weekly KPI and performance reports.
- Support testing of new campaigns and growth experiments.
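The funnel tracked above (Leads → Calls Booked → Show-ups) boils down to two stage-to-stage conversion rates plus their product. A small sketch of the weekly-report arithmetic (the numbers are invented for illustration):

```python
def funnel_rates(leads, calls_booked, show_ups):
    """Stage-to-stage conversion rates for a Leads → Calls → Show-ups funnel."""
    book_rate = calls_booked / leads if leads else 0.0
    show_rate = show_ups / calls_booked if calls_booked else 0.0
    return {"book_rate": book_rate, "show_rate": show_rate,
            "overall": book_rate * show_rate}

# e.g. 200 leads → 40 calls booked → 25 show-ups
week = funnel_rates(leads=200, calls_booked=40, show_ups=25)
```

Watching which of the two rates moves week over week tells you whether to fix outreach (book rate) or reminders/nurture (show rate).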
Required Skills / What We’re Looking For
- Interest or experience in B2B marketing / lead generation / growth (internship/projects acceptable).
- Strong understanding of LinkedIn + Email marketing fundamentals.
- Comfort with tracking metrics (leads, booked calls, show rate, conversions).
- Good communication + execution skills (you finish what you start).
- Basic copywriting ability and willingness to learn fast.
- Analytical mindset (you like checking what’s working and improving it).
Tools Exposure (Good to Have)
- CRM basics (HubSpot/Zoho/any CRM).
- Email automation tools.
- LinkedIn Ads / campaign dashboards.
- Google Sheets/Excel reporting.
What You’ll Gain
- Real B2B growth marketing experience (not just “social media posting”).
- Direct mentorship and weekly feedback while working with the CEO.
- Strong portfolio-worthy work: funnels, campaigns, reports, creatives.
- Completion certificate + performance-based LOR + PPO opportunity.
We are looking for a React Native Software Engineer to join our engineering teams at our office in Hyderabad.
Experience?
- 6 months to 1 year of experience working on React.js. Experience building a product, preferably SaaS.
Technical Skillset?
- Good knowledge of JavaScript. Experience in TypeScript is preferred but not mandatory; we prefer to develop all JS in TypeScript.
- Hands-on experience with the React Native framework.
- Good knowledge of state management patterns and libraries such as Redux, Context API, React Hooks, and other modern state management solutions.
- Must have extensive experience working with JavaScript, HTML, and CSS.
- Knowledge of backend technologies is a plus.
- Understanding of offline handling capabilities using SQLite or similar DBs.
- Experience designing and maintaining mobile CI/CD pipelines using Fastlane, and industry best practices.
- Experience implementing and maintaining logging and crash reporting solutions using BugSnag, Firebase Crashlytics, and Sentry.
- Experience with user analytics frameworks like MixPanel or WebEngage.
Functional Skillset?
- Comfortable “working virtually” with teammates and customers worldwide - we do a lot of Slack, Zoom, and Google Meet.
- Good proficiency in the English language.
- An inclination to get things done based on clear, aggressive goal setting and improving on productivity metrics.
About Clink :
Clink is revolutionizing how restaurants build lasting customer relationships. We're a customer retention platform serving 100+ restaurants, helping them break free from expensive third-party aggregators and own their customer data. Our Instagram-integrated loyalty system, QR-based ordering, and data-driven insights are transforming the restaurant industry.
The Role :
We're looking for a passionate frontend developer intern who wants to build products that real businesses depend on daily. You'll work on both our customer-facing mobile app (React Native) and restaurant dashboard (Next.js), shipping features that directly impact how thousands of diners and hundreds of restaurants interact.
This isn't a shadowing role—you'll own real features from day one, working closely with our founding team.
What You'll Work On :
- Build customer mobile app features in React Native
- Develop restaurant dashboard components in Next.js
- Optimize performance and implement state management using Redux/Zustand
- Translate Figma designs into responsive, pixel-perfect interfaces
What We're Looking For :
Must Have :
- Strong JavaScript fundamentals
- Hands-on experience with React
- Understanding of component lifecycle, hooks, and state management
- Git basics and collaborative development workflows
- Ability to translate designs into clean, reusable components
Nice to Have :
- Experience with React Native or Next.js
- Knowledge of Redux, Zustand, or other state management libraries
- Understanding of REST APIs and TypeScript
- Contributions to open source projects
Mindset :
- Ownership mentality: you don't wait to be told what to do.
- Comfortable with ambiguity and rapid iteration.
- User-focused thinking—you care about the end experience
- Quick learner who can pick up new tools and patterns
Why Join Clink :
- Real Impact: Your code will be used by real restaurants and customers from day one. See your work drive actual business metrics.
- Learn Fast: Work directly with the founding team. Get mentorship on architecture decisions, not just coding tasks.
- Own Your Growth: Opportunity to transition to full-time role and grow with the company as we scale.
- Modern Stack: Work with the latest tools and best practices. We build custom solutions in-house.
Our Tech Stack :
Mobile: React Native, Redux/Zustand, React Navigation
Web: Next.js, React, Javascript
Backend: Ruby on Rails, PostgreSQL
Internship Details :
Duration: 6-12 months (with potential for full-time conversion)
Stipend: ₹25,000/month
Commitment: Full-time (40 hours/week)
Design, develop, and maintain backend services using Java
Build scalable RESTful APIs and microservices
Work with Spring / Spring Boot frameworks
Implement business logic, data access layers, and integrations
Optimize application performance, scalability, and security
Write clean, testable, and maintainable code
Perform unit and integration testing
Participate in code reviews and technical discussions
Collaborate with frontend, DevOps, and QA teams
Troubleshoot and resolve production issues
Job Summary
We are seeking an experienced Java Drools Developer to design, develop, and maintain rule-based applications using Drools. The ideal candidate will have strong backend development skills and hands-on experience with business rule management systems in enterprise environments.
Key Responsibilities
- Design, develop, and maintain business rules using Drools (BRMS)
- Create and manage Drools Rule Files (DRL), decision tables, and rule flows
- Integrate Drools with Java / Spring Boot applications
- Optimize rule execution, performance, and scalability
- Develop and consume RESTful APIs
- Collaborate with business analysts to translate requirements into rules
- Participate in code reviews and ensure best practices
- Support testing, debugging, and production issues
Required Skills
- Strong hands-on experience in Java
- Solid experience with Drools Rule Engine / BRMS
- Experience with Spring / Spring Boot and microservices architecture
- Knowledge of REST APIs and backend integrations
- Understanding of rule lifecycle management and versioning
- Good problem-solving and analytical skills
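Drools rules themselves are written in DRL on the JVM; purely as an illustration of the when/then pattern a BRMS evaluates, here is a hypothetical Python sketch (rule names and fact fields are invented for the example):

```python
# Hypothetical sketch of the rule pattern Drools (DRL) expresses:
# each rule has a condition ("when") and an action ("then"), and the
# engine fires every rule whose condition matches the working memory.
rules = [
    {"name": "HighValueOrder",
     "when": lambda f: f["order_total"] > 1000,
     "then": lambda f: f.update(discount=0.10)},
    {"name": "LoyalCustomer",
     "when": lambda f: f["years_active"] >= 5,
     "then": lambda f: f.update(priority_support=True)},
]


def fire_all_rules(facts: dict) -> list:
    """Single-pass evaluation: fire each matching rule once and
    return the names of the rules that fired."""
    fired = []
    for rule in rules:
        if rule["when"](facts):
            rule["then"](facts)   # mutate working memory
            fired.append(rule["name"])
    return fired
```

A real Drools engine adds agenda management, salience, and re-evaluation after working-memory changes; this sketch only shows the core condition/action separation that DRL files and decision tables encode.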
We are open to candidates with S/4HANA or ECC Payroll experience, including migration from ECC to S/4HANA.
This is an implementation role.
Candidates should have completed at least two end-to-end Payroll implementation life cycles, have hands-on configuration experience, understand US tax, and have worked on US Payroll.
Experience in design and requirement gathering is required; on-premise payroll experience is also acceptable.
We will provide the payroll requirements; the candidate will configure the system, perform unit testing, and handle support activities.
Key job responsibilities:
Responsible for successful end-to-end delivery and payroll configuration in large-scale global projects. The role requires an in-depth understanding of processes across various HCM modules and the ability to define strategies and lead small teams. The practitioner must be able to analyze, map, and transform HR business processes; design target operating models; configure and unit-test SAP/SuccessFactors modules, focusing on US Payroll and integrations; develop and review functional specifications, configuration documentation, and test scenarios; and ensure high client satisfaction through quality, communication, and timely delivery.
1) 4+ years of SAP US payroll experience.
2) Experience of US Payroll and HR processes in a technical capacity – PA/OM, Payroll
3) Should have worked on at least two implementation projects
4) Should have experience of working in large/disparate teams
5) Understand Business Processes
6) Identify, document as-is processes and design & recommend to-be processes
7) Map Business Processes to SAP/SuccessFactors
8) Analyze & identify system limitations and requirement gaps
9) Configure & unit test in relevant SAP/SuccessFactors modules
10) Write/ review Functional Specification / Configuration Workbook documents
11) Develop test scenarios, scripts, cases, data
12) Lead and/or support cutover preparation, go-live activities and hypercare support
13) Should have troubleshooting knowledge of payroll schemas and PCRs, and be able to identify and fix errors
14) Strong payroll skills with good understanding of benefits, time management & OM/PA
15) Interfaces testing with mapping and validation
16) Good knowledge on country specific tax forms and year end activities
17) Understand Business Processes. Map Business Processes to SAP/SuccessFactors
18) Identify, document as-is processes and design & recommend to-be processes
19) Establish open, fluid, and timely communications with all interested parties, stakeholders, and project teams
20) Play lead roles preferably on multiple projects. Understands project statuses, understands project lifecycle, manage mitigation of risks and issues, set up change control processes and escalation points
21) Work closely with customer, technical and functional teams to deliver project scope on time, on budget and with high quality deliverables
22) Establish open, fluid, and timely communications with all interested parties, stakeholders, and project teams
23) Use prior experience to develop accurate work estimates, project budgets and timelines
24) Ensure client satisfaction targets are achieved
25) Be a key contributor in the team, drive desired professional behaviors and motivate the team to the highest levels of performance and ensure that team resources have the best conditions to perform successfully
26) Exercise required controls and propose improvements as required, including quality plans, issues and action logs, risk management plans and change control plans, for all aspects of the assignment
27) Follow SLAs and projects/CRs plan effectively
28) Drive Innovation/Process improvements
29) Responsible for Statement of Work preparation, purchase orders, price quotes and other contractual materials as part of scheduling and deploying resources
30) Ensure business and assignment risks are identified, monitored and managed to achieve minimal disruption to the project delivery and success
31) Act as an SAP HR and SuccessFactors subject matter expert by providing best-practice guidance on HCM business processes
32) Responsible for adhering to the current project management methodology while influencing best practices, process improvements, and procedures
33) Mentor associates and foster a learning and growth environment
34) Retrieving Data from Employee Central
35) Update configuration workbooks when necessary.
36) Configure & unit test in relevant SAP/SuccessFactors modules
37) Write/ review Functional Specification / Configuration Workbook documents
38) Develop test scenarios, scripts, cases, data
39) Participate in internal and client knowledge transfer sessions.
40) Proactively review data and deliverables for issues and escalates as appropriate.
41) Liaise with 3rd party vendors for product/integration issues.
42) Good to have: finance and third-party integration knowledge, including integration with ADP
Experience:
- Minimum 4 years in SAP HCM/SuccessFactors
- A valid ECP certification is optional
Key Skills:
- Minimum 1 year of experience with successful implementations of SuccessFactors or HCM solutions
- SAP HCM US Payroll, plus Canada and other global country versions
- Minimum of 2 HCM payroll implementations
- Excellent verbal and written communication skills
Qualification: B.Tech/B.E/MBA/MCA required
The Role:
We at CAW Studios are looking for a passionate recruitment coordinator who knows how to identify resource needs and then source, screen, and acquire candidates to be part of our growth journey. Regardless of the outcome, you'll always leave candidates with an excellent and professional experience.
What would you do?
- Support the recruitment team in coordinating with hiring managers, interview panels, and internal teams.
- Assist with candidate sourcing, screening, and interview scheduling.
- Help manage communication with candidates, vendors, and external partners.
- Maintain accurate records and data in the ATS and tracking sheets with high attention to detail.
- Support job postings, hiring drives, and recruitment activities.
- Ensure a smooth and positive candidate experience.
- Learn end-to-end talent acquisition and HR processes on the job.
Who Should Apply?
- Currently pursuing or recently completed a degree in HR, Business, or a related field.
- Strong communication and coordination skills.
- High attention to detail and good follow-up habits.
- Willingness and curiosity to learn and grow in recruitment and HR.
- Comfortable using basic tools like MS Office / GSuite.
- Prior internship or exposure to recruitment is a plus, but not mandatory.
Work Location: Hyderabad
Job Summary:
We are seeking a detail-oriented and organized Accounts Associate to join our finance team. The candidate will support day-to-day accounting operations, ensure accurate financial recordkeeping, assist in reconciliation processes, and help maintain compliance with financial regulations and internal controls.
Key Responsibilities:
- Maintain accurate and up-to-date financial records.
- Assist with accounts payable and receivable functions.
- Prepare, verify, and process invoices and payments.
- Reconcile bank statements and other financial records.
- Assist in month-end and year-end closing processes.
- Maintain supporting documentation for all transactions.
- Support audits by providing necessary documents and explanations.
- Help with payroll processing and statutory compliance (e.g., GST, TDS, etc.).
- Use accounting software Zoho Books for data entry and report generation.
- Perform other finance-related duties as assigned by the supervisor.
Requirements:
- Bachelor’s degree in Accounting, Finance, Commerce, or related field.
- 2–4 years of experience in a similar role preferred.
- Working knowledge of accounting principles and practices.
- Proficiency in Microsoft Excel and accounting software.
- Strong attention to detail and accuracy.
- Good organizational and time-management skills.
- Ability to work independently and collaboratively.
Preferred Qualifications (Optional):
- Experience with Zoho Books
- Knowledge of local taxation laws and compliance requirements.
- Pursuing CA/ICWA or other relevant certifications is a plus.
Role
We are looking for React Native and ReactJS engineers with 4–6 years of experience who can join us at our office in Hyderabad.
What would you do?
- Own development of mobile and web features from implementation to release.
- Build complex UI flows, state management, and performance optimizations.
- Ensure stability, offline support, logging, analytics, and release readiness.
- Collaborate with product, backend, and QA teams to deliver quality features.
- Drive code quality through reviews and mentor junior engineers.
Technical Skillset
- Strong JavaScript; TypeScript preferred.
- Hands-on with React Native, React.js, Redux, and Hooks.
- Solid HTML, CSS, and responsive UI fundamentals.
- Offline handling using SQLite or similar.
- Mobile CI/CD: AppCenter / Bitrise.
- Logging & crash reporting: BugSnag / Crashlytics.
- Analytics: Mixpanel / WebEngage.
- Backend API understanding is a plus.
About the Role
We are looking for a Staff Engineer - MERN Stack to join one of our engineering teams at our office in Hyderabad. The ideal candidate is a strong hands-on IC with the ability to own systems end-to-end and lead technical delivery.
What would you do?
- Own end-to-end delivery from requirements and LLDs to production.
- Lead technical design across frontend, backend, and cloud.
- Ensure scalability, performance, security, and production readiness.
- Drive architectural decisions, reviews, and execution discipline.
- Mentor engineers and act as the primary technical POC.
Who Should Apply?
- 5+ years of full-stack experience with Node.js, React, JavaScript/TypeScript.
- Strong experience with MongoDB and API-first system design.
- Hands-on exposure to AWS and DevOps practices (CI/CD, Docker).
- Proven ownership of design, delivery, and team execution.
- Strong communication and leadership skills.
Nice to Have
- Redis, Elasticsearch, Kafka/RabbitMQ
- Next.js (SSR/SSG)
- Cloud platforms (AWS/GCP/Azure) and monitoring tools
Role Overview
We are looking for a skilled Generative AI Developer to design, develop, and deploy AI-powered applications using Large Language Models (LLMs) and multimodal AI systems. The role involves building intelligent automation, chatbots, copilots, and content generation solutions aligned with business use cases.
Key Responsibilities
- Design and develop applications using Generative AI models (LLMs, diffusion models, etc.)
- Build AI chatbots, virtual assistants, and knowledge copilots
- Integrate LLM APIs (OpenAI, Anthropic, Google, etc.) into web/mobile apps
- Develop prompt engineering strategies for optimized outputs
- Implement Retrieval-Augmented Generation (RAG) pipelines
- Fine-tune and customize foundation models where required
- Work with vector databases for semantic search
- Collaborate with product, data, and engineering teams
- Ensure AI solutions are scalable, secure, and cost-efficient
- Monitor model performance, hallucinations, and output quality
Required Skills
- Strong programming in Python
- Experience with LLM frameworks (LangChain, LlamaIndex, Haystack)
- Hands-on with OpenAI / Gemini / Claude APIs
- Knowledge of Prompt Engineering
- Experience with Vector Databases (Pinecone, Weaviate, FAISS, Chroma)
- Understanding of RAG architectures
- Familiarity with REST APIs and microservices
- Knowledge of Docker / Cloud (AWS, Azure, GCP)
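A RAG pipeline, as mentioned above, retrieves the most relevant documents by embedding similarity and injects them into the LLM prompt. A minimal sketch in pure Python with toy 3-dimensional embeddings; a production pipeline would use a real embedding model and a vector database such as FAISS or Chroma, and the document texts here are invented:

```python
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def retrieve(query_vec, index, k=2):
    """Return the top-k documents most similar to the query embedding.
    `index` maps document text -> embedding (toy vectors here)."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]


def build_prompt(question, context_docs):
    # The retrieved passages are prepended so the LLM answers from them
    # rather than from its parametric memory (the "augmented" step).
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The final prompt would then go to an LLM API (OpenAI, Gemini, Claude, etc.); frameworks like LangChain or LlamaIndex wrap exactly this retrieve-then-prompt loop.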
About our company:
We are an mSFA technology company that has evolved from the industry expertise we have gained over 25+ years. With over 600 success stories in mobility, digitization, and consultation, we are today the leaders in mSFA, with 75+ enterprises trusting WINIT mSFA across the globe.
Our state-of-the-art support center provides 24x7 support to our customers worldwide. We continuously strive to help organizations improve their efficiency, effectiveness, market cap, brand recognition, distribution and logistics, regulatory and planogram compliance, and many more through our cutting-edge WINIT mSFA application.
We are committed to enabling our customers to be autonomous with our continuous R&D and improvement in WINIT mSFA. Our application provides customers with machine learning capability so that they can innovate, attain sustainable growth, and become more resilient.
At WINIT, we value diversity, personal and professional growth, and celebrate our global team of passionate individuals who are continuously innovating our technology to help companies tackle real-world problems head-on.
We are looking for a Staff Engineer - PHP to join one of our engineering teams at our office in Hyderabad.
What would you do?
- Design, build, and maintain backend systems and APIs from requirements to production.
- Own feature development, bug fixes, and performance optimizations.
- Ensure code quality, security, testing, and production readiness.
- Collaborate with frontend, product, and QA teams for smooth delivery.
- Diagnose and resolve production issues and drive long-term fixes.
- Contribute to technical discussions and continuously improve engineering practices.
Who Should Apply?
- 4–6 years of hands-on experience in backend development using PHP.
- Strong proficiency with Laravel or similar PHP frameworks, following OOP, MVC, and design patterns.
- Solid experience in RESTful API development and third-party integrations.
- Strong understanding of SQL databases (MySQL/PostgreSQL); NoSQL exposure is a plus.
- Comfortable with Git-based workflows and collaborative development.
- Working knowledge of HTML, CSS, and JavaScript fundamentals.
- Experience with performance optimization, security best practices, and debugging.
- Nice to have: exposure to Docker, CI/CD pipelines, cloud platforms, and automated testing.
We are looking for a Staff Engineer - Python to join one of our engineering teams at our office in Hyderabad.
What would you do?
- Own end-to-end delivery of backend projects from requirements and LLDs to production.
- Lead technical design and execution, ensuring scalability, reliability, and code quality.
- Build and integrate chatbot and AI-driven workflows with third-party systems.
- Diagnose and resolve complex performance and production issues.
- Drive testing, documentation, and engineering best practices.
- Mentor engineers and act as the primary technical point of contact for the project/client.
Who Should Apply?
- 5+ years of hands-on experience building backend systems in Python.
- Proficiency in building web-based applications using Django or similar frameworks.
- In-depth knowledge of the Python stack and API-first system design.
- Experience working with SQL and NoSQL databases including PostgreSQL/MySQL, MongoDB, ElasticSearch, or key-value stores.
- Strong experience owning design, delivery, and technical decision-making.
- Proven ability to lead and mentor engineers through reviews and execution.
- Clear communicator with a high-ownership, delivery-focused mindset.
Nice to Have
- Experience contributing to system-level design discussions.
- Prior exposure to AI/LLM-based systems or conversational platforms.
- Experience working directly with clients or external stakeholders.
- Background in fast-paced product or service environments.
About the Role
Have you ever wanted to shape not just features, but frontend architecture and engineering culture across multiple products? As a Staff Frontend Engineer, you will play a critical role in defining UI architecture, setting engineering standards, and mentoring senior engineers while still being hands-on with complex frontend systems.
You will work closely with Product, Design, Backend, and DevOps teams to build scalable, high-performance SaaS products from the ground up—often version 1—across CAW’s growing product portfolio.
This role is ideal for engineers who combine deep React expertise, systems thinking, and technical leadership.
📍 Location: Hyderabad (Office) or Remote
What Will You Do?
Technical & Architecture Ownership
- Own frontend architecture and design decisions across one or more products.
- Define and evolve scalable UI patterns, component libraries, and frontend frameworks.
- Lead initiatives for performance optimization, accessibility, and cross-browser consistency.
- Drive adoption of best practices in TypeScript, state management, testing, and observability.
- Partner with backend teams on API contracts, data flow, and system boundaries.
Product & Execution
- Translate ambiguous product requirements into robust technical solutions.
- Build pixel-perfect, buttery-smooth user interfaces for complex workflows.
- Ensure frontend systems are reliable, testable, and production-ready.
- Diagnose and resolve hard performance, scalability, and reliability issues.
- Take end-to-end ownership of critical frontend modules.
Leadership & Mentorship
- Mentor Senior and Mid-level Frontend Engineers.
- Conduct design reviews, code reviews, and technical deep dives.
- Set quality benchmarks for frontend engineering across teams.
- Influence roadmap discussions with a strong engineering perspective.
- Be a role model for ownership, execution discipline, and technical excellence.
Experience Required
- 5-8 years of overall frontend engineering experience.
- 5+ years of deep React.js experience, building and scaling production SaaS products.
- Proven experience owning frontend systems at product or platform level.
- Experience working in fast-paced, product-driven environments.
Technical Skillset
Must Have
- Expert-level knowledge of JavaScript and TypeScript.
- Deep expertise in React.js, including Hooks and modern patterns.
- Strong experience with state management (Redux, Context, or equivalent).
- Advanced proficiency in HTML, CSS, responsive design, and browser internals.
- Hands-on experience with performance profiling and optimization.
- Experience with frontend testing strategies (unit, integration, E2E).
- Strong understanding of web storage, caching, and browser lifecycle.
- Experience with logging and monitoring tools (e.g., Sentry).
- Experience with analytics frameworks (Mixpanel, WebEngage, or similar).
Good to Have
- Experience designing or maintaining design systems / component libraries.
- Familiarity with backend technologies and API design.
- Exposure to micro-frontend or large-scale frontend architectures.
- Knowledge of CI/CD pipelines and frontend DevOps practices.
Functional & Behavioral Skills
- Strong written and verbal English communication skills.
- Comfortable working with distributed global teams (Slack, Zoom, Google Meet).
- Ability to drive clarity in ambiguous problem spaces.
- Strong bias for action, ownership, and delivery.
- Data-driven mindset with focus on productivity and quality metrics.
About CAW Studios
CAW Studios is a Product Engineering Company of 70+ engineers building and scaling SaaS products end-to-end.
Products we’ve built and run:
- Interakt
- CashFlo
- KaiPulse
- FastBar
We also power engineering for:
- Haptik
- EmailAnalytics
- GrowthZone
- Reliance General Insurance
- KWE Logistics
We are obsessed with automation, DevOps, OOP, and SOLID principles. We don't believe in rigid tech stacks; we believe in solving real problems well.
🌐 Website: https://www.caw.tech || https://www.knacklabs.ai/
Key Responsibilities
- Design, develop, and maintain advanced database solutions, procedures, and modules for the ERP system using SQL Server (T-SQL, schema design, indexing, query optimization)
- Develop, enhance, and maintain backend features and services using C# (.NET Core) with a focus on robust data access and business logic
- Analyze and optimize database performance, scalability, and security across a high-volume, mission-critical environment
- Collaborate with cross-functional teams, including QA, DevOps, Product Management, and Support, to deliver reliable and high-performing solutions
- Lead and participate in code and schema reviews, database architecture discussions, and technical planning sessions
- Contribute to the improvement of CI/CD pipelines and automated deployment processes for database and backend code
- Troubleshoot and resolve complex data and backend issues across the stack
- Ensure code and database quality, maintainability, and compliance with best practices
- Stay current with emerging technologies and recommend improvements to maintain a cutting-edge platform
Qualifications
- Curiosity, passion, teamwork, and initiative
- Extensive experience with SQL Server (T-SQL, query optimization, performance tuning, schema design)
- Strong proficiency in C# and .NET Core for enterprise application development and integration with complex data models
- Experience with Azure cloud services (e.g., Azure SQL, App Services, Storage)
- Ability to leverage agentic AI as a development support tool, with a critical thinking approach
- Solid understanding of agile methodologies, DevOps, and CI/CD practices
- Ability to work independently and collaboratively in a fast-paced, distributed team environment
- Excellent problem-solving, analytical, and communication skills
- Master's degree in Computer Science or equivalent; 5+ years of relevant work experience
- Experience with ERP systems or other complex business applications is a strong plus
Role Summary
The Performance Architect is responsible for driving performance, scalability, and reliability improvements across the Vantagepoint enterprise platform. This role focuses on identifying, analyzing, and resolving complex performance issues spanning databases, reporting, APIs, and application architecture, with a strong emphasis on real-world impact in high-concurrency, enterprise environments.
Position Responsibilities
Mandatory skill set: SQL Server, performance tuning, performance optimization, query tuning.
Performance Analysis & Optimization
- Analyze and resolve high-impact performance issues affecting critical client workflows
- Optimize SQL queries, execution plans, indexes, and transaction scopes
- Identify and mitigate deadlocks, blocking, and contention hotspots
- Improve performance of reports, dashboards, and batch processing workflows
- Tune API and application-layer performance
- Validate improvements through benchmarking and regression testing
Architecture & Scalability
- Design solutions balancing short-term remediation and long-term scalability
- Evaluate tradeoffs across multiple active release versions
- Address performance-related technical debt
- Support schema evolution and data access optimization
Collaboration & Execution
- Work closely with Development, QE, Automation, DBA, and Architecture teams
- Participate in design reviews and spike investigations
- Support testing cycles and defect resolution
- Document performance findings and best practices
AI-Specific Responsibilities
- Use AI-assisted tools to accelerate performance analysis and root cause investigation
- Analyze execution plans, deadlock graphs, and query behavior using AI
- Compare performance characteristics across releases and environments
- Identify recurring performance antipatterns
- Validate AI-generated insights through empirical testing
Qualifications
Technical Skills & Expertise
Database & SQL Server (Required)
- Advanced SQL Server performance tuning (query optimization, execution plans, index design)
- Deadlock diagnosis and resolution using Extended Events and deadlock graphs
- Locking and blocking analysis (wait stats, lock escalation, transaction optimization)
- Stored procedure optimization (parameter sniffing mitigation, plan cache management)
- Index strategies, including covering, filtered indexes, and fragmentation management
- Statistics and cardinality understanding
Performance Analysis Tools (Required)
- SQL Server Profiler and Extended Events
- Execution plan analysis (SSMS, Azure Data Studio)
- SET STATISTICS IO/TIME analysis
- Query Store and performance insights
- Wait statistics and blocking chain analysis
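The plan-reading skills listed above transfer across engines. As a self-contained illustration, using SQLite's EXPLAIN QUERY PLAN as a stand-in for a SQL Server execution plan and an invented `orders` table, the same query can be shown going from a table scan to an index search once a suitable index exists:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
db.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
               [(i % 100, i * 1.5) for i in range(1000)])


def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN is SQLite's analogue of a SQL Server execution
    # plan; row[3] holds the human-readable detail text.
    return " ".join(row[3] for row in db.execute("EXPLAIN QUERY PLAN " + sql))


query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # without an index: a full table scan
db.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # with the index: a search via idx_orders_customer
```

In SQL Server the equivalent check is reading the actual execution plan in SSMS (or Query Store) and confirming an Index Seek replaced a Clustered Index Scan; the diagnostic habit is identical.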
Enterprise Application Architecture (Required)
- Multitier application performance patterns (.NET, C#, VB.NET, JavaScript, TypeScript)
- API performance optimization and parameter management
- Caching strategies (plan cache, data cache, predictive paging)
- Large-scale reporting architectures (SSRS, custom frameworks)
- Database schema evolution and backward compatibility
Performance Methodologies (Required)
- Performance testing and benchmarking
- Load testing and scalability analysis
- Root cause analysis for performance regressions
- A/B testing for optimization validation
- Performance monitoring and alerting
Job Title: Marketing Manager – US Market (Hybrid, Night Shift)
Location: Hyderabad, India
Shift: US Hours (3:00 PM – 1:00 AM IST)
Experience: 3–7 years
Mode: Hybrid (Office + Work from Home)
Job Overview
We are seeking an experienced Marketing Manager to lead demand generation and performance marketing initiatives focused on the US market. The ideal candidate will have strong expertise in developing and executing Go-To-Market (GTM) strategies, managing multi-channel campaigns, analytics, and delivering measurable ROI.
Key Responsibilities
· Develop and implement GTM strategies tailored for US clients across various channels.
· Plan, execute, and optimize campaigns across SEO, SEM, paid media, social media (Meta, LinkedIn, Instagram, YouTube), email, and retargeting.
· Experience in setting up and managing tracking tools such as Google Tag Manager and Microsoft Clarity to monitor conversions and campaign performance.
· Continuously optimize campaigns to maximize ROAS (Return on Ad Spend) and reduce CAC (Customer Acquisition Cost).
· Drive conversion rate optimization (CRO) through A/B testing, landing page improvements, and funnel analysis.
· Prepare and present detailed performance reports and insights for management review.
· Must have experience in designing and maintaining websites.
· Collaborate closely with Founders and Directors to ensure lead attribution accuracy and pipeline alignment.
· Knowledge of HIPAA-compliant marketing practices for healthcare accounts is a plus.
Required Skills & Qualifications
· 5–12 years of digital marketing experience managing US clients, preferably in demand generation and performance marketing.
· Bachelor’s degree in any discipline.
· Proven expertise in paid social campaigns, PPC, SEO, and paid media strategies.
· Exposure to website design.
· Strong analytical, reporting, and communication skills.
· Willingness to work in US shift hours (6 PM – 3 AM IST).
· Professional communication and collaboration skills across organizational levels.
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-7 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Location : Bangalore, Hyderabad, Mumbai, and Gurgaon
Responsibilities:
· Designing, building, and operating scalable on-premises or cloud data architecture
· Analyzing business requirements and translating them into technical specifications
· Design, develop, and implement data engineering solutions using DBT on cloud platforms (Snowflake, Databricks)
· Design, develop, and maintain scalable data pipelines and ETL processes
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness
· Implement data governance and security best practices to ensure compliance and data integrity
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Requirements
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines
· Experience working with DBT to implement end-to-end data engineering processes on Snowflake and Databricks
· Comprehensive understanding of the Snowflake and Databricks ecosystem
· Strong programming skills in languages like SQL and Python or PySpark.
· Experience with data modeling, ETL processes, and data warehousing concepts.
· Familiarity with implementing CI/CD processes or other orchestration tools is a plus.
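The pipeline work described above follows the classic extract-transform-load shape. A minimal pure-Python sketch; the field names are invented for the example, and a real implementation would run in DBT on Snowflake or Databricks as the posting notes:

```python
def extract(rows):
    """Extract: yield raw records (stand-in for reading a source table)."""
    yield from rows


def transform(records):
    """Transform: clean types, drop invalid rows, derive a column."""
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # data-quality filter: skip unparseable rows
        yield {"customer": r["customer"].strip().lower(),
               "amount": amount,
               "is_large": amount >= 100.0}


def load(records, target: list) -> int:
    """Load: append into the target store (a list here, a warehouse in
    practice) and return the number of rows written."""
    n = 0
    for r in records:
        target.append(r)
        n += 1
    return n
```

Because each stage is a generator feeding the next, rows stream through without being materialized, which is the same staging-model idea DBT expresses as chained SELECTs.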
Relationship Manager – Insurance Sales
- Identifying, recruiting, and onboarding new agents (POSPs), ensuring they meet company standards and are properly trained.
- Achieving sales targets.
- Providing ongoing training and development opportunities for agents, including product knowledge, sales techniques, and professional skills.
- Guiding agents in achieving sales targets, developing sales strategies, and identifying new business opportunities.
- Building and maintaining strong relationships with agents, providing support and addressing their needs.
- Tracking agent performance and identifying areas for improvement.
- Developing and implementing strategies to grow the agency channel, expand market share, and increase profitability.
- Serving as the point of contact between the agency and the company, coordinating with various departments to support the agent network.
Required skills
- Proven experience as an insurance agent, or relevant experience in the insurance broking industry
- Familiarity with all types of insurance plans (automobile, fire, life, property, medical, etc.)
- Basic computer knowledge and statistical analysis skills
- 5–10 years of experience in backend or full-stack development (Java, C#, Python, or Node.js preferred)
- Design, develop, and deploy full-stack web applications (front-end, back-end, APIs, and databases)
- Build responsive, user-friendly UIs using modern JavaScript frameworks (React, Vue, or Angular)
- Develop robust backend services and RESTful or GraphQL APIs using Node.js, Python, Java, or similar technologies
- Manage and optimize databases (SQL and NoSQL)
- Collaborate with UX/UI designers, product managers, and QA engineers to refine requirements and deliver solutions
- Implement CI/CD pipelines and support cloud deployments (AWS, Azure, or GCP)
- Write clean, testable, and maintainable code with appropriate documentation
- Monitor performance, identify bottlenecks, and troubleshoot production issues
- Stay up to date with emerging technologies and recommend improvements to tools, processes, and architecture
- Proficiency in front-end technologies: HTML5, CSS3, JavaScript/TypeScript, and frameworks like React, Vue.js, or Angular
- Strong experience with server-side programming (Node.js, Python/Django, Java/Spring Boot, or .NET)
- Experience with databases: PostgreSQL, MySQL, MongoDB, or similar
- Familiarity with API design, microservices architecture, and REST/GraphQL best practices
- Working knowledge of version control (Git/GitHub) and DevOps pipelines
- Understanding of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes)
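One concrete instance of the API-design skills listed above is consistent response shaping, such as pagination with stable metadata. The sketch below is a hypothetical, framework-free illustration (the function name and field layout are assumptions, not any specific library's API).

```python
# Sketch of a paginated REST response helper: clamp the requested page
# into range and return the slice plus standard metadata. Hypothetical
# example; not tied to any particular web framework.
import math

def paginate(items, page=1, per_page=10):
    """Return a JSON-serializable page of `items` with pagination metadata."""
    total = len(items)
    pages = max(1, math.ceil(total / per_page))
    page = min(max(1, page), pages)          # clamp out-of-range page numbers
    start = (page - 1) * per_page
    return {
        "data": items[start:start + per_page],
        "meta": {"page": page, "per_page": per_page,
                 "total": total, "pages": pages},
    }
```

Keeping the metadata shape identical across endpoints is what lets front-end frameworks like React or Angular build one reusable pagination component instead of per-endpoint glue code.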
We’re looking for a Business Development Manager who understands how mid-market and enterprise sales really work. You’ll be expected to take ownership, from identifying the right accounts to closing deals and ensuring a smooth handover to implementation.
Core Expectations (Non-Negotiable)
- Strong experience selling to 500+ employee organizations
- Ability to independently generate and close mid-market and enterprise opportunities
- Proven success in handling longer sales cycles, multiple stakeholders, and higher ticket sizes
- Understanding of HR, payroll, statutory compliance, and buying behavior
Key Responsibilities
- Enterprise & Mid-Market Revenue Ownership
- Own end-to-end sales responsibility for mid-market and enterprise accounts.
- Drive revenue through new logo acquisition, expansion opportunities, and strategic upsells.
- Consistently achieve and exceed monthly, quarterly, and annual revenue targets.
Strategic Lead Generation & Account Mining
- Build and manage a self-sourced mid-market and enterprise pipeline through LinkedIn, CXO connects, referrals, partnerships, events, and outbound campaigns.
- Identify large accounts, map stakeholders, and execute account-based selling (ABS) strategies.
- Work closely with marketing but take direct ownership of pipeline creation.
Sales Governance & Forecasting
- Maintain high CRM (HubSpot) hygiene with accurate pipeline tracking, forecasts, and deal updates.
- Provide leadership with revenue forecasts, deal risks, and market insights.
- Actively contribute to sales strategy, pricing feedback, and go-to-market improvements.
Experience
- 3–4 years of B2B SaaS sales experience, with strong exposure to mid-market and enterprise clients.
- 2+ years in HRMS / Payroll / HCM SaaS is highly preferred.
- Demonstrated track record of closing high-value, multi-month SaaS deals.
Education
- Graduation is mandatory.
- MBA (Sales / Marketing / HR) preferred.
What You’ll Do:
We’re looking for a Full Stack Software Engineer to join us early, own critical systems, and help shape both the product and the engineering culture from day one.
Responsibilities will include but are not limited to:
- Own end-to-end product development, from user experience to backend integration
- Build and scale a modern SPA using React, TypeScript, Vite, and Tailwind
- Design intuitive, high-trust UIs for finance workflows (payments, approvals, dashboards)
- Collaborate closely with backend systems written in Go via well-designed APIs
- Translate product requirements into clean, maintainable components and state models
- Optimize frontend performance, bundle size, and load times for complex dashboards
- Work directly with founders and design partners to iterate rapidly on product direction
- Establish frontend best practices around architecture, testing, and developer experience
- Contribute across the stack when needed, including API design and data modeling discussions.
What You’ll Need:
- Strong experience with Go in production systems
- Solid backend fundamentals: APIs, distributed systems, concurrency, and data modeling
- Hands-on experience with AWS, including deploying and operating production services
- Deep familiarity with Postgres, including schema design, indexing, and performance considerations
- Comfort working in early-stage environments with ambiguity, ownership, and rapid iteration
- Product mindset — you care about why you’re building something, not just how
- Strong problem-solving skills and the ability to make pragmatic tradeoffs
Set Yourself Apart With:
- Experience with Tailwind or other utility-first CSS frameworks
- Familiarity with design systems and component libraries
- Experience building fintech or enterprise SaaS UIs
- Exposure to AI-powered UX (LLM-driven workflows, assistants, or automation)
- Prior experience as an early engineer or founder, helping build a product engineering culture from the ground up
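High-trust payment flows like the ones this role mentions typically hinge on idempotency: a retried request must not charge twice. The sketch below is a minimal, store-agnostic illustration of that pattern (the class, method, and key names are hypothetical, and a real service would persist keys rather than hold them in memory).

```python
# Sketch of idempotent request handling for a payments API: repeated
# submissions with the same idempotency key replay the original result
# instead of creating a second charge. In-memory store; all names are
# hypothetical.
class PaymentService:
    def __init__(self):
        self._results = {}   # idempotency_key -> cached response
        self._charged = 0    # total amount actually charged

    def charge(self, idempotency_key, amount):
        if idempotency_key in self._results:
            # Retry of a request we already processed: replay, no new charge.
            return self._results[idempotency_key]
        self._charged += amount
        response = {"status": "charged", "amount": amount}
        self._results[idempotency_key] = response
        return response

svc = PaymentService()
first = svc.charge("key-123", 500)
second = svc.charge("key-123", 500)   # network retry: same response returned
```

The same contract is what makes "submit payment" buttons and flaky mobile networks safe to retry from the front end, which is why it shapes API design as much as UI design in finance products.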