Location: Mumbai, Maharashtra, India
Sector: Technology, Information & Media
Company Size: 500 - 1,000 Employees
Employment: Full-Time, Permanent
Experience: 10 - 14 Years (Engineering Leadership)
Level: Engineering Manager / Group EM
ABOUT THIS MANDATE:
Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.
This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector, and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.
We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.
THE OPPORTUNITY AT A GLANCE:
AI-First Engineering Culture:
- Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.
Hands-On Engineering Leadership:
- Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.
People & Org Builder:
- Grow engineers into leaders. Build squads of 6–15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.
KEY RESPONSIBILITIES:
1. Hands-On Technical Engagement:
- Remain deeply embedded in the technical work: participate in design reviews, architecture decisions, and critical code reviews
- Set and uphold the engineering quality bar: performance benchmarks, security standards, test coverage, and release quality
- Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
- Identify and resolve systemic technical debt and architectural risks across team-owned services
- Unblock engineers by diving into complex problems: debugging, pair programming, and system analysis when it matters
- Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability
2. AI Adoption, Integration & Transformation (2026 Mandate):
- Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
- Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow: coding, testing, documentation, and incident response
- Embed LLM-powered capabilities into the product: recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
- Lead evaluation and adoption of AI-assisted SDLC practices: automated code review, AI-generated test suites, intelligent observability, and anomaly detection
- Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
- Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks
- Create an experimentation culture: run structured AI pilots, measure productivity impact, and scale what works
- Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications
3. People Leadership & Team Development:
- Lead, manage, and grow squads of 6–15 engineers across seniority levels (L2 through L6 / Junior through Staff)
- Conduct structured 1:1s, career growth conversations, and development planning with every direct report
- Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by end of 2026
- Build and maintain a high-performance team culture: clarity of ownership, accountability, fast feedback loops, and psychological safety
- Drive performance management fairly and rigorously: recognise top performers, manage underperformance constructively
- Lead technical hiring end-to-end: define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
- Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering
4. Engineering Delivery & Execution Excellence:
- Own end-to-end delivery for multiple product squads, from planning and scoping through production release and post-launch stability
- Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
- Drive predictable delivery: maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams
- Establish and own engineering KPIs: DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
- Lead incident management: build a blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
- Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
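As a concrete illustration of the DORA metrics named above, here is a minimal sketch of how lead time, deployment frequency, and change failure rate can be computed; the deployment records and field layout are hypothetical, not the organisation's actual tooling:

```python
from datetime import datetime

# Hypothetical deployment records: (commit_time, deploy_time, failed)
deployments = [
    (datetime(2026, 1, 5, 9), datetime(2026, 1, 5, 15), False),
    (datetime(2026, 1, 7, 10), datetime(2026, 1, 8, 11), True),
    (datetime(2026, 1, 9, 8), datetime(2026, 1, 9, 12), False),
    (datetime(2026, 1, 12, 14), datetime(2026, 1, 13, 9), False),
]

def dora_metrics(deploys):
    """Compute three of the four DORA metrics from deployment records.
    (MTTR needs incident data, which is not modelled here.)"""
    lead_times = [(d - c).total_seconds() / 3600 for c, d, _ in deploys]
    span_days = (deploys[-1][1] - deploys[0][1]).days or 1
    return {
        "deploys_per_week": len(deploys) * 7 / span_days,
        "median_lead_time_h": sorted(lead_times)[len(lead_times) // 2],
        "change_failure_rate": sum(f for *_, f in deploys) / len(deploys),
    }

metrics = dora_metrics(deployments)
```

In practice these numbers would come from CI/CD and incident systems rather than hand-built tuples, but the arithmetic is the same.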
5. Strategic Leadership & Cross-Functional Influence:
- Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
- Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
- Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
- Contribute to org-wide engineering strategy: platform investments, build-vs-buy decisions, and shared infrastructure priorities
- Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
- Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use
AI TRANSFORMATION LEADERSHIP: 2026 EXPECTATIONS
In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role:
AI Developer Productivity
- Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.
LLM & GenAI Product Features
- Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.
AI-Augmented Observability
- Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.
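The anomaly-detection idea above can be illustrated with a deliberately simple statistical baseline; production AI-driven pipelines use far richer models, but this z-score sketch (with hypothetical latency samples) shows the underlying principle behind predictive alerting:

```python
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.5):
    """Flag points more than `threshold` sample standard deviations from
    the mean. A toy stand-in for real anomaly-detection models."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

# Hypothetical per-minute latency samples (ms); index 7 is a spike
latencies_ms = [102, 98, 101, 99, 103, 100, 97, 480, 101, 99]
spikes = zscore_anomalies(latencies_ms)
```

A real pipeline would feed flagged indices into alerting and runbook automation rather than returning them to a caller.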
Team AI Fluency:
- Build mandatory AI literacy across all engineering levels.
- Ensure every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.
Responsible AI Governance:
- Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.
TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED:
- Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
- Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
- AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
- Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
- Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
- Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
- MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
- Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
- CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)
QUALIFICATIONS & CANDIDATE PROFILE:
Education:
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
- Demonstrated engineering depth and leadership impact may be considered in lieu of institution pedigree
Experience:
- 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
- Proven track record of managing and scaling engineering teams (6–15+ engineers) in a fast-growing SaaS or digital product environment
- Hands-on backend engineering background; must be able to read, write, and critique production code
- Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
- Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
- Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
- Demonstrated career stability: minimum of 2 years of average tenure per organisation
The Ideal Engineering Manager in 2026:
- Leads with context, not control; empowers engineers while maintaining accountability and quality
- Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
- Sees AI as a force multiplier for the team, not a threat. Actively experiments with and advocates for AI tooling
- Measures success by team outcomes, not personal output. Takes pride in what the team ships, not what they build alone
- Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
- Has strong opinions, loosely held; brings conviction to discussions but updates on evidence
- Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites
WHY THIS ROLE STANDS APART:
AI Transformation at Scale:
- Lead one of the most significant AI adoption programmes in India's digital media sector.
- Your decisions will shape how hundreds of engineers work in 2026 and beyond.
Hands-On & Strategic Balance:
- A rare EM role that actively encourages technical depth.
- Stay close to the code while owning the people agenda - the best of both worlds.
Established Platform, Real Scale:
- 500–1,000 employees, proven product-market fit, and the org maturity to execute.
- This is not a greenfield startup gamble; it is a serious company with serious ambition.
Clear Leadership Growth Path:
- A visible, direct path toward Director / VP of Engineering.
- Senior leadership is invested in growing its next generation of technology executives.
NOW HIRING · Backend Tech Lead (Senior-Level Engineering Leadership)
Placed by Recruiting Bond on behalf of a Confidential Digital Platform Leader
📍Location: Bengaluru, India (Hybrid / On-Site)
🏢Sector: Technology, Information & Media
👥Company Size: 500 – 1,000 Employees
💼Employment: Full-Time, Permanent
🎯Experience: 6 – 9 Years (Backend Engineering)
🚀 Level: Tech Lead
ABOUT THIS MANDATE
Recruiting Bond has been exclusively retained by one of India's most well-established digital platform organisations — a company operating at the intersection of Technology, Information, and Media — to identify and place a world-class Backend Tech Lead who can drive a transformational engineering agenda at scale.
This is not an ordinary role. The organisation is executing a high-stakes, large-scale modernisation of its backend infrastructure — migrating from legacy monolithic systems to resilient, cloud-native, AI-augmented distributed architectures that serve millions of concurrent users. The person in this seat will be a core pillar of that transformation.
We are looking exclusively for the top 1% — engineers who think in systems, own outcomes, and lead by example.
THE OPPORTUNITY AT A GLANCE
🏗️ Architecture Ownership
Drive system design decisions across the entire backend platform. Shape the future of distributed, fault-tolerant architecture.
🤖 AI-Augmented Engineering
Embed GenAI and LLM tooling directly into the SDLC. Champion automation-first development practices across squads.
🎓 Engineering Leadership
Mentor and grow the next generation of backend engineers. Lead hiring, reviews, and cross-functional technical alignment.
KEY RESPONSIBILITIES
1. Architecture & Platform Modernisation
- Lead the full migration of legacy monolithic systems to a scalable, cloud-native microservices architecture
- Design and own distributed, fault-tolerant backend systems with sub-millisecond SLO targets
- Architect API-first and event-driven platforms using async messaging patterns (Kafka, Pub/Sub, SQS)
- Resolve systemic performance bottlenecks, concurrency conflicts, and scalability ceilings
- Establish backend design standards, coding guidelines, and architectural review processes
2. Distributed Systems Engineering (Production-Grade)
- Design and implement Webhook reliability frameworks with intelligent retry and exponential backoff strategies
- Build idempotent, versioned APIs with enterprise-grade rate limiting and throttling controls
- Implement circuit breakers, bulkheads, and resilience patterns using Resilience4j / Hystrix or equivalents
- Engineer Dead-Letter Queue (DLQ) strategies and event reprocessing pipelines with guaranteed delivery semantics
- Apply Saga orchestration and choreography patterns for distributed transaction integrity
- Execute zero-downtime deployments and canary release strategies with rollback capability
- Design and enforce multi-region disaster recovery and business continuity protocols
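The retry-with-exponential-backoff pattern named above reduces to a capped, doubling delay schedule, optionally randomised ("full jitter") so that retrying clients do not stampede in lockstep. A minimal sketch; the base, cap, and jitter choices here are illustrative defaults, not the organisation's actual policy:

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, jitter=False):
    """Delay schedule (seconds) for retries: base * 2**n, capped at `cap`.
    With jitter=True, each delay is drawn uniformly from [0, capped value]."""
    delays = []
    for n in range(attempts):
        d = min(cap, base * 2 ** n)
        delays.append(random.uniform(0, d) if jitter else d)
    return delays

# A webhook sender might sleep through this schedule between attempts:
schedule = backoff_delays(6)  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

In a real webhook framework the loop would wrap the HTTP call, stop on success, and route exhausted attempts to a dead-letter queue.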
3. AI-Driven Engineering Practices
- Champion LLM and GenAI adoption as first-class tooling across the software development lifecycle
- Apply prompt engineering techniques for automated code generation, review, and documentation workflows
- Utilise AI-assisted debugging, root cause analysis, and predictive performance optimisation
- Build automation-first pipelines that reduce toil and accelerate delivery velocity
- Evaluate and integrate emerging AI developer tools into the engineering ecosystem
4. Engineering Leadership & Culture
- Own backend platforms end-to-end with full accountability across development, stability, and performance
- Actively mentor, coach, and elevate engineers at all levels (L3–L6) through structured 1:1s and code reviews
- Drive and lead technical hiring — from designing assessments to final hire decisions
- Partner with Product, Data, DevOps, and Security stakeholders to align engineering with business objectives
- Represent the engineering org in cross-functional roadmap planning and architecture decision reviews
- Foster a culture of technical excellence, psychological safety, and high-velocity delivery
TECHNOLOGY STACK (HANDS-ON PROFICIENCY REQUIRED)
Languages: Java (primary) · Go · Python · Node.js · PHP · Rust
Cloud: AWS · GCP · Azure (Multi-cloud exposure preferred)
Containers: Docker · Kubernetes · Helm · Service Mesh (Istio / Linkerd)
Databases: PostgreSQL · MySQL · MongoDB · Cassandra · Redis · Elasticsearch
Messaging: Apache Kafka · RabbitMQ · AWS SQS/SNS · Google Pub/Sub
Observability: Datadog · Prometheus · Grafana · OpenTelemetry · Jaeger · ELK Stack
CI/CD & IaC: GitHub Actions · Jenkins · ArgoCD · Terraform · Ansible
AI & GenAI: OpenAI / Claude APIs · LangChain · RAG Pipelines · GitHub Copilot · Cursor
QUALIFICATIONS & CANDIDATE PROFILE
Education
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution — CS, IS, ECE, AI/ML streams strongly preferred
- Exceptional real-world engineering track record may be considered in lieu of institution pedigree
Experience
- 6 to 9 years of progressive backend engineering experience with demonstrable ownership and impact
- Proven track record of shipping and scaling production SaaS / Product systems at significant user load
- Exposure to and success within start-up, mid-size, and large-scale product organisations — the full spectrum
- Strong computer science fundamentals: algorithms, data structures, distributed systems theory, OS internals
- Demonstrated career stability — minimum 2 years average tenure per organisation
The Ideal Candidate Exemplifies:
- System-level thinking with an ability to hold context across code, architecture, product, and business
- An ownership mindset — no task is 'not my job'; outcomes and quality are personal commitments
- Strong written and verbal communication skills for asynchronous, cross-functional collaboration
- Intellectual curiosity: actively follows engineering trends, contributes to the community (OSS, blogs, talks)
- Bias for automation, observability, and engineering efficiency at every level
- A mentor's instinct — genuine desire to grow others and raise the capability of the team around them
WHY THIS ROLE STANDS APART
✅ Transformational Scope
Lead platform modernisation at scale. Your architectural choices will define systems serving millions of users for years.
✅ AI-Forward Engineering Culture
Be at the forefront of AI-augmented development. This org invests in tools and practices that make great engineers exceptional.
✅ Established, Stable Platform
Join a company with 500–1,000 employees, proven product-market fit, and the resources to execute on a serious technical vision.
✅ Career-Defining Leadership
Operate with strategic influence, direct access to senior leadership, and a clear path toward Principal / Staff / VP Engineering.
HOW TO APPLY
This search is being managed exclusively by Recruiting Bond
Submit your application with an updated resume
Only shortlisted candidates will be contacted. All applications are treated with the strictest confidentiality.
⚡ We move fast — qualified candidates can expect a response within 48–72 business hours.
Recruiting Bond | Bengaluru, Karnataka, India | 2026
About the role
We’re hiring an IT Systems Administrator for an NBFC to secure endpoints, SaaS, and networks across ~50 branches, ~250+ field staff, and ~50+ office users.
This is primarily an IT Admin + Security role, with secondary exposure to AWS cloud ops + light DevOps + basic DB access management.
If you’re an IT Admin aiming to break into AWS Cloud Ops + DevOps, this role is a strong next step — you’ll own core IT/security and get hands-on exposure to cloud operations and deployments.
Key responsibilities (Primary: IT Admin + Security)
- Manage endpoint security for laptops and mobiles (policies, patching, encryption, antivirus/EDR); drive MDM implementation now/future (e.g., Intune/Jamf).
- Administer Google Workspace (Gmail/Drive/Calendar): users, groups, permissions, SSO, MFA, sharing controls.
- Own joiner–mover–leaver lifecycle: provisioning/deprovisioning, access controls, periodic access reviews.
- Secure branch connectivity: VPN, internal Wi-Fi, internet usage controls; coordinate troubleshooting and standardization across branches.
- Manage HO security stack: firewall operations, rule changes with change control, monitoring/log review (basic but consistent).
- Secure SaaS tools (CRM/HRMS/comms like Slack/Zoom): role-based access, MFA enforcement, offboarding, integration/OAuth controls.
- Maintain IT asset inventory: procurement coordination, issuance/return, audits, warranty/AMC, license renewals; remote lock/wipe for lost devices.
- Handle security incidents: phishing, account compromise, device loss/theft — contain, investigate, recover, and prevent recurrence.
- Run backups and basic DR testing; maintain SOPs/documentation and train staff on cyber hygiene.
- Provide hands-on user support: laptop builds, software installs, Outlook/Excel issues, VPN/Wi-Fi troubleshooting, escalations and vendor coordination.
Secondary responsibilities (AWS + DevOps + DB ops support)
- Support AWS administration: IAM users/roles/policies, MFA, access key hygiene, basic log review (e.g., CloudTrail).
- Manage AWS access controls: security groups/firewall rules, IP allowlists/whitelisting (admin tools, databases, vendor access).
- Assist engineering with DevOps operations:
- CI/CD support (deployment coordination, rollbacks, environment configuration)
- Secrets/credentials management and rotation (no shared creds)
- DNS + SSL/TLS certificates, basic monitoring/alerting coordination
- Bonus: Docker/Kubernetes and Terraform exposure
- Basic database operations (admin-lite):
- DB user creation, roles/permissions, least-privilege access
- IP allowlisting/whitelisting for DB access via VPN/approved sources
- Backup/restore verification coordination and basic monitoring signals (connections/storage)
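As an illustration of the least-privilege DB access mentioned above, a sketch that generates PostgreSQL statements for a read-only role; the role, schema, and database names are placeholders to be reviewed before running via psql:

```python
def readonly_grants(role, schema="public", database="app"):
    """Generate PostgreSQL statements for a least-privilege read-only role.
    `database` is a placeholder name; review and run statements manually."""
    return [
        f"CREATE ROLE {role} LOGIN;",
        f"GRANT CONNECT ON DATABASE {database} TO {role};",
        f"GRANT USAGE ON SCHEMA {schema} TO {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {schema} TO {role};",
        f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema} "
        f"GRANT SELECT ON TABLES TO {role};",
    ]

stmts = readonly_grants("analyst_ro")
```

Pairing grants like these with IP allowlisting and VPN-only access covers both halves of the DB-access bullets above: who may connect, and what they may do once connected.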
Requirements
- 3+ years in IT security / systems administration (BFSI or branch-heavy org preferred).
- Hands-on with Google Workspace administration.
- Strong endpoint/security fundamentals: encryption, patching, AV/EDR, remote support, device compliance.
- Comfortable with networks: VPN/Wi-Fi/LAN troubleshooting; firewall basics and change discipline.
- Strong operational discipline: asset tracking, vendor management, documentation, ticketing, user communication.
- Practical AWS familiarity (IAM, access controls, logging) and ability to support DevOps workflows.
Nice to have
- Experience implementing MDM at scale (Intune/Jamf/SureMDM).
- Exposure to SOC2 / ISO27001 evidence, controls, and audit workflows.
- Scripting for automation (PowerShell/Bash/Python).
- Familiarity with managed databases and secure access patterns.
- Exploratory tester with 2–3 years of experience in software testing.
- The candidate should be an expert in GUI and functional testing of web applications.
- Good communication skills are a must; the candidate should be capable of collaborating with cross-functional teams.
- Should be self-driven and capable of handling responsibilities independently.
- Should have good knowledge of SQL and Jira.
- Strong proficiency in Microsoft Excel is required for test analysis and reporting.
- Should be able to understand application architecture to effectively design and execute test scenarios.
- Experience with Playwright automation is an added advantage but not mandatory.
Role & Responsibilities:
- Design, develop, and unit test applications in accordance with established standards.
- Prepare reports, manuals, and other documentation on the status, operation, and maintenance of software.
- Analyze and resolve technical and application problems.
- Adhere to high-quality development principles while delivering solutions on time.
- Provide third-level support to business users.
- Comply with process and quality management standards.
- Understand and implement the SDLC process.
Ideal Candidate:
- Strong Senior Angular Developer Profiles.
- Must have 6+ years of experience in frontend development, with at least 4+ years in Angular 8+.
- Must have strong proficiency in JavaScript, TypeScript, HTML5, and CSS3.
- Must have strong test-driven development experience and proficiency in unit testing frameworks such as Jasmine, Karma, NUnit, Selenium.
- Must have strong experience in database technologies (MySQL / SQL Server / Oracle)
- Considering candidates from South India only.
- Must have 2+ years of experience with Web APIs, Entity Framework, and LINQ queries.
- Experience in .NET Core framework, OOP, and C# APIs.
- Candidates from product companies preferred.
- B.Tech./M.Tech in Computer Science (or related field).

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, or Elasticsearch.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will build and operate infrastructure at scale, design and implement tools that let product teams build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Communicate and collaborate effectively across teams to break down silos
Head of AI Systems & Operations
Location: Vile Parle (Mumbai)
Company: Lighting Manufacturing Company
Employment Type: Full-Time
Experience: 1–3 Years
Reporting line: Directly to the Founder
About the Company
We are a fast-growing lighting manufacturing company with 150+ employees, strong online sales, and an expanding product portfolio.
We want to become a tech-driven company. As our sales and marketing continue to grow rapidly, we now require strong operational systems to scale efficiently.
Role Overview
We are looking for a highly driven and system-oriented individual to take complete ownership of operational structuring and automation across the organization.
This role is focused on building AI-driven systems that bring clarity, control, and scalability, without the need for manual follow-ups or micromanagement.
Key Responsibilities
- Design and implement AI-based operational systems
- Automate inventory, stock monitoring, and supply chain workflows
- Integrate product costing with website listings for real-time margin visibility
- Build auto-check mechanisms for departmental performance
- Create clear dashboards for:
- Sales tracking
- Cost & profit monitoring
- Inventory health
- Dead stock identification
- Ensure all departments operate within structured processes
- Reduce dependency on manual supervision
Required Skills
- Familiarity with automation tools or AI integrations
- Strong logical thinking and system-building mindset
- Understanding of ERP systems / automation tools
- Experience with data tracking, dashboards, or business analytics
- Ability to structure operations in a growing company
- High ownership and accountability
Who Should Apply
- Young professionals looking to take on a high-impact role
- Individuals excited about building systems from scratch
- Candidates who prefer responsibility over routine
Why Join
- Direct exposure to founder-level decision making
- High-growth environment
- Opportunity to build and scale operational systems from the ground up
Note: Compensation will be aligned with industry benchmarks and will not be a limiting factor for the right candidate.
Job Title: Data Entry Operator / Data Entry Clerk
Location: Hyderabad
5-day work week
Job Summary:
The company is seeking a talented and motivated Data Entry Operator responsible for accurately entering, updating, and maintaining data in company databases and systems. This role ensures information is recorded efficiently, securely, and with attention to detail to support smooth business operations.
Job Responsibilities:
• Enter and update data into databases, spreadsheets, and systems with high accuracy.
• Verify and correct data to ensure consistency and eliminate errors.
• Review source documents for completeness and clarity before entry.
• Maintain records of activities and completed work.
• Retrieve, organize, and present data for internal reports as required.
• Identify and report discrepancies or data quality issues to supervisors.
Requirements:
• High school diploma or equivalent
• Proven experience in data entry, clerical, or administrative work.
• Strong typing skills with accuracy and speed.
• Proficiency with MS Office (Excel, Word) and database software.
• Good time management skills.
• Strong attention to detail.
• Ability to work independently and meet deadlines.
Qualification- BTech-CS (2025 graduate only)
Joining: Immediate Joiner
Job Type: Trainee
Work Mode: Remote
Working Days: Monday to Friday
Shift (Rotational – based on project need):
· 5:00 PM – 2:00 AM IST
· 6:00 PM – 3:00 AM IST
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for freshers who want hands-on experience building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills:
· Basic understanding of AI / Machine Learning concepts
· Exposure to AWS (deployment or cloud fundamentals)
· PHP development
· WordPress development and customization
· JavaScript (ES5 / ES6+)
· jQuery
· AJAX calls and asynchronous handling
· Event handling
· HTML5 & CSS3
· Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
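The retrieval step behind a RAG pipeline can be illustrated with a toy example. This sketch uses plain-Python cosine similarity in place of a real vector database such as FAISS or Pinecone; the documents and three-dimensional "embeddings" are hypothetical, purely for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    # Rank stored documents by similarity to the query embedding,
    # which is what a vector database does at much larger scale.
    scored = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Hypothetical documents with toy 3-dimensional embeddings.
index = [
    {"text": "Quarterly revenue grew 12%.", "vec": [0.9, 0.1, 0.0]},
    {"text": "The cafeteria menu changed.",  "vec": [0.0, 0.2, 0.9]},
    {"text": "Revenue forecast raised.",     "vec": [0.8, 0.3, 0.1]},
]

context = retrieve([1.0, 0.2, 0.0], index)
# The retrieved passages are then stuffed into the LLM prompt.
prompt = "Answer using this context:\n" + "\n".join(context)
```

In production the embeddings would come from an embedding model and the similarity search from the vector store's own API; the ranking logic, however, is the same.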
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
Job Description for Digital Marketing Specialist
Job Title: Digital Marketing Specialist
Company: Mydbops
Location: Pondicherry
About Mydbops
Mydbops is a leading database consulting and managed services company specialising in open-source database technologies. With a team of 120+ certified professionals, we support 800+ clients globally across 3,000+ database instances. We're also an AWS Advanced Consulting Partner, delivering high-impact, cost-efficient database solutions at scale.
Role Summary:
We're looking for a data-driven Digital Marketing Specialist who can execute B2B campaigns, create compelling content, and, most importantly, track, analyze, and optimize marketing performance with strong analytical rigor. This role requires someone who can bridge technical database services marketing with measurable, ROI-focused campaign execution.
Key Responsibilities
- Campaign Management & Execution
- Plan, execute, and optimize Google Ads, LinkedIn Ads, and Meta campaigns for B2B database services
- Manage end-to-end campaign workflows including audience targeting, ad creative, budget allocation, and performance tracking
- Develop and execute lead generation strategies targeting CTOs, IT Managers, and DevOps teams
Analytics & Data-Driven Decision Making (Critical Requirement)
- Set up and maintain Google Analytics 4, Google Tag Manager, and conversion tracking systems
- Build comprehensive campaign performance dashboards with actionable insights
- Analyze campaign data to identify trends, optimize spend, and improve conversion rates
- Provide weekly/monthly performance reports with clear recommendations
- Track full-funnel metrics: impressions, CTR, CPC, conversions, lead quality, and ROI
- Conduct A/B testing and implement data-backed improvements
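The full-funnel metrics listed above are simple ratios. A minimal sketch of how they relate (the figures are illustrative, not real campaign data):

```python
def funnel_metrics(impressions, clicks, conversions, spend, revenue):
    # Standard paid-media ratios; guard against division by zero.
    ctr = clicks / impressions if impressions else 0.0   # click-through rate
    cpc = spend / clicks if clicks else 0.0              # cost per click
    cvr = conversions / clicks if clicks else 0.0        # conversion rate
    cpa = spend / conversions if conversions else 0.0    # cost per acquisition
    roi = (revenue - spend) / spend if spend else 0.0    # return on investment
    return {"ctr": ctr, "cpc": cpc, "cvr": cvr, "cpa": cpa, "roi": roi}

m = funnel_metrics(impressions=50_000, clicks=1_000, conversions=50,
                   spend=2_000.0, revenue=6_000.0)
# e.g. CTR = 1000/50000 = 2%, CPC = $2.00, CPA = $40, ROI = 200%
```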
Content & Creative Development
- Create social media graphics, ad creatives, and marketing collateral using Canva or similar tools
- Collaborate with technical teams to translate complex database concepts into compelling marketing messages
- Design infographics and explainer visuals for technical blog posts and LinkedIn content
Technical Collaboration
- Work closely with DBAs and technical staff to understand product offerings and client pain points
- Develop technically accurate marketing content for database optimization, migration, and managed services
- Stay updated on database industry trends and competitive landscape
AI & Automation
- Leverage AI tools and LLMs for content generation, campaign optimization, and workflow automation
- Explore and implement marketing automation tools to improve efficiency
Must-Have Skills
Analytics & Tools (Non-negotiable)
- Strong proficiency in Google Analytics (GA4), Google Tag Manager, and Google Search Console
- Experience setting up conversion tracking, UTM parameters, and attribution models
- Ability to analyze data and extract actionable insights
- Excel/Google Sheets proficiency for data analysis and reporting
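UTM parameters are just query-string fields appended to landing-page URLs so GA4 can attribute traffic to a source, medium, and campaign. A sketch using Python's standard library (the URL and campaign names are hypothetical):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def add_utm(url, source, medium, campaign):
    # Append utm_* parameters while preserving any existing query string.
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": source, "utm_medium": medium,
                  "utm_campaign": campaign})
    return urlunparse(parts._replace(query=urlencode(query)))

tagged = add_utm("https://example.com/pricing", "linkedin", "cpc", "db_services_q3")
# → https://example.com/pricing?utm_source=linkedin&utm_medium=cpc&utm_campaign=db_services_q3
```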
Campaign Management
- 3+ years managing Google Ads and LinkedIn Ads campaigns with proven ROI
- Experience with B2B lead generation and marketing funnel optimization
Design & Content
- Proficiency in Canva, Figma, or Adobe Creative Suite
- Strong copywriting skills for technical B2B audiences
- Experience creating social media content and ad creatives
Technical Aptitude
- Ability to quickly learn and understand technical products (databases, cloud services, SaaS)
- Familiarity with WordPress/Webflow, hosting platforms, and web technologies
- Comfortable working with technical teams and translating technical concepts
Nice-to-Have Skills
- Experience marketing SaaS, cloud, or database products
- Knowledge of marketing automation platforms (HubSpot, Marketo, Salesforce)
- Certification in Google Analytics, Google Ads, or digital marketing
- Experience with AI-powered marketing tools and LLM applications
- Understanding of SQL or database concepts
Job Details:
- Job Type: Full-time opportunity
- Work time: General Shift
- Mode of Employment - Work From Office (Pondicherry)
- Experience - 2-4 years
Job Details:
Job Title: Linux Admin
Location: Mumbai-Powai
Shifts: Rotational Shifts
Job Description:
Job Overview:
● Experience in Customer Support with an enterprise software organization.
● Experience with Linux or UNIX administration (Clustering, High Availability, Load Balancing, etc.).
● Hands-on experience in managing web servers (Apache, Tomcat, JBoss).
● Elementary database (SQL Server, MySQL, Oracle) operational knowledge.
● Proficient with Scripting or other programming languages.
● Hands-on experience on ticketing tools (Jira / Freshdesk).
● Readiness to work shifts and/or be on call and/or put in extra hours for task closure.
● Excellent verbal, written, presentation and interpersonal communication skills.
● Fast learner who can pick up new technologies.
● Capable of working with a cross-functional team to solve business and technical problems.
● Ability to make complex technical matters easy-to-comprehend for non-technical persons.
● Highly driven individual with an execution focus and a strong sense of urgency.
● High level of enthusiasm about helping and serving clients, strong customer- and solution-oriented personality.
● Ability and willingness to travel, if required.
The Role:
● Render exceptional first-tier phone/email support for efficient resolution of technology and functional problems across all products.
● Take ownership of user problems and be proactive when dealing with user issues.
● Follow an established set of processes while handling support requests.
● Report any issue that may significantly impact the business.
● Follow standard procedures for proper escalation of unresolved issues to appropriate internal teams.
● Ensure all calls are logged in the ticketing system & every activity is updated.
● Ensure users and management are notified during downtimes with complete information.
● Identify and learn more about the software and hardware used/supported by the organization.
● Research, diagnose, troubleshoot and identify solutions to resolve customer issues. Prepare accurate and timely reports.
● Document knowledge in the form of knowledge base tech notes and articles.
Additional / Preferred Skills:
● Exposure at client sites is desirable.
● Experience in the Financial Service industry or Banking applications is desirable
- Mentor interns
- Own production
- AI/ML-based development
- Full-stack development with React.js and Node.js
- Python with Flask or Django
- RESTful services
- Databases
- Docker
- CI/CD
- AWS
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Regularly work towards enhancing search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Exposure to AI-based search using vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking latest trends, emerging technologies, and best practices in AI-based Search, Solr, and other search platforms
Requirement
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and micro-services architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
About the Role
We are looking for a Senior Program Manager (DevX) to lead enterprise-scale initiatives that improve developer productivity, engineering workflows, and platform efficiency. This role is central to our software transformation journey, ensuring developers have the right tools, platforms, and processes to deliver at scale.
What You’ll Do
- Lead end-to-end DevX and Developer Productivity programs
- Drive engineering efficiency across SDLC, CI/CD, and platform tooling
- Partner with Engineering, Platform, and Product teams on tooling and transformation roadmaps
- Standardize and optimize developer, database, and SDLC tools
- Reduce friction from code → build → test → deploy
- Establish program governance, metrics, and executive reporting
- Manage risks, dependencies, and large-scale change initiatives
What We’re Looking For
- 10+ years in Technical Program Management / Engineering Program Management
- Proven experience driving DevX, Platform, or Engineering Transformation programs
- Strong understanding of Agile, DevOps, and modern SDLC practices
- Experience working with large, cross-functional engineering teams
- Excellent stakeholder and executive communication skills
Tools & Platforms
- Developer Tools: GitHub, VS Code, IntelliJ, Postman
- CI/CD & SDLC: Azure DevOps, Jenkins, GitHub Actions, JIRA, Confluence, SonarQube
- Platforms: Docker, Kubernetes
- Database Tools: SSMS, DBeaver, DataGrip, Azure Data Studio, pgAdmin
Nice to Have
- Platform or Cloud Engineering background
- Experience in large-scale software or engineering transformations
- Ex-Engineer or Technical background
Candidate Profile
- Education: M. Pharmacy or B. Pharmacy with MBA
- Experience: 4–5 years
- Location: Hyderabad (mandatory)
- Gender: Male
- Travel: Willingness to travel frequently to Mumbai
- Domain Knowledge:
- Pharma finished dosage formulations
- Licensing deals and techno-commercial exposure
- Should have Hyderabad pharma network
- Ability to initiate formulation licensing deals for US & EU regulated markets
Key Responsibilities
- Support business development efforts for regulated markets, primarily US and EU
- Understand regulatory requirements and build / maintain relevant databases
- Provide end-to-end BD support to the BD Manager
- Analyse data and present concise summaries to aid management decision-making
- Focus on opportunities in regulated markets
- Handle techno-commercial evaluations
- Follow up on potential opportunities and convert them into qualified leads
- Prepare and manage costing sheets
Job Role: Profile Data Setup Analyst
Job Title: Data Analyst
Location: Vadodara | Department: Customer Service |
Experience: 3 - 5 Years
About the Company
We've been transforming the window and door industry with intelligent software for over 40 years. Our solutions power manufacturers, dealers, and installers globally, enabling efficiency, accuracy, and growth. We are now looking for curious, data-driven professionals to join our mission of delivering world-class digital solutions to our customers.
Job Overview
As a Profile Data Setup Analyst, you will play a key role in configuring, analysing, and managing product data for our customers. You will work closely with internal teams and clients to ensure accurate, optimized, and timely data setup. This role is perfect for someone who enjoys problem-solving, working with data, and continuously learning.
Key Responsibilities
• Understand customer product configurations and translate them into structured data using Windowmaker Software.
• Set up and modify profile data including reinforcements, glazing, and accessories, aligned with customer-specific rules and industry practices.
• Analyse data, identify inconsistencies, and ensure high-quality output that supports accurate quoting and manufacturing.
• Collaborate with cross-functional teams (Sales, Software Development, Support) to deliver complete and tested data setups on time.
• Provide training, guidance, and documentation to internal teams and customers as needed.
• Continuously look for process improvements and contribute to knowledge-sharing across the team.
• Support escalated customer cases related to data accuracy or configuration issues.
• Ensure timely delivery of all assigned tasks while maintaining high standards of quality and attention to detail.
Required Qualifications
• 3–5 years of experience in a data-centric role.
• Bachelor’s degree in Engineering (e.g., Computer Science) or a related technical field.
• Experience with product data structures and product lifecycle.
• Strong analytical skills with a keen eye for data accuracy and patterns.
• Ability to break down complex product information into structured data elements.
• Eagerness to learn industry domain knowledge and software capabilities.
• Hands-on experience with Excel, SQL, or other data tools.
• Ability to manage priorities and meet deadlines in a fast-paced environment.
• Excellent written and verbal communication skills.
• A collaborative, growth-oriented mindset.
Nice to Have
• Prior exposure to ERP/CPQ/Manufacturing systems is a plus.
• Knowledge of the window and door (fenestration) industry is an added advantage.
Why Join Us
• Be part of a global product company with a solid industry reputation.
• Work on impactful projects that directly influence customer success.
• Collaborate with a talented, friendly, and supportive team.
• Learn, grow, and make a difference in the digital transformation of the fenestration industry.
JOB DETAILS:
- Job Title: Senior Devops Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible, etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based company or a scalable app-based startup, with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with DATABASE MIGRATIONS and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on a cloud platform (AWS).
9. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understand the needs of stakeholders and convey them to developers.
- Work on ways to automate and improve development and release processes.
- Identify technical problems and develop software updates and fixes.
- Work with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep uptime above 99.99%.
- Understand the DevOps philosophy and evangelize its principles across the organization.
- Bring strong communication and collaboration skills to break down silos.
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understanding of the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
- Problem-solving attitude, and ability to write scripts using any scripting language.
- Understanding of programming languages like Go, Python, and Java.
- Basic understanding of databases and middlewares like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
- Should be able to take ownership of tasks and must be responsible.
- Good communication skills
JOB DETAILS:
- Job Title: Senior Business Analyst
- Industry: Ride-hailing
- Experience: 4-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Data Visualization, Data Analysis, Strong in Python and SQL, Cross-Functional Communication & Stakeholder Management
Criteria:
1. Candidate must have 4–7 years of experience in analytics / business analytics roles.
2. Candidate must be currently based in Bangalore only (no relocation allowed).
3. Candidate must have hands-on experience with Python and SQL.
4. Candidate must have experience working with databases/APIs (Mongo, Presto, REST or similar).
5. Candidate must have experience building dashboards/visualizations (Tableau, Metabase or similar).
6. Candidate must be available for face-to-face interviews in Bangalore.
7. Candidate must have experience working closely with business, product, and operations teams.
Description
Job Responsibilities:
● Acquiring data from primary/secondary data sources like MongoDB, Presto, and REST APIs.
● Candidate must have strong hands-on experience in Python and SQL.
● Build visualizations to communicate data to key decision-makers; familiarity with building interactive dashboards in Tableau/Metabase is preferred
● Establish relationships between output metrics and their drivers, identify the critical drivers, and manage them to achieve the desired value of the output metric
● Partner with operations/business teams to consult, develop and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs
● Collaborate with business owners and product teams to perform data analysis of experiments and recommend the next best action for the business; this involves being embedded in business decision teams to drive faster decision-making
● Collaborate with several functional teams within the organization, using raw data and metrics to back up assumptions, develop hypotheses/business cases, and complete root cause analyses, thereby delivering output to business users
Job Requirements:
● Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative field.
● Around 4-6 years of experience embedded in analytics and adjacent business teams, working as an analyst aiding decision-making
● Proficiency in Excel and ability to structure and present data in creative ways to drive insights
● Some basic understanding of (or experience in) evaluating financial parameters like return-on-investment (ROI), cost allocation, optimization, etc. is good to have
What’s there for you?
● Opportunity to understand the overall business & collaborate across all functional departments
● Prospect to disrupt the existing mobility industry business models (ideate, pilot, monitor & scale)
● Deal with the ambiguity of decision making while balancing long-term/strategic business needs and short-term/tactical moves
● Full business ownership working style which translates to freedom to pick problem statements/workflow and self-driven culture
The Power BI Intern will assist the analytics team in using Microsoft Power BI to create interactive dashboards and reports. Working with actual datasets to assist well-informed business decision-making, this position provides practical exposure to data analysis, visualization, and business intelligence techniques.
🚀 RECRUITING BOND HIRING
Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)
⚡ THIS IS NOT A MONITORING ROLE
THIS IS A COMMAND ROLE
You don’t watch dashboards.
You control outcomes.
You don’t react to incidents.
You eliminate them before they escalate.
This role powers an AI-driven SaaS + IoT platform where:
---> Uptime is non-negotiable
---> Latency is hunted
---> Failures are never allowed to repeat
Incidents don’t grow.
Problems don’t hide.
Uptime is enforced.
🧠 WHAT YOU’LL OWN
(Real Work. Real Impact.)
🔍 Total Observability
---> Real-time visibility across cloud, application, database & infrastructure
---> High-signal dashboards (Grafana + cloud-native tools)
---> Performance trends tracked before growth breaks systems
🚨 Smart Alerting (No Noise)
---> Alerts that fire only when action is required
---> Zero false positives. Zero alert fatigue
Right signal → right person → right time
⚙ Automation as a Weapon
---> End-to-end automation of operational tasks
---> Standardized logging, metrics & alerting
---> Systems that scale without human friction
🧯 Incident Command & Reliability
---> First responder for critical incidents (on-call rotation)
---> Root cause analysis across network, app, DB & storage
Fix fast — then harden so it never breaks the same way again
📘 Operational Excellence
---> Battle-tested runbooks
---> Documentation that actually works under pressure
Every incident → a stronger platform
🛠️ TECHNOLOGIES YOU’LL MASTER
☁ Cloud: AWS | Azure | Google Cloud
📊 Monitoring: Grafana | Metrics | Traces | Logs
📡 Alerting: Production-grade alerting systems
🌐 Networking: DNS | Routing | Load Balancers | Security
🗄 Databases: Production systems under real pressure
⚙ DevOps: Automation | Reliability Engineering
🎯 WHO WE’RE LOOKING FOR
Engineers who take uptime personally.
You bring:
---> 3+ years in Cloud Ops / DevOps / SRE
---> Live production SaaS experience
---> Deep AWS / Azure / GCP expertise
---> Strong monitoring & alerting experience
---> Solid networking fundamentals
---> Calm, methodical incident response
---> Bonus (Highly Preferred):
---> B2B SaaS + IoT / hybrid platforms
---> Strong automation mindset
---> Engineers who think in systems, not tickets
💼 JOB DETAILS
📍 Bengaluru
🏢 Hybrid (WFH)
💰 (Final CTC depends on experience & interviews)
🌟 WHY THIS ROLE?
Most cloud teams manage uptime. We weaponize it.
Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.
📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?
Role: Full-Time, Long-Term Required: Python, SQL Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
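Validation checks of the kind described above can start as a set of explicit assertions run before a batch propagates downstream. A minimal sketch, in which the field names, batch shape, and freshness threshold are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def validate_batch(rows, max_age_minutes=15):
    """Return a list of data-quality problems; an empty list means the batch passes."""
    problems = []
    if not rows:
        return ["batch is empty"]
    for i, row in enumerate(rows):
        # Schema check: required fields must be present.
        for field in ("symbol", "price", "ts"):
            if field not in row:
                problems.append(f"row {i}: missing field {field!r}")
        # Range check: a negative price is data that should not exist.
        if row.get("price", 0) < 0:
            problems.append(f"row {i}: negative price {row['price']}")
    # Freshness check: the newest record must be recent.
    newest = max(r["ts"] for r in rows if "ts" in r)
    if datetime.now(timezone.utc) - newest > timedelta(minutes=max_age_minutes):
        problems.append("batch is stale")
    return problems

now = datetime.now(timezone.utc)
good = [{"symbol": "BTC", "price": 97000.0, "ts": now}]
bad  = [{"symbol": "BTC", "price": -1.0, "ts": now}, {"symbol": "ETH"}]
```

In practice these checks would feed monitoring and alerting rather than a return value, and frameworks like Great Expectations or custom orchestrator tasks can host them; the principle of failing loudly before propagation is the same.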
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
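A temporal (as-of) join pairs each event with the most recent observation at or before its timestamp, which is precisely how look-ahead bias is avoided. A stdlib-only sketch using binary search; the quote/event data is hypothetical:

```python
from bisect import bisect_right

def asof_join(events, quotes):
    """For each event time, return the last quote at or before it (None if none).

    `quotes` must be sorted by timestamp ascending; using a later quote
    would leak future information into the feature (look-ahead bias).
    """
    times = [t for t, _ in quotes]
    out = []
    for ev in events:
        i = bisect_right(times, ev) - 1        # rightmost quote with t <= ev
        out.append((ev, quotes[i][1] if i >= 0 else None))
    return out

quotes = [(10, 100.0), (20, 101.5), (30, 99.8)]   # (timestamp, price)
events = [5, 20, 25, 40]
joined = asof_join(events, quotes)
# → [(5, None), (20, 101.5), (25, 101.5), (40, 99.8)]
```

At scale the same semantics come from tools like pandas `merge_asof` or window functions in the database, but the invariant is identical: never match forward in time.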
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
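The "defensive thinking" trait above, in code form: assume the source will fail and retry transient errors with backoff. A minimal sketch; the callable and parameters are illustrative, not a specific client:

```python
import time

def fetch_with_retries(fetch, retries: int = 3, base_delay: float = 0.5):
    """Call fetch(), retrying with exponential backoff on failure."""
    last_exc = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as exc:  # narrow to transport errors in real code
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise last_exc
```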
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.).
- Partner with engineering and product to convert custom solutions into productised capabilities.
- Security & Compliance Enablement
- Act as a foundational partner in building out the company’s security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
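A common secure-by-design pattern for the webhook integrations mentioned above: the sender signs each payload with a shared secret, and the receiver verifies the signature in constant time. A minimal sketch (the signing scheme shown is a widespread convention, not this company's specific protocol):

```python
import hashlib
import hmac

def sign(payload: bytes, secret: bytes) -> str:
    """HMAC-SHA256 signature of a webhook payload, hex-encoded."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, secret: bytes, signature: str) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign(payload, secret), signature)
```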
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges, from sustainability to advanced manufacturing, while growing your career in a high-energy environment.
Role & Responsibilities:
We are seeking a Software Developer with 2–10 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
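A small example of the JSON-based LLM workflows described above: model output often wraps JSON in prose or code fences, so responses are parsed defensively and validated before use. A minimal sketch; the expected keys are hypothetical:

```python
import json

def parse_llm_json(text: str, required_keys: set[str]) -> dict:
    """Extract and validate a JSON object from raw LLM output."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in model output")
    data = json.loads(text[start : end + 1])
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```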
Required Skills & Qualifications
• Strong knowledge of Python (scripting, APIs, data handling).
• Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
• Experience with JSON data parsing and transformations.
• Familiarity with PostgreSQL or other relational databases.
• Ability to write clean, maintainable, and well-documented code.
• Strong problem-solving skills and eagerness to learn.
• Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
• Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
• Experience working in startups or fast-paced environments.
• Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
• The opportunity to define the future of GovTech through AI-powered solutions.
• A strategic leadership role in a fast-scaling startup with direct impact on product direction and market success.
• Collaborative and innovative environment with cross-functional exposure.
• Growth opportunities backed by a strong leadership team.
• Remote flexibility and work-life balance.
DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise to transform our customers' businesses by making the right use of data.
About the Role
We're seeking a talented and versatile Full Stack Developer with a strong foundation in mobile app development to join our dynamic team. You'll play a pivotal role in designing, developing, and maintaining high-quality software applications across various platforms.
Responsibilities
- Full Stack Development: Design, develop, and implement both front-end and back-end components of web applications using modern technologies and frameworks.
- Mobile App Development: Develop native mobile applications for iOS and Android platforms using Swift and Kotlin, respectively.
- Cross-Platform Development: Explore and utilize cross-platform frameworks (e.g., React Native, Flutter) for efficient mobile app development.
- API Development: Create and maintain RESTful APIs for integration with front-end and mobile applications.
- Database Management: Work with databases (e.g., MySQL, PostgreSQL) to store and retrieve application data.
- Code Quality: Adhere to coding standards, best practices, and ensure code quality through regular code reviews.
- Collaboration: Collaborate effectively with designers, project managers, and other team members to deliver high-quality solutions.
Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong programming skills in relevant languages (e.g., JavaScript, Python, Java).
- Experience with relevant frameworks and technologies (e.g., React, Angular, Node.js, Swift, Kotlin).
- Understanding of software development methodologies (e.g., Agile, Waterfall).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.
Preferred Skills (Optional)
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Knowledge of DevOps practices and tools.
- Experience with serverless architectures.
- Contributions to open-source projects.
What We Offer
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and supportive work environment.
- A chance to work on cutting-edge projects.
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers – such as SQLAlchemy – and multiple RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance.
- Collaborate closely with Product Team, Data Scientists, QA Teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
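The role-based access control mentioned above can be sketched in a few lines: roles map to permitted actions, and every access is checked (and, in a regulated setting, logged for auditability). Role and action names here are hypothetical:

```python
# Hypothetical role-to-permission mapping for a risk decisioning platform.
ROLE_PERMISSIONS = {
    "analyst": {"read_decision"},
    "risk_officer": {"read_decision", "approve_decision"},
    "admin": {"read_decision", "approve_decision", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unknown actions are rejected."""
    return action in ROLE_PERMISSIONS.get(role, set())
```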
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript, Angular
- Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including API, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
Job Title: Data Entry Operator / Data Entry Clerk
Job Summary:
Graniti Vicentia India Pvt Ltd is seeking a talented and motivated Data Entry Operator responsible for accurately entering, updating, and maintaining data in company databases and systems. This role ensures information is recorded efficiently, securely, and with attention to detail to support smooth business operations.
Job Responsibilities:
• Enter and update data into databases, spreadsheets, and systems with high accuracy.
• Verify and correct data to ensure consistency and eliminate errors.
• Review source documents for completeness and clarity before entry.
• Maintain records of activities and completed work.
• Retrieve, organize, and present data for internal reports as required.
• Identify and report discrepancies or data quality issues to supervisors.
Requirements:
• High school diploma or equivalent
• Proven experience in data entry, clerical, or administrative work.
• Strong typing skills with accuracy and speed.
• Proficiency with MS Office (Excel, Word) and database software.
• Good time management skills.
• Strong attention to detail.
• Ability to work independently and meet deadlines.
Required Skills and Qualifications :
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Modeler or in a similar role at an asset manager or financial firm.
- Strong understanding of business concepts related to buy-side financial firms. Understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
- Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling).
- Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL).
- Familiarity with NoSQL databases is a plus.
- Excellent analytical and problem-solving skills.
- Strong communication skills and the ability to work collaboratively.
Preferred Qualifications:
- Experience in data warehousing and business intelligence.
- Knowledge of data governance practices.
- Certification in data modeling or related fields.
Key Responsibilities :
- Design and develop conceptual, logical, and physical data models based on business requirements.
- Collaborate with stakeholders in finance, operations, risk, legal, compliance and front offices to gather and analyze data requirements.
- Ensure data models adhere to best practices for data integrity, performance, and security.
- Create and maintain documentation for data models, including data dictionaries and metadata.
- Conduct data profiling and analysis to identify data quality issues.
- Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.
Job Title: Associate Backend Engineer
Job Summary:
We are looking for an enthusiastic and motivated Associate Backend Engineer with 1 to 2 years of experience to join our growing engineering team. Whether you're a recent graduate or have some industry experience, this role offers a strong foundation to grow your skills in real-world backend development. You’ll work closely with experienced engineers and contribute to the design, development, and maintenance of scalable backend systems that power our products.
This position is ideal for individuals who are eager to learn, write production-grade code, and grow into a high-performing backend engineer.
Website: https://www.thealteroffice.com/about
Key Responsibilities:
- Assist in building and maintaining backend services and APIs for web and mobile applications.
- Work with both relational (MySQL, PostgreSQL) and NoSQL (MongoDB, Redis) databases for data modeling and storage.
- Write clean, maintainable, and well-documented code under guidance.
- Contribute to authentication, authorization, and other core backend features.
- Collaborate with cross-functional teams including product, frontend, and QA to deliver complete features.
- Participate in code reviews and incorporate feedback to improve code quality.
- Debug issues, write unit/integration tests, and help maintain service reliability and performance.
- Learn to work with CI/CD pipelines, version control (Git), and deployment workflows.
- Use tools like Docker, basic cloud services (AWS/GCP/Azure), and optionally explore monitoring/logging tools.
- Explore new technologies such as GraphQL, WebSockets, or message queues (e.g., Kafka, RabbitMQ) when relevant.
Requirements:
- 1 to 2 years of backend development experience, including internships, academic projects, freelance, or open-source work.
- Familiarity with at least one backend programming language (e.g., Python, Java, Go, JavaScript, etc.).
- Basic understanding of RESTful APIs, HTTP, databases, and server-side logic.
- Exposure to SQL and NoSQL databases and understanding of CRUD operations.
- Familiarity with Git and fundamental development workflows.
- Willingness to learn and apply best practices in scalability, security, and asynchronous programming.
- Strong problem-solving mindset and eagerness to take feedback and grow.
- Good communication and collaboration skills in a team environment.
We’re hiring a Full Stack Developer (5+ years, Pune location) to join our growing team!
You’ll be working with React.js, Node.js, JavaScript, APIs, and cloud deployments to build scalable and high-performing web applications.
Responsibilities include developing responsive apps, building RESTful APIs, working with SQL/NoSQL databases, and deploying apps on AWS/Docker.
Experience with CI/CD, Git, secure coding practices (OAuth/JWT), and Agile collaboration is a must.
If you’re passionate about full stack development and want to work on impactful projects, we’d love to connect!
Job Title: Data Engineer
Location: Hyderabad
About us:
Blurgs AI is a deep-tech startup focused on maritime and defence data-intelligence solutions, specialising in multi-modal sensor fusion and data correlation. Our flagship product, Trident, provides advanced domain awareness for maritime, defence, and commercial sectors by integrating data from various sensors like AIS, Radar, SAR, and EO/IR.
At Blurgs AI, we foster a collaborative, innovative, and growth-driven culture. Our team is passionate about solving real-world challenges, and we prioritise an open, inclusive work environment where creativity and problem-solving thrive. We encourage new hires to bring their ideas to the table, offering opportunities for personal growth, skill development, and the chance to work on cutting-edge technology that impacts global defence and maritime operations.
Join us to be part of a team that's shaping the future of technology in a fast-paced, dynamic industry.
Job Summary:
We are looking for a Senior Data Engineer to design, build, and maintain a robust, scalable on-premise data infrastructure. You will focus on real-time and batch data processing using platforms such as Apache Pulsar and Apache Flink, work with NoSQL databases like MongoDB and ClickHouse, and deploy services using containerization technologies like Docker and Kubernetes. This role is ideal for engineers with strong systems knowledge, deep backend data experience, and a passion for building efficient, low-latency data pipelines in a non-cloud, on-prem environment.
Key Responsibilities:
- Data Pipeline & Streaming Development
- Design and implement real-time data pipelines using Apache Pulsar and Apache Flink to support mission-critical systems.
- Develop high-throughput, low-latency data ingestion and processing workflows across streaming and batch workloads.
- Integrate internal systems and external data sources into a unified on-prem data platform.
- Data Storage & Modelling
- Design efficient data models for MongoDB, ClickHouse, and other on-prem databases to support analytical and operational workloads.
- Optimise storage formats, indexing strategies, and partitioning schemes for performance and scalability.
- Infrastructure & Containerization
- Deploy, manage, and monitor containerised data services using Docker and Kubernetes in on-prem environments.
- Performance, Monitoring & Reliability
- Monitor the performance of streaming jobs and database queries; fine-tune for efficiency and reliability.
- Implement robust logging, metrics, and alerting solutions to ensure data system availability and uptime.
- Identify bottlenecks in the pipeline and proactively implement optimisations.
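The stateful stream processing described above boils down to keyed, windowed state. A pure-Python sketch of a tumbling-window count per key, the kind of aggregation a Flink job would run continuously (event shape is illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds: int) -> dict:
    """events: iterable of (timestamp_seconds, key) pairs.

    Returns {(window_start, key): count}, where each event falls into the
    fixed-size, non-overlapping window containing its timestamp.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (int(ts) // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)
```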
Required Skills & Experience:
- Strong experience in data engineering with a focus on on-premise infrastructure.
- Strong expertise in streaming technologies like Apache Pulsar, Apache Flink, or similar.
- Deep experience with MongoDB, ClickHouse, and other NoSQL or columnar storage databases.
- Proficient in Python, Java, or Scala for data processing and backend development.
- Hands-on experience deploying and managing systems using Docker and Kubernetes.
- Familiarity with Linux-based systems, system tuning, and resource monitoring.
Preferred Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or an equivalent combination of education and experience.
Additional Responsibilities for Senior Data Engineers :
For those hired as Senior Data Engineers, the role will come with added responsibilities, including:
- Leadership & Mentorship: Guide and mentor junior engineers, sharing expertise and best practices.
- System Architecture: Lead the design and optimization of complex real-time and batch data pipelines, ensuring scalability and performance.
- Sensor Data Expertise: Focus on building and optimizing sensor-based data pipelines and stateful stream processing for mission-critical applications in domains like maritime and defense.
- End-to-End Ownership: Take responsibility for the performance, reliability, and optimization of data systems.
Compensation:
- Data Engineer CTC: 4 - 8 LPA
- Senior Data Engineer CTC: 12 - 16 LPA
- 4+ years of experience
- Proficiency in Python programming.
- Experience with Python service development (REST APIs using Flask or similar)
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with cloud (Azure/AWS) technologies
JD
The Java Software Engineer role is to design, develop, and maintain scalable, high-performance backend systems using Java and related technologies. This includes building APIs, integrating with databases and external services, and ensuring system reliability, security, and maintainability.
Must Have Skills:
- Programming Languages & Frameworks (Java 18+, Spring Boot)
- API Development (RESTful APIs, OpenAPI/Swagger)
- Any Database
- CI/CD Pipelines (Jenkins, GitLab CI/CD)
- Cloud Platforms (Any)
- Unix
- Testing Frameworks (JUnit, TestNG, Mockito, JBehave)
Good to have Skills:
- Messaging & Integration (Kafka, REST)
- Azure Open AI, RAG-based Architecture, AI Agents, Agentic AI, Spring Integration
- Monitoring & Logging (Prometheus, Splunk)
- Security & Authentication (OAuth2, JWT, Spring Security)
Qualification:
- Bachelor's or Master’s degrees in Computer Science, Computer Engineering, or a related technical discipline.
- Ability to work independently and to adapt to a fast-changing environment.
- Creative, self-disciplined, and capable of identifying and completing critical tasks independently and with a sense of urgency.
Note: Banking project experience is required.
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
Hiring: Mobile App Developer (Consultant)
We at Amagle Software are looking for an experienced React Native Developer to work with us in a consulting capacity.
What we’re looking for:
Strong experience in React Native (iOS + Android)
Knowledge of Firebase (Auth, Firestore, Push Notifications, etc.)
Good understanding of mobile app architecture and database design
Ability to provide technical guidance and write scalable code
Engagement Model:
Consultant (Remote)
Duration: Initially 2–3 months (extendable based on need)
Flexible hours, outcome-based collaboration
Compensation: Open to discussion
Location: Remote (India-based consultants preferred)
Interested?
Send your profile/portfolio to us.
CYLSYS SOFTWARE SOLUTION PVT. LTD.
Job Description | Sr. .NET Developer
Purpose of the role:
The ideal candidate will be familiar with the full software design life cycle. We are looking for a .NET developer to build software using languages and technologies of the .NET framework. You will create applications from scratch, configure existing systems, and provide user support. In this role, you should be able to write functional code with a sharp eye for spotting defects. You should be a team player and an excellent communicator. If you are also passionate about the .NET framework and software design/architecture, we’d like to meet you. Your goal will be to work with internal teams to design, develop, and maintain software.
REQUIREMENTS:
• Should have a minimum of 5+ years of hands-on experience in .NET MVC, .NET Core, ASP.NET, and SQL Server.
• Should have expertise in C# programming.
• Should have strong UI and MVC knowledge.
• Should be proficient in JavaScript, HTML and CSS.
• Should be familiar with at least one frontend framework, such as Bootstrap, Telerik, etc.
• Should have experience in Agile software development/Agile methodologies.
• Familiarity with architecture styles/APIs (REST, RPC)
• Knowledge of Angular is an added advantage.
• Experience working in agile development environment.
• Experience developing web-based applications in C#, HTML, JavaScript, or .NET.
• Should have experience in ASP.NET, MVC, .NET CORE, Design Patterns, SQL Server.
• Experience working with MS SQL Server and MySQL.
• Knowledge of practices and procedures for the full software design life cycle.
• Knowledge of Razor Pages / Dynamics CRM is an added advantage.
MANDATORY EXPERIENCE:
• Bachelor’s or Master’s in Computer Engineering or Computer Application.
• Diploma/Course in Computer Application.
• Hands-on experience in ASP.NET, MVC, Design Patterns, SQL Server, or relevant experience.
Roles and Responsibilities:
• Own development, design, scaling, and maintenance of application and messaging engines that power the central platform of Capillary's Cloud CRM product.
• Work on the development of AI and data science products for various use cases. Implement PoCs in Python and Spark/Scala, and productize the implementations.
• Contribute to overall design and roadmap.
• Mentor junior team members.
Required Skills:
• Innovative and self-motivated with a passion to develop complex and scalable applications.
• 3+ years of experience in software development with a strong focus on algorithms and data structures.
• Strong coding and design skills with prior experience in developing scalable & high-availability applications. Expertise in using Core Java/J2EE or Node.js
• Work experience with Relational databases and Non-Relational is required (Primarily MySQL, MongoDB, and Redis)
• Familiarity with big data platforms (like Spark-Scala) is an added plus.
• Strong Analytical and Problem Solving Skills.
• BTech from IIT or BE in computer science from a top REC/NIT.
Job Perks
• Competitive Salary as per market standards
• Flexible working hours
• Chance to work with a world class engineering team.
Why Join Us:
Be part of a fast-moving tech team building impactful, user-friendly apps with modern development practices and a collaborative work culture.
Capillary is an Equal Opportunity Employer and will not discriminate against any applicant for employment on the basis of race, age, religion, sex, veteran status, disability, sexual orientation, or gender identity.
Disclaimer:
It has been brought to our attention that there have recently been instances of fraudulent job offers, purporting to be from Capillary Technologies. The individuals or organizations sending these false employment offers may pose as a Capillary Technologies recruiter or representative and request personal information, purchasing of equipment or funds to further the recruitment process or offer paid training. Be advised that Capillary Technologies does not extend unsolicited employment offers. Furthermore, Capillary Technologies does not charge prospective employees with fees or make requests for funding as a part of the recruitment process.
We commit to an inclusive recruitment process and equality of opportunity for all our job applicants.
🚀 Hiring: Postgres DBA at Deqode
⭐ Experience: 6+ Years
📍 Location: Pune & Hyderabad
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Looking for an experienced Postgres DBA with:
✅ 6+ years in Postgres & strong SQL skills
✅ Good understanding of database services & storage management
✅ Performance tuning & monitoring expertise
✅ Knowledge of Data Guard administration, backups, and upgrades
✅ Basic Linux admin & shell scripting
Looking for a passionate developer and team player who wants to learn, contribute, and bring fun and energy to the team. We are a friendly startup where we provide opportunities to explore and learn a lot (new technologies, tools, etc.) while building quality products using best-in-class technology.
Responsibilities
● Design and develop new features using full-stack development (Java/Spring/React/Angular/MySQL) for a cloud (AWS/others) and mobile product application in an SOA/microservices architecture.
● Design awesome features and continuously improve them by exploring alternative technologies to make design improvements.
● Performance testing with Gatling (Scala).
● Work with CI/CD pipeline and tools (Docker, Ansible) to improve build and
deployment process.
● Working with QA to ensure the quality and timing of new release deployments.
Skills/Experience
Good coding/problem-solving skills and an interest in learning new things will be key.
Time/training will be provided to learn new technologies/tools.
● 4 or more years of professional experience in building web/mobile applications using
Java or similar technologies (C#, Ruby, Python, Elixir, NodeJS).
● Experience in Spring Framework or similar frameworks.
● Experience in any DB (SQL/NoSQL)
● Any experience in front-end development using React/Vue/Angular/similar
frameworks.
● Any experience with Java/similar testing frameworks (JUnit/Mocks etc).
Engineering Head / Tech Lead (React + Node.js)
About MrPropTek
MrPropTek is building the future of real estate technology. We're looking for a hands-on Engineering Head / Tech Lead to drive our tech strategy and lead the development of scalable web applications across frontend and backend using React and Node.js.
Responsibilities
- Lead and mentor a team of full-stack developers
- Architect and build scalable, high-performance applications
- Drive end-to-end development using React (frontend) and Node.js (backend)
- Collaborate with product, design, and business teams to align on priorities
- Enforce code quality, best practices, and agile processes
- Oversee deployment, performance, and security of systems
Requirements
- 7+ years in software development; 3+ years in a tech lead or engineering management role
- Deep expertise in React.js, Node.js, JavaScript/TypeScript
- Experience with REST APIs, cloud platforms (AWS/GCP), and databases (SQL/NoSQL)
- Strong leadership, communication, and decision-making skills
- Startup or fast-paced team experience preferred
Job Location: Mohali, Delhi/NCR
Job Type: Full-time
Job Title: Java Developer
Location: Chennai
Experience: 3+ Years
Employment Type: Full-time
Job Description:
We are looking for a skilled and passionate Java Developer with a strong foundation in Core Java and Object-Oriented Programming concepts. The ideal candidate should possess excellent communication skills and have hands-on experience with multi-threading, collections, databases, and backend development frameworks like Spring and Spring Boot. Exposure to cloud platforms and frontend technologies is a plus.
Key Responsibilities:
- Design, develop, and maintain scalable Java applications using Core Java and Spring Boot.
- Apply strong knowledge of Java Collections Framework and concurrent programming techniques including ExecutorService, ForkJoinPool, and other threading mechanisms.
- Optimize JVM performance and memory usage.
- Participate in all phases of the software development lifecycle: requirement gathering, design, development, testing, and deployment.
- Write clean, maintainable code following best practices, coding standards, and design patterns.
- Conduct code reviews, unit testing, and participate in peer programming.
- Collaborate effectively with team members and stakeholders to deliver high-quality solutions.
- Interact with databases like Oracle, Sybase, or SQL Server, with a strong understanding of views, triggers, indexing, and stored procedures.
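For candidates unfamiliar with the thread-pool pattern these responsibilities refer to, here is a minimal, purely illustrative sketch. It is shown in Python's concurrent.futures for brevity since this page mixes stacks; the Java mechanisms the posting actually names are ExecutorService and ForkJoinPool, and the function names below are hypothetical.

```python
# Illustrative thread-pool pattern (Python analogue of Java's ExecutorService).
from concurrent.futures import ThreadPoolExecutor

def fetch(record_id: int) -> int:
    # Placeholder for an I/O-bound task (DB call, HTTP request, ...).
    return record_id * 2

def process_all(ids):
    # Submit tasks to a bounded pool; map() preserves input order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch, ids))

results = process_all(range(5))
```

The same bounded-pool idea applies in Java: submit tasks to an `ExecutorService` sized to the workload rather than spawning unbounded threads.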
Must-Have Skills:
- 3+ years of hands-on experience in Core Java
- Strong understanding of Collections, Multithreading, and Concurrency
- Proficient in Spring Framework, especially Spring Boot
- Deep understanding of Object-Oriented Design and Data Structures
- Strong experience in RDBMS (Sybase/Oracle/SQL Server) including indexing, replication, CLOB/BLOB types, views, and procedures
- Excellent written and verbal communication skills
Good to Have:
- Exposure to Microservices architecture
- Familiarity with NoSQL databases
- Knowledge or willingness to work on frontend technologies
- Experience working with public cloud platforms (AWS, Azure, GCP)
Soft Skills:
- Strong analytical and problem-solving skills
- Team player with a proactive attitude
- Ability to work independently and in a fast-paced environment
If you are passionate about Java development and want to work on cutting-edge technologies in a collaborative environment, we would love to hear from you.
Job Description:
Years of Experience: 5-8 Years
Location: Bangalore
Job Role: Database Developer
Primary Skill - Database, SQL
Secondary skill - DB2 and Python
Skills:
Main pointers for the Database Developer role:
*Should have strong working experience with a database such as DB2 (good to have) and SQL, or Oracle/PL SQL, etc.
*Should have working experience in performance tuning
We’re looking for a Product Ninja with the mindset of a Tech Catalyst — a proactive executor who thrives at the intersection of product, technology, and user experience. In this role, you’ll bring product ideas to life, translate strategy into action, and collaborate closely with engineers, designers, and stakeholders to deliver impactful solutions.
This role is ideal for someone who’s hands-on, detail-oriented, and passionate about using technology to create real customer value.
Responsibilities:
- Support the definition and execution of the product roadmap in alignment with business goals.
- Work closely with engineering, design, QA, and marketing teams to drive product development.
- Translate product requirements into detailed specs, user stories, and acceptance criteria.
- Conduct competitive research and analyze user feedback to inform feature enhancements.
- Track product performance post-launch and gather insights for continuous improvement.
- Assist in managing the full product lifecycle, from ideation to rollout.
- Be a tech-savvy contributor, suggesting improvements based on emerging tools, platforms, and technologies.
Qualification:
- Bachelor’s degree in Business, Marketing, Computer Science, or a related field.
- 3+ years of hands-on experience in product management, product operations, or related roles.
- Comfortable working in fast-paced, cross-functional tech environments.
Required Skills:
- Strong analytical and problem-solving abilities.
- Clear, concise communication and documentation skills.
- Proficiency with project and product management tools (e.g., JIRA, Trello, Confluence).
- Ability to manage details without losing sight of the bigger picture.
Preferred Skills:
- Experience with Agile or Scrum workflows.
- Familiarity with UX/UI best practices and working with design systems.
- Exposure to APIs, databases, or cloud-based platforms is a plus.
- Comfortable with basic data analysis and tools like Excel, SQL, or analytics dashboards.
Who You Are:
- A doer who turns ideas into working solutions.
- A collaborator who thrives in tech teams and enjoys building alongside others.
- A catalyst who nudges things forward with curiosity, speed, and smart experimentation.
- A bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience working as a hands-on developer in Sybase, DB2, ETL technologies.
- Worked extensively on data integration, designing and developing reusable interfaces. Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, DB platforms, database design, and modeling.
- Expert level understanding of data warehouse, core database concepts and relational database design.
- Experience in writing stored procedures, optimization, and performance tuning. Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results
- Proven analytical skills and experience making decisions based on hard and soft data
- A desire and openness to learning and continuous improvement, both of yourself and your team members.
- Hands-on experience in the development of APIs is a plus
- Good to have: experience with Business Intelligence tools, Source-to-Pay applications such as SAP Ariba, and Accounts Payable systems
Skills Required:
- Familiarity with Postgres and Python is a plus
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.
Job Description: Data Engineer
Position Overview:
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
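As an illustration of the validate-transform-aggregate step such pipelines perform, here is a dependency-free Python sketch. In the actual role the extract and load ends would be AWS services (S3, Glue, Redshift) and the transform would typically use pandas or PySpark; the field names below are hypothetical.

```python
# Minimal, local stand-in for the transform step of an ETL pipeline.
from collections import defaultdict

def transform(rows):
    """Validate rows, coerce types, and aggregate amount per customer."""
    totals = defaultdict(float)
    for row in rows:
        if row.get("order_id") is None:   # validation: drop records missing the key
            continue
        totals[row["customer"]] += float(row["amount"])  # type coercion + aggregate
    return dict(totals)

raw = [
    {"order_id": 1, "customer": "a", "amount": "10"},
    {"order_id": 2, "customer": "b", "amount": "5"},
    {"order_id": None, "customer": "b", "amount": "99"},
    {"order_id": 3, "customer": "a", "amount": "2.5"},
]
summary = transform(raw)
```

In a production pipeline the same function body would run inside a Glue job or Lambda, with the validation rules driven by a data-quality/governance layer rather than hard-coded.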
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.
Database Architect & Engineer
Work Mode - Work from Home
Experience Required - 5+ Years
Immediate joiners are preferred.
Job description
Seeking a seasoned Database Architect with expertise in Oracle and/or PostgreSQL, system-wide migrations, and complex query creation and optimization. The ideal candidate should have at least 5 years of experience in database management and be able to work independently. Key responsibilities include defining the schema for a new product, managing and maintaining databases, planning and executing system-wide migrations, creating and optimizing complex queries, ensuring the security and integrity of the database, and providing training and support to team members. This role requires excellent communication and problem-solving skills, as well as a strong understanding of database architecture, design, and security.
Responsibilities:
- Design, implement, and maintain Oracle and PostgreSQL databases
- Expert in ETL tools, data integration, and migration processes
- Create and optimize complex SQL queries for performance and efficiency
- Develop and refine database functions, procedures, and packages
- Analyze and troubleshoot database performance issues, implementing tuning solutions
- Ensure database architecture aligns with application needs and best practices
- Collaborate with development teams on data models and database design
- Provide documentation, training and support to other team members
- Assure quality, security, and compliance requirements are met as part of project delivery
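To illustrate the query-optimization responsibility above, here is a minimal, self-contained sketch using SQLite's `EXPLAIN QUERY PLAN` so it runs anywhere; Oracle and PostgreSQL offer the analogous `EXPLAIN PLAN` / `EXPLAIN ANALYZE`. Table and index names are hypothetical.

```python
# Demonstrate how adding an index changes a query plan from a scan to a search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.0) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # index search on customer_id
```

The same before/after comparison, read from the optimizer's plan output, is the day-to-day workflow behind "create and optimize complex queries" on any RDBMS.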
Qualifications:
- 5+ years of experience with Oracle and/or PostgreSQL
- Experience in designing database schemas from scratch
- Experience with documentation such as ERDs and DB Relations
- Experience with system wide migrations
- Strong understanding of database architecture and design
- Experience with complex query creation and optimization
- Experience with performance tuning and troubleshooting
- Strong understanding of database security
- Experience with database backup and recovery
- Excellent communication and problem-solving skills
- Bachelor's degree in Computer Science or related field is a plus
- Experience in development of SaaS enterprise scale product
- Experience in working with globally distributed teams following Agile methodology and Scrum.
- Experience in working with modern CI/CD tools and the related ecosystem of applications - uDeploy, Jenkins, Artifactory, Maven, etc.
- Experience using source control tools like Git, SVN etc.
About Gyaan:
Gyaan empowers Go-To-Market teams to ascend to new heights in their sales performance, unlocking boundless opportunities for growth. We're passionate about helping sales teams excel beyond expectations. Our pride lies in assembling an unparalleled team and crafting a crucial solution that becomes an indispensable tool for our users. With Gyaan, sales excellence becomes an attainable reality.
About the Job:
Gyaan is seeking an experienced backend developer with expertise in Python, Django, AWS, and Redis to join our dynamic team! As a backend developer, you will be responsible for building responsive and scalable applications using Python, Django, and associated technologies.
Required Qualifications:
- 2+ years of hands-on experience programming in Python, Django
- Good understanding of CI/CD tools (Github Action, Gitlab CI) in a SaaS environment.
- Experience in building and running modern full-stack cloud applications using public cloud technologies such as AWS.
- Proficiency with at least one relational database system like MySQL, Oracle, or PostgreSQL.
- Experience with unit and integration testing.
- Effective communication skills, both written and verbal, to convey complex problems across different levels of the organization and to customers.
- Familiarity with Agile methodologies, software design lifecycle, and design patterns.
- Detail-oriented mindset to identify and rectify errors in code or product development workflow.
- Willingness to learn new technologies and concepts quickly, as the "cloud-native" field evolves rapidly.
Must Have Skills:
- Python
- Django Framework
- AWS
- Redis
- Database Management
Qualifications:
- Bachelor’s degree in Computer Science or equivalent experience.
If you are passionate about solving problems and have the required qualifications, we want to hear from you! You must be an excellent verbal and written communicator, enjoy collaborating with others, and welcome discussing a plan upfront. We offer a competitive salary, flexible work hours, and a dynamic work environment.

We are seeking a Production Support Engineer to join our team.
Responsibilities:
- Be the first line of defense for production and test environment issues.
- Work collaboratively with the team to identify, manage, and resolve ongoing incidents.
- Troubleshoot and connect with appropriate teams to effectively triage issues impacting test and production environments.
- Understand system architecture, upstream, and downstream dependencies to enable effective participation in triage and restoration activities.
- Perform systems monitoring of applications within the IRS domain after service restoration and post patching, maintenance, and upgrades.
- Create necessary service tickets and ensure tickets are routed to the appropriate technical teams.
- Provide weekend support for various activities including patching, release deployments, security updates, and 3rd party updates.
- Keep up with info alerts, patching alerts, and delivery partners' activities.
- Update stakeholders to plan for upcoming maintenance as well as alert them about service issues and restoration.
- Manage and communicate about upcoming maintenance in the test environment on a daily basis.
- Liaise with various stakeholders to gain approval for alert communications, including confirmation before an all-clear communication.
- Work closely with testing and development teams to prepare for infrastructure updates and release readiness.
- Submit Application Redirects tickets for planned maintenance after gaining approval from management.
- Participate in analysis and improvement of system performance.
- Host daily operational standup.
- Provide additional support to existing production support procedures and process improvements.
- Provide regular status reports to management on application status and other metrics.
- Collaborate with management to improve and customize reports related to production support.
- Plan and manage support for incident management tools and processes.
Requirements:
- Bachelor's Degree in computer science, engineering, or related field.
- AWS Cloud certification.
- 3+ years of relevant IT work experience with cloud experience.
- Knowledge of Java and microservice development and deployments.
- Understanding of the business processes behind applications.
- Strong analytical, problem-solving, negotiation, task and project management, and organizational skills.
- Strong oral and written communication skills, including process documentation.
- Proficiency in Microsoft Office applications (Word, PowerPoint, Excel, and Project).
- Knowledge of computer systems, databases, and SharePoint.
- Knowledge of Splunk and AppDynamics.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/gQWFK?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com