6+ Data processing Jobs in Bangalore (Bengaluru) | Data processing Job openings in Bangalore (Bengaluru)
1. Roadmap & Strategy (The "Why")
- Own the product roadmap for the Data Platform, prioritizing features like real-time ingestion, data quality frameworks, and self-service analytics.
- Translate high-level business questions (e.g., "We need to track customer churn in real-time") into technical requirements for ETL pipelines.
- Define Service Level Agreements (SLAs) for data freshness, availability, and quality.
2. Technical Execution (The "What")
- Write detailed technical Product Requirements Documents (PRDs) that specify source-to-target mappings, transformation logic, and API integration requirements.
- Collaborate with Data Engineers to decide on architecture trade-offs (e.g., Batch vs. Streaming, Build vs. Buy).
- Champion the adoption of Data Observability tools to detect pipeline failures before business users do (see the sketch after this list).
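As a toy illustration of the observability point above (not a prescribed tool or process), a freshness check along these lines could flag a stale table before consumers notice; the warehouse, table, column, and SLA threshold are assumptions for the example.

    from datetime import datetime, timedelta, timezone

    import snowflake.connector  # any warehouse client would do; Snowflake is only an example

    FRESHNESS_SLA = timedelta(hours=2)  # hypothetical SLA: data at most 2 hours old

    conn = snowflake.connector.connect(account="my_account", user="my_user", password="my_password")
    try:
        cur = conn.cursor()
        cur.execute("SELECT MAX(loaded_at) FROM ANALYTICS.ORDERS")  # hypothetical table/column
        last_loaded = cur.fetchone()[0]
        if last_loaded.tzinfo is None:  # normalize naive timestamps before comparing
            last_loaded = last_loaded.replace(tzinfo=timezone.utc)
        lag = datetime.now(timezone.utc) - last_loaded
        if lag > FRESHNESS_SLA:
            # In practice this would page on-call or post to a channel, not just print.
            print(f"Freshness SLA breach: ORDERS is {lag} behind (SLA {FRESHNESS_SLA})")
    finally:
        conn.close()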
3. Data Governance & Quality
- Define and enforce data modeling standards (Star Schema, Snowflake Schema).
- Ensure compliance with privacy regulations (GDPR/CCPA/DPDP) regarding how data is ingested, stored, and masked.
- Manage the "Data Dictionary" to ensure all stakeholders understand what specific metrics actually mean.
4. Stakeholder Management
- Act as the primary liaison between Data Producers (Software Engineers sending data) and Data Consumers (BI Analysts, Data Scientists).
- Manage dependencies: If the backend team changes a database column, ensure your ETL roadmap accounts for it.
What We Are Looking For
Technical "Must-Haves":
- SQL Mastery: You can write complex queries to explore data, validate transformations, and debug issues. You don't wait for an analyst to pull data for you.
- ETL Knowledge: Deep understanding of data integration concepts: Change Data Capture (CDC), Batching, Streaming, Upserts, and Idempotency.
- Data Modeling: You understand Dimensional Modeling, Data Lakes vs. Data Warehouses, and normalization/denormalization.
- API Fluency: You understand how to pull data from third-party REST/GraphQL APIs and handle rate limits/pagination (see the sketch after this list).
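To make the API-fluency expectation concrete, here is a minimal sketch of pulling records from a hypothetical paginated REST endpoint while respecting rate limits; the URL, parameter names, and response fields are illustrative assumptions, not any specific vendor's API.

    import time

    import requests

    def fetch_all_records(base_url, api_key, page_size=100):
        """Pull every page from a hypothetical paginated endpoint, backing off on HTTP 429."""
        records, page = [], 1
        headers = {"Authorization": f"Bearer {api_key}"}
        while True:
            resp = requests.get(
                base_url,
                headers=headers,
                params={"page": page, "per_page": page_size},
                timeout=30,
            )
            if resp.status_code == 429:
                # Respect the server's Retry-After hint if present, else wait 5 seconds.
                time.sleep(int(resp.headers.get("Retry-After", 5)))
                continue
            resp.raise_for_status()
            payload = resp.json()
            records.extend(payload.get("data", []))
            # Assumed pagination flag; real APIs may use cursors or Link headers instead.
            if not payload.get("has_more"):
                break
            page += 1
        return records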
Product Skills:
- Experience writing technical specs for backend/data teams.
- Ability to prioritize technical debt vs. new features.
- Strong communication skills to explain "Data Latency" to non-technical executives.
Preferred Tech Stack Exposure:
- Orchestration: Airflow, Dagster, or Prefect (a minimal Airflow sketch follows this list).
- Warehousing: Snowflake, BigQuery, or Redshift.
- Transformation: dbt (data build tool).
- Streaming: Kafka or Kinesis.
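For orientation only, here is a minimal sketch of what a daily pipeline might look like in Airflow (one of the orchestrators listed above); the DAG id, task names, and callables are hypothetical placeholders.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders():
        pass  # placeholder: pull yesterday's orders from the source system

    def load_orders():
        pass  # placeholder: load the extracted batch into the warehouse

    with DAG(
        dag_id="daily_orders_pipeline",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # older Airflow 2.x versions use schedule_interval instead
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_orders", python_callable=load_orders)
        extract >> load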
Who we are: My AI Client is building the foundational platform for the "agentic economy," moving beyond simple chatbots to create an ecosystem for autonomous AI agents. They aim to provide tools for developers to launch, manage, and monetize AI agents as "digital coworkers."
The Challenge
The current AI stack is fragmented, leading to issues with multimodal data, silent webhook failures, unpredictable token usage, and nascent agent-to-agent collaboration. My AI Client is building a unified, robust backend to resolve these issues for the developer community.
Your Mission
As a foundational member of the backend team, you will architect core systems, focusing on:
- Agent Nervous System: Designing agent-to-agent messaging, lifecycle management, and high-concurrency, low-latency communication.
- Multimodal Chaos Taming: Engineering systems to process and understand real-time images, audio, video, and text.
- Bulletproof Systems: Developing secure, observable webhook systems with robust billing, metering, and real-time payment pipelines.
What You'll Bring
My AI Client seeks an experienced engineer comfortable with complex systems and ambiguity.
Core Experience:
- Typically 3 to 5 years of experience in backend engineering roles.
- Expertise in Python, especially with async frameworks like FastAPI.
- Strong command of Docker and cloud deployment (AWS, Cloud Run, or similar).
- Proven experience designing and building microservice or agent-based architectures.
Specialized Experience (Ideal):
- Real-Time Systems: Experience with real-time media transport such as WebRTC and WebSockets, and with processing those streams.
- Scalable Systems: Experience in building scalable, fault-tolerant systems with a strong understanding of observability, monitoring, and alerting best practices.
- Reliable Webhooks: Knowledge of scalable webhook infrastructure with retry logic, backoffs, and security (see the sketch after this list).
- Data Processing: Experience with multimodal data (e.g., OCR, audio transcription, video chunking with FFmpeg/OpenCV).
- Payments & Metering: Familiarity with usage-based billing systems or token-based ledgers.
Your Impact
The systems designed by this role will form the foundation for:
- Thousands of AI agents for major partners across chat, video, and APIs.
- A new creator economy enabling developers to earn revenue through agents.
- The overall speed, security, and scalability of my client's AI platform.
Why Join Us?
- Opportunity to solve hard problems with clean, scalable code.
- Small, fast-paced team with high ownership and zero micromanagement.
- Belief in platform engineering as a craft and care for developer experience.
- Conviction that AI agents are the future, and a desire to build the platform that powers them.
- Dynamic, collaborative work environment in Bengaluru with a hybrid setup (2 days per week from the office).
- Meaningful equity in a growing, well-backed company.
- Direct work with founders and engineers from top AI companies.
- A real voice in architectural and product decisions.
- Opportunity to solve cutting-edge problems with no legacy code.
Ready to Build the Future?
My AI Client is building the core platform for the next software paradigm. Interested candidates are encouraged to apply with their GitHub, resume, or anything that showcases their thinking.
Job Location: Hyderabad / Bangalore / Chennai / Pune / Nagpur
Notice period: Immediate to 15 days
1. Python Developer with Snowflake
Job Description:
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on SQL experience with the ability to write complex queries.
- Strong understanding of how to connect to Snowflake from Python and handle any type of file.
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in loading data into Snowflake using Python (see the sketch after this list).
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning is an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Ability to interpret and analyze business requirements and functional specifications.
- Good to have: dbt, Fivetran, and AWS knowledge.
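As a rough illustration of the Python-to-Snowflake work described above, the sketch below uses the snowflake-connector-python package to connect, stage a local CSV, and bulk-load it; the account details, file path, and table names are placeholders, not a prescribed setup.

    import snowflake.connector

    # Placeholder credentials; in practice these come from a secrets manager, not source code.
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="ANALYTICS_WH",
        database="RAW",
        schema="SALES",
    )

    try:
        cur = conn.cursor()
        # Stage the local file in the table stage, then bulk-load it with COPY INTO.
        cur.execute("PUT file:///tmp/orders.csv @%ORDERS OVERWRITE = TRUE")
        cur.execute("COPY INTO ORDERS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
        print(cur.fetchall())  # COPY INTO returns one status row per loaded file
    finally:
        conn.close()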
A GCP Data Analyst profile must have the below skill sets:
- Knowledge of programming languages like SQL, Oracle, R, MATLAB, Java, and Python
- Data cleansing, data visualization, data wrangling
- Data modeling and data warehouse concepts
- Ability to adapt to big data platforms like Hadoop and Spark for stream and batch processing
- GCP (Cloud Dataproc, Cloud Dataflow, Cloud Datalab, Cloud Dataprep, BigQuery, Cloud Datastore, Cloud Data Fusion, AutoML, etc.); see the BigQuery sketch after this list
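For context on the BigQuery item above, a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are made up for the example, and credentials are assumed to come from Application Default Credentials.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # hypothetical project id

    query = """
        SELECT DATE(created_at) AS order_date, COUNT(*) AS orders
        FROM `my-analytics-project.sales.orders`  -- hypothetical table
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 30
    """

    for row in client.query(query).result():
        print(row.order_date, row.orders)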
About the Organization
Real Estate Syndicators leverage SyndicationPro to manage billions in real estate assets and thousands of investors. Growing at 17% MoM, SyndicationPro is the #1 platform to automate real estate fundraising and investor relations and close more deals!

What makes SyndicationPro unique is that it is cash-flow positive while maintaining a healthy growth rate, and it is backed by seasoned investors and real estate magnates. We are also a FirstPrinciples.io Venture Studio Company, giving us the product engineering, growth, and operational backbone in India to scale our team.

Our story has been covered by Bloomberg Business, Yahoo Finance, and several other renowned news outlets.
Why this Role
We are scaling our Customer Support & Success organization, and you will have the opportunity to see how systems and processes are built for a rapidly scaling SaaS company. Our Customer Support & Success team is led by experienced SaaS operators who can provide the guidance and mentorship to grow quickly. You will also have the opportunity for faster-than-normal promotion cycles.
Roles and Responsibilities
- Work on migrating sensitive customer data from a third-party investment platform or Excel to SyndicationPro with minimal supervision.
- Understand the clientās needs and set up the SyndicationPro platform to meet customer expectations.
- Analyze customer expectations and data to share an expected completion time.
- Manage multiple migrations at the same time, ensuring each is completed within the stipulated time.
- Work closely with internal and customer-facing teams to deep-dive into each customer's migration request and workflow, using our systems to ensure nothing falls to the bottom of the to-do list.
- Review data for deficiencies or errors, correct any incompatibilities, and check the output.
- Keep current on product releases and updates
Desired candidate profile:
- Proven data entry or migration work experience will be an asset
- Experience with MS Office and data programs
- Attention to detail
- Confidentiality
- Organization skills, with an ability to stay focused on assigned tasks
Expectations from the candidate:
- 0-2 years of experience; prior experience in SaaS environments will be an added advantage.
- Resolve migration processing problems by working with the technical team.
- Pay attention to detail to maintain data accuracy.
- Self-motivated and willing to excel under strict/short deadlines.
- Ability to multitask as needed, with strong time-management skills.
- Must be comfortable working during night shifts.
REQUIREMENT:
- Previous experience working in large-scale data engineering.
- 4+ years of experience working in data engineering and/or backend technologies; cloud experience (any) is mandatory.
- Previous experience architecting and designing backends for large-scale data processing.
- Familiarity and experience working with different data engineering technologies: various database technologies, Hadoop, Spark, Storm, Hive, etc. (see the Spark sketch after this list).
- Hands-on, with the ability to contribute a key portion of the data engineering backend.
- Self-inspired and motivated to drive exceptional results.
- Familiarity and experience working with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
- Familiarity and experience working with different DB technologies and how to scale them.
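To ground the Spark and large-scale processing requirement above, here is a hedged PySpark sketch of a simple batch aggregation; the input path, schema, and output location are assumptions for illustration only.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

    # Hypothetical raw event data; adjust the path and format to the real source.
    events = spark.read.json("s3://example-bucket/raw/events/")

    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_ts"))  # assumes an event_ts timestamp column
        .groupBy("event_date", "event_type")
        .agg(F.count(F.lit(1)).alias("events"))
    )

    # Write the refined, analysis-ready output back to storage.
    daily_counts.write.mode("overwrite").parquet("s3://example-bucket/refined/daily_event_counts/")

    spark.stop()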
RESPONSIBILITY:
- End-to-end responsibility for data engineering architecture, design, development, and implementation.
- Build data engineering workflows for large-scale data processing.
- Discover opportunities in data acquisition.
- Bring industry best practices to the data engineering workflow.
- Develop data set processes for data modeling, mining, and production.
- Take on additional technical responsibilities to drive an initiative to completion.
- Recommend ways to improve data reliability, efficiency, and quality.
- Go out of your way to reduce complexity.
- Humble and outgoing: engineering cheerleaders.

