50+ Python Jobs in India
About the Role
Pendo is looking for a Senior Engineering Manager to lead teams building core product capabilities across Analytics, Guides, and Platform services. These are the systems that power how hundreds of millions of end users experience software.
In this role, you will drive execution against business objectives, direct complex initiatives from kickoff through delivery, and build a team that operates with clarity and focus. You will set clear expectations, delegate effectively, and partner closely with product, design, and senior engineering leadership to keep teams aligned and moving. You default toward action, push teams to deliver value daily, and actively use AI tools as part of how you work.
If you're energized by directing high-impact teams, developing strong engineers, and building a culture where craft and velocity coexist, this role is a great fit.
What You'll Do
Team Leadership & Hiring
- Create an environment where engineers are encouraged to take risks, experiment, and challenge the status quo.
- Lead, mentor, and grow a team of engineers through clear expectations, coaching, and timely feedback.
- Own hiring end-to-end, partnering with recruiting to attract and close top engineering talent.
- Build an inclusive, high-performing team culture grounded in ownership, accountability, and continuous improvement.
Delivery & Execution
- Maintain a high bar for velocity, predictability, and quality.
- Own team execution against product and engineering goals.
- Partner with Product and Design to define roadmaps, scope work, and deliver high-quality outcomes.
- Identify and remove blockers, manage risks, and ensure strong planning and prioritization.
Technical Leadership
- Guide technical direction in partnership with senior engineers and tech leads.
- Shape architecture that drives delivery speed while preserving quality, reliability, and adaptability.
Cross-Functional Collaboration
- Work closely with product, design, infrastructure, and other engineering teams to deliver cohesive customer experiences.
- Align team priorities with broader organizational goals and strategy.
Operational Excellence
- Drive improvements in system reliability, performance, and scalability.
- Establish strong practices around monitoring, incident response, and continuous improvement.
What We're Looking For
- 8+ years of experience in software engineering.
- 3+ years of experience managing and growing engineering teams.
- Proven track record of hiring and building high-performing teams.
- Experience delivering complex, cross-functional initiatives in a product-driven environment.
- Strong technical foundation in backend, distributed systems, or full-stack development.
- Proven ability to lead teams through ambiguity and change while maintaining execution.
- Actively uses AI tools in day-to-day work and helps drive adoption across teams.
- Strong communication, organizational, and stakeholder management skills.
Nice to Have
- Experience working on analytics products, user-facing SaaS platforms, or data-intensive systems.
- Experience managing teams across both frontend and backend domains.
- Familiarity with modern cloud environments and scalable architectures.
- Experience working in distributed teams across multiple time zones.
Product Engineer (Full Stack) – AI & Healthcare
About Us
We are building a next-generation computational model of human biology to predict, prevent, and cure diseases at their source. By combining real-world biological data, diagnostics, and advanced modeling, we aim to make biology computable.
Our platform integrates at-home diagnostics, biomarker tracking, and lifestyle data (wearables, sleep, nutrition) to create a continuous, connected view of human health.
Role Overview
We are looking for a Product Engineer who combines strong engineering skills with product intuition and design sensibility. You will build high-quality, performant software that serves as the interface between individuals and their biology.
This is a high-ownership role where you will work closely with design and product teams to ship impactful features end-to-end.
Key Responsibilities
- Build and ship scalable, reliable full-stack systems
- Develop intuitive, high-quality user interfaces for complex biological data
- Collaborate closely with design to deliver polished user experiences
- Own features end-to-end: from ideation to production
- Optimize performance, responsiveness, and usability
- Work in a fast-paced, AI-first environment
Required Skills & Qualifications
- 2–4 years of experience building and shipping production systems
- Strong proficiency in full-stack development
- Experience with modern frontend frameworks (React preferred)
- Solid understanding of backend systems and APIs
- Strong product sense and attention to UI/UX detail
- Ability to work independently and make decisions quickly
Preferred Qualifications (Good to Have)
- Experience in startup environments
- Strong side projects or open-source contributions
- Experience with product analytics tools
- Exposure to AI/ML or agent-based systems
Tech Stack
- Frontend: TypeScript, React 18, Vite, Tailwind, TanStack, Radix
- Mobile: React Native, Nativewind
- Backend: Python (FastAPI, Pydantic)
- Database: Supabase, PostgreSQL
- AI/Tools: Braintrust, LiteLLM
- Analytics: Mixpanel, Amplitude, FullStory
Compensation & Benefits
- CTC: ₹30L – ₹35L (Base) + ESOPs
- Location: HSR Layout, Bengaluru (In-person)
- Work Schedule: 6 days/week (Mon–Sat)
- Joining: Immediate
Perks:
- Sponsored healthy meals (lunch & dinner)
- Gym subscription
- Learning & development budget
- Freedom to use tools/tech of your choice
How We Work
- AI-first mindset
- Founder-mode ownership
- Speed over process
- High trust, high autonomy
- Focus on learning velocity
Why Join Us?
- Work on one of the hardest problems in human history
- Build products at the intersection of AI, healthcare, and design
- Direct impact on improving human health outcomes
- High-growth, high-learning environment
Intern (GenAI - Python) - 3 Months Unpaid Internship
Job Title: GenAI Intern (Python) - 3 Months Internship (Unpaid)
Location: Ahmedabad (On-Site)
Duration: 3 Months
Stipend: Unpaid Internship
Company: Softcolon Technologies
About the Internship:
Softcolon Technologies is seeking a dedicated GenAI Intern who is eager to delve into real-world AI applications. This internship provides hands-on experience in Generative AI development, focusing on RAG systems and AI Agents. It is an ideal opportunity for individuals looking to enhance their skills in Python-based AI development through practical project involvement.
Eligibility:
- Freshers or currently pursuing BE (IT/CE) or related field
- Strong interest in Generative AI and real-world AI product development
Required Skills (Must Have):
- Basic knowledge of Python
- Basic understanding of Python web frameworks such as FastAPI and Django
- Familiarity with APIs and JSON
- Submission of resume, GitHub Profile/Project Portfolio, and any AI/Python project links
What You Will Learn (Internship Goals):
You will gain hands-on experience in:
- Fundamentals of Generative AI (GenAI)
- Building RAG (Retrieval-Augmented Generation) applications
- Working with Vector Databases and embeddings
- Creating AI Agents using Python
- Integrating LLMs such as OpenAI (GPT models), Claude, Gemini
- Prompt Engineering + AI workflow automation
- Building production-ready APIs using FastAPI
Responsibilities:
- Assist in developing GenAI-based applications using Python
- Support RAG pipeline implementation (embedding + search + response)
- Work on API integrations with OpenAI/Claude/Gemini
- Assist in building backend services using FastAPI
- Maintain project documentation and GitHub updates
- Collaborate with team members for tasks and daily progress updates
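The RAG pipeline in these responsibilities follows a simple shape: embed documents, retrieve the closest match for a query, then feed it to a model as context. A minimal sketch of the retrieval step, using toy bag-of-words vectors in place of real model embeddings (all names here are illustrative, not a specific library's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (real systems use model embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query -- the 'search' step of RAG."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "FastAPI builds production-ready Python APIs",
    "Vector databases store embeddings for retrieval",
]
context = retrieve("how are embeddings stored", docs)
# The retrieved context would then go into the LLM prompt (the 'response' step).
```

A vector database replaces the linear scan in `retrieve` with an approximate nearest-neighbor index, but the embed/search/respond flow is the same.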
Selection Process:
- Resume + GitHub portfolio screening
- Short technical discussion (Python + basics of APIs)
- Final selection by the team
Why Join Us?
- Practical experience in GenAI through real projects
- Mentorship from experienced developers
- Opportunity to work on portfolio-level projects
- Certificate + recommendation (based on performance)
- Potential for a paid role post-internship (based on performance)
How to Apply:
Share your resume and GitHub portfolio link via:
Python Embedded Engineer
📍 Location: Chennai
💼 Experience: 3+ Years
💰 Budget: ₹1.2 LPM
🔍 Role Overview
We are looking for a skilled Python Embedded Engineer with hands-on experience in embedded systems. The ideal candidate will work closely with cross-functional teams to design, develop, and optimize software solutions that interact with hardware platforms.
We prefer candidates who are reliable, strong communicators, and technically sound.
🛠️ Key Responsibilities
- Design, develop, and maintain robust Python-based applications
- Work on embedded system software development & integration
- Collaborate with hardware, firmware, and system teams
- Optimize software performance for embedded environments
- Debug, troubleshoot, and resolve system-level issues
- Participate in code reviews and follow coding standards
- Support testing, validation, and product release
✅ Required Skills
- 3+ years of experience in Python development
- Strong understanding of software development fundamentals
- Experience or exposure to embedded systems (mandatory)
- Knowledge of hardware-software interaction & communication protocols
- Familiarity with Git/version control systems
- Strong analytical, problem-solving & communication skills
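Hardware-software communication of the kind listed above often comes down to framing bytes with a checksum before they cross a serial or network link. A small illustrative sketch; the frame layout (start byte, length, payload, checksum) is invented for the example, not a specific device's protocol:

```python
def checksum(payload: bytes) -> int:
    """Single-byte additive checksum, as used by many simple serial protocols."""
    return sum(payload) & 0xFF

def frame(payload: bytes) -> bytes:
    """Wrap a payload as: start byte 0x7E, length, payload, checksum."""
    return bytes([0x7E, len(payload)]) + payload + bytes([checksum(payload)])

def parse(data: bytes) -> bytes:
    """Validate and unwrap a frame, raising on corruption."""
    if len(data) < 3 or data[0] != 0x7E:
        raise ValueError("bad start byte")
    length = data[1]
    payload = data[2:2 + length]
    if len(payload) != length or data[2 + length] != checksum(payload):
        raise ValueError("checksum mismatch")
    return payload

packet = frame(b"\x01\x02\x03")
assert parse(packet) == b"\x01\x02\x03"
```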
➕ Good to Have
- Experience with C/C++
- Knowledge of RTOS or Linux-based embedded systems
- Experience with microcontrollers, sensors, or device drivers
- Exposure to CI/CD pipelines & automated testing
🎯 Ideal Candidate
- Trustworthy and dependable
- Strong communication skills
- Ability to work in a cross-functional environment
Strong Junior GenAI / AI Backend Engineer Profiles
Mandatory (Experience 1) – Must have 1+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)
Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)
Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.
Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)
Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems
Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)
Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)
Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem
Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)
Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped for candidates from top-notch product companies
Mandatory (Exclusion) - Avoid candidates who are prompt engineers only, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.
Strong Senior GenAI / AI Backend Engineer Profiles
Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
Mandatory (Company) – Product companies / startups, preferably Series A to Series D
Mandatory (Note) - Candidate's overall experience should not exceed 7 years
Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
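The RAG chunking named in the criteria above is typically a sliding window over the document with some overlap, so retrieved context is not cut mid-thought. A schematic version (parameter names and the word-based split are illustrative, not a specific framework's API):

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word-based chunks with overlap, a common RAG preprocessing step."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final chunk reached; avoid emitting trailing fragments
    return chunks
```

Production pipelines usually chunk by tokens or sentences rather than words, then embed each chunk and index it in a vector DB; the overlap logic stays the same.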
Data Quality Engineer
Engineering - Hyderabad, Telangana
About Gradera — Digital Twin & Physical AI Platform
At Gradera, we are building a next-generation Digital Twin and Physical AI platform that enables enterprises to model, simulate, and optimize complex real-world systems. Our work brings together strategy, architecture, data, simulation, and experience design to power decision-making across large-scale operational environments such as manufacturing, logistics, and supply chain networks.
This platform-led initiative applies AI-native execution, advanced simulation, and governed orchestration to help organizations test scenarios, predict outcomes, and continuously improve performance. We operate with an enterprise-first mindset prioritizing reliability, transparency, and measurable business impact as we build intelligent systems that scale beyond a single industry or use case.
Data Quality Engineer
Overview
We are seeking a detail-oriented Data Quality Engineer to ensure the integrity, accuracy, and reliability of data powering our digital twin and AI platforms. You will design and implement data quality frameworks, build automated validation pipelines, and establish quality metrics that enable trusted, simulation-ready data products. This role is critical to ensuring that operational decisions and ML models are built on a foundation of high-quality, governed data.
Our core data quality stack includes:
Data Quality Frameworks
- Delta Live Tables expectations for declarative quality enforcement
- Great Expectations for comprehensive data validation
- Databricks data profiling and quality monitoring
Platform & Tools
- Databricks SQL and PySpark for quality checks at scale
- Unity Catalog for lineage tracking and governance compliance
- Python for custom validation logic and anomaly detection
Observability
- Quality metrics dashboards and alerting
- Data profiling and statistical analysis
- Anomaly detection and drift monitoring
Key Responsibilities
- Design and implement data quality frameworks using Delta Live Tables expectations and Great Expectations
- Build automated data validation pipelines that enforce quality standards at ingestion and transformation stages
- Develop data profiling processes to understand data distributions, patterns, and anomalies
- Define and track data quality metrics (completeness, accuracy, consistency, timeliness, validity)
- Implement anomaly detection mechanisms to identify data drift and quality degradation
- Create quality dashboards and alerting systems for proactive issue identification
- Collaborate with data engineers to embed quality checks into ETL/ELT pipelines
- Partner with data architects to establish data quality standards and governance policies
- Investigate and perform root cause analysis for data quality issues
- Document data quality rules, thresholds, and remediation procedures
- Support data certification processes for simulation-ready and ML-ready datasets
- Drive continuous improvement in data quality practices and tooling
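The quality metrics listed above (completeness, validity, and so on) reduce to simple ratios over a dataset. A toy stand-in for what frameworks like Great Expectations formalize; the rule format and column names below are invented for illustration, not the library's actual API:

```python
def completeness(rows: list[dict], column: str) -> float:
    """Fraction of rows where the column is present and non-null."""
    if not rows:
        return 1.0
    ok = sum(1 for r in rows if r.get(column) is not None)
    return ok / len(rows)

def validity(rows: list[dict], column: str, predicate) -> float:
    """Fraction of non-null values satisfying a validity rule."""
    values = [r.get(column) for r in rows if r.get(column) is not None]
    if not values:
        return 1.0
    return sum(1 for v in values if predicate(v)) / len(values)

rows = [
    {"sensor_id": "a1", "temp_c": 21.5},
    {"sensor_id": "a2", "temp_c": None},
    {"sensor_id": "a3", "temp_c": 880.0},  # physically implausible reading
]
assert completeness(rows, "temp_c") == 2 / 3
assert validity(rows, "temp_c", lambda t: -50 <= t <= 60) == 0.5
```

In a Lakehouse setting the same checks would run as Delta Live Tables expectations or Great Expectations suites at scale, with results feeding the dashboards and alerts described above.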
Preferred Qualifications
- 6+ years of experience in data engineering or data quality roles, with 3+ years focused on data quality
- Track record of implementing enterprise-scale data quality frameworks
- Experience with Lakehouse architectures (Delta Lake, Iceberg)
- Familiarity with real-time data quality monitoring for streaming pipelines
- Experience working in agile, cross-functional teams
Highly Desirable
- Experience with data quality for digital twin or simulation platforms
- Familiarity with operational state data validation and temporal consistency checks
- Experience with graph data quality validation (Neo4j or similar)
- Exposure to ML data quality (feature validation, training data quality)
- Experience with data observability platforms
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
Python Embedded Engineer
Location: Chennai
Experience: 3+ years
Budget: 1.2 LPM
We are looking for a skilled Python Embedded Engineer with 3+ years of experience; embedded systems experience is a must, and candidates should be trustworthy and strong communicators.
The ideal candidate will work closely with cross-functional teams to design, develop, and optimize software solutions that interact with hardware and embedded platforms.
Key Responsibilities
Design, develop, and maintain robust Python-based applications and tools.
Work on embedded system software development and integration.
Collaborate with hardware, firmware, and system teams for end-to-end solution development.
Optimize software performance for embedded environments.
Debug, troubleshoot, and resolve system-level issues.
Participate in code reviews and ensure adherence to coding standards.
Contribute to testing, validation, and product release activities.
Required Skills & Qualifications
3+ years of professional experience in Python development.
Strong understanding of software development fundamentals.
Experience or exposure to embedded systems is preferred.
Knowledge of hardware-software interaction and communication protocols.
Familiarity with Git or other version control systems.
Good analytical and problem-solving skills.
Strong communication and teamwork abilities.
Good to Have
Experience with C/C++.
Knowledge of RTOS or Linux-based embedded systems.
Experience with microcontrollers, sensors, or device drivers.
Exposure to CI/CD and automated testing frameworks.
We are looking for a strong Mobile Engineer with backend exposure who can own end-to-end feature development. This is a mobile-heavy full-stack role where you will primarily build scalable mobile applications while contributing to backend services and APIs.
Key Responsibilities
- Design and develop high-quality mobile applications (primary focus)
- Build and integrate RESTful APIs and backend services
- Collaborate with product and design teams to ship features end-to-end
- Ensure performance, scalability, and reliability of mobile apps
- Write clean, maintainable, and testable code
- Participate in architecture discussions and technical decision-making
Must Have Skills
- Strong experience in mobile development (Flutter / React Native / iOS / Android)
- Solid understanding of backend development (Node.js / Java / Python / Go)
- Experience with API design, microservices, and databases
- Good understanding of system design and app performance optimization
- Familiarity with cloud platforms (AWS/GCP)
Good to Have
- Experience working in startup environments
- Exposure to CI/CD pipelines and DevOps practices
- Understanding of real-time systems or scalable architectures
1️⃣ Generative AI System Design
- Architect and implement end-to-end LLM-powered applications
- Build scalable RAG pipelines (chunking, embeddings, hybrid search, reranking)
- Design and implement agent-based workflows (tool calling, multi-step reasoning, orchestration)
- Integrate LLM APIs such as OpenAI and Anthropic, along with open-source models
- Implement structured output validation, grounding strategies, and hallucination mitigation
- Optimize inference cost, latency, and token efficiency
- Design evaluation pipelines for performance, accuracy, and safety
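Structured output validation, mentioned above, usually means parsing the model's reply as JSON and checking required fields, with a fallback when the model drifts into free text. A minimal sketch; the field names and fallback policy are illustrative assumptions:

```python
import json

REQUIRED_FIELDS = {"answer": str, "confidence": float}

def parse_llm_output(raw: str):
    """Parse an LLM reply into a validated dict, or None if it fails checks."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict):
        return None
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            return None
    return data

def answer_with_fallback(raw: str) -> dict:
    """Use the validated output, or a safe default -- one simple hallucination guard."""
    parsed = parse_llm_output(raw)
    return parsed if parsed is not None else {"answer": "unable to answer", "confidence": 0.0}

good = answer_with_fallback('{"answer": "42", "confidence": 0.9}')
bad = answer_with_fallback("Sure! The answer is probably 42.")
```

Real systems often retry the model with a repair prompt before falling back, and use schema libraries (e.g. Pydantic) instead of hand-rolled type checks.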
2️⃣ Backend & Microservices Engineering
- Design scalable backend systems using Python
- Build REST and async APIs using FastAPI / Django
- Architect and implement microservices with clear service boundaries
- Implement service-to-service communication (REST, gRPC, event-driven messaging)
- Work with message brokers (Kafka / RabbitMQ)
- Optimize database performance (PostgreSQL, MongoDB)
- Implement caching strategies (Redis)
- Build observability: logging, monitoring, distributed tracing
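The Redis caching strategy listed above is most often cache-aside: check the cache, fall back to the database on a miss, then populate the cache. Sketched with a plain dict standing in for Redis (the store and the fetch function are stand-ins, not real infrastructure):

```python
cache: dict[str, str] = {}   # stand-in for a Redis instance
db_reads = 0                 # counts how often we hit the "database"

def fetch_from_db(key: str) -> str:
    """Pretend database lookup; expensive in a real system."""
    global db_reads
    db_reads += 1
    return f"value-for-{key}"

def get(key: str) -> str:
    """Cache-aside read: try the cache first, populate it on a miss."""
    if key in cache:
        return cache[key]
    value = fetch_from_db(key)
    cache[key] = value       # in Redis this write would also carry a TTL
    return value

get("user:1"); get("user:1"); get("user:2")
assert db_reads == 2  # second read of user:1 was served from cache
```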
3️⃣ Cloud-Native Architecture & DevOps
- Design and deploy containerized services using Docker
- Orchestrate services using Kubernetes
- Implement CI/CD pipelines
- Ensure system scalability, resilience, and fault tolerance
- Apply distributed systems principles:
- Circuit breakers
- API gateway patterns
- Load balancing
- Horizontal scaling
- Saga patterns
- Zero-downtime deployments
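Of the distributed-systems patterns above, the circuit breaker is the most mechanical: after N consecutive failures, stop calling the downstream service for a while instead of piling on. A compact sketch with an illustrative threshold; real breakers also half-open after a cooldown, which is omitted here:

```python
class CircuitBreaker:
    """Opens after `threshold` consecutive failures; callers then fail fast."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: downstream call skipped")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the breaker
        return result
```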
About Us
We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable.
Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.
We partner with companies of all sizes, from helping enterprises build, scale, and modernize, to helping early-stage founders bring their ideas to life.
Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk.
Our Guiding Principles
These principles define how we work at Incubyte. They are non-negotiable.
Relentless Pursuit of Quality with Pragmatism
We build high-quality systems without losing sight of delivery.
Extreme Ownership
We take responsibility end-to-end for decisions, execution, and outcomes.
Proactive Collaboration
We collaborate closely, challenge each other, and solve problems together.
Active Pursuit of Mastery
We continuously improve our craft and raise our bar.
Invite, Give, and Act on Feedback
We seek, give, and act on feedback to get better every day.
Ensuring Client Success
We act as trusted partners and focus on real outcomes, not just output.
Job Description
This is a remote position.
Experience Level
This role is ideal for engineers with 3–15 years of experience and a strong background in building secure, scalable platforms.
We are looking for hands-on DevOps and Backend Engineers with real-world experience in handling production incidents, distributed systems, and modern infrastructure challenges.
What You’ll Do as a Software Craftsperson
- Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
- Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
- Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
- Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
- Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
- Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability
Requirements
What You’ll Bring
5–15 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.
Strong hands-on expertise in DevOps and backend technologies including:
- Kubernetes, Terraform, and CI/CD pipelines
- Tools such as k9s, k3s (GitLab CI preferred)
- Backend technologies such as Go, Python, or Java
- Experience with Docker, gRPC, and Kubernetes-native services
Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)
Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.
Hands-on experience across multiple core functional areas, with exposure to at least five of the following:
- Identity & Access Management
- Observability (Prometheus + Grafana)
- CI/CD Pipelines
- Keycloak
- GitLab CI
- Terraform OSS
- Kubernetes ecosystem tools
Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges
Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security
Benefits
Life at Incubyte
We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.
Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.
Perks
Dedicated learning & development budget
Sponsorship for conference talks
Comprehensive medical & term insurance
Employee-friendly leave policies
Home Office fund
Job Description
Experience: 3–6 Years
Location: Bangalore (must be willing to travel to Indonesia for 6 months to 1 year)
Role Overview
We are looking for a skilled Agentic Engineer / Data Scientist with hands-on experience in designing, architecting, and developing advanced agentic systems and intelligent solutions. The ideal candidate will have a strong foundation in modern AI frameworks and experience building scalable, production-grade AI systems.
Key Responsibilities
- Design and develop agentic systems and AI-driven solutions
- Architect scalable workflows for multi-agent systems
- Implement and optimize RAG (Retrieval-Augmented Generation) pipelines
- Work with vector embeddings and vector databases
- Build and manage agent orchestration frameworks
- Develop end-to-end AI workflows, including data ingestion, processing, and inference
- Collaborate with cross-functional teams to deliver AI-powered solutions
- Continuously evaluate and integrate emerging AI tools and frameworks
Required Skills & Qualifications
- 3–6 years of experience in Data Science, Machine Learning, or AI Engineering
- Strong experience in agent engineering and agent-based architectures
- Hands-on experience with:
- RAG pipelines
- Vector embeddings and semantic search
- Multi-agent orchestration and workflow design
- Proficiency in Python and relevant ML/AI libraries
- Experience with:
- LangChain (mandatory)
- LangGraph (mandatory)
- Solid understanding of LLMs and prompt engineering
- Strong problem-solving and system design skills
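Multi-agent orchestration frameworks such as LangChain and LangGraph differ in detail, but the loop they formalize is the same: a planner chooses a tool, the tool runs, and its result feeds the next step. The sketch below shows only that loop; every name in it is invented for illustration and none of it is the LangGraph API:

```python
# Registry of callable tools the "agent" may choose from.
TOOLS = {
    "multiply": lambda a, b: a * b,
    "add": lambda a, b: a + b,
}

def run_agent(plan: list) -> list:
    """Execute a plan of (tool_name, args) steps and collect the results.

    A real agent would have an LLM pick each step based on the prior
    step's output; here the plan is fixed so the loop itself is visible.
    """
    results = []
    for tool_name, args in plan:
        tool = TOOLS[tool_name]
        results.append(tool(*args))
    return results

trace = run_agent([("add", (2, 3)), ("multiply", (5, 4))])
assert trace == [5, 20]
```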
Strong Senior Full Stack Engineer profiles
Mandatory (Experience 1) – Must have 4+ years of hands-on full stack engineering experience (avoid frontend-heavy-only profiles; backend-heavy will work)
Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments
Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces
Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries
Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices
Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes
Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks
Mandatory (Company) - Must have worked in product-based companies (early-stage startups from Seed to Series C/D with a fast-paced shipping culture preferred)
Mandatory (Education) - Strong CS fundamentals required (CS degree or equivalent). Candidates from Tier-1 institutes (IITs, BITS, IIITs) are preferred but not mandatory
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode) for coding.
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (Preferably Top product companies, AI native companies, B2B SaaS)
Mandatory (Stability): Must have at least 2 years of experience at each previous company (if shorter, a clear reason is required)
Mandatory (Note): Candidates who have owned end-to-end product development or worked on app development projects during their graduation will be highly preferred.
Mandatory (Note 2): The role offers a mix of work setups, including remote, Mumbai (in-office), and Bangalore (in-office) opportunities

About the Role
We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.
Key Responsibilities
- Design, develop, and deploy machine learning models for real-world business problems
- Work on end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring
- Implement and manage MLOps pipelines for scalable and reproducible workflows
- Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management
- Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications
- Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions
- Optimize model performance and ensure production stability
- Stay updated with the latest advancements in AI/ML and GenAI ecosystems
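Experiment tracking of the kind MLflow provides boils down to recording parameters and metrics per run so results are reproducible and comparable. A toy stand-in for that idea; this is not MLflow's API, only the shape of what it records:

```python
runs: list = []  # MLflow would persist these to a tracking server

def log_run(params: dict, metrics: dict) -> dict:
    """Record one experiment run's hyperparameters and resulting metrics."""
    run = {"run_id": len(runs) + 1, "params": params, "metrics": metrics}
    runs.append(run)
    return run

def best_run(metric: str) -> dict:
    """Pick the run with the highest value of a metric, e.g. for model promotion."""
    return max(runs, key=lambda r: r["metrics"][metric])

log_run({"lr": 0.1, "depth": 4}, {"auc": 0.81})
log_run({"lr": 0.01, "depth": 6}, {"auc": 0.87})
assert best_run("auc")["params"]["lr"] == 0.01
```

In MLflow proper the same flow is `log_param`/`log_metric` inside a run context, with the model registry handling versioning and promotion.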
Required Skills & Qualifications
· 4+ years of experience in Data Science / Machine Learning
· Strong programming skills in Python
· Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)
· Solid understanding of MLOps practices and tools
· Experience with MLflow or similar model lifecycle tools
· Practical experience in Generative AI (GenAI), including working with LLMs
· Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch
· Strong understanding of data structures, algorithms, and statistics
· Experience with cloud platforms (AWS/GCP/Azure) is a plus
Good to Have
· Experience with LLM fine-tuning, prompt engineering, or RAG pipelines
· Exposure to Docker, Kubernetes, and CI/CD pipelines
· Knowledge of data engineering workflows
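The RAG bullet above boils down to ranking documents by embedding similarity before handing the top matches to an LLM. A minimal, dependency-free sketch of the retrieval step, with toy 3-dimensional vectors standing in for real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Invented corpus of (id, embedding) pairs; a real pipeline embeds text chunks.
corpus = [
    ("refunds", [0.9, 0.1, 0.0]),
    ("shipping", [0.1, 0.9, 0.2]),
    ("billing", [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.1], corpus))  # ['refunds', 'billing']
```

Vector databases like pgvector or Qdrant do the same ranking, but with approximate-nearest-neighbour indexes so it scales past brute force.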
Senior Data (Platform) Engineer
Location: Hyderabad | Department: Technology, Data
About the Role
Are you passionate about building reliable, scalable data platforms that make analytics and AI development easier? As a Senior Data Platform Engineer, you will be hands-on in building, operating, and improving our core data platform and AI/LLM enablement tooling.
You’ll focus on infrastructure, orchestration, CI/CD, and reusable frameworks that support analytics engineering and AI-driven use cases. You’ll work closely with Analytics Engineering and Insights teams and support other departments as they integrate with our data systems.
What You'll Do
Data Platform & Infrastructure
- Build, deploy, and operate cloud infrastructure for data and AI workloads using Infrastructure as Code (Terraform).
- Provision and manage cloud resources across development, staging, and production environments.
- Develop and maintain CI/CD pipelines for data transformations, orchestration workflows, and platform services.
- Operate and scale containerized workloads on Kubernetes, including Airflow, internal APIs, and AI/LLM services.
- Troubleshoot and resolve infrastructure, pipeline, and orchestration failures to ensure platform reliability.
- Maintain and support existing ML services and pipelines to ensure stability and reliability (no expectation to design or develop new ML models or training pipelines).
- Continuously monitor and optimize platform performance and cost.
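The Infrastructure-as-Code bullet above can be illustrated with a small Terraform fragment; the bucket name, variable, and tags here are assumptions for the sketch, not the team's actual configuration:

```terraform
# Hypothetical layout: one data-lake bucket per environment, driven by a variable.
variable "environment" {
  type = string # e.g. "dev", "staging", or "prod"
}

resource "aws_s3_bucket" "data_lake" {
  bucket = "example-data-lake-${var.environment}" # assumed naming convention
  tags = {
    ManagedBy   = "terraform"
    Environment = var.environment
  }
}
```

The same variable then parameterizes every other resource, so promoting a change from staging to production is a plan/apply with a different input, not a hand edit.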
Framework, Tooling and Enablement
- Build and maintain reusable frameworks and patterns for dbt, Airflow, cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), and internal data and AI APIs
- Build and support infrastructure and pipelines for AI/LLM-based use cases, including orchestration, integration, and serving.
- Improve developer experience for Analytics Engineering and Insights teams by reducing friction in local development, deployments, and production workflows.
- Create and maintain technical documentation and examples to support self-service analytics and data development.
What You’ll Need
Technical Skills & Experience
- 5+ years of experience in data engineering, platform engineering, or similar hands-on roles.
- Strong programming skills in Python and SQL.
- Hands-on experience with:
- Terraform
- Airflow
- dbt
- Kubernetes
- Cloud platforms (AWS, Google Cloud, or Microsoft Azure)
- CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, etc.)
- Cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
- Strong understanding of analytical data models and how analytics teams consume data.
- Experience integrating and operating LLM-based pipelines and services (not model training).
Soft Skills & Collaboration
- Strong problem-solving skills and ability to debug complex platform issues.
- Strong preference for declarative development, with the ability to clearly separate what a system should do from how it is implemented.
- Clear communicator who can work effectively with both technical and non-technical stakeholders.
- Pragmatic, ownership-driven mindset with a focus on reliability and simplicity.
Why Join Us?
We welcome people from all backgrounds who want to help build a future where we connect the dots for international property payments. If you bring curiosity, passion, and a collaborative spirit, come work with us and let's move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Job Title: Senior Full-Stack Developer (Tech Lead)
Experience: 5–8 Years
Location: Ahmedabad (Hybrid / On-site preferred)
Salary: Flexible for the right candidate
Note: This role is strictly for Ahmedabad-based (local) candidates. On-site presence is mandatory.
Role Overview
We are seeking a highly skilled Senior Full-Stack Developer (Tech Lead) to lead technical delivery and oversee end-to-end system execution. The ideal candidate will take ownership of architecture, ensure high-quality engineering standards, mentor junior team members, and effectively communicate with clients and stakeholders.
This role is best suited for a hands-on professional who enjoys solving complex problems, building scalable systems, and taking full responsibility for technical outcomes.
Key Responsibilities
- Design, develop, and maintain scalable backend and frontend systems
- Own system architecture and technical decision-making
- Lead code reviews and enforce clean, modular, and maintainable code practices
- Collaborate with clients to understand requirements and provide technical solutions
- Mentor and guide junior developers to improve overall team performance
- Ensure reliability, performance, and security of applications
- Drive best practices across development, deployment, and CI/CD workflows
- Design and integrate Generative AI–powered features where applicable (e.g., chatbots, content generation, automation tools)
Required Technical Skills
Backend (Must-Have)
- Strong experience with Node.js (Express.js / NestJS)
- RESTful API design and implementation
- Database design, optimization, and performance tuning
- Experience with PostgreSQL / MySQL
- Hands-on experience with MongoDB or other NoSQL databases
- Authentication and authorization mechanisms (JWT, OAuth2, RBAC)
- Willingness to learn Python and its frameworks (Django / FastAPI)
- Basic Python knowledge is an added advantage
Frontend (Must-Have)
- Experience with React.js or Next.js
- Strong knowledge of JavaScript / TypeScript
- Component-driven architecture and reusable UI patterns
- State management using Redux / Zustand / Context API
- Responsive UI development using MUI, Ant Design, or Tailwind CSS
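The authentication bullet above names JWT, OAuth2, and RBAC. The mechanics of an HS256 JWT fit in a few lines; this posting's stack is Node.js, but the scheme is language-agnostic, so here is an illustrative stdlib-only Python sketch (the secret and claims are hypothetical, and a real service should use a vetted library such as jsonwebtoken or PyJWT rather than hand-rolled crypto):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as the JWT compact format requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce a compact HS256 token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    """Check the signature only; real middleware also validates claims like exp."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)

token = sign_jwt({"sub": "user-42", "role": "admin"}, "dev-secret")
print(verify_jwt(token, "dev-secret"))   # True
print(verify_jwt(token, "wrong-secret")) # False
```

RBAC then becomes a check of the verified `role` claim against the permissions a route requires.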
Engineering Practices
- Proficient with Git, GitHub/GitLab
- Understanding of CI/CD pipelines
- Experience with Docker and containerization
- Familiarity with clean architecture and modular design patterns
Bonus Skills (Nice to Have)
- Microservices architecture
- Experience with Prisma ORM
- DevOps exposure (AWS, EC2, Vercel, Docker, Nginx)
- Caching solutions such as Redis
- Queue systems (Celery, Kafka, RabbitMQ)
- Exposure to AI / LLM integrations
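The queue-systems bullet above (Celery, Kafka, RabbitMQ) shares one core pattern: producers enqueue jobs and workers consume them asynchronously. A minimal in-process Python sketch of that pattern using the stdlib `queue` module (a real deployment uses a broker and separate worker processes):

```python
import queue
import threading

def worker(tasks: queue.Queue, results: list):
    """Consume jobs until a None sentinel arrives, like a Celery/RQ worker loop."""
    while True:
        job = tasks.get()
        if job is None:
            break  # sentinel: shut the worker down
        results.append(job())
        tasks.task_done()

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

# Producer side: enqueue three invented jobs, then signal shutdown.
for n in (1, 2, 3):
    tasks.put(lambda n=n: n * n)
tasks.put(None)
t.join()
print(sorted(results))  # [1, 4, 9]
```

The value of the pattern is decoupling: the producer returns immediately while slow work (emails, report generation, LLM calls) happens off the request path.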
Soft Skills
- Strong sense of ownership and accountability
- Excellent English communication skills (verbal and written)
- Proven ability to mentor and lead junior developers
- Strong analytical and problem-solving mindset
- Reliable, consistent, and delivery-focused
- Leadership maturity and professionalism
Job Title: Data Engineer
About the Role
We are looking for a highly motivated Data Engineer to join our growing team and play a critical role in shaping the data foundation of different software platforms. This role sits at the intersection of data engineering, product, and business stakeholders, and is responsible for building reliable data pipelines, delivering actionable insights, and ensuring data quality across systems.
You will work closely with internal teams and external partners to translate business requirements into scalable data solutions, while maintaining high standards for data integrity, performance, and usability.
Key Responsibilities
Data Engineering & Architecture
- Design, build, and maintain scalable data pipelines and ETL/ELT processes
- Develop and optimize data models in PostgreSQL and cloud-native architectures
- Work within the AWS ecosystem (e.g., S3, Lambda, RDS, Glue, Redshift) to support data workflows
- Ensure efficient ingestion and processing of large-scale datasets
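The pipeline responsibilities above follow the classic extract-transform-load shape. A compact Python sketch of that shape (sqlite3 stands in for PostgreSQL; the source rows and schema are invented for illustration):

```python
import sqlite3

def extract():
    # Stand-in for pulling raw rows from S3, an API, or a partner feed.
    return [{"order_id": 1, "amount": "19.99"}, {"order_id": 2, "amount": "5.00"}]

def transform(rows):
    # Cast types and derive a flag; in production, bad rows go to a dead-letter path.
    return [(r["order_id"], float(r["amount"]), float(r["amount"]) >= 10) for r in rows]

def load(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, is_large INTEGER)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

In an ELT variant, `transform` moves into the warehouse (e.g., as dbt SQL models) and `load` happens first against raw staging tables.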
Business & Partner Integration
- Collaborate directly with business stakeholders and external partners to gather requirements and deliver reporting solutions
- Translate ambiguous business needs into structured data models and dashboards
- Integrate with third-party APIs and other external data sources
Data Quality & Governance
- Implement robust data validation, monitoring, and QA processes
- Ensure consistency, accuracy, and reliability of data across the platform
- Troubleshoot and resolve data discrepancies proactively
Reporting & Analytics Enablement
- Build datasets and pipelines that power dashboards and reporting tools
- Support internal teams with ad hoc analysis and data requests
- Partner with product and engineering teams to embed data into the SaaS product experience
Performance & Scalability
- Optimize queries, pipelines, and storage for performance and cost efficiency
- Continuously improve system scalability as data volume and complexity grow
Required Qualifications
- 3–6+ years of experience in Data Engineering or a related role
- Strong proficiency in Python for data processing and scripting
- Advanced experience with PostgreSQL (query optimization, schema design)
- Hands-on experience with AWS data architecture (S3, RDS, Lambda, Glue, Redshift, etc.)
- Experience integrating with external APIs
- Solid understanding of ETL/ELT pipelines, data modeling, and warehousing concepts
- Experience working cross-functionally with business stakeholders
Preferred Qualifications
- Experience in AdTech, eCommerce, or SaaS platforms
- Familiarity with BI tools (e.g., Looker, Tableau, Power BI)
- Experience with workflow orchestration tools (e.g., Airflow)
- Understanding of data governance and compliance best practices
- Exposure to real-time or streaming data pipelines
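Streaming pipelines, mentioned in the preferred qualifications, commonly aggregate events into fixed time windows. A dependency-free sketch of tumbling-window counting (the timestamps and event payloads are invented; engines like Kafka Streams or Flink add watermarking and fault tolerance on top of this idea):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Bucket an event stream into fixed, non-overlapping windows by event time."""
    counts = defaultdict(int)
    for ts, user_id in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

# (timestamp_seconds, user_id) pairs standing in for a real event stream.
events = [(5, "a"), (42, "b"), (61, "a"), (119, "c"), (120, "a")]
print(tumbling_window_counts(events))  # {0: 2, 60: 2, 120: 1}
```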
What We're Looking For
- Strong problem-solver who can operate in a fast-paced, ambiguous environment
- Ability to balance technical depth with business context
- Excellent communication skills: able to work directly with non-technical stakeholders
- Ownership mindset with a focus on execution and quality
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
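The caching strategy in item 7 often starts with caching LLM responses by prompt. A stdlib-only TTL cache sketch; `call_model` here is a hypothetical stand-in for a real OpenAI/Anthropic client call, and the TTL value is arbitrary:

```python
import hashlib
import time

class PromptCache:
    """Cache LLM responses keyed by a hash of the prompt, with a time-to-live."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # hash -> (inserted_at, response)

    def get_or_call(self, prompt: str, call_model):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]                    # cache hit: no token spend
        result = call_model(prompt)          # cache miss: one real API call
        self._store[key] = (time.monotonic(), result)
        return result

calls = []
def fake_model(prompt):
    """Invented stand-in for an LLM API; records how often it is invoked."""
    calls.append(prompt)
    return f"echo: {prompt}"

cache = PromptCache(ttl_seconds=60)
cache.get_or_call("summarise this", fake_model)
cache.get_or_call("summarise this", fake_model)  # served from cache
print(len(calls))  # 1
```

In production the same shape typically sits in Redis (per the Core Skills list) so the cache is shared across workers.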
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
Description
Company is a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work a large spectrum of technology and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
- You will work to develop delivery estimates and an estimated project plan.
- You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track, and you should be able to unblock the team when things are stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab, etc
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communication skills
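Packaging and deploying applications to k8s, as the requirements above describe, usually starts from a Deployment manifest. An illustrative sketch only; the image name, port, and resource requests are assumptions, not a client configuration:

```yaml
# Hypothetical manifest for a containerized Python service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api
          image: registry.example.com/api-service:1.0.0  # assumed registry/tag
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

A Service and Ingress would sit in front of this, and in practice the manifest is usually templated with Helm or Kustomize per environment.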
We are a fully remote company and offer competitive compensation and benefits.
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to easily craft, edit, and distribute audio content, from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: Minimum of 2+ years of proven experience as a Full Stack Developer or similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
Job Title: Senior AI/ML Engineer/Team Lead
Location: Gurugram, Haryana
Employment Type: Full-Time
Experience: 4-9 Years
CTC: Up to 15 LPA
About Aaizel Tech
Aaizel Tech is a pioneering tech startup at the intersection of cybersecurity, AI, geospatial solutions, and more. We drive innovation by delivering transformative technology solutions across industries. As a growing startup, we are looking for passionate and versatile professionals eager to work on cutting-edge projects in a dynamic environment.
Role Overview
As a Senior AI/ML Engineer at Aaizel Tech, you will lead the design, development, and deployment of advanced Machine Learning models and AI solutions. You will work on projects ranging from predictive analytics and NLP to computer vision and anomaly detection. You will also mentor a team of AI/ML professionals, collaborate with cross-functional teams, and drive innovation by integrating state-of-the-art research with scalable production systems.
Key Responsibilities
1. Model Development & Optimization
Design & Implementation:
- Architect and develop end-to-end ML solutions for applications such as predictive analytics, anomaly detection, computer vision, and NLP.
- Utilize advanced techniques including deep learning (CNNs, RNNs), reinforcement learning, and generative models (GANs) to address complex challenges.
Optimization:
- Fine-tune model parameters using techniques such as hyperparameter tuning (Grid Search, Bayesian Optimization, Neural Architecture Search).
- Optimize models for both accuracy and inference speed to meet real-time processing requirements.
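Hyperparameter tuning via grid search, named above, reduces to scoring every combination of a parameter grid and keeping the best. A dependency-free sketch; the objective function here is invented and stands in for cross-validated accuracy:

```python
import itertools

def grid_search(score_fn, grid):
    """Exhaustively score every combination in `grid`; return the best one."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: peaks at lr=0.01, depth=5 (a real one would run cross-validation).
def score_fn(p):
    return -((p["lr"] - 0.01) ** 2) - 0.1 * abs(p["depth"] - 5)

grid = {"lr": [0.001, 0.01, 0.1], "depth": [3, 5, 7]}
best, _ = grid_search(score_fn, grid)
print(best)  # {'lr': 0.01, 'depth': 5}
```

Bayesian optimization and neural architecture search, also listed, replace the exhaustive loop with a model of the objective so far fewer evaluations are needed.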
2. Advanced Data Engineering & Integration
Data Pipeline Development:
- Build robust ETL pipelines using libraries like Pandas, NumPy, and PySpark to process large-scale datasets from satellite imagery, IoT sensors, and real-time streams.
- Integrate data from diverse sources (APIs, databases, big data platforms like Hadoop and Apache Kafka) to support real-time analytics.
Data Quality & Preprocessing:
- Implement data cleansing, feature engineering, and transformation pipelines to ensure high-quality inputs for ML models.
3. Research & Innovation
Algorithm Research:
- Conduct research on state-of-the-art ML techniques including Transfer Learning, Transformer models, and AutoML to enhance model performance.
- Innovate new algorithms for specialized tasks such as geospatial analysis, environmental modeling, or cybersecurity threat detection.
Prototyping & Experimentation:
- Develop proof-of-concept models and prototypes to validate new approaches before production deployment.
4. Deployment, MLOps & Performance Monitoring
Model Deployment:
- Deploy models using containerization (Docker) and orchestration tools (Kubernetes) to ensure scalable and efficient production environments.
- Work with cloud platforms (AWS, Azure, GCP) and model serving solutions (TensorFlow Serving, ONNX, TorchServe) for high-throughput inference.
MLOps & Lifecycle Management:
- Implement CI/CD pipelines for ML models, ensuring seamless updates and versioning.
- Develop monitoring dashboards (using Prometheus, Grafana) to track model performance and trigger retraining based on real-time feedback.
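Retraining triggered by real-time feedback, as described above, can be reduced to a rolling-metric check against a training-time baseline. An illustrative stdlib sketch; the baseline, window size, and tolerance values are arbitrary choices for the example:

```python
from collections import deque

class DriftMonitor:
    """Flag retraining when the rolling metric drops below a fraction of baseline."""

    def __init__(self, baseline: float, window: int = 5, tolerance: float = 0.95):
        self.baseline = baseline
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def observe(self, metric: float) -> bool:
        """Record one production evaluation; return True if retraining should trigger."""
        self.recent.append(metric)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough samples for a stable rolling mean yet
        rolling = sum(self.recent) / len(self.recent)
        return rolling < self.baseline * self.tolerance

monitor = DriftMonitor(baseline=0.90, window=3)
for f1 in [0.91, 0.89, 0.83, 0.80]:
    trigger = monitor.observe(f1)
print(trigger)  # True: rolling mean of the last 3 scores is 0.84 < 0.855
```

In a Prometheus/Grafana setup, the rolling metric becomes a gauge and the threshold becomes an alerting rule that kicks off the retraining pipeline.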
5. Collaboration & Leadership
Cross-Functional Teamwork:
- Collaborate closely with data engineers, software developers, domain experts, and product managers to integrate AI solutions into end-to-end products.
Mentorship & Code Quality:
- Provide technical leadership and mentorship to junior AI/ML engineers, ensuring adherence to coding standards and best practices.
- Participate in code reviews, maintain detailed documentation, and foster a culture of continuous learning.
Recommended Technology Stack
Backend Framework:
- Python (Django/FastAPI): Ideal for API integration, leveraging Python’s rich AI/ML ecosystem.
AI/ML Frameworks:
- PyTorch + Hugging Face Transformers + scikit-learn: For flexibility in research, multilingual NLP tasks, and classical ML pipelines.
Data Engineering:
- Apache Kafka + Apache Spark + Apache NiFi: To handle both real-time data streaming and batch processing.
Database & Storage:
- PostgreSQL with TimescaleDB extension: For structured and time-series data storage.
DevOps & Monitoring:
- Docker, Kubernetes, GitLab CI/CD, Prometheus/Grafana: For containerized deployments, continuous integration, and comprehensive monitoring.
Media Processing:
- OpenCV, FFmpeg, Tesseract OCR, Wav2Vec2: To support image, video, and speech-to-text processing where needed.
Required Skills & Qualifications
Technical Expertise:
- Experience:
- 4+ years in Machine Learning, AI research, or a related field with a proven track record of delivering production-level AI solutions.
- Programming & Frameworks:
- Expertise in Python and hands-on experience with frameworks like PyTorch, TensorFlow, and scikit-learn.
- Experience with Hugging Face Transformers for NLP applications.
- Data Engineering:
- Proficiency in building data pipelines using Pandas, NumPy, PySpark, and integrating data from diverse sources.
- Familiarity with big data platforms and real-time data processing frameworks.
- Model Deployment & MLOps:
- Hands-on experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines for ML models.
- Experience with cloud deployment and model serving solutions.
- Research & Innovation:
- Demonstrated ability to apply advanced ML techniques (deep learning, transfer learning, reinforcement learning) to solve real-world problems.
- Testing & Optimization:
- Strong background in model evaluation, hyperparameter tuning, and performance optimization.
Soft Skills:
- Exceptional problem-solving and analytical abilities.
- Strong communication skills, with the ability to present complex technical concepts to diverse stakeholders.
- Leadership and mentoring experience, with a collaborative approach to working in cross-functional teams.
- Ability to thrive in a fast-paced, dynamic environment and drive continuous innovation.
Educational Background:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field from a reputed institution.
What We Offer
- Innovative Projects: Engage in cutting-edge AI/ML projects that influence product strategy and technological innovation.
- Professional Growth: Opportunities for continuous learning, mentorship, and career advancement.
- Collaborative Culture: Work within a diverse team of experts passionate about pushing the boundaries of technology.
- Impactful Work: Play a key role in shaping AI-driven solutions and driving real-world impact.
Join Aaizel Tech as a Senior AI/ML Engineer and lead the development of innovative, scalable AI solutions that transform industries and drive digital excellence!
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, without compromising on quality.
Who we seek
We are looking for a Fullstack Developer Intern to join our Engineering team. You'll build and improve internal products. This is a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive and innovative AI-based software solutions that meet our business needs.
We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Build user-facing features using Next.js and TypeScript.
- Convert designs into responsive UI using Tailwind CSS and reusable components.
- Work with APIs to integrate frontend with backend services.
- Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
- Fix bugs, write clean code, and improve performance.
- Collaborate in a PR-based workflow on GitHub.
- Write and maintain documentation for the features you ship.
- Learn and apply best practices: component structure, state management, error handling, accessibility basics.
What We’re Looking For
- Basic to intermediate experience with JavaScript and NextJS.
- Familiarity with TypeScript basics.
- Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
- Understanding of how APIs work and how to consume them from the frontend.
- Strong Git knowledge.
- Strong learning mindset, ownership, and attention to detail.
Benefits
- Work directly with founders and the leadership team.
- Drive projects that create real business impact, not busywork.
- Gain practical skills that traditional education misses.
- Experience rapid growth as you tackle meaningful challenges.
- Fuel your career journey with continuous learning and advancement paths.
- Thrive in a workplace where collaboration powers innovation daily.
Description
We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and develop internal tools used by our development team.
We follow these practices: unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.
We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.
You should enjoy challenges, exploring new fields, and finding solutions to problems.
You will be responsible for coordinating, automating, and validating internal workflows and ensuring operational stability and system reliability.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years in professional software development
- Solid understanding of software development principles and patterns, such as SOLID and the GoF design patterns.
- Experience automating deployments for different kinds of applications.
- Strong understanding of Git version control, merge/rebase strategies, tagging.
- Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
- Solid scripting experience (bash, or similar).
- Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter).
Preferred Skills
- Experience in SRE
- Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
- Familiarity with build tools like Make, CMake, or similar.
- Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
- Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
- Develop new services needed by the SRE, Field, or Development teams, applying unit testing, agile, and clean-code practices.
- Automate release pipelines: build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
- Deploy services and implement and refine automation for different environments.
- Operate the services that the SRE team develops.
- Version control: manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
- Collaboration: work closely with developers, QA, and product teams to align on release timelines and feature readiness.
Success Metrics
- Achieve >99% service uptime with minimal rollbacks.
- Deliver on time and hold to committed timelines.
Benefits
Enjoy a great environment, great people, and a great package:
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters cooperation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone using cutting-edge software development techniques. The internet, and more specifically broadband networks, is among the world's most critical technologies, relied on by billions of people every day. RtBrick is transforming the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a DevOps philosophy, and warehouse-scale tools to drive innovation.
And although RtBrick is a young, innovative company, it stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), as well as regional and city ISPs, with expanding operations across Europe, North America, and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us. Embrace the opportunity to be part of a team that is not just participating in the market but actively shaping the future of telecommunications worldwide.
🚀 Job Title : Gen AI Engineer
Experience : 3 to 5 Years
Location : Bengaluru (MG Road – Prestige Building)
Work Mode : Hybrid (3 Days WFO)
Open Positions : 2
Notice Period : Immediate to 15–20 Days Preferred
🎯 Role Overview :
We are looking for a Gen AI Engineer with hands-on experience building and deploying LLM-powered applications.
You will work on cutting-edge AI solutions, including real-world enterprise use cases and next-generation internal products.
🛠 Mandatory Skills :
- Strong proficiency in Python.
- Hands-on experience with LLMs & GenAI frameworks (LangChain, LlamaIndex, Semantic Kernel).
- Experience in prompt engineering and system design.
- Knowledge of vector databases & embeddings.
- Experience integrating GenAI solutions into production systems.
- Understanding of REST APIs, async processing, and streaming responses.
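The last point above, async processing and streaming responses, can be sketched with Python's standard asyncio. This is an illustrative toy (the `generate_tokens` function is a made-up stand-in for a streaming LLM client, not a real API):

```python
import asyncio

async def generate_tokens(prompt: str):
    # Hypothetical stand-in for a streaming LLM response:
    # yield one token at a time instead of waiting for the full answer.
    for token in prompt.split():
        await asyncio.sleep(0)  # simulate non-blocking I/O between chunks
        yield token

async def stream_answer(prompt: str) -> str:
    # Consume the stream incrementally, as a server would when
    # forwarding chunks (e.g., server-sent events) to a client.
    parts = []
    async for token in generate_tokens(prompt):
        parts.append(token)
    return " ".join(parts)

print(asyncio.run(stream_answer("streaming keeps perceived latency low")))
```

The same pattern (an `async for` over a token stream) is what lets a backend start rendering an answer before the model has finished generating it.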
⚡ AI / ML Knowledge :
- Strong understanding of NLP & transformer-based models.
- Familiarity with fine-tuning, embeddings, and inference patterns.
- Knowledge of GenAI evaluation metrics (accuracy, relevance, grounding).
☁ Infrastructure & Tooling :
- Experience with Cloud platforms (AWS / Azure / GCP).
- Familiarity with Docker & Kubernetes (good to have).
- Exposure to CI/CD pipelines and MLOps practices.
🌟 Nice to Have :
- Experience with multimodal models (vision, OCR, speech).
- Knowledge of AIOps, RCA, observability, enterprise workflows.
- Experience building AI agents & orchestration layers.
- Understanding of AI governance, safety, and responsible AI.
💼 Key Responsibilities :
- Design, develop, and deploy GenAI applications using LLMs (OpenAI, Azure OpenAI, open-source models).
- Build RAG pipelines using vector databases (FAISS, Pinecone, Weaviate, Chroma).
- Develop high-quality prompts, system instructions, and structured outputs (JSON, function calling).
- Integrate GenAI capabilities into backend systems via APIs & microservices.
- Optimize models for performance, latency, cost, and accuracy.
- Implement evaluation frameworks (hallucination detection, confidence scoring).
- Ensure data security, privacy, and compliance.
- Collaborate with product, design, and domain teams.
- Document architecture, prompts, and best practices.
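As a rough illustration of the RAG responsibility above, here is a minimal retrieval step: rank documents by cosine similarity between embedding vectors. A real pipeline would use an embedding model and a vector database such as FAISS or Pinecone; the vectors and texts below are invented toy data:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embedded" document store: (vector, text) pairs. Real embeddings
# come from a model and live in a vector DB (FAISS, Pinecone, etc.).
docs = [
    ([1.0, 0.0, 0.2], "Refund policy: 30 days"),
    ([0.1, 0.9, 0.0], "Shipping takes 3-5 days"),
]

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding; in RAG the
    # top-k texts are then placed into the LLM prompt as context.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve([0.9, 0.1, 0.1]))
```

The retrieved texts, plus the user's question, form the grounded prompt that reduces hallucination, which is what the evaluation frameworks mentioned above then measure.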
💼🤝 Interview Process :
1. Geektrust Assessment (AI Agent-based evaluation)
2. Final Interview with Founder (45 mins)

Leading drive specialist for machine and plant engineering
Job Details
- Job Title: Sr. Python Automation Developer
- Industry: Engineering
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
• Designation – Python Automation Developer (Sr./ Advanced Sr.)
• Experience: 7 to 9 years.
• Qualifications: B.E./MCA/M.Sc./B.Sc.
• Location: Pune (near Sangamwadi)
Skills & Technologies:
Mandatory:
• Experience using OOP in Python/Java/C++/C# (candidates with Java/C++/C# experience must be willing to learn Python and work in it)
• Good analytical, design, coding and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate requirements into design and able to translate design into fully functional & efficient code.
• English communication skills.
Desirable:
• Working experience on any defect management tool
• Working experience on GIT/SVN or any code repository management tool
• Development Using Eclipse IDE or equivalent IDE
Behaviors:
• Good team player
• Openness to learn new technologies.
• Self-motivated and proactive
• Should work with minimum supervision.
• Should be able to supervise juniors.
• Take ownership
Must-Haves
• 5.9 years of relevant experience using OOPs in Python/Java/C++/C#
(If the candidate has experience in Java/C++/C#, he must be willing to learn Python and work in it.)
• Good analytical, design, coding, and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate requirements into design and design into fully functional & efficient code.
• English communication skills.
We are looking for a highly skilled Senior Full Stack Developer with experience in building secure, scalable, and high-performing web applications for eLearning or fintech platforms. The ideal candidate will have a strong background in multi-tenant dashboards, role-based access control, and API integrations, with a proactive, go-getter mindset.
Key Responsibilities:
- Develop and maintain multi-tenant dashboards with role-based access control (RBAC) and hierarchical permission models.
- Implement authentication, authorization, and session management across different user tiers, ensuring secure handling of credentials and sensitive data.
- Integrate LMS systems, third-party APIs, and fintech services while maintaining data integrity and security.
- Ensure secure hosting, multi-layered application protection, and adherence to IT standards and best practices.
- Optimize applications for performance, scalability, and reliability across platforms.
- Collaborate with cross-functional teams, including designers, content developers, and project managers, to deliver end-to-end solutions.
Required Skills & Experience:
- Proven full-stack development experience with technologies such as JavaScript, React, Node.js, Python, Java, or similar.
- Hands-on experience with multi-level user management, RBAC, and dashboard architectures.
- Strong knowledge of APIs, hosting, cloud services, and secure deployment practices.
- Familiarity with data security protocols, fintech integrations, and enterprise IT standards.
- Excellent problem-solving skills, proactive approach, and ability to work independently.
Preferred:
- Experience in eLearning platforms (SCORM, xAPI) or fintech solutions.
- Knowledge of database optimization, caching, and performance tuning.
- Experience with international eLearning projects or multi-location deployments.
About the Role
Pendo is looking for a Software Engineer to help build and scale the platform that powers our integrations with enterprise systems such as Salesforce, Slack, Segment, and other partner tools. This team develops the services, APIs, data pipelines, and user interfaces that enable customers to seamlessly connect Pendo into their product and data ecosystems.
In this role, you will primarily focus on building scalable backend systems while also contributing to the frontend experiences that allow customers to configure, manage, and monitor integrations. You’ll collaborate closely with product managers, designers, and infrastructure teams to deliver reliable, high-performance capabilities used by millions of users.
What You'll Do
- Design and build scalable backend services and APIs that power Pendo’s integrations platform.
- Develop and maintain distributed, event-driven data pipelines that process and sync high volumes of behavioral and product analytics data.
- Contribute to frontend applications that allow customers to configure, manage, and monitor integrations and data workflows.
- Lead technical initiatives from design through implementation, testing, and production rollout.
- Integrate with third-party APIs and enterprise platforms using technologies such as REST, webhooks, and OAuth.
- Collaborate with product, design, infrastructure, and partner teams to translate business needs into high-quality technical solutions.
- Use modern development workflows and AI-powered tools to improve developer productivity and streamline engineering processes.
- Participate in design reviews and promote best practices in testing, observability, performance, and system reliability.
- Contribute to improving platform scalability, availability, and operational excellence.
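Third-party integrations like those described above commonly verify incoming webhooks with an HMAC signature over the raw request body. Here is a generic sketch using only Python's standard library; the secret, header value, and payload are invented for illustration and the exact scheme varies by provider:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    # Recompute the HMAC-SHA256 of the raw request body and compare it,
    # in constant time, to the signature the sender put in a header.
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"shared-secret"  # hypothetical value agreed with the partner
body = b'{"event": "user.created"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, good_sig))    # accepts a valid signature
print(verify_webhook(secret, body, "deadbeef"))  # rejects a forged one
```

`hmac.compare_digest` matters here: a naive `==` comparison can leak timing information an attacker could exploit.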
What We're Looking For
- Experience building backend services, APIs, or distributed systems.
- Experience developing modern web applications using frameworks such as Vue, React, or Angular.
- Strong proficiency in at least one backend language such as Go, Java, Python, or C++.
- Experience working with cloud infrastructure such as AWS or GCP.
- Familiarity with distributed systems, event-driven architectures, or high-throughput data pipelines.
- Experience writing and maintaining unit, integration, and end-to-end tests.
- Strong collaboration and communication skills.
Nice to Have
- Experience building integration platforms or working with third-party APIs.
- Familiarity with authentication models such as OAuth and enterprise SaaS integrations.
- Experience working with analytics or behavioral event data.
- Experience leveraging AI-assisted development tools or working with modern AI workflows.
Technologies We Use
- Frontend: Vue, Vuex, React, Angular, Highcharts, Jest, Cypress
- Backend: Go, Java, Python, C++
- Cloud & Data: AWS, GCP, Redis, Pub/Sub, SQL/NoSQL
- AI / ML: GenAI, LLMs, LangChain, MLOps
Job Responsibilities :
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle, from requirement to release.
- Write and maintain clear technical documentation, enabling other engineers to step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team.
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a proficient understanding of web and mobile application performance.
- Mentor junior developers and foster skill development within the team.
Job Requirements :
- 1+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
Who are we :
Trendlyne is a Series-A products startup in the financial markets space, with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do :
We build powerful, best-in-class analytics products in the stock market space. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API calls every month to B2B customers. Trendlyne analytics processes hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens
Skills & Requirements
- 1-3 years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization, DevOps (Docker, Kubernetes)
- Tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Job Responsibilities :
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle, from requirement to release.
- Write and maintain clear technical documentation, enabling other engineers to step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team.
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a proficient understanding of web and mobile application performance.
- Mentor junior developers and foster skill development within the team.
Job Requirements :
- 4+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
Who are we :
Trendlyne is a Series-A products startup in the financial markets space, with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do :
We build powerful, best-in-class analytics products in the stock market space. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API calls every month to B2B customers. Trendlyne analytics processes hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode)
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (Preferably Top product companies, AI native companies, B2B SaaS)
Mandatory (Stability): Must have at least 2 years of tenure at each previous company (if less, a valid reason is expected)
Mandatory (Note): Candidates who have owned end-to-end product development or worked on app development projects during their graduation will be highly preferred.
Mandatory (Note 2): The role offers a mix of work setups, including remote, Mumbai (in-office), and Bangalore (in-office) opportunities
Role- Data Analyst
Experience- 2 to 5 years
Location-Bangalore
Job Role-
● Experience: Minimum of 2+ years of professional experience in a data-heavy environment (e-commerce or fintech experience is a plus).
● SQL Mastery: Exceptional ability to write complex joins, window functions, analytical functions, and CTEs. Experience with high-scale databases (e.g., BigQuery, Hive, or Postgres).
● Scripting: Functional knowledge of Python for data manipulation (Pandas, NumPy) and basic automation scripts.
● Systems Thinking: Ability to understand upstream data flows and how they impact downstream reporting.
● Problem-Solving: A "detective" mindset—you enjoy digging into a Rs 600Cr discrepancy until you find the root cause.
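The SQL skills called out above (CTEs and window functions) can be shown in a self-contained example using Python's bundled sqlite3 module. Note that window functions require SQLite 3.25+, and the table and values here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (city TEXT, amount INTEGER);
INSERT INTO orders VALUES
  ('Pune', 100), ('Pune', 300), ('Mumbai', 200), ('Mumbai', 50);
""")

# A CTE plus a window function: rank each order within its city by
# amount, then keep only the top order per city.
rows = conn.execute("""
WITH ranked AS (
  SELECT city, amount,
         RANK() OVER (PARTITION BY city ORDER BY amount DESC) AS rnk
  FROM orders
)
SELECT city, amount FROM ranked WHERE rnk = 1 ORDER BY city
""").fetchall()

print(rows)
```

The same `PARTITION BY ... ORDER BY` pattern scales directly to BigQuery or Postgres, which is where it pays off on the data volumes this role describes.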
We are looking for a skilled ML Engineer with 3–5 years of experience in building and deploying production-grade AI solutions, particularly around LLMs, RAG systems, and agentic AI frameworks. The role involves designing end-to-end ML architectures, optimizing models at scale, and delivering client-ready AI solutions. You will collaborate closely with stakeholders, mentor junior engineers, and drive AI projects from experimentation to production.
What will you need to be successful in this role?
Core Technical Skills
• Strong hands-on experience with Python for ML/AI (NumPy, Pandas, Scikit-learn, PyTorch/TensorFlow)
• Proven experience deploying production LLM applications with 1M+ tokens processed
• Advanced prompt engineering expertise including ReAct, meta-prompting, and function calling
• Production experience with RAG systems including hybrid search and re-ranking
• Deep understanding of embedding models and vector databases at scale
• Experience with agentic AI frameworks (LangGraph, CrewAI, or AutoGen)
• Strong knowledge of LLM evaluation frameworks (RAGAS, LLM-as-judge patterns)
• Experience implementing multi-agent systems and orchestration
• Proficiency with cloud ML platforms (AWS SageMaker, Azure ML, or Vertex AI)
Advanced Capabilities
• Experience with model fine-tuning (LoRA, QLoRA, PEFT, instruction tuning)
• Knowledge of knowledge graphs and graph-based RAG implementations
• Understanding of model hosting, inference optimization, and cost management
• Experience with MLOps pipelines, CI/CD for ML, and model versioning
• Ability to architect end-to-end ML solutions from data ingestion to deployment
• Experience with data pipelines and ETL for ML workflows
• Proficiency in containerization and orchestration (Docker, Kubernetes)
Client Engagement & Delivery
• Experience presenting technical solutions to clients and stakeholders
• Ability to translate business requirements into technical ML solutions
• Track record of delivering client POCs and production implementations
• Experience creating technical documentation and implementation guides
Good to have
• Experience hosting private LLMs (7B-13B models on-premises or cloud)
• Knowledge of graph databases (Neo4j) and graph neural networks
• Experience with streaming and real-time ML inference
• Published research papers or contributions to open-source ML projects
• DeepLearning.AI certifications in Agentic AI, RAG, or Finetuning
• AWS/Azure ML certifications or working towards them
Competencies
• Excellent verbal and written communication skills
• Strong mentoring ability for junior ML engineers
• Self-driven with ability to work independently on complex problems
• Excellent problem-solving skills with systematic debugging approach
• Proactive ownership of projects from ideation to deployment
• Ability to stay current with rapidly evolving AI/ML landscape
• Excellent academic record – B.E./B.Tech or MCA
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
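A toy sketch of the ETL responsibilities above, using stdlib stand-ins for the real systems (an in-memory CSV in place of GCS/S3 object storage, SQLite in place of BigQuery; the column names are invented):

```python
import csv
import io
import sqlite3

# Extract: read raw records (an in-memory CSV standing in for a file
# pulled from object storage such as GCS or S3).
raw = io.StringIO("id,kwh\n1,10.5\n2,\n3,7.0\n")
records = list(csv.DictReader(raw))

# Transform: apply a simple data-quality rule (drop rows with a
# missing reading) and cast strings to proper types.
clean = [(int(r["id"]), float(r["kwh"])) for r in records if r["kwh"]]

# Load: write into a warehouse table (SQLite standing in for BigQuery).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (id INTEGER, kwh REAL)")
conn.executemany("INSERT INTO usage VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(kwh) FROM usage").fetchone()[0]
print(total)
```

In practice each stage would be an orchestrated task (e.g., in a scheduler) with the quality rules expressed as dbt or Dataform tests rather than inline Python, but the extract-transform-load shape is the same.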
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming with frameworks like Django, FastAPI, or Flask, and Java frameworks like Spring, Hibernate, Spring Boot, etc.
● Ability to debug and resolve technical issues that arise during development or after deployment.
● Experience with databases, including MySQL and NoSQL
● Experience in designing, developing and maintaining high availability systems.
● Experience in MVC pattern, Tomcat, Git, and Jira.
● Experience working with AWS cloud platform.
● 4-5 years of experience building highly available systems
● 2-3 years of experience leading a team of engineers and analysts
● Strong analytical and data-driven approach to problem solving
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
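The batch transformations these pipelines perform can be sketched minimally as below. In a Databricks notebook the same logic would be a PySpark `groupBy`/`agg`; plain Python is used here only to keep the sketch self-contained, and the column names are invented for illustration.

```python
import csv
import io
from collections import defaultdict

# Toy batch transformation: aggregate order totals per customer per day.
# In PySpark this would be roughly:
#   df.groupBy("customer", "order_date").agg(F.sum("amount"))
RAW = """customer,order_date,amount
alice,2024-01-01,120.50
bob,2024-01-01,75.00
alice,2024-01-02,30.25
"""

def aggregate_daily_totals(raw_csv: str) -> dict:
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(raw_csv)):
        totals[(row["customer"], row["order_date"])] += float(row["amount"])
    return dict(totals)

totals = aggregate_daily_totals(RAW)
```

The grouping key and aggregation are the same whether the engine is a single process or a Spark cluster; Databricks adds distribution, Delta Lake storage, and notebook orchestration on top.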
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
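As a concrete illustration of the CTE + window-function style the stack calls for, here is a "latest record per entity" query, a pattern that comes up constantly in clinical data marts. SQLite is used purely so the sketch is self-contained; the table and column names are invented.

```python
import sqlite3

# Latest lab result per patient via a CTE and ROW_NUMBER() window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lab_results (patient_id TEXT, taken_at TEXT, value REAL);
INSERT INTO lab_results VALUES
  ('p1', '2024-01-01', 5.2),
  ('p1', '2024-02-01', 5.9),
  ('p2', '2024-01-15', 4.1);
""")
rows = conn.execute("""
WITH ranked AS (
  SELECT patient_id, taken_at, value,
         ROW_NUMBER() OVER (PARTITION BY patient_id
                            ORDER BY taken_at DESC) AS rn
  FROM lab_results
)
SELECT patient_id, value FROM ranked WHERE rn = 1
ORDER BY patient_id
""").fetchall()
```

The same set-based pattern (rank within a partition, keep `rn = 1`) is what distinguishes a single-pass transformation from a row-by-row loop, and it ports directly to MSSQL, PostgreSQL, and Oracle.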
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To move your resume forward to the next stage, please fill out the Google form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Leading drive specialist for machine and plant engineering
Job Details
- Job Title: Sr. Python Automation Developer
- Industry: Engineering
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
• Designation – Python Automation Developer (Sr./ Advanced Sr.)
• Experience: 7 to 9 years.
• Qualifications: B.E./MCA/M.Sc./B.Sc.
• Location: Pune (Near Sangamwadi)
Skills & Technologies:
Mandatory:
• Experience using OOPs in Python/Java/C++/C# (candidates with Java/C++/C# experience must be willing to learn Python and work in it)
• Good analytical, design, coding and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate requirements into design and able to translate design into fully functional & efficient code.
• English communication skills.
Desirable:
• Working experience on any defect management tool
• Working experience on GIT/SVN or any code repository management tool
• Development Using Eclipse IDE or equivalent IDE
Behaviors:
• Good team player
• Openness to learn new technologies.
• Self-motivated and proactive
• Should work with minimum supervision.
• Should be able to supervise juniors.
• Take ownership
Must-Haves
• 5.9 years of relevant experience using OOPs in Python/Java/C++/C#
(Candidates with Java/C++/C# experience must be willing to learn Python and work in it.)
• Good analytical, design, coding, and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate
requirements into design and able to translate design into fully functional & efficient code.
• English communication skills.
About NonStop:
NonStop is a software services company at the intersection of bioinformatics, genomics, and healthcare technology. We partner with biotech firms, pharma organizations, genomics labs, and clinical institutions to design and deliver production-grade bioinformatics software, AI-powered analytical platforms, and end-to-end genomic data pipelines.
We work on problems that matter: from accelerating variant interpretation workflows and building HIPAA-compliant AI platforms, to orchestrating large-scale multi-omics pipelines for disease diagnostics and pharmacogenomics. Our team blends deep domain expertise with engineering rigor, and we're growing to meet the increasing demand from the life sciences industry for smart, scalable, and compliant bioinformatics solutions.
We work this way:
- We dig deep into biological problems, not just the code. Domain knowledge is valued as much as engineering craft.
- Bioinformatics is a team sport. You'll work alongside software engineers, clinicians, and research scientists.
- You own your work end-to-end, from design to delivery. We trust you to make good decisions and learn fast.
- Life sciences move fast. We encourage continuous learning, conference participation, and staying ahead of the field.
Your role:
As a Bioinformatics Engineer at NonStop, you will be a key contributor in designing, building, and maintaining bioinformatics software solutions and analytical pipelines for our clients across genomics, clinical diagnostics, and precision medicine. You'll bring both biological insight and engineering excellence to every project, collaborating with product, engineering, and scientific teams to deliver solutions that are scalable, reproducible, and compliant.
You’ll be responsible for:
- Build scalable bioinformatics applications and pipelines for efficient processing of genomic, transcriptomic, and multi-omics data.
- Produce high-quality, detailed documentation for all projects, pipelines, tools, APIs, and analytical methods.
- Provide technical consultation and solutions across cross-functional bioinformatics projects.
- Coach and mentor team members through knowledge sharing, code reviews, and pairing on domain-specific challenges.
- Ensure compliance with our SDLC process throughout the product development lifecycle.
- Stay current with evolving bioinformatics technologies and evangelize technical excellence within the team.
We’re looking for:
- Minimum 2 to 3 years of experience in designing, developing, and maintaining bioinformatics solutions.
- Master's degree in Bioinformatics, Computational Biology, or a closely related technical discipline.
- Strong understanding of genomic data analysis, variant calling, targeted sequencing, whole-exome (WES), and whole-genome sequencing (WGS) workflows.
- Hands-on experience with RNA-seq analysis, including differential expression and transcriptomic workflows.
- Proficiency in variant interpretation and ACMG/AMP classification workflows is a plus.
- Knowledge of algorithms and computational model development applied to biological data.
- Strong foundation in statistics and data analysis as applied to genomics and bioinformatics.
- Experience developing and debugging bioinformatics pipelines using Nextflow, Snakemake, WDL, or CWL.
- Proficiency in shell scripting and Linux/Unix environments for NGS data analysis.
- Familiarity with workflow automation and best practices in reproducible pipeline design.
- Excellent programming skills in Python (primary); familiarity with R or Java is a plus.
- Proficiency with standard bioinformatics tools (GATK, DeepVariant, VEP, ANNOVAR, MultiQC, FastQC, etc.).
- Experience with relational databases (PostgreSQL, MySQL, or Oracle) and NoSQL databases (MongoDB).
- Confident use of Git and GitHub for version control and collaborative development.
- Experience with cloud computing platforms (AWS or GCP, or Azure) for bioinformatics workloads.
- Familiarity with high-performance computing (HPC) environments is a plus.
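The Python scripting this role calls for often starts with small NGS utilities of the kind below: parsing FASTA records and computing a per-sequence metric. This is a self-contained toy (the read names and sequences are invented); production pipelines would wrap such steps in Nextflow or Snakemake processes.

```python
# Parse FASTA text into {name: sequence} and compute GC content per read.
def parse_fasta(text: str) -> dict:
    records, name = {}, None
    for line in text.strip().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]   # record name is token after '>'
            records[name] = []
        elif name:
            records[name].append(line.strip())
    return {n: "".join(parts) for n, parts in records.items()}

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

FASTA = """>read1
ATGCGC
>read2
ATAT
"""
seqs = parse_fasta(FASTA)
```

In a real workflow the same function signature would sit behind a pipeline step, with tools like FastQC/MultiQC handling the heavy-duty QC metrics.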
Why join NonStop:
- Work on real-world genomics and clinical bioinformatics problems that directly impact patient care and scientific discovery.
- Collaborate with life sciences clients at the cutting edge, from rare disease diagnostics to AI-assisted bioinformatics platforms.
Who are we aka "About Us":
We are an early-stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker street Fintech Pvt Ltd (Parent Company) might be the place for you. We have a flat, ownership-oriented culture, and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding edge team.
As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries.
What are we looking for a.k.a “The JD” :
We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.
Responsibilities:
- Write complex SQL queries proficiently
- Use Python for data manipulation, analysis, and visualisation, with libraries such as pandas, matplotlib, and psycopg
- Perform database optimization, indexing, and query tuning to ensure high performance.
- Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
- Design, configure, and maintain PostgreSQL databases
- Set up and manage database clusters, replication, and backups for disaster recovery
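The indexing and query-tuning responsibility above can be sketched as follows. SQLite is used so the example runs anywhere (the role uses PostgreSQL, where `EXPLAIN ANALYZE` plays the equivalent role), and the table and column names are invented.

```python
import sqlite3

# Show how adding an index changes the query plan from a full scan
# to an index lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (client_id INTEGER, fund TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(i % 100, f"fund{i % 5}", i * 1.0) for i in range(1000)],
)

def plan(sql: str) -> str:
    # Last column of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM trades WHERE client_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_trades_client ON trades(client_id)")
after = plan(query)   # search using the new index
```

Reading the plan before and after a change is the core loop of query tuning; on PostgreSQL the same workflow also reports actual row counts and timings.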
Preferred Qualifications:
- Intermediate-level Excel skills for data analysis and reporting.
- Strong communication skills to present findings effectively and recommendations to both technical and non-technical stakeholders.
- Detail-oriented mindset with a commitment to data accuracy and quality.
*(Only Applicants who have finished their educational commitments are requested to apply)
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- Has worked (0-1.5 years preferably) or is looking to work specifically with an early-stage startup.
- You are ready to be part of a Zero To One journey, which means you will be involved in building fintech products and processes from the ground up.
- You are comfortable working in an unstructured environment with a small team, where you decide what your day looks like and take the initiative to pick up the right piece of work, own it, and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to, else we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- You want complete ownership of your role and the freedom to drive it the way you think is right.
- You are a self-starter who takes ownership of deliverables, builds consensus with the team on approach and methods, and delivers against them.
- Are looking to stick around for the long term and grow with the company.

AI & ML (45 Days – Live Hands‑On)
Program Fee: ₹25,000
Duration: 45 Days
Mode: Hybrid (Online Sessions + Live Lab Access)
Eligibility: Freshers, Final‑Year Students, and Career Switchers
About the Internship
This intensive 45‑day internship program is designed for freshers who want to build strong, industry‑relevant skills in Cloud Infrastructure, Cybersecurity, and AI/ML model development. The program offers live, hands‑on production training, allowing interns to work on real-world architectures, security workflows, and AI/ML deployments.
Participants will receive end‑to‑end exposure to modern cloud platforms, DevOps practices, security operations, and machine learning deployment, making them job‑ready for roles like:
- Cloud/Infra Engineer
- DevOps Engineer
- Security Analyst
- AI/ML Engineer
- Site Reliability Engineer (SRE)
Key Highlights
- 45 days of practical, mentor-led training
- Live production-style projects and deployments
- Hands‑on experience with:
- AWS / Azure Cloud
- Terraform, CI/CD, Docker, Kubernetes
- Security Hardening & IAM
- Python ML pipelines & model deployment
- Architect & deploy real systems using best practices
- Build portfolio-ready projects
- Receive an industry-recognized Internship Certificate
What You Will Learn
1️⃣ Cloud Infrastructure & DevOps
- Cloud fundamentals (AWS/Azure/GCP)
- Linux administration & scripting
- VPC, Subnets, Routing, NAT, Firewalls
- EC2 provisioning & autoscaling
- Load balancing & High Availability
- Terraform for Infrastructure as Code
- CI/CD pipelines using Jenkins / GitHub Actions
- Docker containerization & Kubernetes basics
2️⃣ Cybersecurity & Cloud Security
- IAM roles, policies, access control
- Server security & hardening
- SSL/TLS, encryption, key management
- Secure VPC & subnet design
- Threat detection & logging
- Secrets management
- Network segmentation & firewall best practices
3️⃣ AI & Machine Learning
- Python for ML
- Supervised and unsupervised algorithms
- Data preprocessing & model training
- Model evaluation & optimization
- Build ML inference APIs using FastAPI/Flask
- Containerize and deploy ML models to cloud
- Integrate monitoring for ML workflows
✅ Live Hands‑On Projects
Interns will work on real-world, production-grade projects such as:
- Deploy a secure 3‑tier web application on cloud
- Automate infra provisioning using Terraform
- Build CI/CD pipelines for automated deployments
- Harden servers & configure security groups, IAM
- Develop and deploy an ML model as a cloud API
- Create monitoring dashboards with Prometheus/Grafana
- End-to-end system deployment with logging and alerting
Each intern will complete a Capstone Project and present it during the final evaluation.
✅ Internship Deliverables
- Internship Completion Certificate
- 3+ Production‑level projects
- GitHub portfolio with all code and deployments
- Cloud & ML documentation
- Resume enhancement and guidance
- Career mentoring + interview preparation
✅ Who Should Apply
This internship is ideal for:
- Fresh graduates
- Final‑year engineering or IT students
- BSc, BCA, MCA, B.Tech learners
- Professionals switching careers to Cloud/DevOps/AI
- Anyone seeking hands‑on, real‑time industry experience
✅ Program Fee
₹25,000/- (includes training, labs, live‑project access, certificate, and mentorship)
✅ Certificate Provided
All participants will receive a verified Internship Certificate, including:
- Candidate Name
- Internship Duration & Dates
- Skills Covered
- Project Evaluation Score
- Authorized Signatory & Company Seal
We have an urgent opening for a highly skilled and passionate professional for the below role:
Quick Role Overview:
- Role: Python Automation Developer
- Location: Pune (Near Sangamwadi – Metro Connectivity)
- Working Model: Hybrid (4 Days Work from Office)
- Experience: 6 – 9 Years (Minimum 5.9+ Years in OOPs Development)
- Qualification: B.E. / MCA / M.Sc. / B.Sc.
- Notice Period: Early Joiners Preferred (15–30 Days Max)
Job Description
We are looking for a strong Python Automation Developer with solid Object-Oriented Programming expertise. This role is ideal for professionals who are strong in Java / C++ / C# and are willing to transition into Python (if not already working in Python).
You will be responsible for designing, developing, and maintaining high-quality automation solutions while translating business requirements into scalable and efficient technical implementations.
This is an excellent opportunity to work in a German-based product company offering strong work-life balance and a global work culture.
Key Responsibilities
- Design and develop automation solutions using Python (preferred) or other OOP-based languages.
- Translate functional requirements into scalable technical designs.
- Apply strong design patterns and coding best practices.
- Write clean, efficient, maintainable, and well-documented code.
- Debug, troubleshoot, and optimize performance issues.
- Work closely with cross-functional teams in a global environment.
- Supervise and mentor junior team members when required.
- Take complete ownership of assigned modules.
Desired Skills & Competencies
Must-Have Skills:
- 5.9+ years of relevant experience in OOPs (Python / Java / C++ / C#)
- Strong analytical, coding, debugging, and design skills
- Excellent understanding of design patterns and frameworks
- Ability to convert requirements → design → fully functional implementation
- Strong problem-solving mindset
- Good English communication skills
- Ability to work independently with minimum supervision
(Candidates with Java/C++/C# background must be willing to work in Python.)
Good to Have:
- Experience with defect management tools
- Experience with GIT / SVN or any code repository management tools
- Experience with Eclipse IDE or equivalent IDE
ROLE SUMMARY
The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.
ROLE RESPONSIBILITIES
- Design and develop Python and Django applications that are scalable, secure, and maintainable.
- Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
- Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services.
- Create and optimize SQL queries, database structures, and data access logic to support application features.
- Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
- Implement accurate data mappings and transformations in accordance with business and technical requirements.
- Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
- Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
- Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
- Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
TECHNICAL QUALIFICATIONS
- 7+ years of hands‑on experience with Python and Django, including complex application development.
- 5+ years of experience with SQL development, optimization, and database design.
- At least 1-2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
- Deep expertise in application architecture, system integration, and service‑oriented design.
- Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
- Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.
GENERAL QUALIFICATIONS
- Exceptional verbal and written communication skills.
- Strong analytical, problem‑solving, and architectural reasoning abilities.
- Demonstrated leadership experience with the ability to guide and mentor technical teams.
- Proven ability to work effectively in fast‑paced, collaborative environments.
EDUCATION REQUIREMENTS
- Bachelor’s degree in Computer Science, MIS, or a related field.
- Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.
About Shopflo
At Shopflo, we're trying to change the way consumers experience brands and businesses. Our first product was a cart and checkout platform for e-commerce, that allowed marketers to personalise discounts, rewards, and payments. We are currently also working on a new product that takes it a notch higher by unlocking enterprise-grade personalization for all consumer tech businesses.
Team & Company
Shopflo was founded by three co-founders:
- Ankit Bansal (ex-IIT Kharagpur, Oracle, Gupshup)
- Ishan Rakshit (ex-IIT Bombay, Parthenon, Elevation Capital)
- Priy Ranjan (ex-IIT Madras, McKinsey, Elevation Capital)
We’re a fast-growing team of ~50 people, based in HSR Layout, Bengaluru. We raised a $3.8M seed round from Tiger Global, TQ Ventures.
What you will do
- Design and develop microservices that can work in a large-scale multi-tenant environment.
- Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability.
- Work with a cross-discipline team spanning Design, Product, Data Science, and Analytics.
- Deploy and maintain the application in a secured AWS environment.
- Take ownership from the ideation phase to deployment and maintenance.
- Participate actively in the hiring process to bring world-class programmers into the team.
You should apply if you have:
- 2-4 years of experience in server-side development
- Strong programming skills in Java, Python, Node or Golang
- Hands-on experience in API development and frameworks such as Spring, Node, or Django.
- Good Understanding of SQL and NoSQL databases.
- Experience in test-driven development (writing unit tests and API tests).
- Understanding of basic cloud computing concepts and experience with any of the major cloud service providers (AWS/GCP/Azure).
- Ability to build and deploy the application in a containerized environment.
- Understanding of application logging and monitoring systems like Prometheus or Kibana.
- B.E./B.Tech/M.E./M.Tech/M.S. from a reputed university with a good academic record.
- Curiosity to explore cutting-edge technologies and bake them into the products.
- Zeal and drive to take end-to-end ownership.
Role: Sr. Azure Data Engineer
Experience: 8–10 Years
Work Timings: 1:30 PM – 10:30 PM IST
Location: Bellandur, Bengaluru (Work from Office)
Company: Chevron
Employment Type: 6–12 month contract
Role Overview
We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.
Key Responsibilities
- Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
- Define and implement scalable data Lakehouse architectures aligned with OSDU standards
- Build and manage end-to-end data pipelines for batch and real-time processing using PySpark
- Establish data governance frameworks including metadata, lineage, security, and access control
- Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
- Collaborate with stakeholders to translate business needs into technical solutions
- Develop and maintain architecture documentation, solution patterns, and standards
- Provide technical leadership and mentorship to engineering teams
- Optimize solutions for performance, cost, reliability, and security
- Ensure alignment with enterprise architecture and compliance standards
- Drive adoption of modular and reusable cloud data components
Required Skills & Qualifications
Core Technical Skills
- Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
- Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
- Strong experience in Python-based data engineering
- Data pipeline development (batch + real-time)
Architecture & Advanced Skills
- Data Lakehouse architecture and distributed systems
- Microservices, APIs, and integration frameworks
- OSDU (Open Subsurface Data Universe) or similar energy data models
DevOps & Tools
- CI/CD tools: Azure Pipelines, GitHub Actions
- Infrastructure as Code: Terraform or similar
Other Skills
- Data governance, security, compliance, and cost optimization
- Strong analytical and problem-solving skills
- Excellent communication and stakeholder management
What we are looking for:
We are looking for a motivated AI Developer with 1–3 years of experience to join our team and build cutting-edge applications powered by Large Language Models (LLMs). You will work on designing, developing, and optimizing intelligent systems using modern AI frameworks and tools.
Responsibilities:
- Design and develop applications leveraging Large Language Models (LLMs)
- Build and optimize RAG (Retrieval-Augmented Generation) pipelines
- Work with frameworks like LangChain, LangGraph, or similar LLM orchestration tools
- Integrate and manage vector databases (e.g., Pinecone, Weaviate, Qdrant, FAISS)
- Implement prompt engineering strategies and improve model responses
- Develop scalable and efficient AI system architectures
- Monitor, debug, and optimize LLM applications using observability tools (e.g., Langfuse or similar)
- Collaborate with backend and product teams to integrate AI features into production systems
- Stay updated with the latest advancements in AI/LLM ecosystem
Skills:
- 1–3 years of hands-on experience in AI/ML or backend development with AI exposure
- Strong understanding of LLMs and generative AI concepts
- Experience with LangChain, LangGraph, or similar frameworks
- Practical experience with RAG architectures and pipelines
- Hands-on experience with vector databases (e.g., Pinecone, Qdrant, Weaviate, FAISS)
- Familiarity with observability tools like Langfuse, Helicone, or similar
- Proficiency in Python (preferred) or Node.js
- Experience working with APIs (OpenAI, Anthropic, etc.)
- Understanding of embeddings, chunking, and retrieval strategies
- Good problem-solving and debugging skills
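The chunking and retrieval steps listed above can be illustrated with a toy, fully self-contained sketch. Real RAG pipelines use a trained embedding model and a vector database (Pinecone, Qdrant, etc.); bag-of-words vectors and cosine similarity stand in here so the mechanics are visible without external services.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list:
    # Split a document into fixed-size word windows (toy chunking strategy).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

doc = ("vector databases store embeddings for fast similarity search "
       "prompt engineering shapes how the model responds to instructions")
chunks = chunk(doc)
top = retrieve("how do vector databases search embeddings", chunks)
```

In production, `embed` would call a model API, `retrieve` would query a vector index, and the top chunks would be stuffed into the LLM prompt — but the chunk/embed/score/rank flow is the same.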
Experience:
- 1-3 years of experience in AI application development
Remuneration: Industry standard, based on experience
Job Title: Senior Linux Kernel Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Domain: Enterprise Linux / Kernel Development
Job Summary
We are seeking a highly skilled Senior Linux Kernel Engineer with deep expertise in kernel development, debugging, and performance optimization. The role involves working on enterprise-grade Linux distributions, kernel lifecycle management, security patching, and low-level hardware integration.
Key Responsibilities
1. Kernel Lifecycle & Maintenance
- Lead kernel upgrade strategies (e.g., LTS migrations such as 5.15 → 6.x) while ensuring stability and compatibility.
- Perform patch porting across kernel versions, resolving API and dependency conflicts.
- Track and mitigate security vulnerabilities by monitoring CVEs and upstream sources (e.g., LKML).
- Backport critical fixes to production kernels without impacting system stability.
2. Debugging & System Stability
- Act as an escalation point for kernel panics and system crashes.
- Perform post-mortem analysis using kdump, crash, and gdb.
- Debug early boot issues (UEFI, initramfs, kernel initialization).
- Conduct performance analysis using eBPF, ftrace, and perf to optimize system behavior.
3. Driver Development & Hardware Integration
- Design, develop, and maintain device drivers (network, storage, GPU, or character devices).
- Work closely with hardware through DMA, interrupts (MSI-X), and register-level programming.
- Maintain out-of-tree drivers using DKMS or similar frameworks.
- Ensure compatibility of drivers across kernel updates.
Required Technical Skills
- Programming: Strong expertise in C (mandatory) and C++
- Kernel Internals: Deep understanding of:
- Virtual File System (VFS)
- Memory Management (MMU, Paging)
- Process Scheduler
- Linux Networking Stack
- Debugging Tools:
- kdump, crash, gdb
- kprobes, trace-cmd, ftrace
- perf, valgrind
- Hardware debugging tools (JTAG, Serial Console)
- Build Systems:
- Kbuild, Makefiles
- Kernel packaging (RPM/Debian)
- Security:
- Experience with CVE patching and backporting
- Knowledge of SELinux/AppArmor
- Kernel hardening (FIPS, KSPP)
Preferred Skills
- Experience contributing to open-source kernel projects
- Familiarity with Linux Kernel Mailing List (LKML) workflows
- Exposure to enterprise Linux distributions (RHEL, Ubuntu, SUSE)
- Experience with performance tuning and system optimization at scale
1. Core Programming (C Language)
- Must have strong hands-on experience in C programming
- Comfortable with pointers, memory management, and low-level concepts
2. Kernel Internals Expertise
- Should have worked in at least one subsystem:
- VFS / File Systems
- Memory Management
- Scheduler / Networking
3. Debugging & Crash Analysis
- Experience handling kernel panics
- Hands-on with vmcore analysis tools
4. Security & Patching
- Understanding of CVE fixes and backporting
5. Driver Development
- Experience in writing or maintaining device drivers
6. Performance & Advanced Debugging
- Exposure to eBPF, ftrace, perf
7. Hardware-Level Understanding
- Knowledge of DMA, interrupts, hardware interaction
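Triaging kernel panics (item 3 above) usually begins with scanning logs for known oops signatures before opening the vmcore in crash or gdb. The sketch below matches a few real dmesg signatures; the sample line is constructed for illustration.

```python
import re

# Well-known kernel log signatures worth flagging during first-pass triage.
# Deeper analysis happens on the vmcore with crash/gdb; this is just grep logic.
PANIC_PATTERNS = {
    "null_deref": re.compile(
        r"BUG: (?:kernel NULL pointer dereference"
        r"|unable to handle kernel NULL pointer)"),
    "oops": re.compile(r"Oops:"),
    "panic": re.compile(r"Kernel panic - not syncing"),
    "soft_lockup": re.compile(r"soft lockup"),
}

def classify_log_line(line: str):
    """Return the names of panic signatures matching a dmesg-style line."""
    return [name for name, pat in PANIC_PATTERNS.items() if pat.search(line)]

sample = ("[ 1234.5678] BUG: unable to handle kernel NULL pointer "
          "dereference at 0000000000000008")
print(classify_log_line(sample))
```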
Soft Skills
- Strong analytical and problem-solving abilities
- Excellent communication skills
- Ability to work independently and in collaborative environments
- Quick learner with adaptability to new technologies
Job Title: Cloud Development & Linux Debugging Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Job Summary
We are looking for an experienced Cloud Development & Linux Debugging Engineer with strong expertise in Linux internals, system-level programming, and cloud technologies. The ideal candidate will have hands-on experience in developing, debugging, and optimizing Linux-based systems along with exposure to DevOps tools and containerized environments.
Key Responsibilities
- Develop and debug software at the Linux system level (kernel/user space).
- Work on Linux internals, low-level system components, and performance optimization.
- Design, develop, and maintain applications using Python and C/C++.
- Troubleshoot complex issues in Linux and cloud-based environments.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Contribute to automation and infrastructure using DevOps tools.
- Work with containerized and cloud platforms such as Kubernetes and OpenStack.
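For the Kubernetes work above, a common task is generating manifests programmatically. A minimal sketch using only the standard library follows; in practice this would be a Helm template or a kubernetes-client call, and the name and image here are placeholders.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 1) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict.
    Placeholder values; real deployments add probes, resources, etc."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

print(json.dumps(deployment_manifest("debug-tools", "ubuntu:22.04",
                                     replicas=2), indent=2))
```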
Required Skills
- Strong experience in Linux software development (Linux internals, system-level programming).
- Proficiency in Python and C/C++.
- Solid debugging and analytical skills.
- Hands-on experience with Ansible, Puppet, and DevOps practices.
- Experience working with OpenStack and Kubernetes.
- Good understanding of Agile/Scrum methodologies.
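The debugging skill above applies on the Python side too: before reaching for perf or gdb, a quick allocation profile with the standard-library tracemalloc module often narrows the search. A minimal sketch, with a deliberately allocation-heavy workload for demonstration:

```python
import tracemalloc

def find_allocation_hotspots(workload, top_n=3):
    """Run a callable under tracemalloc and return the top allocating
    source lines -- a cheap first pass at memory-usage debugging."""
    tracemalloc.start()
    workload()
    snapshot = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return snapshot.statistics("lineno")[:top_n]

def leaky_workload():
    # Deliberately keep a large list alive so it shows up in the snapshot.
    global _kept
    _kept = [bytes(1024) for _ in range(1000)]

for stat in find_allocation_hotspots(leaky_workload):
    print(stat)
```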
- Excellent communication and teamwork skills.
Preferred Skills (Good to Have)
- Experience with Go (Golang) and Go templating.
- Knowledge of Kubernetes Operators and Helm.
- Exposure to containerization technologies (Docker, Kubernetes).
- Contributions to open-source projects.
- Experience with cloud-native architectures.
Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
- Self-driven individual with a strong learning mindset.
- Ability to work independently and in collaborative team environments.