About the Internship
The Nexora Group is looking for motivated and enthusiastic interns for the role of Data Science with AI Intern. This internship provides practical exposure to data analysis, machine learning, AI tools, and real-world datasets through guided mentorship and project-based learning.
Interns will gain hands-on experience working on live tasks related to data processing, visualization, predictive analytics, and AI-driven solutions.
Responsibilities
- Assist in collecting, cleaning, and analyzing datasets.
- Work on data visualization and reporting tasks.
- Support machine learning and AI model development activities.
- Participate in research and implementation of AI-based solutions.
- Perform data preprocessing and feature engineering tasks.
- Collaborate with mentors and team members on project assignments.
- Test, evaluate, and improve model performance.
- Maintain project documentation and reports.
Required Skills
- Basic understanding of Python or any programming language.
- Interest in Data Science, Analytics, and AI technologies.
- Knowledge of statistics, data handling, or logical reasoning is a plus.
- Good analytical and problem-solving skills.
- Ability to learn and work collaboratively in a team environment.
Preferred Skills (Optional)
- Familiarity with Python libraries such as Pandas, NumPy, or Matplotlib.
- Basic understanding of Machine Learning concepts.
- Knowledge of SQL, Excel, or data visualization tools.
- Interest in Artificial Intelligence and automation technologies.
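For a sense of the day-to-day work, data cleaning with Pandas (listed above as a preferred skill) often looks something like the sketch below. The dataset and column names here are invented purely for illustration:

```python
import pandas as pd

# Invented raw dataset showing common cleaning tasks: inconsistent casing,
# a numeric column stored as text, and missing values.
raw = pd.DataFrame({
    "city": ["Delhi", "mumbai", None, "Chennai"],
    "sales": ["100", "250", "175", None],
})

df = raw.copy()
df["city"] = df["city"].str.title()                     # normalise casing
df["sales"] = pd.to_numeric(df["sales"])                # text -> numbers; None -> NaN
df["sales"] = df["sales"].fillna(df["sales"].mean())    # impute missing sales
df = df.dropna(subset=["city"]).reset_index(drop=True)  # drop rows with no city
```

Feature engineering and preprocessing tasks build on the same operations: type conversion, imputation, and filtering before a dataset reaches a model.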
Who Can Apply
- Students pursuing B.Tech, BCA, MCA, B.Sc IT, M.Tech, or related fields.
- Freshers seeking practical exposure in Data Science and AI.
- Candidates passionate about analytics, AI, and emerging technologies.
Perks & Benefits
- Hands-on experience with real-world datasets and projects.
- Mentorship from experienced professionals.
- Internship Completion Certificate.
- Letter of Recommendation based on performance.
- Flexible learning and working environment.
- Opportunity to build practical technical skills.
- Top performers may receive future opportunities based on project requirements.
About Techjays
At Techjays, we build production-grade AI platforms for global clients. We operate at the intersection of backend engineering, distributed systems, and applied AI — delivering secure, scalable, and enterprise-ready intelligent systems. Our team has built and scaled products at Google, Akamai, NetApp, ADP, Cognizant, and Capgemini.
About the Role
This is not a feature-delivery role. We are looking for an AI Lead who can architect, own, and scale intelligent backend systems end-to-end. You will drive both technical direction and execution — working across LLM integrations, RAG pipelines, agentic AI workflows, and cloud-native backend systems for global clients.
What You'll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement RAG pipelines, AI agents, and LLM integrations
- Own systems end-to-end — from architecture to deployment and scaling
- Integrate and optimize LLMs (Claude, GPT, Gemini) for real-world production use cases
- Build high-performance distributed systems with observability and cost efficiency
- Lead backend and AI initiatives with strong technical ownership
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to deliver AI-native solutions
What We're Looking For
- 6–10 years of strong backend engineering experience
- Hands-on expertise in Python (FastAPI / Django / Flask)
- Deep understanding of Generative AI and LLM-based systems
- Strong experience with RAG pipelines and Vector Databases (Pinecone, FAISS, ChromaDB, Weaviate)
- Solid knowledge of Agentic AI — building autonomous agents and multi-agent workflows
- Proficiency in AWS or GCP in production environments
- Experience with distributed systems, microservices, and system design
- Strong grasp of Data Structures, Algorithms, and Design Patterns
- Familiarity with WebSockets, Git, Linux/Unix, and CI/CD
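As a rough illustration of the retrieval step in a RAG pipeline (one of the core skills above), here is a minimal sketch that ranks documents against a query. It uses a toy bag-of-words similarity in place of a real embedding model and vector database (Pinecone, FAISS, etc.), and the documents are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector. A production system
    would call a real embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoices are processed within three business days",
    "the support team is reachable by email",
    "refunds for invoices take five business days",
]
context = retrieve("how long do invoices take", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The retrieved context is then passed to the LLM as grounding, which is the essence of retrieval-augmented generation.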
Nice to Have
- Experience with Anthropic Claude API and Claude Code
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- MLOps and AI system lifecycle experience
- Optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable navigating fast-moving, ambiguous environments
- You stay updated with the latest in Generative AI and backend technologies
- Strong communicator who can collaborate across teams and global clients
What We Offer
- Competitive compensation (Best in Industry)
- Work on production-grade AI systems used by global clients
- Exposure to cutting-edge AI tools and frameworks
- A culture that values clarity, integrity, and continuous growth
Role Overview:
Virtana is looking for a Senior DevOps Engineer to join our R&D Infrastructure team. In this role, you won't just follow conventions — you'll help redefine them. You will own the architecture, build, and day-to-day operations of the GCP-based cloud platform that powers Virtana's SaaS products and the AI-driven observability experience our Global 2000 customers depend on. This is a hands-on senior individual contributor role with meaningful technical leadership scope, working alongside engineers and architects on a unified observability platform.
Work Location: Pune
Job Type: Hybrid
Role Responsibilities:
- GCP Cloud Operations: Develop, deploy, operate, and support production cloud infrastructure primarily on GCP — leveraging GKE, BigTable, BigQuery, Dataflow, Cloud Storage, IAM, and core networking services.
- Reliability & SLAs: Ensure production systems are running at all times with multiple levels of redundancy to meet committed SLAs; lead incident response, root cause analysis, and post-incident reviews.
- Build & Release Automation: Design, implement, and continuously improve scalable CI/CD pipelines and test frameworks leveraged by QA and development teams across the company.
- Infrastructure as Code: Manage large-scale, repeatable deployments using Terraform, Ansible, Puppet, or SaltStack; champion Git-based workflows and version control standards for distributed engineering teams.
- Security & Availability: Maintain the ongoing maintenance, security, patching, and availability of services in line with tight operations, security, and procedural models.
- Monitoring & Alerting: Plan and deliver high-value monitoring and alerting features to support operations, support, and customer-facing reliability — eating our own dog food with the Virtana Platform wherever possible.
- Capacity & Cost: Forecast capacity, plan upgrades, patches, and migrations, and drive cloud cost efficiency across hybrid and multi-cloud environments.
- Cross-Functional Partnership: Work with development, operations, and support personnel to identify, isolate, and diagnose issues; handle support escalations and drive permanent fixes.
Required Qualifications:
- Bachelor's degree in Computer Science / Engineering or equivalent relevant experience.
- 5–7 years of professional hands-on DevOps / SRE experience supporting production cloud environments.
- Strong, demonstrable production experience on GCP — including GKE, BigTable, BigQuery, Dataflow, IAM, and core GCP networking services.
- Deep, hands-on expertise with container orchestration (Kubernetes) and Docker in production.
- Advanced proficiency with at least one infrastructure-as-code / configuration management tool: Terraform, Ansible, Puppet, or SaltStack.
- Solid understanding of networking, firewalls, load balancers, DNS, and database operations.
- Strong working knowledge of Git-based workflows and version control standards for distributed engineering teams.
- Comfort operating hybrid environments that include both Linux and Windows ecosystems.
- Excellent verbal and written communication skills, with the ability to explain highly technical topics to both technical and non-technical audiences.
- Self-motivated, detail-oriented, and able to work both independently and within a globally distributed team.
Good to Have:
- Strong scripting skills and a demonstrated ability to automate operational toil — Python preferred; Bash, Go, or Groovy a plus.
- Hands-on experience designing and operating CI/CD pipelines with Jenkins (Spinnaker, GitHub Actions, or GitLab CI also welcome).
- Exposure to AWS or other public clouds in addition to GCP.
- Experience operating SaaS platforms built on microservices architectures.

Who are we?
Trendlyne is a Series-A funded, profitable product startup in the financial markets space. We have cutting-edge analytics products built for Indian and US customers, for stock markets and mutual funds. Our founders are IIT + IIM graduates, with strong tech and marketing experience.
What do we do?
We build powerful analytics in the US and Indian stock market space that are best in class. Organic growth in B2B and B2C products has already made the company profitable. We serve 1 billion+ API calls every month to B2B customers, and have 25 lakh+ monthly active users on the Trendlyne B2C website and app.
Visit us at trendlyne.com, or look for the trendlyne mobile app on the Google Play Store:
https://play.google.com/store/apps/details?id=com.trendlyne.markets
Tech Stack
- Python (Django)
- PostgreSQL / MySQL handling millions of row insertions each day
- Vector databases / RAG managing all documents
- Redis / caching systems
- REST APIs
- 100+ AI tool and foundation model integrations
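As a small illustration of the "Redis / caching systems" item in the stack above, here is a sketch of result caching with a TTL. A plain dict stands in for Redis so the example is self-contained; the function name and return value are invented:

```python
import time
from functools import wraps

# In production this would be a Redis client; an in-memory dict
# stands in here to keep the sketch self-contained.
_cache: dict = {}

def cached(ttl_seconds: float):
    """Cache a function's result for ttl_seconds, keyed by its arguments."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            key = (fn.__name__, args)
            hit = _cache.get(key)
            if hit is not None and time.monotonic() - hit[0] < ttl_seconds:
                return hit[1]          # cache hit: skip the expensive call
            value = fn(*args)
            _cache[key] = (time.monotonic(), value)
            return value
        return wrapper
    return decorator

calls = 0

@cached(ttl_seconds=60)
def expensive_screener(symbol: str) -> str:
    global calls
    calls += 1                         # count how often the body actually runs
    return f"analytics for {symbol}"

expensive_screener("TCS")
expensive_screener("TCS")              # second call is served from cache
```

At billions of rows and millions of API requests per hour, this kind of read-through caching is typically what keeps database load manageable.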
Job Responsibilities :
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code, considering the performance implications of handling billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle, from requirement to release.
- Write and maintain clear technical documentation that enables other engineers to step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team.
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Demonstrate a proficient understanding of web and mobile application performance.
Job Requirements :
- Ownership mindset: you build, ship, and improve
- 1+ years of experience with Python and Django.
- Good knowledge of APIs, databases, and system performance
- Strong understanding of relational databases like PostgreSQL or MySQL, and Redis.
- (Optional) Experience with web front-end technologies such as JavaScript, HTML, and CSS
Job Title: Data Architect (Azure)
Location: Remote
Fulltime
Role Description
We are looking for a seasoned data leader to design, build, and own enterprise-scale data platforms on Azure. This role goes beyond development — it requires end-to-end accountability for architecture, data pipelines, transformation frameworks, and production readiness.
You will act as the critical link between business stakeholders, data engineering teams, and analytics functions, ensuring scalable and high-performance data solutions are delivered and maintained.
Key Responsibilities:
- Design and implement robust data pipelines using Azure Data Factory (ADF), including integration with REST APIs and external data sources
- Build scalable data transformation workflows using Databricks (PySpark), handling complex and nested JSON datasets
- Architect and implement Delta Lake-based data platforms, including fact and dimension models (star schema)
- Define and enforce best practices for data modeling, performance optimization, and cost efficiency
- Own end-to-end data platform lifecycle — from architecture and deployment to monitoring and operational support
- Establish production readiness frameworks, including logging, alerting, and data quality checks
- Collaborate closely with business and analytics teams to translate requirements into scalable technical solutions
- Mentor engineering teams and drive architectural governance across projects
Required Experience & Skills:
• Experience building pipelines with Azure Data Factory
• Experience connecting to REST API sources using Azure Data Factory
• Experience building transformations with Databricks using PySpark
• Experience handling complex nested JSON files using PySpark
• Experience designing dimensional models/star schema
• Experience implementing facts and dimension tables in Databricks Delta Lake
• Around 15-20 years of solid experience in building, managing, and optimizing enterprise data platforms with at least 5 years in Azure cloud data services
• Act as a bridge between business, data engineering, and analytics teams to ensure requirements are clearly understood and implemented correctly
• Own end-to-end production readiness of the data platform, including architectural design, deployment patterns, monitoring strategy and operational support.
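To illustrate the "complex nested JSON" requirement above, here is a minimal, framework-free sketch of flattening a nested record into the flat columns a fact or dimension table expects. A real implementation would use PySpark's nested-struct handling on Databricks; the payload fields here are invented:

```python
def flatten(record: dict, parent: str = "", sep: str = "_") -> dict:
    """Recursively flatten a nested JSON-like dict into a single level,
    joining keys with sep - similar in spirit to exploding nested struct
    columns before loading Delta Lake fact/dimension tables."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

# Invented REST API payload with two levels of nesting.
api_payload = {
    "order_id": 42,
    "customer": {"id": 7, "address": {"city": "Pune"}},
    "amount": 199.0,
}
row = flatten(api_payload)
```

The flattened keys (`customer_id`, `customer_address_city`, ...) then map directly onto columns in a star-schema table.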
We are looking for a skilled Data Engineer with strong expertise in Java and Apache Spark, specializing in data ingestion and large-scale data processing. The ideal candidate will design and build scalable, high-performance data pipelines and contribute to modern analytics platforms in a fast-paced Agile environment.
This role requires hands-on experience in building ingestion frameworks, optimizing Spark workloads, and working with cloud-based data ecosystems.
Key Responsibilities
- Design, develop, and maintain scalable data ingestion pipelines using Java and Apache Spark.
- Build and optimize Spark jobs (Spark Core, Spark SQL, DataFrames, Streaming) for large-scale batch and real-time processing.
- Develop reusable ingestion frameworks for structured and semi-structured data from multiple sources (APIs, databases, files, streaming systems).
- Implement high-performance ETL/ELT solutions with strong focus on data quality, reliability, and scalability.
- Collaborate with data architects, analysts, and cross-functional teams to design robust data workflows.
- Optimize Spark performance (partitioning, caching, tuning, memory management) for production environments.
- Contribute to CI/CD pipelines, code reviews, and best practices in data engineering.
- Troubleshoot data pipeline failures and implement monitoring and alerting mechanisms.
- Document technical designs and mentor junior engineers.
Required Skills & Qualifications
- 3–7 years of strong hands-on experience in Data Engineering and Java development.
- Strong expertise in Apache Spark (Spark Core, Spark SQL, DataFrames, Structured Streaming).
- Solid experience in data ingestion, ETL/ELT, and building data pipelines.
- Working knowledge of Java.
- Experience handling large-scale data processing and distributed systems.
- Familiarity with Maven/Gradle, Git, and CI/CD practices.
- Strong SQL skills and understanding of data modeling concepts.
- Excellent problem-solving and communication skills.
- Must be open to working from the Bangalore location.
Secondary / Preferred Skills
- Experience with Databricks (AWS preferred) for Spark-based data engineering.
- Hands-on experience with Snowflake for cloud data warehousing.
- Working knowledge of DBT (Data Build Tool) for analytics engineering and transformations.
- Exposure to Azure cloud services (Databricks).
- Experience with Kafka, Airflow, or orchestration tools.
- Familiarity with Docker/Kubernetes.
- Basic Python scripting for automation and data manipulation.
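As a conceptual outline of the reusable ingestion-framework idea in the responsibilities above, here is a generator-based extract/transform/load sketch. The posting's primary stack is Java and Spark; this Python version (matching the "basic Python scripting" item) only illustrates the shape of the pipeline, and the record fields are invented:

```python
def extract(rows):
    """Yield raw records from a source; an in-memory list stands in
    for an API, file, database, or streaming source."""
    yield from rows

def transform(records):
    """Drop malformed records and normalise the rest - the kind of
    data-quality step an ingestion framework applies."""
    for r in records:
        if r.get("id") is None:
            continue                     # reject records missing a key field
        yield {"id": int(r["id"]), "value": float(r.get("value", 0))}

def load(records):
    """Collect into a 'sink'; a real pipeline would write to a table."""
    return list(records)

source = [{"id": "1", "value": "10.5"}, {"id": None}, {"id": "2"}]
sink = load(transform(extract(source)))
```

Because each stage is a generator, records stream through one at a time rather than being materialised in full, which is the same lazy-evaluation principle Spark applies at cluster scale.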
The Role
As a Senior Site Reliability Engineer at Blitzy's Pune headquarters, you will be the backbone of our platform's reliability, scalability, and operational excellence. You'll work at the intersection of software engineering and infrastructure, ensuring our AI-powered development platform remains highly available and performant as we scale rapidly. This is a high-impact, hands-on role for an engineer who thrives in a fast-moving environment and takes deep ownership of the systems they build.
What Success Looks Like
- In 30 days: You have a deep understanding of Blitzy's infrastructure architecture, have identified key reliability risks, and are actively contributing to on-call rotations.
- In 90 days: You have shipped meaningful improvements to observability, incident response workflows, and deployment pipelines that measurably reduce MTTR and increase system uptime.
- In 6 months: You have driven at least one major reliability initiative from inception to production, established SLO/SLA frameworks for critical services, and are a trusted technical voice shaping our infrastructure roadmap.
Areas of Ownership
- Design, build, and operate scalable, fault-tolerant infrastructure across cloud environments (AWS, GCP, or Azure).
- Define and enforce SLOs, SLAs, and error budgets; lead blameless postmortems and drive systemic improvements.
- Build and maintain robust CI/CD pipelines, release automation, and deployment infrastructure.
- Own observability: design and maintain logging, metrics, tracing, and alerting stacks (e.g., Prometheus, Grafana, Datadog, OpenTelemetry).
- Partner closely with software engineering teams to embed reliability practices into the development lifecycle.
- Drive capacity planning, performance benchmarking, and cost optimization across our infrastructure.
- Champion security best practices within the infrastructure and deployment layers.
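The error-budget arithmetic behind the SLO responsibilities above can be sketched in a few lines. This is standard SRE math, not Blitzy-specific policy:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO over a
    rolling window: total minutes x (1 - SLO)."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of error budget;
# once incidents consume it, the team shifts effort from features to reliability.
budget = error_budget_minutes(0.999)
```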
Required Experience
- 5+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.
- Strong proficiency in at least one major cloud platform (AWS preferred); experience with Kubernetes and container orchestration at scale.
- Hands-on experience with infrastructure-as-code tools (Terraform, Pulumi, or equivalent).
- Proven track record designing and maintaining high-availability, distributed systems.
- Deep expertise in observability tooling, incident management, and on-call practices.
- Strong scripting and automation skills (Python, Go, Bash, or similar).
- Excellent communication skills with the ability to collaborate across engineering teams and present technical findings to leadership.
What Makes You Stand Out
- Experience supporting AI/ML workloads or GPU-accelerated infrastructure.
- Prior experience in a high-growth startup environment where you wore multiple hats.
- Familiarity with eBPF, service mesh technologies (Istio, Linkerd), or advanced networking.
- Contributions to open-source SRE/DevOps tooling or communities.
- Experience building global, multi-region infrastructure with strict latency and availability requirements.
What Makes This Role Different
You won't be maintaining legacy systems or fighting fires in a sprawling monolith. At Blitzy, you're building reliability into a greenfield AI platform that is redefining how the world creates software. You'll have direct influence over architectural decisions, work side-by-side with world-class engineers, and see the tangible impact of your work as we scale to serve Fortune 500 customers. As a founding member of the Pune SRE team, you'll help shape the culture and technical standards of a team that will grow with the company.
The Role
As a DevOps Engineer at Blitzy's Pune headquarters, you'll build and operate the infrastructure that powers our AI agents and the applications they produce. You'll work at the intersection of cloud infrastructure, developer tooling, and AI-native systems — designing the pipelines, clusters, and automation that allow Blitzy to ship production-ready software at machine speed. This is a hands-on, high-ownership role for an engineer who moves fast, automates everything, and cares deeply about developer experience and system reliability.
What Success Looks Like
- Kubernetes clusters are running reliably at scale, with clear deployment standards, Helm-managed releases, and minimal manual intervention required from engineering teams.
- CI/CD pipelines are fast, consistent, and trusted — developers ship confidently knowing the automation handles the rest.
- Observability is comprehensive: alerts are actionable, dashboards are meaningful, and incidents are resolved faster because the right data is always available.
- Infrastructure provisioning is fully automated — no snowflake environments, no manual setup, everything reproducible through code.
- AI agent orchestration infrastructure is stable and scalable, directly enabling Blitzy's core product to deliver for enterprise customers.
- Engineering teams notice the difference — developer productivity is measurably higher and infrastructure is no longer a bottleneck to shipping.
Areas of Ownership
- Build and manage Kubernetes clusters supporting AI agent workloads and application deployment at scale.
- Design, implement, and maintain CI/CD pipelines for application and AI service delivery — ensuring speed, reliability, and repeatability.
- Automate infrastructure provisioning and dynamic scaling using Python scripts and Terraform IaC.
- Deploy and manage applications using Helm charts; own packaging standards and release automation.
- Build and maintain comprehensive observability stacks — alerting, distributed tracing, metrics, and logging (e.g., Prometheus, Grafana, Datadog, OpenTelemetry).
- Monitor and maintain production services and APIs; own incident response and drive blameless postmortems.
- Build dedicated infrastructure for AI agent orchestration and management, enabling Blitzy's core autonomous development capabilities.
- Collaborate with engineering teams on deployment strategies and continuously improve developer experience through tooling and automation.
Required Experience
- 5–8 years of DevOps, infrastructure, or platform engineering experience.
- Python proficiency for scripting, automation, and infrastructure tooling.
- Deep Kubernetes expertise — cluster management, workload deployment, scaling, and troubleshooting.
- Hands-on Helm experience for application packaging and release management.
- Proven ability to design and implement CI/CD pipelines across complex, multi-service environments.
- Practical experience with at least one major cloud platform (AWS, GCP, or Azure).
- Terraform proficiency for infrastructure-as-code provisioning and state management.
- Strong Linux administration and containerization fundamentals (Docker, OCI).
What Makes You Stand Out
- CKA (Certified Kubernetes Administrator) certification.
- Familiarity with MLOps tooling such as MLflow, Kubeflow, or similar platforms for AI/ML workload management.
- Experience with microservices architecture and distributed systems design.
- Knowledge of API gateways and service mesh technologies (Istio, Linkerd, or equivalent).
- Prior experience in a high-growth AI or software startup where you moved fast and owned broadly.
- Track record of meaningfully improving developer productivity through platform and tooling investments.
What Makes This Role Different
Most DevOps roles have you maintaining existing systems. At Blitzy, you're building the infrastructure layer for a platform that autonomously writes enterprise software — a genuinely new category of product. You'll work on AI agent orchestration, Kubernetes at scale, and developer tooling that is directly responsible for how fast Blitzy delivers value to Fortune 500 customers. As an early member of the Pune engineering team, you'll have outsized influence over our infrastructure culture and technical direction. High performers are eligible for company equity — giving you real ownership in what you build.
Role: Senior Software Engineer (SDE III) – AI & Backend
Location: Bangalore
Experience: 5-8 Years
Notice: Immediate – 15 Days
Work Mode: Work From Office
Type: Contract to Hire (CTH)
Role Overview
We are looking for a Senior Software Engineer with strong expertise in backend development and AI systems. You will build scalable platforms, AI-powered agents, and real-time voice applications.
What You’ll Do
● Design and develop scalable backend services using Python
● Build microservices, RESTful APIs, and event-driven systems
● Develop AI-powered agents using LLM frameworks and multi-agent systems
● Work on voice systems using STT/TTS and real-time streaming architectures
● Integrate APIs, vector databases, and RAG-based systems
● Collaborate with cross-functional teams and mentor junior engineers
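As a minimal sketch of the "event-driven systems" item above, here is an in-process publish/subscribe bus. A production system would use a broker such as Kafka or a cloud queue rather than in-memory dispatch, and the topic name and payload are invented:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process publish/subscribe bus: producers and consumers
    are decoupled by topic, the core idea of event-driven design."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("call.transcribed", received.append)
bus.publish("call.transcribed", {"text": "hello", "latency_ms": 120})
```

In a real-time voice pipeline, an STT service might publish such events while downstream agents subscribe, without either side knowing about the other.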
What We’re Looking For
● 5+ years of experience in software development
● Strong proficiency in Python.
● Experience with microservices and distributed systems
● Hands-on experience with LLMs and agent frameworks (LangChain, AutoGen, etc.)
● Familiarity with vector DBs, embeddings, and cloud platforms (AWS/GCP/Azure)
● Good communication skills and ownership mindset
Nice to Have
● Experience with voice systems (STT/TTS) and real-time architecture
● Exposure to MCP or similar frameworks
● Experience building AI copilots or multi-modal systems
AccioJob is conducting a Walk-In Hiring Drive with Medha AI for the position of Data Engineer.
Apply Now:
https://go.acciojob.com/rWVc8X
Required Skills: SQL, Python, Pandas
Eligibility:
Degree: BTech./BE
Branch: All
Graduation Year: 2025, 2026
Work Details:
Work Location: Bangalore Urban (Onsite)
CTC: ₹6 LPA to ₹7 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Bangalore Centre
Further Rounds (for shortlisted candidates only):
Resume Evaluation, Technical Interview 1, Technical Interview 2, HR Discussion
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/rWVc8X

Nasdaq Listed AI-driven Content/Knowledge Management Company
Role: Senior Technical Consultant
Schedule: Fully remote, but requires working in the EDT zone (second shift, until 12:30 AM IST)
Benefits: Cell-Phone Reimbursement, Food Allowance, Internet Allowance, Commute Allowance
Company:
Upland Software (Nasdaq listed) is a leader in AI-powered knowledge and content management software. Our solutions help enterprises unlock critical knowledge, automate content workflows, and drive measurable ROI—enhancing customer and employee experiences while supporting regulatory compliance. More than 1,100 enterprise customers rely on Upland to solve complex challenges and provide a trusted path for AI adoption.
Job Description
Opportunity Summary:
We are seeking an experienced Senior Technical Consultant to join our India-based Center of Excellence team. This role demands both technical depth and customer presence—someone who thrives under pressure, communicates clearly, and solves complex integration and troubleshooting challenges in real time, often while collaborating directly with customers on live calls.
What would you do?
· Customer Engagement & Solutioning
o Lead and participate in technical meetings with enterprise customers, including troubleshooting sessions and go-live support calls.
o Translate complex technical details into language understandable by both technical and non-technical stakeholders.
o Calmly manage tense situations and help de-escalate when technical issues or deadlines create pressure.
· Technical Implementation & Integration
o Configure, integrate, and optimize BA Insight connectors with systems such as SharePoint, FileNet, iManage, and other enterprise repositories.
o Utilize REST APIs and scripting (C#, PowerShell, or similar) to implement or extend integration logic.
o Troubleshoot complex issues across multi-tier environments including application servers, connectors, indexing services, and authentication systems.
o Support orchestration pipelines that prepare data (chunking, metadata tagging, indexing) for LLM/AI consumption.
· Cross-Functional Collaboration
o Work closely with Project Managers, Solutions Consultants, and Support to ensure project milestones are met.
o Partner with Product and R&D to identify defects, suggest enhancements, and test new connector functionality.
o Support Sales and Customer Success teams in technical scoping discussions and feasibility assessments.
· Documentation & Process Discipline
o Maintain detailed technical documentation and logs of customer work.
o Track time and deliverables accurately using internal project management and time-tracking tools.
o Follow established PS methodology and internal QA/SDLC standards.
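The chunking step mentioned above (preparing data for LLM/AI consumption) can be sketched as a simple overlapping-window splitter. Production pipelines typically chunk on tokens or sentence boundaries rather than raw characters, so treat this as a conceptual sketch with invented parameters:

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows. Overlap preserves
    context across chunk boundaries so retrieval doesn't lose sentences
    that straddle a split point."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Enterprise search connects content across SharePoint, FileNet, and iManage."
chunks = chunk_text(doc, size=40, overlap=10)
```

Each chunk would then be tagged with metadata (source system, permissions) and indexed, so the AI layer can ground its answers in retrievable, access-controlled content.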
What are we looking for?
· Minimum 4–6 years of experience in a technical consulting, support engineering, or implementation role within a SaaS or enterprise software company.
· Experience with enterprise content management or search-based applications (e.g., SharePoint, iManage, OpenText, Documentum, Elastic, Solr) strongly preferred.
· Prior exposure to AI or data orchestration workflows a plus (chunking, vectorization, or integration with LLMs).
Primary Skills:
The candidate must possess the following primary skills:
Core Engineering & Development
· Strong proficiency in scripting and automation (Python, PowerShell, etc.)
· Solid experience with APIs, integrations, and data pipelines
· Ability to design and implement scalable, reusable solutions
Enterprise Search (Hands-On)
· Practical experience with:
o Indexing pipelines and ingestion workflows
o Metadata modeling and enrichment strategies
o Relevance tuning and search optimization
· Ability to diagnose issues related to:
o Missing content
o Poor ranking
o Incorrect search results
AI & Copilot Troubleshooting (Critical Skill)
· Hands-on experience or strong capability to troubleshoot:
o Microsoft Copilot / AI search issues
o Data grounding problems (AI not finding the right content)
o Permissions and access-related issues
o Microsoft Graph / Search indexing gaps
· Strong understanding of:
o Retrieval-Augmented Generation (RAG) concepts (high level)
o How AI depends on data quality, structure, and access
Enterprise Platform Experience
· Hands-on experience with one or more:
o Microsoft ecosystem (SharePoint, OneDrive, Graph, Copilot)
o Salesforce, ServiceNow, or similar enterprise systems
· Understanding of authentication (SSO, OAuth) and access models
Advanced Troubleshooting & Problem Solving
· Ability to diagnose cross-system issues (AI + search + data source)
· Strong root cause analysis skills across:
o Data
o Integration
o Platform configuration
Delivery & Ownership
· Ability to work independently on deployments and complex issues
· Experience in customer-facing or delivery environments
· Capability to guide/mentor junior engineers
Soft Skills
· Fast learner capable of mastering complex products and customer environments quickly
· Self-starter who can manage multiple projects with minimal supervision
· Team-oriented mindset with strong empathy for customers and colleagues
· Passionate about connecting enterprise data with AI to drive intelligent outcomes
· Exceptional troubleshooting and analytical abilities
· Passionate about delivering an amazing customer experience
· Able to have a change of mind, and able to change the minds of others
· Writes clearly and concisely
· Capable of working without a company office, with a fully remote team
Growth Skills
· Possesses a good work ethic; a self-starter with a desire to grow
· Always looking for better ways to get the job done
Work Schedule
· As mentioned above.
· Flexibility to alternate schedules with other CoE Project Managers to ensure consistent U.S. coverage.
Qualification
Bachelor’s degree or technical institute degree/certificate in Computer Science, Information Systems, or another related field, or an equivalent combination of knowledge and experience. This role requires regular overlap with multiple time zones for planning meetings, status updates, and similar activities; the duration of these overlaps varies with the type of meeting. Upland India offers flexibility in managing your working hours to support your work-life balance. You can find out more about this during your interview conversation.
About BA Insight
Upland BA Insight is an enterprise search and AI orchestration platform that connects knowledge across systems like SharePoint, FileNet, iManage, Jira, and Documentum. Our technology enables intelligent search and data orchestration pipelines that prepare content for use with AI models such as ChatGPT, Microsoft Copilot, and Azure OpenAI.
Job Title : Technical Team Lead (Python/Django)
Experience Required : 8+ Years
Location : Surat
Work Mode : On-site
Employment Type : Full-time
About the Role :
We are looking for an experienced Technical Team Lead to drive architecture, scalability, and end-to-end product engineering for a fast-growing technology platform.
The ideal candidate should have strong expertise in Python/Django-based backend systems, scalable cloud infrastructure, and modern frontend/mobile technologies.
This role requires a hands-on technical leader who can mentor engineers, make architectural decisions, ensure production stability, and collaborate closely with product and design teams to deliver high-quality software solutions.
Mandatory Skills :
Python, Django, MongoDB, REST APIs, AWS, CI/CD, React.js/Flutter, JWT Authentication, GitHub Actions, AI/ML Integrations, Team Leadership.
Key Responsibilities :
- Lead architecture and technical direction across backend, frontend, mobile, and infrastructure layers.
- Design and maintain scalable, secure, and high-performance systems.
- Review code, enforce engineering standards, and maintain development best practices.
- Manage backend APIs, database optimization, and cloud deployments.
- Drive CI/CD implementation and release management processes.
- Work on AI/ML integrations and intelligent automation features.
- Ensure smooth end-to-end feature delivery from development to production deployment.
- Mentor and guide backend/frontend engineers and conduct technical reviews.
- Collaborate with product, design, and business stakeholders.
- Handle production incidents, debugging, and system reliability ownership.
- Improve application performance, monitoring, and scalability.
Technical Skills Required :
Backend & Database :
- Strong expertise in Python and Django framework.
- Experience building and maintaining large-scale REST APIs.
- MongoDB design, aggregation pipelines, & query optimization.
- Authentication and authorization using JWT.
- Experience with WebSockets and real-time systems.
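For context on the JWT requirement above, the token structure can be sketched with only the standard library. This is illustrative of the header.payload.signature format; a production Django service would use a vetted library such as PyJWT or djangorestframework-simplejwt rather than hand-rolled signing.

```python
# Sketch of how an HS256 JWT is built and verified, using only the standard
# library. Illustrative only -- use a vetted JWT library in production.
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    mac = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256)
    return f"{header}.{body}.{b64url(mac.digest())}"

def verify(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256)
    if not hmac.compare_digest(b64url(expected.digest()), sig):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"sub": "user-42", "exp": int(time.time()) + 3600}, "secret-key")
claims = verify(token, "secret-key")
print(claims["sub"])  # user-42
```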
Frontend & Mobile :
- Hands-on experience with React.js and Redux.
- Knowledge of Flutter (Dart) is preferred.
Cloud & DevOps :
- AWS EC2, S3, and cloud infrastructure management.
- CI/CD pipeline implementation using GitHub Actions.
- Git workflows, branching strategy, PR reviews, and release management.
AI & Integrations :
- Experience integrating OpenAI or similar AI/ML services.
- Payment gateway integration experience.
- Push notification systems such as Firebase FCM.
Required Qualifications :
Must Have :
- 8+ years of experience in Backend or Full Stack Development.
- Strong experience with Python/Django in production environments.
- Expertise in scalable backend architecture and API development.
- Experience with MongoDB optimization and cloud deployments.
- Hands-on experience with CI/CD and DevOps workflows.
- Strong debugging, problem-solving, and leadership abilities.
Preferred Skills :
- Experience leading teams of 3–8 engineers.
- Exposure to AI-driven applications or intelligent systems.
- Experience in Health-tech, Food-tech, or consumer platforms.
- Knowledge of scalable real-time applications and event-driven systems.
Development & Engineering Practices :
- Feature branch-based development workflow.
- Pull Request reviews with approval process.
- Protected production branches and controlled deployments.
- Sandbox/UAT testing before production release.
- Automated deployment pipelines using GitHub Actions.
- Strong focus on auditability, monitoring, and production stability.
Soft Skills :
- Strong communication and stakeholder management skills.
- Ownership mindset with leadership capabilities.
- Ability to work in a fast-paced product environment.
- Team mentoring and collaboration skills.
- High attention to scalability, reliability, and code quality.
Company Description
Eassy Onboard LLP is a team of Databricks Certified Data Engineers committed to empowering businesses through data-driven solutions. Specializing in automated workflows, scalable architectures, optimized data pipelines, and AI solutions, we help organizations reduce manual effort, optimize costs, and achieve reliable insights. We ensure secure data operations with robust validation processes and strong data integrity. As an Employer of Record (EOR), we assist global companies in hiring top Indian talent, managing payroll, compliance, and regulatory requirements. Our mission is to accelerate enterprise transformation and enable companies to build future-ready, compliant teams.
Role Description
We are seeking a Senior Data Engineer with deep expertise in Spark/PySpark/SQL to join our data team.
This is a hands-on technical role for someone passionate about building scalable data systems, mentoring engineers, and shaping data strategy.
You will architect systems that power high-performance data processing, enable advanced analytics, and accelerate AI initiatives.
What You'll Do
- Design and evolve scalable, distributed data infrastructure across cloud platforms including GCP and AWS.
- Build and maintain real-time and batch data processing pipelines supporting AI/ML workloads, consumer applications, and analytics.
- Develop and manage integrations with third-party e-commerce platforms to expand the data ecosystem.
- Ensure data availability, reliability, and quality through monitoring and automated auditing.
- Partner with engineering, AI, and product teams on data solutions for business-critical needs.
- Mentor and support data engineers, establishing best practices and code quality standards.
Qualifications
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
- 5+ years of software development and data engineering experience with ownership of production-grade data infrastructure.
- Deep expertise in scaling Spark, PySpark, and SQL in production, including Databricks or Dataproc on GCP.
- Strong understanding of distributed computing and modern data modeling for scalable systems.
- Proficient in Python with experience implementing software engineering best practices.
- Hands-on experience with both relational and NoSQL systems including MySQL, MongoDB, and Elasticsearch.
- Strong communicator with experience influencing cross-functional stakeholders.
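The grouped-aggregation pattern at the heart of many Spark SQL and warehouse jobs can be shown with the stdlib sqlite3 engine for illustration; on a cluster the same SQL would run via `spark.sql(...)` over a partitioned dataset. Table and column names here are invented for the example.

```python
# Grouped aggregation: the relational core of many Spark/warehouse pipelines,
# demonstrated on stdlib sqlite3 so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# Total spend per user -- identical SQL would run on Spark or Snowflake.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events "
    "GROUP BY user_id ORDER BY total DESC"
).fetchall()
print(rows)  # [('u1', 15.0), ('u2', 7.5)]
```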
Nice to Have
- Experience with job orchestration and containerization tools such as Airflow and Docker.
- Experience working with vector stores and knowledge graphs.
- Experience working in early-stage, high-growth environments.
- Familiarity with MLOps pipelines and integrating ML models into data workflows.
- A proactive, problem-solving mindset with a passion for innovative solutions.

It’s a global digital engineering and technology (MNC)
Job Details:
- Role: Senior Staff Engineer
- Experience: 7.5-9 Years
- Employment Type: Full-time
- Work Mode: Remote
Job Description
REQUIREMENTS:
- Strong hands-on experience in Java and Python
- Expertise in Microsoft Azure AI/ML services
- Experience with LLM application frameworks (LangChain, LangGraph, or similar)
- Strong experience in API development and system integration
- Experience building backend systems and scalable architectures
- Solid understanding of data structures, system design, and distributed systems
- Familiarity with cloud-native deployments, CI/CD, and observability tools
- Experience with LLM tools/providers and AI-assisted development (Good to Have)
- Strong problem-solving and communication skills
RESPONSIBILITIES:
- Design and develop autonomous AI agents capable of multi-step reasoning and decision-making
- Build and orchestrate agent workflows using modern frameworks (LangChain, LangGraph, etc.)
- Integrate AI agents with APIs, databases, and SaaS platforms for end-to-end automation
- Develop prompt engineering strategies, memory architectures, and tool integrations
- Deploy, monitor, and maintain AI agents in production environments
- Optimize agents for performance, scalability, latency, and cost efficiency
- Debug and improve agent behavior using testing, logging, and feedback loops
- Collaborate with cross-functional teams to embed AI solutions into business workflows
- Write clean, scalable, and production-ready backend code
- Stay updated with emerging AI/LLM trends and agent frameworks
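The agent workflows described above follow a common loop that frameworks like LangChain and LangGraph orchestrate: the model picks a tool, the runtime executes it, and the observation is fed back until a final answer emerges. A framework-agnostic sketch, with a stub `planner` standing in for the LLM call and a single invented `search` tool:

```python
# Minimal agent loop: plan -> act -> observe -> repeat. The planner is a
# deterministic stub standing in for an LLM; real frameworks add memory,
# retries, and structured tool schemas.

def planner(question: str, observations: list[str]) -> dict:
    """Stub for the LLM: decide the next action from what is known so far."""
    if not observations:
        return {"action": "search", "input": question}
    return {"action": "finish", "input": observations[-1]}

TOOLS = {
    "search": lambda q: f"Top result for {q!r}",  # hypothetical tool
}

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        step = planner(question, observations)
        if step["action"] == "finish":
            return step["input"]
        observations.append(TOOLS[step["action"]](step["input"]))
    return "gave up"  # bounded steps keep runaway agents in check

print(run_agent("quarterly revenue"))
```

The `max_steps` bound reflects a production concern from the responsibilities above: agents must be optimized for latency and cost, so loops are always capped.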
Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field

Role Overview
We are looking for passionate and driven interns across multiple technology domains including Frontend Development, Backend Development, DevOps, AI/ML, and Data Engineering. This internship offers hands-on experience in real-world projects, collaboration with cross-functional teams, and exposure to modern tools and technologies.
Domains & Responsibilities
Frontend Development
- Build responsive and user-friendly web interfaces
- Translate UI/UX designs into functional applications
- Optimize performance and ensure cross-browser compatibility
Backend Development
- Develop APIs and server-side logic
- Work with databases and data storage solutions
- Ensure application security and performance
DevOps
- Assist in CI/CD pipeline setup and automation
- Manage deployments and cloud infrastructure
- Monitor system performance and reliability
Job Title: Product Lead or Tech Lead (AI & Infrastructure)
Location- Delhi
Job type: Full time, On site
About Us: TIMBLE is a leading authentication company, delivering cutting-edge technology and alternate data analysis for Identity Management, Onboarding & Verification, and Business Intelligence. We provide solutions across three verticals:
1. BFSI Solutions
2. KYC and background check Solutions
3. AI Solutions
Role Overview: You will be the architectural backbone of Timble’s AI engine. This role requires a strong backend and systems mindset with exposure to AI/ML systems, balancing the development of high-accuracy fraud detection models with the scalable infrastructure required to run them.
Key Responsibilities
· Engineering Leadership: Lead the development of our core AI products, including Bank Statement Analyzers, Face Match technology, and Electronic Residence Physical Verification (ERPV).
· AI/ML Architecture: Design and deploy AI/ML-driven systems for document intelligence, fraud detection, and automation to enhance real-time intelligence.
· Delivery Ownership: Take end-to-end ownership of features and ensure timely delivery in high-stakes production environments.
· System Design & Scalability: Design and optimize high-throughput, low-latency API systems capable of handling real-world production loads across our 30+ high-quality APIs.
· Hands-on Contribution: Remain hands-on with code when required, especially for critical modules, core architecture decisions, and troubleshooting.
· Practical AI Application: Work on integrating and scaling AI/ML components in production. You must have the ability to apply complex AI solutions to solve real-world business problems.
· Technical Strategy & InfoSec: Oversee Information Security protocols to protect proprietary financial data. Lead IP-related technical work, including patent-pending research for our authentication engines.
· Mentorship: Act as the technical North Star for SDE-1 and SDE-2 engineers, instilling a culture of clean code, scalability, and cloud economics.
What We’re Looking For
· Technical Expertise: Strong backend engineering expertise (Python or similar), with experience in building and maintaining scalable systems. Exposure to ML frameworks (TensorFlow/PyTorch) is a plus.
· Domain Knowledge: Previous experience in Fintech, Cybersecurity, or BFSI tech stacks is highly preferred.
· Infrastructure Skills: Solid experience with cloud infrastructure (AWS/GCP/Azure) and maintaining high availability.
· Vision: The ability to translate complex fraud patterns into automated, executable code and a passion for "efficiency by design."
Learn more about us at: https://timbleglance.com
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who’s who of the financial services industry. We are creating engaging wealth experiences to better financial lives through AI and investment-intelligence-powered personalization. We are working to change the world of wealth in the ways that personalization has changed the world of movies, music, and more, but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
- Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. With self-awareness and integrity we strive to be the best we can possibly be. No excuses.
- Understanding through Listening and Speaking the Truth. We value transparency. We communicate with radical candor, authenticity and precision to create a shared understanding. We challenge, but once a decision is made, commit fully.
- I Win for Teamwin. We believe in staying within our genius zones to succeed and we take full ownership of our work. We inspire each other with our energy and attitude. We fly in formation to win together.
Responsibilities:
- Develop user-facing features such as web apps and landing portals.
- Ensure the feasibility of UI/UX designs and implement them technically.
- Create reusable code and libraries for future use.
- Optimize applications for speed and scalability.
- Contribute to the entire implementation process, including defining improvements based on business needs and architectural enhancements.
- Promote coding, testing, and deployment of best practices through research and demonstration.
- Review frameworks and design principles for suitability in the project context.
- Demonstrate the ability to identify opportunities, lay out rational plans, and see them through to completion.
Requirements:
- Bachelor’s degree in Engineering with 5-8 years of software product development experience.
- Proficiency in React, Django, Pandas, GitHub, AWS, and JavaScript.
- Strong knowledge of PostgreSQL, MongoDB, and designing REST APIs.
- Experience with scalable interactive web applications.
- Understanding of software design constructs and implementation.
- Familiarity with ORM libraries and Test-Driven Development.
- Exposure to the Finance domain is preferred.
- Knowledge of HTML5, LESS/CSS3, jQuery, and Bootstrap.
- Expertise in JavaScript fundamentals and front-end/back-end technologies.
Nice to Have:
- Strong knowledge of website security and common vulnerabilities.
- Exposure to financial capital markets and instruments.
Compensation and Benefits Package:
- Competitive compensation with a discretionary annual bonus.
- Performance-linked variable compensation.
- Medical insurance.
A note on location. While we have team centers in Boulder, New York City, San Francisco, Charlotte, and Bangalore, this role is based out of Mumbai.
TIFIN is an equal-opportunity workplace, and we value diversity in our workforce. All qualified applicants will receive consideration for employment without discrimination of any kind.
Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes to ensure that all data is accurate, accessible, and secure.
Accountabilities
- Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.
Vice President Expectations
- To contribute or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
- If managing a team, they define jobs and responsibilities, planning for the department’s future needs and operations, counselling employees on performance and contributing to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- OR for an individual contributor, they will be a subject matter expert within own discipline and will guide technical direction. They will lead collaborative, multi-year assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions.
- Advise key stakeholders, including functional leadership teams and senior management on functional and cross functional areas of impact and alignment.
- Manage and mitigate risks through assessment, in support of the control and governance agenda.
- Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
- Demonstrate comprehensive understanding of the organisation functions to contribute to achieving the goals of the business.
- Collaborate with other areas of work, for business aligned support areas to keep up to speed with business activity and the business strategies.
- Create solutions based on sophisticated analytical thought comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
- Adopt and include the outcomes of extensive research in problem solving processes.
- Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.
To be a successful Senior Data Engineer, you should have experience with:
- Hands-on experience working with large-scale data platforms and developing cloud solutions on the AWS data platform, with a proven track record of driving business success.
- Strong understanding of AWS and distributed computing paradigms; ability to design and develop data ingestion programs to process large data sets in batch mode using Glue, Lambda, S3, Redshift, Snowflake, and Databricks.
- Ability to develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies. Hands-on programming experience in Python and PySpark.
- Understanding of DevOps pipelines using Jenkins and GitLab; strong in data modelling and data architecture concepts, and well versed in project management tools and Agile methodology.
- Sound knowledge of data governance principles and tools (Alation/Glue Data Quality, mesh); capable of suggesting solution architecture for diverse technology applications.
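The real-time ingestion requirement above rests on the micro-batch idea behind Spark Structured Streaming: group an unbounded event stream into small batches and apply the same transformation to each. A plain-Python sketch of the concept (a real pipeline would read from Kafka and run on a cluster; the event schema here is invented):

```python
# Micro-batching sketch: chunk an (unbounded) event stream into fixed-size
# batches and process each batch the same way. Conceptual stand-in for
# Spark Structured Streaming over Kafka.
from itertools import islice

def micro_batches(stream, batch_size=3):
    """Yield successive fixed-size batches from an iterator."""
    it = iter(stream)
    while batch := list(islice(it, batch_size)):
        yield batch

def process(batch):
    """Per-batch transformation, e.g. filter out refunds and sum amounts."""
    return sum(e["amount"] for e in batch if e["amount"] > 0)

events = [{"amount": a} for a in [5, -1, 2, 3, 4, 1, 7]]
totals = [process(b) for b in micro_batches(events)]
print(totals)  # [7, 8, 7]
```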
Additional relevant skills given below are highly valued:
- Experience working in the financial services industry and in various settlements and sub-ledger functions such as PNS, Stock Record and Settlements, and PNL.
- Knowledge of BPS, IMPACT, and Gloss products from Broadridge, and experience creating ML models using Python, Spark, and Java.
You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills.
About Simbian
Simbian is at the forefront of cybersecurity innovation, leveraging purpose-built AI Agents to deliver 10x security outcomes for global enterprises and MSSPs. Our platform autonomously investigates and responds to alerts, freeing security teams from repetitive tasks. Simbian combines privacy-first technology, proven integration with 70+ enterprise tools, and rapid deployment for measurable value.
Role Overview
We are seeking a collaborative, innovative DevOps Engineer passionate about enabling secure, scalable operations for cutting-edge cybersecurity products. Join our team during a period of high growth and help architect the future of agentic AI security platforms.
Key Responsibilities
• Kubernetes Management:
o Manage and maintain production-grade Kubernetes clusters across multiple cloud providers (AWS is essential, Azure is valuable, GCP is a plus).
o Deploy, upgrade, troubleshoot, and scale stateful and stateless workloads (NGINX, Postgres, MongoDB, OpenCTI, OpenSearch, Kafka, Hadoop, Fluentd) in Kubernetes.
• Cloud Operations:
o Operate and optimize cloud environments, with strong expertise in AWS (AWS Certified Solutions Architect Professional or equivalent Azure cert preferred).
o Design, deploy, and manage infrastructure on AWS and Azure (GCP optional).
• SQL Database Management:
o Administer SQL databases, ideally Postgres, on Kubernetes clusters or cloud VMs.
o Perform routine maintenance, backups, upgrades, monitoring, and optimization.
• Infrastructure as Code:
o Expertly build, install, upgrade, and maintain Helm charts.
o Use and understand Ansible for cloud automation (AWS/Azure), and Terraform for infrastructure provisioning.
• Monitoring, Logging, Observability:
o Implement and manage logging and metrics stacks using OpenSearch/Elasticsearch, Prometheus, Grafana, Thanos or similar open source tools.
• Programming & Scripting:
o Develop automation scripts in Bash (proficient with control structures).
o Produce scripts or microservices in Node.js (preferred) or Python/Django (bonus).
• CI/CD:
o Build and maintain CI/CD pipelines preferably using GitHub Actions (Jenkins or equivalent is acceptable).
• Containerization:
o Create, manage, and troubleshoot Docker/Podman containers, images, volumes, and use Docker Compose for local development.
• Customer-Facing On-Prem Deployments (Bonus):
o Install, configure, and support Kubernetes on customer premises.
o Demonstrate ownership, initiative, and strong customer communication skills.
o Solid knowledge of Linux administration, networking, and cloud environments.
What You’ll Bring:
• 4+ years’ experience in DevOps, SRE, or Production Engineering.
• Mastery of Kubernetes, AWS, infrastructure automation, and database management.
• Strong collaborative, curious, and growth-driven mindset.
• Ability to challenge ideas, drive innovation, and embrace rapid change.
• Excellent communication for technical customer interactions.
Why Join Simbian?
• Work with pioneering agentic AI security—impact global security teams.
• Shape infrastructure for privacy-first technology in a high-growth startup.
• Enjoy a dynamic remote-first work culture with opportunities for ownership and advancement.
Platform Engineer – Cloud & On-Prem Infrastructure
Location - Pune or Bangalore (WFO- 5 days)
Must-Have Skills:
- 8+ years deploying, upgrading, and maintaining infrastructure across on-premises and public cloud, with Kubernetes and Docker
- Proficiency in Infrastructure as Code using Terraform or Pulumi
- Hands-on coding in Golang or Python, plus Bash scripting
Good-to-Have Skills:
- Familiarity with Kubernetes management solutions (OpenShift, Rancher, GKE, EKS, AKS, VMware TKG)
- Experience with VM management platforms (e.g., Red Hat OpenShift Virtualization, VMware)
- Kubernetes certifications (CKA, CKAD)
- Exposure to service mesh technologies (Istio, Linkerd)
Who You Are
- A platform engineer who builds and maintains the infrastructure backbone for both on-prem and cloud environments
- Passionate about automating daily operations and eliminating manual toil
- Comfortable authoring and evolving IaC (Infrastructure as Code) templates to enforce consistency
What You’ll Do & Learn
- Roll out & maintain on-premises and cloud infrastructure for development and testing environments
- Implement & support CI/CD pipelines to drive our software delivery processes
- Develop automation tools that streamline routine operations and improve reliability
- Build & enhance Infrastructure-as-Code templates (Terraform, Pulumi) for rapid, repeatable provisioning
- Document system designs, configurations, and processes to enable an asynchronous, distributed team culture

A Bengaluru-based IT services and consulting firm.
Role Overview
We are seeking an excellent Python developer with strong coding skills and a proven ability to solve complex problems. The role involves working on backend systems and automation projects, with flexibility across domains for technically strong candidates.
Key Responsibilities
- Develop, optimize, and maintain Python-based applications.
- Contribute to backend development and automation workflows.
- Collaborate with cross-functional teams to deliver scalable solutions.
- Troubleshoot and resolve technical challenges with a problem-solving mindset.
- Ensure code quality, performance, and maintainability.
Key Requirements
- Strong expertise in Python programming with excellent coding skills.
- Experience in backend development or automation systems preferred.
- Demonstrated problem-solving ability and technical depth.
- Open to diverse domains, provided the candidate is technically strong.
Location: Chennai (Hybrid)
Commitment: Minimum 2 years (excluding 3 months of probation)
Experience Level: Fresher / Entry Level
Job Overview:
We are looking for a skilled and versatile System Administrator with strong expertise in Windows and Linux environments, along with working knowledge of cloud infrastructure, cybersecurity, automation, and AI/ML systems.
The ideal candidate should be capable of handling enterprise IT infrastructure, supporting multi-cloud environments, and contributing to AI/ML deployment and integration activities. Strong communication skills and the ability to collaborate with technical and client-facing teams are essential.
Key Responsibilities:
- Manage and maintain Windows and Linux server environments ensuring stability, performance, and security.
- Support deployment, configuration, and administration of IT infrastructure components across on-prem and cloud environments.
- Monitor system health, troubleshoot issues, and ensure high availability of services.
- Work with cloud platforms such as AWS, Microsoft Azure, and Google Cloud.
- Assist in implementation of security solutions including IAM, firewalls, endpoint protection, and SIEM tools.
- Develop and maintain automation scripts using Python, PowerShell, or JavaScript.
- Support deployment and integration of AI/ML models into production environments.
- Collaborate with engineering and development teams to optimize infrastructure and application performance.
- Participate in technical discussions, documentation, and client support activities when required.
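The automation-scripting responsibility above often starts with small health checks of the kind sketched below. This is a minimal, illustrative example using only the standard library; the paths and threshold are placeholders, not a prescribed tool.

```python
# Minimal health-check sketch: report filesystems whose disk usage exceeds
# a threshold. A real script would also check services, memory, and logs.
import shutil

def disk_report(paths, threshold=0.9):
    """Return {path: used_fraction} for paths at or above the threshold."""
    alerts = {}
    for path in paths:
        usage = shutil.disk_usage(path)
        used = usage.used / usage.total
        if used >= threshold:
            alerts[path] = round(used, 2)
    return alerts

# threshold=0.0 flags every path, which is handy for a smoke test
print(disk_report(["/"], threshold=0.0))
```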
Required Skills & Qualifications:
- Strong knowledge of Windows and Linux system administration.
- Good understanding of networking, servers, and cloud fundamentals.
- Experience or exposure to AWS, Azure, or GCP.
- Proficiency in scripting languages such as Python, PowerShell, or JavaScript.
- Basic understanding of cybersecurity principles and system hardening.
- Familiarity with AI/ML concepts and deployment workflows is an advantage.
- Strong analytical and troubleshooting skills.
- Excellent verbal and written communication skills.
Preferred Qualifications:
- Experience with virtualization and containerization (VMware, Docker, Kubernetes).
- Knowledge of CI/CD pipelines and DevOps practices.
- Exposure to MLOps concepts and model deployment workflows.
- Understanding of monitoring tools and logging systems.
- Experience working in hybrid or enterprise IT environments.
What We Offer:
- Exposure to enterprise-level infrastructure and cloud environments.
- Opportunity to work on real-world AI/ML integration projects.
- Structured career growth into Cloud, DevOps, Security, or AI/ML engineering roles.
- Collaborative work environment with hands-on learning opportunities.
- Competitive compensation and long-term growth path.
Who Should Apply:
- Freshers or candidates with up to 2 years of experience.
- Candidates passionate about system administration, cloud computing, and AI/ML.
- Individuals eager to work in infrastructure-heavy, production environments.
- Strong communicators who can work in team-oriented and client-facing roles.
Lead / Sr. Data Engineer (Architect & Engineering Owner)
The Role
We are seeking a Lead Data Engineer who operates at the intersection of high-scale engineering and enterprise architecture. In this role, you will "own" our healthcare data platform end-to-end. You aren't just building pipelines; you are designing the blueprint for how clinical, claims, and sales data flow through our ecosystem. You will bridge the gap between legacy systems (MSSQL/Oracle) and modern cloud warehouses (Snowflake/Redshift/Databricks), ensuring our data is governed, HIPAA-compliant, and optimized for advanced analytics.
What You’ll Do
1. Architecture & Strategic Leadership
- Design the Blueprint: Own the enterprise data architecture (Staging, Integration, Warehouse, and Semantic layers). Define the evolution from monolithic databases to scalable cloud-hosted analytics.
- Modeling Mastery: Lead the design of complex Dimensional Models (Star/Snowflake) and implement advanced Slowly Changing Dimension (SCD) strategies to track historical clinical events.
- Set the Standard: Establish coding, version control (GitHub), and CI/CD standards. Conduct design reviews and mentor a team of engineers to move from "task-takers" to "system-builders."
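The SCD Type 2 strategy mentioned above can be sketched in a few lines: rather than overwriting a changed attribute, the current row is closed and a new version is appended, preserving history. Field names (`clinic`, `is_current`, and so on) are illustrative, not a prescribed schema.

```python
# Slowly Changing Dimension Type 2 sketch: close the active row on change
# and append a new version, so historical clinical events stay queryable.
from datetime import date

def scd2_upsert(dim_rows, key, new_attrs, today):
    """Close the active row for `key` if attributes changed; append new row."""
    current = next(
        (r for r in dim_rows if r["key"] == key and r["is_current"]), None
    )
    if current and {k: current[k] for k in new_attrs} == new_attrs:
        return dim_rows  # attributes unchanged: no new version needed
    if current:
        current["is_current"] = False
        current["end_date"] = today
    dim_rows.append(
        {"key": key, **new_attrs, "start_date": today,
         "end_date": None, "is_current": True}
    )
    return dim_rows

dim = []
scd2_upsert(dim, "patient-1", {"clinic": "north"}, date(2024, 1, 1))
scd2_upsert(dim, "patient-1", {"clinic": "south"}, date(2024, 6, 1))
print([(r["clinic"], r["is_current"]) for r in dim])
# [('north', False), ('south', True)]
```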
2. Advanced Data Engineering (Hands-on)
- Modern ELT/ETL: Build and orchestrate production-grade pipelines using Python, Airflow, and dbt. Manage automated ingestion via Fivetran or custom-built frameworks for APIs and EHRs.
- Multi-Engine Expertise: Operate seamlessly across PostgreSQL, MSSQL, and Oracle, while optimizing petabyte-scale cloud warehouses like Snowflake or Redshift.
- Performance Tuning: Own query optimization. You should be the expert at using EXPLAIN/ANALYZE, partitioning, and indexing to reduce compute costs and latency.
- Quality & Reconciliation: Design robust validation frameworks to ensure data integrity—essential for healthcare compliance and clinical trust.
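The plan-driven tuning described above can be illustrated in miniature with SQLite's EXPLAIN QUERY PLAN (Postgres and Oracle expose the same idea via EXPLAIN/ANALYZE). The table and index names below are invented for the example; the point is watching a full scan become an index lookup.

```python
import sqlite3

# Vendor-neutral sketch of plan-driven query tuning. Table/index names are
# hypothetical; the technique is reading the plan before and after indexing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id INTEGER, member_id TEXT, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                 [(i, f"M{i % 1000}", i * 1.5) for i in range(10_000)])

def plan(conn, sql):
    """Return the query plan as a single readable string."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM claims WHERE member_id = 'M42'"
before = plan(conn, query)   # typically a full table scan
conn.execute("CREATE INDEX idx_claims_member ON claims(member_id)")
after = plan(conn, query)    # typically a search using the new index
print(before)
print(after)
```

On large warehouse tables the same before/after reading of the plan guides partitioning and clustering choices, not just index creation.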
3. Healthcare Interoperability & Governance
- Data Standards: Map diverse datasets (EHR, API, Flat Files) to HL7 FHIR resources and curated analytic layers.
- Privacy by Design: Embed HIPAA Security Rule safeguards (encryption, audit trails, and access controls) directly into the code and infrastructure.
- Interoperability: Handle complex semi-structured data (JSON/XML) from third-party partners and EMR systems.
What You’ll Bring
- Experience: 8–12+ years in Data Engineering/Architecture. You should have a track record of leading technical projects or mentoring teams.
- The "Hybrid" Stack: expert SQL/PL-SQL, with deep performance-tuning experience in relational environments (Oracle/MSSQL).
- Modern Tools: Practical experience with Snowflake/Redshift, dbt, and Airflow.
- Programming: High proficiency in Python (Pandas, PySpark) or Java/Scala for custom ETL routines.
- Architectural Depth: Clear understanding of SDLC, Agile (Scrum), and Data Modeling frameworks.
- Healthcare Domain: Exposure to pharmaceutical or clinical data (Life Sciences, EMR, or Claims) is highly preferred.
- Soft Skills: The ability to translate "clinical business needs" into "technical runbooks" and communicate effectively with stakeholders.
Nice to Have
- AI/ML Integration: Experience supporting Data Science teams with feature extraction and model deployment (SageMaker/Azure ML).
- Advanced Tooling: Familiarity with NoSQL (MongoDB), search engines (Elasticsearch), or niche ETL tools (Talend/Informatica) for migration purposes.
- Cloud Infrastructure: Hands-on experience with AWS Glue, Lambda, or Azure Data Factory.
ML DEVELOPER
Hyperworks Imaging is a cutting-edge technology company based in Bengaluru, India, founded in 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real-world problems. We are growing rapidly, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Developer to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating AI assistants of the future.
The ideal candidate will work on a complete ML pipeline starting from extraction, transformation and analysis of data to developing novel ML algorithms. The candidate will implement latest research papers and closely work with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using Model Context Protocols (MCPs), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines.
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud-based workflows.
- Write clean, well-documented, and maintainable code following best practices.
REQUIREMENTS:
- 2-3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated or graduating with a B.Tech/M.Tech/PhD in Electrical Engg., Electronics Engg., Computer Science, Maths and Computing, or Physics.
- Coursework in Linear Algebra, Probability, Image Processing, Deep Learning, and Machine Learning.
- Demonstrated experience with Model Context Protocols (MCPs), DSPy, AI agents, MLOps, etc.
WHO CAN APPLY:
Only those candidates will be considered who:
- have relevant skills and interests
- can commit full time
- can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time
Role Objective
Develop business-relevant, high-quality, scalable Power BI Dashboards keeping customer requirements at the core.
Roles and Responsibilities
- Technical Expertise: Very clear understanding of core concepts like Data Modelling, DAX, Power Query, and Visualization principles.
- Programming Languages: Expertise in Python and SQL for data manipulation, analysis, and automation. Additionally, leverage DAX and M Query for complex data transformations within Power BI.
- Dashboard Development & Quality Measures: Design, develop, and maintain Power BI dashboards ensuring high quality, data accuracy, user-friendly visualization, and adherence to timelines. Define and meet dashboard quality standards such as data accuracy, visualization clarity, and performance benchmarks.
- Translate Business Problems: Collaborate with stakeholders to translate complex business problems into data-centric solutions that address functional, non-functional, and commercial concerns, such as reliability, scalability, and maintainability
- Hypothesis Development: Decompose business problems into a series of testable hypotheses, identifying relevant data assets required for evaluation
- Quality Measures: Define and meet the dashboard quality measures like data accuracy, data visualization, timelines, support, etc.
- Solution Design: Based on business requirements, design wireframes, POCs, and the final product.
- Performance Optimization: Improve user experience by continuously enhancing performance, maintainability and scalability.
- Troubleshooting: Quick issue resolution as defined by SLA. Resolve issues about access, latency, data accuracy, security, etc.
Job Requirements
1. Education- MBA, B. Tech (Comp. Sc), BBA, BCA, MCA or equivalent
2. Experience- 4+ years of experience in developing Power BI Dashboards.
3. Behavioral Skills-
- Clear and Assertive communication
- Ability to comprehend the business requirement
- Teamwork and collaboration
- Analytical thinking
- Time Management
4. Technical Skills-
- Understand the various datasets at play and create a high-quality, robust, and scalable data model in Power BI.
- Should be able to write data transformation steps in M language, not just through the UI.
- Demonstrated ability to write and comprehend complex DAX.
- Understanding of visualization best practices. We value insightful dashboards, not merely colorful ones.
- Conceptual clarity and Hands-on experience with SQL and Python.
- Analytical skills to create powerful data stories
- Experience with using DAX Studio and Tabular Editor is a plus
- Expertise in High Volume Data Processing Production environment is a plus
Note: Candidates based in Pune or those willing to relocate will be preferred.
Why Ven Analytics?
At Ven, we are a fast-paced, innovative startup committed to delivering cutting-edge solutions. Our dynamic work environment fosters creativity, collaboration, and professional growth.
We offer a range of employee benefits, including:
- Competitive Compensation: Attractive salary packages and performance-based incentives.
- Career Development: Opportunities for growth, learning, and advancement in a rapidly expanding company.
- Health Benefits: Wellness programs and resources to support your physical and mental well-being.
Join us at Ven Analytics, where you'll have the chance to make an impact, grow professionally, and be part of an exciting journey!
Pyspark Data Engineer:
- Hands-on expertise in designing, building, and maintaining Apache Spark pipelines in production environments.
- Proven experience building and scaling data ingestion frameworks that integrate data from multiple source systems, with a focus on reliability, reusability, and scalability.
- Deep understanding of Spark architecture (driver/executors, DAG, partitioning, shuffles, caching, cluster resource management) and experience operating pipelines at scale, including data transformations on datasets ~500 GB+.
- Strong understanding of Oracle SQL and HDFS, including handling file formats and applying appropriate data cleansing, normalization, and formatting to produce curated output datasets.
- Ability to write Python, PySpark, and shell scripts to process, transform, and automate data workflows. The candidate should be proficient at writing application programs and automating manual data-processing steps in Python.
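The partitioning and shuffle behavior mentioned above boils down to hash-distributing rows by key so each downstream task sees all rows for its keys. A toy pure-Python sketch of that idea (not Spark itself, and the row data is invented for illustration):

```python
from collections import defaultdict

# Toy illustration of what a Spark shuffle does under the hood: rows are
# hash-partitioned by key, co-locating all rows for a key in one partition.
def hash_partition(rows, key_fn, num_partitions):
    partitions = defaultdict(list)
    for row in rows:
        partitions[hash(key_fn(row)) % num_partitions].append(row)
    return [partitions[i] for i in range(num_partitions)]

rows = [("orders", 10), ("users", 3), ("orders", 7), ("events", 1)]
parts = hash_partition(rows, key_fn=lambda r: r[0], num_partitions=4)

# After the "shuffle", a per-key aggregation can run partition-locally,
# because every row for a given key lives in exactly one partition.
for i, part in enumerate(parts):
    print(i, part)
```

Skewed keys in real pipelines mean one partition gets most of the rows, which is why salting and repartitioning matter at the ~500 GB scale this role describes.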
Role Objective
Develop business-relevant, high-quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. Depending on the project, the application stack will comprise SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
**Requirements**
Education-B. Tech (Comp. Sc, IT) or equivalent
Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes are a plus.
5. Other: Understanding of Golang is a plus.
AI / LLM Engineering — Good to Have
- Candidates with exposure to AI/LLM engineering will have a strong advantage as we build intelligent, AI-augmented AdTech solutions. None of the below is mandatory.
- LLMs: OpenAI (GPT-4/4o), Anthropic (Claude), Meta (Llama)
- Orchestration & Agents: LangChain, LangGraph, LlamaIndex
- Tool Calling / MCP: Function Calling (OpenAI / Anthropic), FastMCP or Custom MCP Servers
- RAG (Retrieval-Augmented Generation): RAG pipeline design, LlamaIndex, LangChain retrievers and chains
- Vector Databases: Pinecone, Weaviate, FAISS
- Embeddings: OpenAI Embeddings, Hugging Face Sentence Transformers
- Observability: LangSmith, Sentry
- Backend / Infra for AI: Django REST Framework, FastAPI.
Employment Type
Full-time
Experience Level
Associate
Work Experience (years)
1.5–4 years
Annual Compensation
INR 700,000 - 1,000,000
No of Openings
2
Role Objective
As a Fullstack Intern, you will gain hands-on experience in developing business-relevant, high-quality, and scalable web applications. You will work closely with our dynamic AdTech team to solve real-world challenges in the Media and Entertainment sector.
Roles & Responsibilities
- Application Design: Work with the team to understand requirements, contribute to user stories, and support the design process. Assist in reviewing designs, giving feedback, and ensuring alignment with user expectations.
- Architecture: Learn and contribute to designing scalable and robust system architectures (on-prem or cloud – Azure, AWS, or GCP).
- Development: Assist in front-end and back-end development using technologies such as SQL, Django, Angular/React, HTML, and CSS. Exposure to GoLang and Big Data will be a plus.
- Deployment: Support the implementation of scalable and cost-effective deployment strategies. Contribute to CI/CD processes on IIS or Linux. Familiarity with Docker is a plus.
- Maintenance: Help in maintaining development and production environments, troubleshooting issues, fixing bugs, and suggesting improvements.
- Data Migration: Learn and assist in planning and implementing database migration strategies.
- Documentation: Contribute to technical documentation, including HLD, technical diagrams, script design, and SOPs.
- Client Interaction: Gain exposure to client communication and understand how to translate business requirements into technical solutions.
Requirements
Education – B.Tech (Computer Science, IT) or equivalent, currently pursuing or recently completed.
Experience – Previous internship experience is a plus.
Behavioural Skills
- Clear and assertive communication
- Ability to comprehend business requirements
- Teamwork and collaboration
- Analytical thinking
- Time management
- Problem-solving and troubleshooting skills.
Technical Skills
- Back-end & Front-end: Django, Angular/React, HTML, CSS
- Cloud Technologies: AWS, GCP, Azure
- Big Data: Hadoop, Spark (knowledge is a plus)
- Containerized Deployment: Docker/Kubernetes (a plus)
- Other: Understanding of GoLang is a plus
What You’ll Do:
- Build and ship responsive, high-performance web applications using React.
- Translate UI/UX designs into clean, maintainable code.
- Integrate frontend with backend services via REST/GraphQL APIs.
- Optimize applications for speed, scalability, and cross-browser compatibility.
- Write reusable, modular components and maintain design consistency.
- Collaborate with product managers, designers, and engineers to deliver features end-to-end.
- Identify performance bottlenecks and improve frontend architecture.
- Participate in code reviews and contribute to best practices.
What We’re Looking For:
- 4–6 years of experience in frontend development.
- Strong hands-on experience with React.js.
- Solid understanding of JavaScript (ES6+), HTML5, and CSS3.
- Familiarity with state management (Redux / Zustand / Context API).
- Experience working with REST APIs and asynchronous programming.
- Understanding of responsive design and cross-browser compatibility.
- Experience with build tools (Webpack, Vite, etc.).
- Familiarity with Git-based workflows.
- Good problem-solving and communication skills.
Good to Have:
- Experience with testing frameworks (Jest, React Testing Library).
- Exposure to performance optimization techniques.
- Basic understanding of backend technologies (Node.js / Python).
What You’ll Work On:
- Building intuitive and scalable user interfaces for our core product.
- Improving performance and user experience across the platform.
- Contributing to frontend architecture and design systems.
- Solving real-world problems at scale.
Why Join Us
- Work on impactful products used by real customers.
- Opportunity to own features end-to-end.
- Collaborative and fast-paced environment.
- Demonstrated experience building production-grade applications with an emphasis on scalability, maintainability, and performance.
- Strong expertise in concurrency and parallelism, including:
- Multithreading and multiprocessing
- Synchronous and asynchronous programming (e.g., async/await)
- Designing for throughput, latency, and safe shared-state handling
- Proven experience integrating with external systems via application interfaces, including:
- Building and consuming RESTful APIs
- Authentication/authorization patterns (e.g., API keys, OAuth where applicable)
- Reliable integration patterns (timeouts, retries, idempotency, error handling)
- Strong SQL skills, including the ability to write efficient, complex queries (joins, aggregations, window functions) and optimize performance where needed.
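The reliable-integration patterns listed above (timeouts, retries, idempotency) can be sketched in a few lines of plain Python. The flaky endpoint below is a hypothetical stand-in for a real HTTP call with a timeout; the retry wrapper with exponential backoff is the technique itself.

```python
import time

# Sketch of a reliable-integration wrapper: bounded retries with exponential
# backoff around a transiently failing call. 'flaky_endpoint' is a stand-in
# for a real API client call, not an actual service.
class TransientError(Exception):
    pass

def call_with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise                      # retries exhausted, surface the error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("temporary failure")
    return {"status": "ok"}

result = call_with_retries(flaky_endpoint)
print(result, calls["n"])  # {'status': 'ok'} 3
```

Retrying safely also assumes the call is idempotent: re-sending the same request must not duplicate its effect, which is why production integrations pair retries with idempotency keys.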
About the company
Metron Security provides automation and integration services to leading Cyber Security companies. Our engineering team works on leading security platforms including - Splunk, IBM’s QRadar, ServiceNow, Crowdstrike, Cybereason, and other SIEM and SOAR platforms.
Software Engineer is a challenging role within Cyber Security Engineering integration development. The role involves developing a product/service that achieves high-performance data exchange between two or more Cyber Security platforms. A Software Engineer is at the core of the evolution process and is responsible for the End-to-End delivery of the project, right from getting the requirements from customers to deploying the project for them on-prem or on the cloud, depending on the nature of the project. We follow the best practices of engineering and keep evolving. We are agile.
Each integration requires reskilling yourself in the technology that project demands. If you are passionate about programming and believe in the best practices of software engineering, here is what working with us looks like:
● Developer-centric culture - no bureaucracy or red tape.
● Chance to work on 200+ security platforms.
● Opportunity to engage directly with end-users (customers) - you are not just a cog in the wheel.
About the role
We are looking for passionate developers with 4-7 years of experience in software development to join the Metron Security team as Software Engineers.
Mandatory Skills
- 4+ years of experience in software engineering, with a proven track record of leading engineering teams and mentoring junior developers.
- Expertise in at least one or two object-oriented programming languages (Python, TypeScript, Java, Node.js, Angular, React.js, C#, C++).
- Good knowledge of data structures and their correct usage.
- Oversee code reviews, ensuring adherence to best practices and maintaining high code quality standards.
- Imbibe and maintain a strong customer delight attitude while designing and building products/services.
Other Requirements
- Open to learn any new software development skill if needed for the project.
- Align with and utilise the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of Cyber Security on production.
- Experience in architecting & estimating deep technical custom solutions & integrations.
- Manage the customer relationship to ensure a continuous flow of work, so that the team has at least a month's worth of work at any time.
- Mentor and coach team members, providing technical guidance and fostering professional development.
Added advantage:
- Experience in the Cyber Security domain.
- Experience in developing software using web technologies.
- Experience in handling a project from start to end.
- Hands-on experience in an Agile Development project and in writing and estimating User Stories.
- Contribution to open source - Please share your link in the application/resume if any.
Role: Core infrastructure for two global applications.
Key responsibilities:
Heavy Data Lifting: Designing and maintaining complex MongoDB aggregation pipelines and executing high-volume bulk updates.
Systems Connectivity: Managing the lifecycle of Bridges, WebSockets, and REST APIs for global traffic.
Global Reliability: Troubleshooting performance regressions, managing SSL/TLS certificates, and optimizing AWS S3/EC2 resources.
Core Logic: Implementing server-side push notifications, custom payment flows, and background Cron orchestration.
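The aggregation-pipeline work above centers on composing staged transformations. A hedged sketch of one such pipeline, expressed as the Python list of stages that pymongo's collection.aggregate() accepts (collection and field names are hypothetical; no database connection is made here):

```python
# Hypothetical MongoDB aggregation pipeline: filter, group, rank, and cap.
# Each stage is a dict keyed by its operator, applied in order.
pipeline = [
    # Stage 1: keep only events from recent app versions
    {"$match": {"app_version": {"$gte": "2.0.0"}}},
    # Stage 2: group events per user and count them
    {"$group": {"_id": "$user_id", "events": {"$sum": 1}}},
    # Stage 3: surface the heaviest users first
    {"$sort": {"events": -1}},
    # Stage 4: cap the result set for the dashboard
    {"$limit": 100},
]

# In production this would run against a live collection, e.g.:
#   results = db.events.aggregate(pipeline)
stage_names = [next(iter(stage)) for stage in pipeline]
print(stage_names)  # ['$match', '$group', '$sort', '$limit']
```

High-volume bulk updates follow the same shape: a list of operation documents handed to bulk_write(), rather than one round trip per document.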
Requirements (Must Have)
Production Experience: Proven experience managing backend services for live, high-traffic applications. (Must have)
Note: Learning projects, bootcamps, and online courses will not be considered.
Python Mastery: Expert-level proficiency in backend Python development. (Must have)
MongoDB Expertise: Deep knowledge of Aggregations and Bulk operations. (Must have)
AWS Proficiency: Direct experience with S3 and EC2 in a production setting. (Must have)
Infrastructure Logic: Ability to manage SSL/Encryption and optimize resource allocation (Cron vs. API). (Must have)

Germany-Headquartered Fast-Growing IT Consulting Company
Location: Bengaluru, Kadugodi
Experience: 4-6 years
About company:
Client is a Germany-headquartered IT consulting and service organization. With over 25 years of expertise and a global presence, we are committed to customer excellence and focused on addressing niche areas of product engineering, process consulting, and software development in the automotive, railways, production automation, data management, and business IT domains.
Key Responsibilities:
- Develop or enhance features to meet industry standards, safety regulations, and project specifications.
- Collaborate with business stakeholders to understand business requirements.
- Work closely with hardware engineers, QA, and the Scrum Master to integrate software solutions into embedded systems.
- Identify problems and resolve technical issues within embedded systems, making critical decisions on system architecture and software design.
- Strive to improve processes and system performance, optimize code, and innovate in software design.
- Work closely with vendors to design and implement edge AI solutions.
Requirements:
- Must have done B.Tech/B.E preferably in ECE stream
- Must have proficiency in Python/C/C++, Go, and Bash scripting
- Must have strong fundamentals of the embedded development life cycle
- Must have strong knowledge of Embedded Linux, Unix/Linux commands, RTOS, and SQL
- Sound knowledge of the CAN/J1939 protocol, sensor data processing, and telemetry
- Experience with tools like JIRA and Agile/Scrum methodology
- Excellent communication skills and ability to collaborate with cross-functional teams.
- Ability to work on multiple projects and prioritize work effectively
- Ability to work independently and as a team member
- Strong analytical and problem-solving skills
Nice to Have:
- Understanding of ADAS, Driver Monitoring Systems
- Experience with embedded video coupled with edge AI
About FloBiz
Website: https://flobiz.in/
FloBiz is India's first neobusiness platform, revolutionizing the way Small and Medium-sized Enterprises (SMEs) operate in India. Our mission is to digitize 65 million MSMEs in the country, and we are well on our way to achieving this goal. Our flagship product, myBillBook, has already empowered over 10 million businesses across 2000+ towns with its billing, accounting, inventory management, and payment collection solutions. With over $25 billion in annual transactions, we are proud to be a rapidly growing tech startup serving the needs of SMBs in India.
Our Flagship product : MyBillBook
myBillBook is India's leading GST billing & accounting software with mobile, web app & native desktop offerings and runs on Android as well as iOS. myBillBook has been designed to aid SMB owners to conduct their operations from anywhere and anytime and provides a secure platform for business owners to record transactions & track business performance on the go. It is an ideal software for GST registered businesses where invoicing is one of the core business activities. Also, businesses looking to digitise their operations to understand their financial position better can use this software. It helps them create bills (GST & non-GST), record purchases & expenses, manage inventory and track payables/receivables directly from their mobile phones or computers. Also, the app generates 25 critical business reports that help business owners make effective business decisions. myBillBook is currently available in English, Hindi, Gujarati & Tamil.
Currently, the app has been downloaded by over 6.5M SMBs across the country, with over 10x growth in user base in the last 12 months alone. Even at this pace of adoption, myBillBook continues to be the highest-rated application in its category on the Google Play Store.
Key Responsibilities :
• Design, develop, maintain and optimise complex, scalable and distributed systems capable of handling large-scale datasets and high-throughput workloads.
• Optimise performance, reliability and availability across the whole system.
• Write clean, efficient, and maintainable code in multiple programming languages as needed.
• Contribute to architectural decisions and help improve engineering best practices.
• Work with a builder mindset, contribute and collaborate across cross-functional teams, to unblock and accelerate delivery. No role silos.
• Actively mentor juniors through code reviews, design discussions and pairing.
• Leverage LLM-assisted tools (Claude, Cursor, LLM-powered code review and testing) to accelerate development velocity and improve code quality.
• Build and evolve the platform to be LLM-ready - design APIs, data pipelines and system interfaces that enable seamless LLM integration and automation.
Required Qualifications :
• 3-5 years of experience in back-end software development focusing on large-scale distributed systems.
• BE/B.Tech in Computer Science or a related technical field (or equivalent practical experience).
• Strong software development skills in one or more languages such as Java / Ruby on Rails / Python.
• Working experience with SQL and NoSQL databases (e.g. PostgreSQL, MongoDB) with ability to design effective schema and perform various optimisations for large-scale data.
• Deep understanding of system design principles and best practices for building scalable and resilient systems with microservices.
• Excellent problem-solving with experience in incident management, monitoring, alerting, and root cause analysis.
• Experience with event-driven architectures (Kafka, SQS, RabbitMQ, or similar).
• Experience in building intelligent AI agents and systems powered by Large Language Models.
• Hands-on experience with cloud platforms like AWS or Google Cloud Platform.
• Deep understanding of software development best practices, patterns and code reviews.
• Effective communication skills to coordinate with cross-functional teams during large-scale projects.
Perks @ Benefits :
• Competitive salary with performance-linked rewards and recognitions.
• Extensive medical insurance that looks out for our employees and their dependants. We'll take care of you; that's our promise.
• FloBiz Academy: helps you learn and enhance your skills.
• A reward system that celebrates hard work, milestones, and performance throughout the year.
• A cool work-from-home setup that makes you feel right at home.
Location: Remote (WFH), 5 days working
We're hiring a Tech Lead to own the backend architecture and engineering culture of a fast-growing HR-Tech platform used by 100+ enterprise clients across India.
This isn't a "manage the team and attend standups" role. You'll be hands-on with architecture decisions, own the deployment environment, and work directly with founders who came from Gallup and IIM Ahmedabad.
The stack is Node.js, Docker, Kafka, and cloud (AWS / Azure / GCP).
The role is right for you if:
- You've led backend teams and still enjoy being close to the code
- You've built or significantly improved a CI/CD pipeline in production
- You can explain a technical trade-off to a business stakeholder without making them feel talked down to
- You want ownership, not just a title.
What you'll be doing:
- Designing and maintaining backend systems built for scale, security, and reliability
- Leading and mentoring a team of backend engineers through sprint cycles and beyond
- Owning cloud infrastructure (AWS / Azure / GCP) and keeping deployments stable and fast
- Improving CI/CD practices and automating what shouldn't require human intervention
- Translating product requirements into technical specs the team can actually execute
- Conducting code reviews that raise quality without slowing people down
- Working directly with founders and business stakeholders — not through layers of management
What we're looking for:
- 5–8 years of software engineering experience, with at least 2 years in a lead or senior role
- Strong backend development skills in Node.js (Python or Java is a bonus)
- Hands-on cloud experience — building and managing infrastructure, not just deploying to it
- Comfort with Docker, Kafka, and Linux environments
- Experience running Agile teams — sprint planning, retrospectives, the works
- Clear communication and the ability to make decisions under ambiguity
- B.Tech / B.E. in Computer Science or Information Technology
Location: Mumbai (on-site) | Experience: 5–8 years | Start: Immediate
If this sounds like the role you've been looking for, apply below or send a message directly. The process is fast and so is the decision.
You will be at the forefront of Byteridge's AI infrastructure capabilities, helping customers unlock the full potential of foundation models through expert-level deployment on GPU infrastructure.
This highly technical role requires deep expertise in machine learning infrastructure, GPU optimization, and production ML systems, combined with the ability to translate complex technical concepts into customer success.
What You'll Do
Model Deployment & Optimization
• Lead end-to-end deployments of large language models on AWS infrastructure for strategic customers
• Design and implement training, fine-tuning, and inference pipelines using Amazon SageMaker AI
• Optimize model performance through GPU-level tuning, kernel optimization, and infrastructure configuration
• Deploy models on diverse GPU architectures including NVIDIA and AWS custom silicon (Trainium, Inferentia)
Infrastructure Architecture & Performance
• Architect scalable ML infrastructure using SageMaker AI Inference, HyperPod, and distributed training frameworks
• Implement CUDA-level optimizations and custom kernels for improved model performance
• Design storage and networking architectures optimized for high-throughput ML workloads
• Troubleshoot and resolve complex performance bottlenecks at the GPU driver and kernel level
Customer Engagement & Technical Leadership
• Partner with AWS AI Specialist Solution Architects and customer ML teams to understand model requirements and deployment constraints
• Provide technical guidance on model selection, fine-tuning strategies, and production best practices
• Conduct performance benchmarking and cost optimization analysis for ML workloads
• Share field insights with AWS product teams to influence infrastructure and service roadmaps
What We're Looking For
Core Qualifications
• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience (Master's or PhD preferred)
• 5+ years of experience in machine learning infrastructure, model deployment, or GPU computing
• Strong programming skills in Python and experience with ML frameworks (PyTorch, TensorFlow, JAX)
• Deep understanding of LLM architectures, training methodologies, and inference optimization
Technical Expertise (High-Level Alignment)
• Hands-on experience training, fine-tuning, or deploying large language models in production
• Proficiency with GPU programming, CUDA, and kernel-level optimization techniques
• Experience with distributed training frameworks and multi-GPU/multi-node orchestration
• Strong knowledge of AWS core services: EC2 (GPU instances), S3, EFS, VPC, and networking
Preferred Experience
• Direct experience with Amazon SageMaker AI (Training, Inference, HyperPod) or equivalent ML platforms
• Understanding of GPU architectures (NVIDIA A100, H100) and AWS custom silicon (Trainium, Inferentia)
• Experience with model compression techniques (quantization, pruning, distillation)
• Knowledge of MLOps practices, model monitoring, and production ML system design
• Background in high-performance computing, distributed systems, or systems programming
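On the model compression point above, quantization is at its core simple arithmetic: map floats onto a small integer range with a scale and zero-point. A minimal, framework-free sketch (not Byteridge's or AWS's implementation; function names are illustrative):

```python
# Affine 8-bit quantization sketch: the arithmetic behind compressing
# model weights into low-precision integers.
def quantize(values, bits=8):
    """Map floats to unsigned ints using a scale and zero-point."""
    qmax = 2 ** bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax or 1.0  # fall back to 1.0 for constant inputs
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.8, -0.1, 0.0, 0.4, 1.2]
q, s, z = quantize(weights)
restored = dequantize(q, s, z)  # close to the originals, within one scale step
```

Production quantization (per-channel scales, calibration, quantization-aware training) is more involved, but this is the kernel of the idea.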
Essential Attributes
• Ability to dive deep into technical problems and debug complex infrastructure issues
• Strong analytical skills with data-driven approach to optimization
• Excellent communication skills to explain complex technical concepts to diverse audiences
• Comfortable working in ambiguous, fast-paced environments with evolving requirements
• Ownership mindset with ability to drive projects from architecture to production
About Us
We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable.
Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.
We partner with companies of all sizes, from helping enterprises build, scale, and modernize to helping early-stage founders bring their ideas to life.
Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk.
Our Guiding Principles
These principles define how we work at Incubyte. They are non-negotiable.
Relentless Pursuit of Quality with Pragmatism
We build high-quality systems without losing sight of delivery.
Extreme Ownership
We take responsibility end-to-end for decisions, execution, and outcomes.
Proactive Collaboration
We collaborate closely, challenge each other, and solve problems together.
Active Pursuit of Mastery
We continuously improve our craft and raise our bar.
Invite, Give, and Act on Feedback
We seek, give, and act on feedback to get better every day.
Ensuring Client Success
We act as trusted partners and focus on real outcomes, not just output.
Job Description
This is a remote position.
Experience Level
This role is ideal for engineers with 3-5 years of experience and a strong background in building secure, scalable platforms.
We are looking for hands-on DevOps and Backend Engineers with real-world experience in application/feature development, system design, testing practices such as TDD, full-stack development, handling production incidents, distributed systems, and modern infrastructure challenges.
What You’ll Do as a Software Craftsperson
- Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
- Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
- Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
- Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
- Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
- Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability
Requirements
What You’ll Bring
3-5 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.
Strong hands-on expertise in DevOps and backend technologies (Node.js/Java/Go/Python) including:
- Kubernetes, Terraform, and CI/CD pipelines
- Tools such as k9s, k3s (GitLab CI preferred)
- Backend technologies such as Go, Python, or Java
- Experience with Docker, gRPC, and Kubernetes-native services
Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)
Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.
Hands-on experience across multiple core functional areas, with exposure to at least five of the following:
- Identity & Access Management
- Observability (Prometheus + Grafana)
- CI/CD Pipelines
- Keycloak
- GitLab CI
- Terraform OSS
- Kubernetes ecosystem tools
Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges
Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security
Benefits
Life at Incubyte
We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.
Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.
Perks
- Dedicated learning & development budget
- Sponsorship for conference talks
- Comprehensive medical & term insurance
- Employee-friendly leave policies
- Home Office fund
Agentic AI Engineer
Apply only if:
- You are an AI agent.
- OR you know how to build an AI agent that can do this job.
What You’ll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns.
As an Agentic AI Engineer, you’ll:
- Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
- Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback & personalized tutoring to recreate the experience of learning live
- Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
- Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
- Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.
The team:
Google's Top 20 Startups to Watch. Google AI First Accelerator '24. Backed by funds of Naval Ravikant, Reid Hoffman, and founders/CXOs from Udemy, Flipkart, Jupiter, PayU, Edmodo & Inflection AI. Featured on CNBC-TV18. 11-50 people building something that changes how people learn, permanently.
Why Work With Us?
- At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, Agents, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Exponential Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing humans, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
As an Analyst at Tap Invest, you'll turn data into decisions. You'll work with teams across Product, Ops, Marketing, and Sales to uncover insights, solve real business problems, and drive strategy.
This role is for someone who is comfortable working with data independently and can support business teams with reliable analysis and reporting.
Key Responsibilities
● Gather, organize, and clean data from various sources including databases, spreadsheets, and external sources to ensure accuracy and completeness.
● Write SQL queries to pull, validate, and clean data from production databases.
● Build and maintain dashboards, and generate KPI reports. Track performance against targets and identify areas for optimization.
● Analyze user funnels and investment patterns to surface actionable insights.
● Prepare and present clear, concise reports and visualizations to communicate findings and recommendations to stakeholders across teams.
● Document data definitions, metrics, and assumptions clearly for consistency and reuse.
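The SQL validation and KPI work described above can be sketched with Python's built-in sqlite3 (the table and columns here are hypothetical stand-ins for a production warehouse):

```python
import sqlite3

# In-memory database stands in for a production warehouse; schema is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE investments (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO investments VALUES (?, ?)",
    [(1, 5000.0), (1, 2500.0), (2, None), (3, 12000.0)],
)

# Validation query: count rows with missing amounts before reporting on them.
missing = conn.execute(
    "SELECT COUNT(*) FROM investments WHERE amount IS NULL"
).fetchone()[0]

# KPI-style aggregation: total invested per user, excluding bad rows.
totals = conn.execute(
    "SELECT user_id, SUM(amount) FROM investments "
    "WHERE amount IS NOT NULL GROUP BY user_id ORDER BY user_id"
).fetchall()
```

The same validate-then-aggregate pattern carries over to any SQL warehouse; only the connection changes.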
What We’re Looking For
● 1 to 2 years of experience in Data Analytics, Business Analytics, or a similar role.
● Comfortable writing in SQL and validating queries.
● Solid with Excel / Google Sheets (pivot tables, lookups, charts).
● Genuine curiosity about how businesses use data to make decisions.
● Experience writing scripts for data automation.
● Prior projects involving production datasets.
Nice to Have
● Familiarity with pandas or any data manipulation library for advanced automations.
● Interest in capital markets, bonds, fixed income or FinTech.
● Exposure to AI tools
Role & Responsibilities
Own the Client’s Outcome:
- Embed with enterprise customers – on-site and remotely – to understand their supply chain operations, data estate, and what success actually looks like for their business.
- Scope and design technical solutions for messy, real-world logistics problems – with a clear line to measurable impact: cost per delivery, SLA performance, empty kilometres.
- Own the full deployment lifecycle: architecture through go-live through steady-state. You’re accountable for the outcome, not just the code.
Build and Ship:
- Design, build, and maintain backend services in Node.js or Python that power routing, planning, and execution at enterprise scale.
- Build and own the integrations connecting Locus to client ERPs, TMS, WMS, and OMS platforms – these integrations are often the riskiest part of a deployment.
- Write production code that runs under real load. If it isn’t in production, it hasn’t shipped.
Be the Technical Interface with the Client:
- Run architecture reviews, lead integration workshops, and represent Locus in executive steering meetings. You need to be credible at every level of the client organisation.
- Bring field learnings back into the product and platform teams. Some of Locus’s best features started as a client workaround.
- Push back when a client request would compromise platform integrity – and propose a better alternative.
Show Up On-Site:
- Travel to client sites – domestic and international, up to ~30% of the time – for kick-offs, integration sprints, go-lives, and post-live reviews.
- Build the kind of relationship where the client’s ops lead calls you directly when something goes wrong at 2am, not a support ticket.
- Be comfortable wherever the work is: a warehouse floor, a logistics control tower, a C-suite boardroom.
Make the Next Deployment Easier:
- Document architecture decisions, integration patterns, and deployment playbooks – every engagement should make the next one faster.
- Work closely with Product, Customer Success, and Platform Engineering. Share what you’re seeing in the field; don’t wait to be asked.
- Mentor junior FDEs and raise the technical bar across the team.
Ideal Candidate
- Strong Forward Deployed / Field Engineer
- Mandatory (Experience 1): Must have 5+ years of backend engineering experience with hands-on coding in Node.js or Python, building production-grade systems
- Mandatory (Experience 2): Must have minimum 2+ years in client-facing / deployment-heavy roles, where they worked directly with enterprise customers
- Mandatory (Experience 3): Must have experience shipping and owning production systems end-to-end: From design → build → deployment → post-production support
- Mandatory (Tech Skills 1 - Backend & Systems): Strong in: Node.js or Python (must-have), Building scalable backend services
- Mandatory (Tech Skills 2 - Integrations): Must have experience with: Enterprise integrations (APIs, third-party systems), Systems like ERP / TMS / WMS / OMS
- Mandatory (Tech Skills 3 - Data & Messaging): Hands-on with: Relational + NoSQL databases, Event streaming / queues (Kafka / RabbitMQ or similar)
- Mandatory (Tech Skills 4 - Cloud & Deployment): Experience with: Cloud platforms (AWS / GCP / Azure), Docker + Kubernetes (or containerised deployments)
- Mandatory (Company): Top Product companies / Startups / SaaS / platform companies
Role: Senior Software Developer (Full Stack) - Python
Location: Coimbatore
YOE: 6+ years
Mandatory Skills: Python, AWS
Good to have: React, SQL, React Native, knowledge of Flutter/Android Native
Benefits: Learn more about our perks below
Compensation: Competitive compensation as per industry standards.
About the Role:
We aspire to build high-quality, innovative & robust software. If you are a hands-on platform builder with significant experience in developing scalable data platforms, look no further. Click on Apply and we will reach out to you soon.
Responsibilities:
● Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
● Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.
● Prepares and installs solutions by determining and designing system specifications, standards, and programming.
● Improves operations by conducting systems analysis; recommending changes in policies and procedures.
● Obtains and licenses software by obtaining required information from vendors; recommending purchases; testing and approving products.
● Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment
● Participate in educational opportunities & read professional publications;
● Protects operations by keeping information confidential.
● Provides information by collecting, analyzing, and summarizing development and service issues.
● Accomplishes engineering and organization mission by completing related results as needed.
● Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.
Requirements:
● Proven work experience as a Full Stack Engineer or Senior Software Developer
● Strong experience designing and developing scalable and interactive applications
● Hands-on expertise in React or similar UI technologies for frontend development and Python or other modern backend languages
● Experience in mobile app development (e.g., React Native, Flutter, or Native Android/iOS)
● Deep understanding of relational databases (e.g., PostgreSQL/MySQL) with strong proficiency in SQL
● Experience with ORM frameworks (e.g., TypeORM, SQLAlchemy or similar)
● Familiarity with NoSQL databases (e.g., MongoDB) and caching systems like Redis is a plus
● Test-driven development and automated testing experience is a plus
● Proficiency with modern software engineering tools, Git-based workflows, and CI/CD pipelines
● Strong ownership mindset with ability to lead teams, mentor developers, and drive end-to-end delivery
● Excellent communication and collaboration skills with cross-functional stakeholders
● Working knowledge of AWS or other cloud platforms is an added advantage
About the Role
We are looking for a hands-on AI Agentic Lead to drive Agentic AI implementations on the Lyzr platform and lead in-house Agentic AI infusion into our products. This role is ideal for someone who combines strong technical depth with product thinking and has experience taking AI solutions from concept to deployment.
What We Are Looking For
- 6 to 15 years of overall experience
- At least 2 years of Agentic AI experience with product deployment exposure
- Strong experience in designing, building, and deploying AI agents/workflows for real business use cases
- Ability to lead architecture, development, deployment, and optimization of agentic solutions
- Strong problem-solving, ownership, and stakeholder-handling skills
- Willing to work from office (WFO) in Bengaluru only.
Key Responsibilities
- Lead end-to-end delivery of Agentic AI solutions on the Lyzr platform
- Drive Agentic AI adoption across in-house products
- Design multi-agent workflows, orchestration patterns, tool usage, memory, guardrails, and evaluation approaches
- Work closely with product, business, and engineering teams to identify high-impact AI use cases
- Build scalable, production-ready solutions with focus on reliability, performance, and business value
- Mentor the team and shape best practices for Agentic AI delivery
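The responsibilities above mention multi-agent workflows, tool usage, memory, and guardrails. Stripped of any framework, the core loop looks roughly like this sketch; the "LLM" is a deterministic stub and the tool registry is hypothetical, so this shows only the shape, not the Lyzr platform's actual API:

```python
# Minimal agent loop: a stubbed "LLM" picks a tool, the loop executes it,
# and a guardrail caps the number of steps. No real model involved.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def stub_llm(task):
    """Stand-in for a model's tool choice: routes by keyword."""
    if "sum" in task:
        return ("add", (2, 3))
    return ("upper", (task,))

def run_agent(task, max_steps=3):
    history = []
    for _ in range(max_steps):          # guardrail: bounded iterations
        name, args = stub_llm(task)
        result = TOOLS[name](*args)
        history.append((name, result))  # "memory" of past tool calls
        return result, history          # the stub finishes in one step
    raise RuntimeError("step budget exhausted")

result, trace = run_agent("sum of 2 and 3")
```

Real orchestration adds model calls, structured tool schemas, and evaluation, but the plan-execute-record loop above is the skeleton they all share.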
Preferred Skills
- Hands-on experience with LLMs, AI agents, RAG, orchestration frameworks, prompt design, tool calling, and evaluation
- Exposure to production deployments, monitoring, debugging, and optimization of AI systems
- Experience integrating AI into enterprise products/platforms
- ML background is a plus, but not mandatory
Why Join Us
- Opportunity to work on live Agentic AI implementations
- Play a key role in building next-generation AI capabilities for both client solutions and internal products
- High ownership, strong growth opportunity, and direct impact on product direction
JD -
We are looking for a strong Data Engineer with hands-on experience building pipelines using Snowflake and DBT.
Key Responsibilities:
- Develop, maintain, and optimize data pipelines using DBT and SQL on Snowflake DB.
- Collaborate with data analysts, QA and business teams to build scalable data models.
- Implement data transformations, testing, and documentation within the DBT framework.
- Work on Snowflake for data warehousing tasks, including data ingestion, query optimization, and performance tuning.
- Use Python (preferred) for automation, scripting, and additional data processing as needed.
Required Skills:
- 6+ years of experience in building data engineering pipelines.
- Strong hands-on expertise with DBT and advanced SQL.
- Experience working with modern columnar/MPP data warehouses, preferably Snowflake.
- Knowledge of Python for data manipulation and workflow automation (preferred).
- Good understanding of data modeling concepts, ETL/ELT processes, and best practices.
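The "testing within the DBT framework" mentioned above usually means declaring generic tests like `unique` and `not_null` on model columns. The checks themselves are simple; a plain-Python sketch of what they verify (row data and column names are hypothetical, and this is not dbt's actual implementation):

```python
# Sketch of the unique / not_null data tests dbt generates for model columns.
def not_null(rows, column):
    """Return rows where the column is missing."""
    return [r for r in rows if r.get(column) is None]

def unique(rows, column):
    """Return rows whose column value duplicates an earlier row."""
    seen, dupes = set(), []
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.append(r)
        seen.add(v)
    return dupes

orders = [
    {"order_id": 1, "status": "paid"},
    {"order_id": 2, "status": None},
    {"order_id": 2, "status": "paid"},
]
null_failures = not_null(orders, "status")   # one failing row
dupe_failures = unique(orders, "order_id")   # one duplicate row
```

In dbt these checks would be a few lines of YAML on the model; the engine compiles them into SQL that returns failing rows, exactly as the functions above do.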
Backend Developer
📍 Noida | 🕐 Full-Time | 🧭 Experience: 2–3 Years
The Mission
We aren't building traditional backend systems — we're powering the infrastructure behind Agentic Intelligence. TestMu AI is building the world's first AI-native platform where backend systems don't just serve requests, they enable autonomous decision-making, execution, and scale.
The name "TestMu" comes from our community conference. Our users and team aren't an audience — they're the heartbeat of what we build. We believe AI augments human potential. It doesn't replace it.
You'll be building the core backend systems that power AI-driven workflows — ensuring high performance, scalability, and reliability at every layer.
The Pillars of Impact
🚀 1. Core Backend & System Architecture (50%)
- Build and scale high-performance backend services and APIs
- Design efficient database schemas, query optimization, and data flows
- Write clean, logical, production-grade code (Python, Golang, or similar)
- Own system performance — latency, throughput, and reliability
⚙️ 2. Backend for AI Systems (30%)
- Develop backend systems supporting AI agents and autonomous workflows
- Handle large-scale data processing, async tasks, and event-driven systems
- Integrate backend infrastructure seamlessly with AI/ML components
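The async, event-driven work above can be sketched with the standard library alone. Here an `asyncio.Queue` stands in for the Kafka/RabbitMQ broker the stack section names, and the "work" per event is a placeholder:

```python
import asyncio

# Producer/consumer sketch of event-driven task handling with asyncio.
async def producer(queue, events):
    for e in events:
        await queue.put(e)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue, results):
    while True:
        event = await queue.get()
        if event is None:
            break
        results.append(event * 2)  # placeholder "work" on each event

async def main(events):
    queue, results = asyncio.Queue(maxsize=10), []
    await asyncio.gather(producer(queue, events), consumer(queue, results))
    return results

processed = asyncio.run(main([1, 2, 3]))
```

Swapping the in-process queue for a broker client changes the transport, not the pattern: bounded buffering, backpressure via `maxsize`, and a clean shutdown signal.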
🧠 3. Scalability & Distributed Systems (20%)
- Contribute to microservices architecture and service decomposition
- Build fault-tolerant, highly available distributed systems
- Optimize systems for high concurrency and real-time execution
Your Engineering Stack
- Python / Golang: building core backend services and logic-heavy systems
- AWS / GCP: deploying and scaling distributed backend infrastructure
- Kafka / RabbitMQ: handling asynchronous processing and event-driven workflows
The Bar
- Core Backend Experience: 2–3 years of hands-on experience building APIs, backend systems, and scalable services
- Problem-Solving Ability: strong fundamentals in data structures, algorithms, and logical thinking
- System Design Understanding: ability to design scalable backend systems with clear architectural thinking
- Ownership & Execution: experience owning backend features end-to-end in a fast-paced environment
The Interview Loop · Screening for the Top 1%
- Round I · Recruiter Screen: evaluation of backend experience, problem-solving approach, and project depth
- Round II · Hiring Manager: deep dive into backend projects, APIs, databases, and system design thinking
- Round III · Domain Lead: live coding + backend problem-solving + discussion on scalability and distributed systems
- Final · Leadership: culture fit, ownership mindset, and ability to operate in a high-velocity startup environment
Your Growth Trajectory
TestMu AI is a high-growth environment where we promote based on complexity solved, not years of tenure. As a Backend Developer, you have a massive runway to scale from an Individual Contributor (IC) into a core Engineering Leadership role, working alongside pioneers in agentic intelligence.
Perks of the Future
- Health & Wellness: 100% premium covered insurance for you + family (spouse, kids, parents) with annual check-ups.
- Fuel for Innovation: Fresh, daily gourmet lunch and dinner served at our Noida HQ.
- Seamless Transit: Safe, GPS-enabled cab facilities for eligible shifts (home-office-home).
- POD Culture: Dedicated quarterly budgets for team-building, offsites, and collaborative celebrations.
Backend Engineer at Reo.Dev : Job Description
[Disclaimer: This is a longish read. However, we felt you might be interested to read in detail, about what you could be doing for the next 5ish years 😊]
Job Function: Backend Engineer
Experience: 2 – 4 years [number of years of experience is not a filter]
Salary and Incentives: Open for discussion
Location: Bangalore, India [Hybrid work - Remote + Office]
👋 Meet Reo.Dev
- Reo.Dev was founded in January 2023. So we are quite young 😊
- Reo was started by Achintya, Gaurav and Piyush – All of them have successfully built companies before [more on the Founding team below]
- We are building a Revenue Operating System for the Developer Focussed Companies (Think of us like a 6sense.com for Dev Focussed Companies).
- What we are building is quite innovative. Currently, no other company offers the capabilities Reo.Dev is building
- We recently closed our Seed round with top early stage investors (not disclosed yet)
Experience Required: 3-5 Years
No. of vacancies: 2
Job Type: Full Time
Vacancy Role: WFO
Job Category: Development
Job Description
ChicMic Studios is hiring a highly skilled and experienced Sr. Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Roles & Responsibilities
- Develop, maintain, and scale web applications using Django & DRF.
- Implement and manage payment gateway integrations and ensure secure transaction handling.
- Design and optimize SQL queries, transaction management, and data integrity.
- Work with Redis and Celery for caching, task queues, and background job processing.
- Develop and deploy applications on AWS services (EC2, S3, RDS, Lambda, CloudFormation).
- Implement strong security practices including CSRF token generation, SQL injection prevention, JWT authentication, and other security mechanisms.
- Build and maintain microservices architectures with scalability and modularity in mind.
- Develop WebSocket-based solutions including real-time chat rooms and notifications.
- Ensure robust application testing with unit testing and test automation frameworks.
- Collaborate with cross-functional teams to analyze requirements and deliver effective solutions.
- Monitor, debug, and optimize application performance, scalability, and reliability.
- Stay updated with emerging technologies, frameworks, and industry best practices.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
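On the SQL-injection-prevention responsibility above, the core discipline is parameterized queries: user input is bound as data, never spliced into SQL text. A stdlib sqlite3 sketch (the table is hypothetical; Django's ORM applies the same principle automatically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Unsafe: string formatting splices attacker input into the SQL text,
# turning the WHERE clause into a tautology that returns every row:
#   f"SELECT * FROM users WHERE name = '{malicious}'"

# Safe: the ? placeholder keeps the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()  # no row has that literal name, so the result is empty
```

The same placeholder mechanism exists in every mainstream driver (psycopg2, mysqlclient, etc.), only the placeholder token differs.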
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 3-5 years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
Responsibilities:
• Design and execute automated test scripts using Playwright.
• Build and maintain test frameworks for web applications.
• Perform regression, functional, and performance testing.
• Collaborate with developers to resolve defects.
• Ensure CI/CD integration of automated tests.
Skills:
• Strong experience with Playwright automation.
• Knowledge of testing frameworks (PyTest, Jest, Mocha).
• Proficiency in JavaScript/TypeScript or Python.
• Familiarity with CI/CD tools (Jenkins, GitHub Actions).
• Understanding of Agile/Scrum methodologies.
Responsibilities:
• Develop and deploy agentic AI solutions for automation and decision-making.
• Build intelligent agents capable of reasoning, planning, and interacting with environments.
• Integrate AI models with enterprise applications.
• Collaborate with data scientists to fine-tune models.
• Ensure ethical AI practices and compliance.
Skills:
• Strong knowledge of agentic AI frameworks.
• Proficiency in Python and ML libraries (TensorFlow, PyTorch).
• Experience with LLMs and reinforcement learning.
• Familiarity with APIs, cloud AI services, and orchestration tools.
About TIFIN
TIFIN is an AI-first fintech platform transforming wealth management through data science, machine learning, and intelligent automation. With strong global backing and a rapidly growing India hub, TIFIN is building scalable, next-gen financial products used by global institutions.
Role Overview
We are looking for a Senior Software Engineer with strong backend and AI integration experience to build scalable, high-performance systems. This role involves working closely with product, data science, and AI teams to develop intelligent platforms leveraging modern technologies and LLMs.
Key Responsibilities
- Design, develop, and scale backend systems and APIs using Golang and Python
- Build and integrate AI-driven features, including prompt-based workflows (Claude or similar LLMs)
- Work with MongoDB and Elasticsearch for high-performance data handling and search capabilities
- Optimize system performance, scalability, and reliability
- Collaborate with cross-functional teams (Product, AI/ML, Data Engineering)
- Contribute to architecture decisions and best engineering practices
- Write clean, maintainable, and production-grade code
Required Skills & Experience
- 3–5 years of experience in backend engineering
- Strong proficiency in Golang and/or Python
- Hands-on experience with MongoDB and Elasticsearch
- Experience working with LLMs / AI tools (Claude, OpenAI, etc.) and prompt engineering
- Good understanding of REST APIs, microservices architecture, and distributed systems
- Strong problem-solving and debugging skills
Good to Have
- Experience in fintech / SaaS platforms
- Exposure to AI/ML pipelines or data platforms
- Knowledge of cloud platforms (AWS/GCP/Azure)
- Familiarity with CI/CD and DevOps practices
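The prompt-based workflows named in the responsibilities tend to reduce to: build a prompt from structured data, call the model, post-process the reply. A sketch with a stubbed client (the real Claude/OpenAI APIs differ; the template fields and `fake_llm` are made up to keep this runnable offline):

```python
# Prompt-template workflow sketch; `fake_llm` stands in for a real LLM client.
PROMPT = (
    "You are a financial assistant.\n"
    "Summarize the portfolio for user {user}:\n{holdings}"
)

def build_prompt(user, holdings):
    lines = "\n".join(f"- {h['ticker']}: {h['qty']} shares" for h in holdings)
    return PROMPT.format(user=user, holdings=lines)

def fake_llm(prompt):
    # Deterministic stand-in so the workflow is testable without an API key.
    return f"[summary of {prompt.count('- ')} holdings]"

holdings = [{"ticker": "AAPL", "qty": 10}, {"ticker": "MSFT", "qty": 5}]
reply = fake_llm(build_prompt("u123", holdings))
```

Keeping the template, the data-to-prompt step, and the client behind separate functions is what makes such workflows swappable across model providers.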