50+ Python Jobs in India
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
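The dimensional modeling and SCD work mentioned above can be illustrated with a minimal SCD Type 2 sketch. The table shape, the column names (`is_current`, `valid_from`, `valid_to`), and the pandas approach are all assumptions for illustration, not this team's actual implementation:

```python
import pandas as pd

def scd2_merge(dim, incoming, key, tracked, load_date):
    """Toy SCD Type 2 merge: expire changed rows, append new versions."""
    current = dim[dim["is_current"]]
    merged = current.merge(incoming, on=key, suffixes=("", "_new"))
    changed_mask = (merged[tracked].values !=
                    merged[[c + "_new" for c in tracked]].values).any(axis=1)
    changed_keys = merged.loc[changed_mask, key]

    out = dim.copy()
    expire = out[key].isin(changed_keys) & out["is_current"]
    out.loc[expire, "valid_to"] = load_date      # close out the old version
    out.loc[expire, "is_current"] = False

    new_rows = incoming[incoming[key].isin(changed_keys)].copy()
    new_rows["valid_from"] = load_date           # open the new version
    new_rows["valid_to"] = None
    new_rows["is_current"] = True
    return pd.concat([out, new_rows], ignore_index=True)
```

The idea: a patient dimension where, say, a city change expires the old row and inserts a new current one, preserving full history for point-in-time reporting.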
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To proceed to the next round, please fill out the Google Form below and attach your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
Job Title: AI/ML Engineer
Work Location: U.S Complex, Adjacent to Jasola Apollo Metro Station, Mathura Road New Delhi-110076
We, Infinity Assurance Solutions specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection and a wide range of service products under our own brand “InfyShield.”
Our offerings cover Mobile Phones, Home Appliances, Consumer Electronics, Kitchen Appliances, IT Equipment, Office Automation, AV Solutions, Classroom and Conference Room Technologies, and more.
· We have a very extensive, enterprise-grade, end-to-end Business Management Software Application that is unmatched in the industry.
· The application has multiple sub-applications and functionalities, including Sales, Insurance Claims, Warranty Claims, Payments, Collections, Approvals, Billing/Invoicing, Payment/Tax/Bank Reconciliations, Partner Management, HRMS, Client Management, etc., to suit the end-to-end business needs of any enterprise.
· The application also has multiple integrations for payment gateways, voice calls, video calls, SMS, email, WhatsApp, client applications, courier, maps, databases, etc.
· To fuel our growth, we are inviting a Computer Vision Engineer to join our software development team as we execute new business growth plans and a fresh product roadmap.
· This position requires multi-skilled talent with hands-on experience, able to work independently as well as in teams.
· The ideal candidate will be responsible for designing, modifying, developing, writing, and implementing software applications and components.
· Our technology processes documents and images across warranty, insurance, claims, and identity workflows—where trust, precision, and fraud prevention are paramount.
Detailed Role Description
· Assist in developing a secure, AI-powered platform for verification and assurance.
· Contribute to developing and improving computer vision models for image forgery detection, replay detection, and advanced fraud analysis.
· Implement and experiment with image processing techniques such as noise analysis, Error Level Analysis (ELA), blur detection, and frequency-domain features.
· Support OCR pipelines using tools like PaddleOCR or Azure AI Vision to extract text from photos of identity documents.
· Help prepare and clean real-world image datasets, including handling low-quality, noisy, and partially occluded images.
· Integrate trained models into Python-based APIs (FastAPI) for internal testing and production use.
· Collaborate with senior engineers to test, debug, and optimize model performance.
· Document experiments, findings, and implementation details clearly.
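One of the techniques listed above, blur detection, is commonly done by thresholding the variance of a Laplacian-filtered image. A minimal sketch follows; in practice one would likely use `cv2.Laplacian`, but plain NumPy is used here so the example stands alone, and the threshold is an invented starting point to be tuned per dataset:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian; low values suggest a blurry image."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def is_blurry(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Threshold is data-dependent; calibrate on your own document images.
    return laplacian_variance(gray) < threshold
```

A sharp edge-rich image produces a high variance, while a flat or defocused image produces a value near zero.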
Required Skills
· 2–4 years of experience in machine learning, computer vision, image processing, and AI
· Bachelor’s or Master’s degree in AI & Machine Learning, Computer Science, Data Science, or related fields
· Proficiency in machine learning and Python
· Core experience with OpenCV, NumPy, and core image processing concepts
· Hands-on experience with PyTorch and TensorFlow
· Understanding of Convolutional Neural Network (CNN) fundamentals
· Hands-on experience with REST APIs or FastAPI
· Exposure to OCR, document processing, or facial analysis
Desired Candidate Profile
· Prior experience in image forensics, fraud detection, or biometrics
· Comfortable working with imperfect real-world data
· Good communication skills and team-oriented mindset
Important Notes & Perks:
· Attractive pay structure as per the Market Standards
· Huge career growth opportunity
· Preference will be given to candidates who can join early
· Should have worked in small teams with multi-skilled resources
· This is a full-time, work-from-office opportunity (preference will be given to candidates open to working Monday to Saturday, 6 days a week)
· Applications may be submitted via the Google Form at this link: https://forms.gle/TC8kypz3SwN256sP6
About us:
We, Infinity Assurance Solutions Private Limited, are a New Delhi-based portfolio company of Indian Angel Network, Aegis Centre for Entrepreneurship, Artha Venture Fund, eVista Venture, and other marquee industry veterans. We specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection, and various other service products for a wide range of Mobile Phones, Home Appliances, Consumer Electronics, AV Solutions, Classroom/Conference-Room Solutions, Kitchen Appliances, IT, Office Automation, Personal Gadgets, etc.
Incorporated in January 2014, we have grown rapidly as a debt-free, operationally profitable company with positive net retained earnings. Going forward, we are looking to grow multi-fold with newer areas of business expansion.
Our success is attributed to a very agile and technologically driven unique service delivery model, loyal long-term clients, in-house application, and lean organization structure.
More about us:
https://www.infinityassurance.com
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI/Django (Python), Spring (Java), or Express (Node.js)).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS, etc.) are highly preferred
We at Hookux are a forward-thinking company seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
Benefits:
- Competitive salary and performance bonuses.
- Opportunities for career growth and learning.
- Flexible working hours and remote working options.
Mail your CV and portfolio to hr@hookux.com.
We are hiring for a Python Developer at Wissen Technology!
📍 Location: Pune (Hybrid)
💼 Experience: 3–6 Years
⏱️ Notice Period: Immediate / 15 days preferred
🔧 Key Skills:
• Strong experience in Python
• Hands-on with Pandas & NumPy
• Experience with AWS (S3, Lambda preferred)
• Good understanding of data processing & APIs
• SQL knowledge
🏢 About Wissen Technology:
Wissen Technology, part of the Wissen Group (est. 2000), is a fast-growing technology company specializing in high-end consulting across Banking, Finance, Telecom, and Healthcare domains.
✔️ Global presence – US, India, UK, Australia, Mexico & Canada
✔️ Certified Great Place to Work®
✔️ Trusted by Fortune 500 clients like Morgan Stanley, Goldman Sachs, and more
✔️ Strong growth with 400% revenue increase in recent years
🌐 Website: www.wissen.com
🔗 LinkedIn: https://www.linkedin.com/company/wissen-technology/
If you’re interested or have relevant candidates, please share your resume at [your email].
#Hiring #PythonDeveloper #PuneJobs #AWS #ImmediateJoiner
While you may already know about Wissen and the company history, here is a quick rundown for you.
About Wissen Technology:
· The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
· Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
· Our workforce has highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
· Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
· Globally present, with offices in the US, India, UK, Australia, Mexico, and Canada.
· We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
· Wissen Technology has been certified as a Great Place to Work®.
· Wissen Technology was voted a Top 20 AI/ML vendor by CIO Insider in 2020.
· Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
· We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, Goldman Sachs, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE.
Job Title: Application Development Engineer (Python – Backtesting & Index Platforms)
Role Overview
Key Responsibilities
Engine Development: Design and implement modular, reusable Python components for index construction, rebalancing, and backtesting.
Large-Scale Simulation: Use Pandas, NumPy, and PySpark to run historical calculations across long time horizons and multiple index variants.
Workflow Integration: Integrate engines with orchestrators such as Airflow or Temporal using parameterized, config-driven execution.
Reference Data Consumption: Query and utilize pricing, security master, and corporate action data from Snowflake.
Quality & Reconciliation: Build automated test harnesses to validate outputs, compare against benchmarks, and guarantee reproducibility.
Performance Optimization: Improve runtime efficiency through vectorization, caching, and distributed computing patterns.
Cross-Team Collaboration: Partner with Business, Index Ops, and Platform teams to accelerate research-to-production onboarding.
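The engine and simulation responsibilities above can be sketched with a toy equal-weight backtest. This is a hedged illustration under simplifying assumptions: a real engine would handle corporate actions, calendars, costs, and index variants, none of which appear here:

```python
import numpy as np
import pandas as pd

def backtest_equal_weight(prices: pd.DataFrame, rebalance_every: int = 21) -> pd.Series:
    """Index level for an equal-weight basket, rebalanced every N sessions.

    `prices`: rows = dates, columns = constituents. Base level 100.
    """
    rets = prices.pct_change().fillna(0.0).to_numpy()
    n_days, n_assets = rets.shape
    weights = np.full(n_assets, 1.0 / n_assets)
    level = np.empty(n_days)
    level[0] = 100.0
    for t in range(1, n_days):
        port_ret = float(weights @ rets[t])
        level[t] = level[t - 1] * (1.0 + port_ret)
        # Let weights drift with returns, then reset on rebalance dates.
        weights = weights * (1.0 + rets[t])
        weights = weights / weights.sum()
        if t % rebalance_every == 0:
            weights = np.full(n_assets, 1.0 / n_assets)
    return pd.Series(level, index=prices.index, name="index_level")
```

Running many variants then reduces to calling this with different parameter sets, which is where config-driven execution and PySpark-style parallelism come in.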
Required Technical Capabilities
Python Expertise: Strong proficiency in Python application development with emphasis on clean architecture and maintainable design.
Data & Numerical Libraries: Deep experience with Pandas and NumPy; working knowledge of PySpark for distributed workloads.
Financial Computation: Ability to implement portfolio mathematics, weighting algorithms, and time-series transformations.
Config-Driven Systems: Experience building rule-based or metadata-driven processing frameworks.
Database Skills: Strong SQL and experience consuming structured data from Snowflake.
Testing Discipline: Expertise in unit testing, regression testing, and deterministic replay of calculations.
Orchestration Integration: Familiarity with Airflow, Temporal, or similar workflow engines.
Cloud Infrastructure: Solid understanding of AWS ecosystem services (S3, Lambda, IAM) and how they integrate with the Snowflake Data Cloud.
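As one concrete instance of the weighting algorithms mentioned above, many indices cap single-name weights and redistribute the excess. The sketch below is illustrative only; the cap level and iterative pro-rata redistribution are assumptions, not a specific index methodology:

```python
import numpy as np

def cap_weights(raw: np.ndarray, cap: float = 0.30) -> np.ndarray:
    """Normalize weights, cap at `cap`, redistribute excess pro rata.

    Assumes the problem is feasible, i.e. cap * len(raw) >= 1.
    """
    w = raw / raw.sum()
    capped = np.zeros(len(w), dtype=bool)
    for _ in range(len(w)):  # each pass caps at least one new name
        over = (w > cap) & ~capped
        if not over.any():
            break
        capped |= over
        w[capped] = cap
        free = ~capped
        if not free.any():
            break
        residual = 1.0 - cap * capped.sum()
        # Spread the remaining weight over uncapped names, pro rata.
        w[free] = residual * w[free] / w[free].sum()
    return w
```

Note the loop: redistributing can push a previously uncapped name over the cap, so capping is repeated until no name exceeds it.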
Department: Product & Technology
Location: On-site | Prabhat Road, Pune
Experience: 3–5 years in a Data Engineering or Analytics role
Domain: Fintech / Wealth Management (non-negotiable)
Compensation: 11–12 LPA fixed + performance bonus
Growth: Title upgrade + salary revision at 12–18 months for strong performers
Why this role is different from most Data Engineer postings
You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.
About Cambridge Wealth
Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals, and have received multiple awards from major Mutual Fund houses and the BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.
What You Will Be Doing
This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.
We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.
Key Responsibilities:
Data Engineering & Pipelines
- Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
- Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
- Write advanced SQL — window functions, stored procedures, query optimization, index design.
- Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
- Monitor AWS RDS workloads and troubleshoot performance issues proactively.
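The windowed SQL patterns above can be mirrored in a small, self-contained pandas analogue. Column names and numbers here are hypothetical, and the SQL shown in comments is the standard window-function form, not a query from this platform:

```python
import pandas as pd

txns = pd.DataFrame({
    "client_id": [1, 1, 1, 2, 2],
    "trade_date": pd.to_datetime(
        ["2024-01-01", "2024-02-01", "2024-03-01", "2024-01-15", "2024-02-15"]),
    "amount": [5000, 5000, 7000, 10000, 10000],
})
txns = txns.sort_values(["client_id", "trade_date"])

# SQL: SUM(amount) OVER (PARTITION BY client_id ORDER BY trade_date)
txns["running_total"] = txns.groupby("client_id")["amount"].cumsum()

# SQL: ROW_NUMBER() OVER (PARTITION BY client_id ORDER BY trade_date)
txns["txn_seq"] = txns.groupby("client_id").cumcount() + 1
```

In production these computations would live in PostgreSQL itself; the pandas version is just the most compact way to show the partition-and-order semantics.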
Financial Analytics & Modelling
- Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
- Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
- Create materialized views and derived tables optimized for dashboards and internal reporting tools.
- Analyse client transaction history to surface patterns in investment behaviour and financial discipline.
Applied ML & AI-Driven Development
- Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
- Implement classification or regression models to support financial pattern detection.
- Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
- Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
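A minimal sketch of the classification work described above, on synthetic data. The features, the labels, and the "disciplined investor" rule are all invented for illustration; a real model would be trained on actual transaction history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
# Hypothetical per-client features: months investing, share of SIP instalments missed.
X = np.column_stack([
    rng.integers(1, 60, n).astype(float),
    rng.uniform(0.0, 1.0, n),
])
# Synthetic label: "disciplined" if few instalments were missed.
y = (X[:, 1] < 0.3).astype(int)

model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)  # high on this linearly separable toy data
```

The same scaffolding extends to regression targets (e.g. forecasting monthly inflows) by swapping the estimator.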
Data Quality & Governance
- Own data integrity end-to-end in a live, high-stakes financial environment.
- Build and maintain validation and cleaning protocols across all financial datasets.
- Maintain Excel models, Power Query workflows, and structured reporting outputs.
Collaboration & Junior Mentorship
- Work directly with Product, Investment Research, and Wealth Advisory teams.
- Translate open-ended business questions into structured queries and measurable outputs.
- Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
- Present findings clearly to non-technical stakeholders — no jargon, just clarity.
Skills — What We Need vs. What Helps
Must-Haves:
- SQL & PostgreSQL (window functions, stored procedures, optimization)
- Python: Pandas, NumPy for data processing and automation
- ML fundamentals: classification or regression (Scikit-learn)
- AWS RDS or equivalent cloud database experience
- Financial domain knowledge: mutual funds, SIPs, portfolio concepts
- Python data visualization: Matplotlib, Seaborn, or Plotly
Strong Advantage:
- Excel: Power Query, advanced modelling
- Materialized views, query planning, index optimization
- Experience with BI/dashboard tools
Good to Have:
- NoSQL databases
- Prior fintech or wealth management startup experience
Financial Domain — Non-Negotiable
This is a wealth management platform. You must come in with a working understanding of:
- Mutual fund structures, scheme types, and NAV-based transactions
- Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
- Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
- How HNI/NRI clients interact with financial products differently from retail investors
You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.
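To make the NAV-based transaction mechanics above concrete, here is a toy unit-accumulation calculation (the NAVs and instalment amount are hypothetical numbers, not market data):

```python
def sip_units(navs, instalment):
    """Units bought by a fixed SIP instalment at each date's NAV."""
    return sum(instalment / nav for nav in navs)

# Three monthly instalments of 1000 at NAVs of 10, 12.5, and 8:
navs = [10.0, 12.5, 8.0]
units = sip_units(navs, 1000.0)   # 100 + 80 + 125 = 305 units
value = units * navs[-1]          # holding value at the latest NAV
```

The same units-times-NAV arithmetic underlies redemptions, switches, and STPs; only the direction and schedule of the cash flows change.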
The Culture Fit — Read This Carefully
We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:
- Has worked at a small startup before and is used to wearing multiple hats
- Finds broken or slow data systems genuinely irritating and fixes them without being asked
- Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
- Is comfortable saying 'I don't know but I'll find out' and follows through independently
- Wants visibility and ownership, not just a well-defined job description
- Is looking for a role where strong performance is directly visible and rewarded
Growth Path — What Happens If You Perform
This is not a vague 'growth opportunity' pitch.
If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.
Preferred Background
- 2–4 years in a data engineering or analytics role at a startup or small Fintech
- Experience in a live product environment where data errors have real consequences
- Exposure to portfolio analytics, investment research, or wealth management platforms
- Has mentored or reviewed code for at least one junior team member
Hiring Process
We respect your time. The process is direct and moves fast.
- Screening Questions — 5 minutes online
- Online Challenge — MCQs (Data, SQL, AWS, etc.), one applied ML or analytics problem, and a short communication and personality assessment (focused, not trick questions)
- People Round — 30-minute video call, culture and communication
- Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
- Founder's Interview — 1 hour in person, growth conversation and mutual fit
- Offer & Background Verification
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Key Responsibilities:
- Development & Customization: Develop and support client-specific customizations, integration, and automation under guidance.
- Ownership: Deliver assigned development tasks with quality, within estimated effort and timelines
- Established Tools and Processes: Follow established tools, coding standards, SDLC, CI/CD, and security practices.
- Collaboration: Partner effectively across a global team, including Team Lead/Senior Developers, consultants, project managers, Deltek partners and subcontractors, and cloud operations.
- Quality Assurance: Follow established security, quality, and testing protocols. Support testing activities, fix defects and rework items under guidance to maintain customer satisfaction and governance standards.
- Leverage AI-first methodology throughout the project lifecycle: use AI-powered tools to design, develop, and maintain scalable technical solutions.
- Continuous Improvement: Actively engage in learning new tools, technologies, and Deltek product capabilities.
Qualifications :
- Required Skills:
- Academic qualification: Bachelor’s degree (2025/2026 pass-out) in Computer Science, IT, E&C, or MCA, with a minimum of 70% throughout academics.
- Job Location: Only Bangalore Candidates
- Project experience: Entry‑level experience through academic projects, internships, labs, or personal/open‑source projects.
- Development & Engineering Practices: Knowledge of object-oriented programming, core software development principles, and computer science fundamentals such as data structures, algorithms, and logical problem solving.
- Analytical and Problem‑Solving Skills: Strong analytical and problem‑solving skills, with the ability to learn and apply new concepts quickly
- Communication Skills: Good verbal and written communication skills in English, with the ability to participate in technical discussions and explain ideas clearly.
- Learning Mindset: Eagerness to learn new tools, technologies, and product capabilities, and to apply them quickly
- Technical Skills
- Programming Fundamentals: Basic proficiency in at least one programming language such as Python, JavaScript (Node.js preferred), Java, or C/C++, with understanding of object‑oriented programming concepts.
- Computer Science Foundations: Knowledge of data structures, algorithms, and basic software design principles gained through academic or project work.
- Web & Integration (Exposure): Introductory experience with web applications, APIs, integrations, or automation through coursework or hands‑on projects.
- Testing & Debugging: Basic understanding of unit testing, debugging, and defect fixing as part of the development lifecycle.
- Tools & Platforms (Exposure): Familiarity with development tools such as IDEs, version control (Git), and basic build or deployment concepts.
- AI Tools (Plus): Hands‑on experience or foundational knowledge of AI/LLM‑based tools (such as AI assistants or copilots) and prompt engineering.
- Success Criteria for the Role
- Requirement Clarity: Quickly grasp and clarify assigned requirements or technical specifications, ensuring tasks are well-defined and minimizing the need for rework.
- Execution: Consistently completes development tasks and project assignments within agreed timelines, proactively communicating risks or blockers to avoid delays or scope drift.
- Quality: Delivers code with low defect rates by following coding standards and thorough testing, leading to successful QA/UAT outcomes with minimal rework or iterations.
- Collaboration & Communication: Receives positive feedback from team leads/Senior developer, peers, and stakeholders for clear communication, teamwork, and reliable technical contributions.
- AI Adoption: Demonstrate efficiency gains through AI usage including faster specification writing, improved code quality, automated testing.
- Why Join Deltek?
- At Deltek, you'll be part of a forward-thinking team dedicated to delivering innovative ERP solutions that empower organizations to achieve their goals. Our culture values collaboration, professional growth, and flexibility, providing you with opportunities to work on impactful projects and advance your career. You'll benefit from our commitment to leveraging cutting-edge AI capabilities, enabling you to design more innovative, more efficient solutions for our clients. Join us to make a difference in a supportive environment where your expertise is valued and your contributions drive real business success.
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
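The retrieval step in a RAG workflow like the one described above can be sketched in a few lines. This toy version uses a bag-of-words similarity as a stand-in for a real embedding model; the names (`embed`, `retrieve`, `build_prompt`) are illustrative, not from any specific SDK:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble retrieved context and the user question into one LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
    "Support is available 24/7 via chat.",
]
prompt = build_prompt("What is the API rate limit?", docs)
```

In production, the bag-of-words similarity would be replaced by an embedding model and a vector database, but the shape of the pipeline (embed, rank, assemble context, prompt) stays the same.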
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
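As a flavor of the set-based SQL transformations mentioned above, here is a minimal sketch using Python's built-in `sqlite3` as a lightweight stand-in for the warehouse engine; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory SQLite as a lightweight stand-in for the warehouse engine.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INT, amount REAL, event_date TEXT);
    INSERT INTO raw_events VALUES
        (1, 10.0, '2024-01-01'),
        (1, 15.0, '2024-01-02'),
        (2, 30.0, '2024-01-01');
""")

# A set-based aggregation: one pass over the table, no row-by-row loops.
rows = conn.execute("""
    SELECT user_id,
           COUNT(*)    AS n_events,
           SUM(amount) AS total_amount
    FROM raw_events
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
```

The same GROUP BY pattern carries over directly to Spark SQL in a Databricks notebook, where the optimizer can parallelize it across the cluster.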
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
AI Lead (Backend Systems & Architecture)
This is not a feature-delivery role. This is an architecture, ownership, and AI systems leadership role.
At Techjays, we build production-grade AI platforms for global clients. We are looking for an AI Lead with strong backend engineering expertise—someone who can design, scale, and take complete ownership of intelligent systems end-to-end.
You will operate at the intersection of backend engineering, distributed systems, and applied AI, driving both technical direction and execution.
What You’ll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement AI workflows such as RAG pipelines, agents, and LLM integrations
- Own systems end-to-end: architecture, development, deployment, and scaling
- Build reliable, high-performance distributed systems
- Integrate and optimize LLMs (Claude, GPT, etc.) for real-world use cases
- Lead backend and AI initiatives with strong technical ownership
- Ensure performance, scalability, observability, and cost efficiency
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to build AI-native solutions
What We’re Looking For
- Proven experience in architecting and scaling backend systems end-to-end
- Strong expertise in Python (Django / Flask / FastAPI)
- Deep understanding of distributed systems and system design
- Hands-on experience with AWS or GCP in production environments
- Solid experience working with LLMs (Claude, GPT, etc.)
- Strong knowledge of:
- Retrieval-Augmented Generation (RAG)
- Vector databases (Pinecone, FAISS, Weaviate, etc.)
- Experience in building and managing microservices architectures
- Ability to lead teams, mentor engineers, and drive technical excellence
- Strong problem-solving skills with an ownership mindset
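Production LLM integrations of the kind described above typically wrap provider calls in retry logic with exponential backoff. A minimal, provider-agnostic sketch; `RateLimitError` and `flaky_completion` are invented stand-ins, not a real SDK's API:

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for the rate-limit error a real LLM SDK would raise."""

def call_with_backoff(fn, max_retries=5, base_delay=0.01):
    """Retry fn() on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Fake "LLM call" that fails twice before succeeding.
attempts = {"n": 0}
def flaky_completion():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

result = call_with_backoff(flaky_completion)
```

The jitter term spreads out retries from concurrent workers so they do not all hammer the provider at the same instant.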
Nice to Have
- Experience building AI agents or autonomous systems
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- Understanding of MLOps and AI system lifecycle
- Experience optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable in fast-moving, ambiguous environments
- You stay updated with the latest advancements in AI and backend technologies
This role is ideal for someone who wants to lead, build, and scale AI-powered backend systems in production while driving real-world impact.
Job Summary
We are seeking a highly motivated Technical Product Manager with strong exposure to software development to drive product strategy, roadmap execution, and cross-functional collaboration. The ideal candidate will bridge the gap between business stakeholders and engineering teams, ensuring delivery of scalable, high-quality software products.
Key Responsibilities
- Collaborate closely with engineering, design, QA, and business teams to deliver end-to-end product solutions.
- Translate business requirements into technical specifications and user stories.
- Work with development teams to ensure timely delivery of features and releases.
- Prioritize the product backlog based on business value, customer needs, and technical feasibility.
- Participate in Agile/Scrum ceremonies such as sprint planning, stand-ups, and retrospectives.
- Analyze product performance and user feedback to drive continuous improvement.
- Ensure product scalability, performance, and technical feasibility.
- Coordinate with stakeholders on product launches and go-to-market strategies.
- Maintain product documentation including PRDs, technical documents, and release notes.
Required Skills
- Strong understanding of the software development lifecycle (SDLC).
- Hands-on experience with, or exposure to, programming languages (Python preferred).
- Experience working with Agile/Scrum methodologies.
- Strong knowledge of APIs, microservices, and system design concepts.
- Ability to work closely with engineering teams and understand technical challenges.
- Excellent analytical, problem-solving, and communication skills.
- Experience with product management tools (JIRA, Confluence, etc.).
Job Title: Software Developer
Location: Remote
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Blue Owls Solutions is looking for a mid-level Azure Data Engineer with approximately 4 years of hands-on experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and architectures that power business-critical analytics and reporting. You'll work closely with cross-functional teams to transform raw data into reliable, high-quality datasets that drive decision-making across the organization.
Required Skills
- 4+ years of professional experience as a Data Engineer or in a similar data-focused role
- Strong proficiency in SQL for data manipulation, querying, and performance optimization
- Hands-on experience with PySpark for large-scale data processing and transformation
- Solid working knowledge of the Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.)
- Experience with Microsoft Fabric for end-to-end data analytics workflows
- Ability to design and implement robust data architectures including data warehouses, lakehouses, and ETL/ELT frameworks
- Strong coding and scripting skills with Python
- Proven problem-solving ability with a knack for debugging complex data issues and optimizing pipeline performance
- Understanding of data modeling concepts, dimensional modeling, and data governance best practices
Preferred Skills & Certifications
- Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)
- Microsoft Certified: Fabric Data Engineer Associate (DP-700)
- Experience with CI/CD practices for data pipelines
- Familiarity with version control systems such as Git
- Exposure to real-time streaming data solutions
- Experience working in Agile or Scrum environments
- Strong communication skills with the ability to translate technical concepts for non-technical stakeholders
What We Offer
- Competitive salary and performance-based bonuses
- Flexible hybrid options
- Opportunities for professional development, training, and certification sponsorship
- A collaborative, innovation-driven team culture
- Paid time off and company holidays
Description
Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
JOB DESCRIPTION: C++ Developer
Experience: 4–7 Years
Location: Pune
No. of Positions: 1
We are seeking an experienced C++ Developer with 4–7 years of experience to work on financial systems. The role involves working on mission-critical applications such as trading platforms, market data systems, risk engines, or payment processing systems, where performance, stability, and correctness are paramount.
1. General Requirements
• 4–7 years of professional C++ experience in performance-critical systems
• Expert knowledge of modern C++ (C++11/14/17)
• Strong understanding of data structures, algorithms, and memory models
• Deep experience with multithreading, atomics, lock-free programming, and CPU cache behavior
• Excellent knowledge of Linux internals and system-level programming
• Experience with low-level debugging and profiling (gdb, perf, valgrind, flamegraphs)
• Proficiency with CMake/Make and Git
2. Trading Systems Experience (Highly Preferred)
• Hands-on experience with order management systems (OMS) and execution engines
• Knowledge of exchange protocols: FIX, ITCH, OUCH, FAST
• Experience handling market data feeds (L1/L2, multicast, UDP)
• Understanding of latency measurement, clock synchronization, and time stamping
Backend Engineer III – Senior Python Developer (LLM & AI)
Location: Gurgaon, India (Hybrid)
Positions: 1
Experience: 6 to 9 Years
About the Role
We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.
Key Responsibilities
- Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
- Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
- Write clean, modular, efficient, and testable code following industry best practices and coding standards
- Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
- Debug, profile, and optimize applications to improve performance, reliability, and scalability
- Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
- Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
- Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
- Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
- Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes
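One small, concrete example of the kind of bottleneck removal mentioned above is memoizing an expensive lookup with the standard library's `lru_cache`. The `fetch_model_metadata` function here is a hypothetical stand-in for a real registry or database call:

```python
from functools import lru_cache

call_count = {"n": 0}

@lru_cache(maxsize=256)
def fetch_model_metadata(model_name):
    """Hypothetical expensive lookup (e.g., a DB or remote model registry)."""
    call_count["n"] += 1
    return {"name": model_name, "max_tokens": 4096}

# Repeated requests for the same model hit the cache, not the backend.
for _ in range(100):
    meta = fetch_model_metadata("gpt-x")
```

One caveat worth noting in a review: `lru_cache` returns the same object on every hit, so cached mutable values like this dict should be treated as read-only by callers.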
Required Skills & Experience
- 6 to 9 years of strong hands-on experience in Python development
- Solid understanding of Python software design, architecture patterns, and testing best practices
- Proven experience working on AI, Machine Learning, or LLM-based projects
- Strong experience in building and consuming RESTful APIs and microservices architectures
- Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
- Strong debugging, performance profiling, and optimization skills
- Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
- Working knowledge of Docker and Kubernetes is a strong plus
- Excellent analytical, problem-solving, and communication skills
- Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members
Education & Certifications
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
- AWS or other relevant cloud certifications are preferred but not mandatory
Why Join Us?
- Work on cutting-edge AI and LLM platforms
- Collaborate with top-tier engineering and data science teams
- Opportunity to influence system architecture and technical direction
- Competitive compensation and career growth opportunities
Position: Member of Technical Staff - Linux Specialist - DevOps
Location: Bengaluru - India
Experience: 2-6 Years
● Responsibilities: We are looking for experienced Linux specialists (Linux system administrators) to be part of RtBrick’s DevOps team. The DevOps team handles the CI/CD, compute and networking infrastructure and tools that together form a multi-tenant multi-environment delivery and deployment system for RBFS (RtBrick Full Stack). You will be part of a high performance team responsible for managing, improving and adapting these systems.
● CI/CD
Knowledge of software compilation and packaging for various Linux environments is required. Expertise in Linux system administration, Linux package management and Linux internals is essential. Ability to build custom Linux images for different types of container and/or virtual machine (VM) environments is also required. Experience with the Linux boot process, init system and service manager is highly desirable.
● Tools
Good knowledge of shell (bash) scripting and the Ansible automation framework is required. Knowledge of other automation frameworks and/or infrastructure-as-code tools is considered a plus. Experience with managing network infrastructure (switches, routes, firewalls) is highly desirable. Experience with monitoring solutions based on Prometheus and Grafana is desirable. Knowledge of the Python or Golang programming languages is considered a plus.
● Operations
Manage compute and networking infrastructure for a private cloud. Manage applications and services deployed in the private cloud but also in public clouds. This position will be part of an on-call engineer rotation during certain critical periods for the company.
Required Skills:
- About 2-6 years of industry experience in Linux system administration with emphasis on automation.
- Experience with networking focused Linux distributions (ONL/Open Network Linux and/or SONiC) is considered a plus.
- Good understanding and troubleshooting skills of networking issues, both at the host (Linux) level but also at the network (switches, routes, firewalls) level is required.
- Experience with CI/CD systems (Jenkins or similar) is required.
- Experience with software development tools like git, Gitlab, CMake, GNU build tools.
- Proficient in shell (bash) scripting. Experience with the Python or Golang programming languages is considered a plus.
- Knowledge and experience of Linux container technologies (Docker, LXC) and container orchestration (Kubernetes) or any other equivalent container technologies is desirable.
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Azure CI/CD Engineer
Bangalore
Fulltime
Skill Set Required
- Cloud Platforms: Experienced in cloud-native development on both AWS and Azure (including Azure DevOps)
- Programming: Proficient in Python, with a focus on backend development in AWS.
- CI/CD: Skilled in developing and optimizing CI/CD pipelines using Azure DevOps and GitHub/GitLab.
- API Integration: Well-versed in integrating with Jira REST API and Azure DevOps API.
- Agile: Well-versed in Agile methodology
- Communication: Good communication skills
- The front end is the Azure DevOps board and the backend is supported by the AWS environment; we are therefore looking for a candidate with a mix of the skills listed above.
Senior Quality Engineer – AI Products
Fulltime
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
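Pipeline-validation automation of the sort mentioned in the requirements can start as simply as a record-checking script. This sketch is illustrative only, with invented field names:

```python
def validate_records(records, required_fields=("id", "status")):
    """Return human-readable failures for a batch of pipeline records."""
    failures = []
    for i, rec in enumerate(records):
        for field in required_fields:
            # Flag fields that are absent, None, or empty strings.
            if field not in rec or rec[field] in (None, ""):
                failures.append(f"record {i}: missing '{field}'")
    return failures

batch = [
    {"id": 1, "status": "done"},
    {"id": 2, "status": ""},
    {"status": "done"},
]
failures = validate_records(batch)
```

A script like this slots naturally into a CI stage: exit non-zero when `failures` is non-empty and attach the list to the build log.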
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes

Location: Chennai (Hybrid Model)
Commitment: Minimum 2 Years (Excluding 3 months of Probation)
Experience Level: Fresher / Entry Level
About the Role
We are looking for enthusiastic and fast‑learning fresh graduates to join our Infrastructure & Security Engineering team. This role involves hands‑on work in system administration, implementation of infrastructure and security components, and continuous learning across multiple technology vendors and cloud environments including Microsoft, AWS, GCP, and others.
You will receive extensive training, mentorship, and opportunities to work directly with customers to demonstrate new products and solutions.
Key Responsibilities
Infrastructure & System Administration
- Assist in the deployment, configuration, and administration of IT infrastructure components (servers, networks, cloud services, and security tools).
- Work with multi‑vendor environments such as Microsoft, AWS, GCP, and other OEMs.
- Support day‑to‑day system monitoring, performance checks, and troubleshooting activities.
Security Implementation
- Participate in the implementation and maintenance of security solutions including identity management, endpoint security, SIEM, firewalls, and cloud security tools.
- Learn and follow best practices for secure configurations and compliance requirements.
Scripting & Automation
- Develop automation scripts using PowerShell, Python, and JavaScript to streamline operational tasks.
- Contribute to internal automation projects and efficiency improvement initiatives.
AI/ML Exposure
- Gain foundational understanding of AI & ML product development.
- Assist in integrating AI capabilities into internal or customer‑facing tools where applicable.
Customer Engagement
- Learn and perform product demos for customers on demand.
- Participate in customer visits and meetings alongside senior team members to support solution discussions.
- Present technical concepts in clear and professional English.
Required Skills
- Basic understanding of system administration, networking, cloud fundamentals, or security concepts.
- Strong scripting capabilities in PowerShell, Python, and JavaScript.
- Curiosity and willingness to learn AI/ML‑related product development.
- Excellent verbal and written English communication skills.
- Ability to quickly learn new technologies and adapt to dynamic project needs.
Who Should Apply?
- Fresh graduates (B.E/B.Tech/B.Sc/BCA/MCA or equivalent) passionate about IT infrastructure, security, cloud, and automation.
- Individuals who are eager to learn, enthusiastic about hands‑on work, and comfortable interacting with customers.
- Candidates willing to commit 2 years to grow within the organization as we invest in extensive training and development.
Work Model
- Hybrid, based in Chennai, with flexibility to work from both office and home as needed.
What We Offer
- Structured training in multi‑cloud, security, scripting, and automation.
- Hands‑on exposure to real‑world implementation projects.
- Opportunities to explore AI/ML product workflows.
- Mentorship from experienced engineers and architects.
- Career growth into Infra Engineer, Security Engineer, Cloud Engineer, Automation Engineer, or AI/ML Solution Specialist.
The requirements are as follows:
1) Familiarity with the Django REST Framework.
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we ask a lot of questions on this in our interviews :) ).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, or Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with using Git, writing unit test cases for all code written, and CI/CD concepts will be a plus as well.
7) Familiarity with basic code patterns like MVC.
8) Grasp of basic data structures.
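The list above mentions writing unit test cases for all code; a minimal example with the standard library's `unittest` (the `slugify` function is invented purely for illustration):

```python
import unittest

def slugify(title):
    """Turn an article title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        self.assertEqual(slugify("  Python   APIs "), "python-apis")

# Run the suite programmatically; in a project this would be `python -m unittest`.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same test cases port directly to pytest or to Django's test runner, which builds on unittest.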
You can contact me on nine three one six one two zero one three two
About The Nexora Group Inc.
The Nexora Group Inc. is a technology-driven organization focused on building intelligent digital solutions using modern software engineering and artificial intelligence technologies. Our teams work on projects involving data-driven applications, automation systems, and AI-powered tools designed to solve real-world business challenges.
We are looking for motivated and enthusiastic Python Developer Interns with an interest in Artificial Intelligence who want to gain practical experience working on live development projects.
Internship Responsibilities
- Assist in developing backend applications using Python
- Work on AI-related modules such as machine learning models, data processing pipelines, and automation tools
- Write clean, scalable, and well-documented code
- Support the development of APIs and backend services
- Participate in debugging, testing, and performance optimization
- Collaborate with development teams on project tasks and deliverables
- Contribute to research and implementation of AI/ML solutions
Required Skills
- Basic understanding of Python programming
- Familiarity with data structures and algorithms
- Interest in Artificial Intelligence and Machine Learning
- Basic knowledge of NumPy, Pandas, or similar Python libraries
- Understanding of REST APIs is a plus
- Strong problem-solving skills
- Ability to learn quickly and work in a collaborative environment
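For a feel of the NumPy/Pandas-style work mentioned above, here is a pure-standard-library group-by-and-average, mirroring what `df.groupby("city")["temp"].mean()` would compute in Pandas; the dataset is invented:

```python
from collections import defaultdict
from statistics import mean

# Toy dataset: rows a beginner might otherwise load into a Pandas DataFrame.
rows = [
    {"city": "Chennai", "temp": 34},
    {"city": "Chennai", "temp": 36},
    {"city": "Pune", "temp": 28},
]

# Group by city, then aggregate each group's temperatures.
by_city = defaultdict(list)
for row in rows:
    by_city[row["city"]].append(row["temp"])

avg_temp = {city: mean(temps) for city, temps in by_city.items()}
```

Understanding this manual version makes the vectorized Pandas equivalent much easier to reason about.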
Preferred Qualifications
- Students or recent graduates in Computer Science, IT, Data Science, or related fields
- Basic knowledge of Machine Learning concepts
- Experience with Git or version control systems is beneficial
- Familiarity with Flask, Django, or FastAPI is a plus
What Interns Will Gain
- Hands-on experience working on real-world development projects
- Exposure to AI and machine learning development workflows
- Mentorship from experienced developers
- Opportunity to build a strong portfolio with practical project experience
- Internship completion certificate based on performance and participation
Years of Experience – 3 to 6 years
Location – Chennai
Work Mode: Hybrid – 3 days mandatory Work From Office (WFO).
Job Type: Full-Time
Role Description:
• Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.
• Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
• Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.
• Prepares and installs solutions by determining and designing system specifications, standards, and programming.
• Improves operations by conducting systems analysis, recommending changes in policies and procedures.
• Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
• Protects operations by keeping information confidential.
• Provides information by collecting, analyzing, and summarizing development and service issues. Accomplishes engineering and organization mission by completing related results as needed.
• Supports and develops software engineers by providing advice, coaching, and educational opportunities.
Mandatory skills:
• Hands-on experience with web development in any of the following programming languages: Python, JavaScript.
• Hands-on experience with the following JavaScript framework: React.
• Hands-on experience with any of the following frameworks: Python (Django, Flask) or NodeJS (Express, NestJS).
• Experience with back-end development, basic microservices implementation, and containerization using Docker.
• Expertise in relational databases such as Postgres, MySQL, Oracle, etc.
• Expertise in NoSQL databases such as MongoDB, Amazon DynamoDB, Cassandra, etc.
• Good knowledge of any of the cloud providers, such as Amazon Web Services, Microsoft Azure, or Google Cloud.
• Excellent verbal and written communication skills.
About the Internship
The Nexora Group Inc. is looking for enthusiastic and motivated interns who want to build practical experience in Data Science and Artificial Intelligence. This internship is designed to provide hands-on exposure to real-world datasets, machine learning techniques, and AI-driven problem solving.
Interns will work closely with our technical team to analyze data, build predictive models, and explore AI tools that support data-driven decision-making.
Key Responsibilities
- Collect, clean, and preprocess structured and unstructured datasets
- Perform exploratory data analysis (EDA) to identify trends and patterns
- Develop machine learning models using Python-based libraries
- Assist in building AI-powered data analysis workflows
- Create dashboards, reports, and visualizations to communicate insights
- Work with tools such as Python, Pandas, NumPy, and visualization libraries
- Collaborate with team members on real-world data science projects
- Document project findings and maintain clear technical reports
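The responsibilities above center on cleaning data and running exploratory analysis. As a toy illustration of what a grouped EDA summary looks like (the dataset, column names, and values below are invented for the example; real work would typically use Pandas):

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical records standing in for a real dataset.
records = [
    {"region": "north", "sales": 120.0},
    {"region": "north", "sales": 80.0},
    {"region": "south", "sales": 200.0},
    {"region": "south", "sales": 160.0},
]

def summarize_by(rows, key, value):
    """Group rows by `key` and report simple summary statistics for `value`."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[value])
    return {
        group: {"count": len(vals), "mean": mean(vals), "median": median(vals)}
        for group, vals in groups.items()
    }

summary = summarize_by(records, "region", "sales")
```

The same group-then-aggregate shape carries over directly to `DataFrame.groupby` once an intern moves to Pandas.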
Required Skills
- Basic knowledge of Python programming
- Understanding of data analysis and statistics
- Familiarity with Machine Learning concepts
- Knowledge of libraries such as Pandas, NumPy, Matplotlib, or Scikit-learn
- Strong analytical and problem-solving skills
- Good communication and documentation skills
Preferred Qualifications
- Students or recent graduates in Computer Science, Data Science, Statistics, Mathematics, or related fields
- Basic understanding of Artificial Intelligence concepts
- Familiarity with Jupyter Notebook or Google Colab
- Interest in working with real-world datasets and analytics tools
What You Will Gain
- Hands-on experience with Data Science and AI projects
- Mentorship from experienced professionals
- Internship completion certificate
- Opportunity to build portfolio projects
- Exposure to real-world industry workflows
About CK-12 Foundation
CK-12’s mission is to provide free access to open-source content and technology tools that empower both students and teachers to enhance learning across different styles, resources, competence levels, and circumstances.
To achieve this ambitious vision, CK-12 challenges the traditional education model by leveraging technology to revolutionize learning for students, teachers, and parents.
CK-12 operates as a non-profit organization so it can experiment with bold ideas and focus on doing the right thing for education. The organization is backed by Vinod Khosla, a renowned technology venture capitalist.
At CK-12, you’ll work in a dynamic, entrepreneurial, and innovative environment where passionate individuals collaborate to disrupt traditional education through technology.
Technology is at the heart of scaling education, and CK-12 builds solutions on a cloud-based (AWS) and AI-first platform delivering rich and interactive learning experiences.
If you are a great technologist who enjoys challenging the status quo and building innovative products, this could be the place for you.
Together, we aim to transform education globally.
Product Offerings
Flexi 2.0 – AI-Powered Student Tutor
AI-Powered Teacher Assistant
https://www.ck12.org/pages/teacher-assistant/
Core Responsibilities
• Translate high-level directions and open-ended product ideas into deliverable ML projects and drive their completion.
• Architect and implement highly scalable ML solutions for systems such as multimodal information retrieval, conversational chatbots, recommender systems, and ranking systems.
• Own end-to-end product delivery from research and experimentation to production deployment.
• Work closely with cross-functional teams including Product, Engineering, DevOps, QA, and Content teams.
• Manage ML workflows involving data gathering, working with annotators, and collaborating with ML researchers.
• Extract and analyze large volumes of data to generate insights about student and teacher behavior based on platform usage.
• Design and build innovative ML-driven solutions that can improve learning experiences in the EdTech space.
• Apply statistical hypothesis testing and experimentation to evaluate and improve models.
• Continuously innovate and challenge the traditional approach to education through ML solutions.
Requirements
• Bachelor’s degree or higher in Computer Science or a related quantitative discipline, or equivalent practical experience.
• 4+ years of hands-on development experience with strong programming skills, preferably in Python.
• Expertise in deep learning approaches for NLP including transformer-based models, predictive modeling, search and recommendation systems, and autoregressive models.
• 2+ years of experience in NLP applications such as information retrieval, chatbots, summarization, or generative models.
• Proven experience building scalable ML applications on cloud infrastructure such as AWS, GCP, or Azure.
• Strong understanding of trade-offs between model architecture, deployment costs, and model accuracy.
• Ability to manage multiple tasks and collaborate effectively with geographically distributed teams.
• Up-to-date knowledge of advancements in NLP and computer vision and the ability to apply them in the education domain.
Technical Skills
• Python, PyTorch, TorchServe
• Pandas
• SQL and NoSQL databases such as MySQL, MongoDB, Redis, and Redshift
• Cloud infrastructure (AWS / GCP / Azure)
• Vector databases and search technologies such as Elasticsearch
• Linux
Nice to Have
• Familiarity with Reinforcement Learning
• Experience with Deep Knowledge Tracing
About the Role
We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.
Key Responsibilities
• Design and develop backend services using Django and Python.
• Architect and implement microservices-based solutions for scalability and maintainability.
• Work with PostgreSQL and Redis for efficient data storage and caching.
• Build and maintain RESTful APIs and ensure robust API design principles.
• Implement system design best practices for high availability and fault tolerance.
• Containerize applications using Docker and manage deployments with Kubernetes.
• Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.
• Apply security best practices to protect data and application integrity.
• Collaborate with frontend, QA, and DevOps teams for seamless delivery.
• Mentor junior developers and conduct code reviews to maintain quality standards.
Required Skills & Expertise
• Django/Python – Advanced proficiency in backend development.
• Microservices Architecture – Strong understanding of distributed systems.
• PostgreSQL & Redis – Expertise in relational and in-memory databases.
• Docker/Kubernetes – Hands-on experience with containerization and orchestration.
• API Design & System Design – Ability to design scalable and secure systems.
• Cloud (AWS/Azure) – Practical experience with cloud services and deployments.
• Security Best Practices – Knowledge of authentication, authorization, and data protection.
Preferred Qualifications
• Experience with CI/CD pipelines and DevOps practices.
• Familiarity with message queues (e.g., RabbitMQ, Kafka).
• Exposure to monitoring tools (Prometheus, Grafana).
What We Offer
• Competitive salary and benefits.
• Opportunity to work on cutting-edge backend technologies.
• Collaborative and growth-oriented work environment.
About TVARIT
TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is regarded as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.
Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities
· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.
· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
· Utilize Docker and Kubernetes for scalable data processing.
· Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
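The pre-processing duties above name deduplication, normalization, and scaling. A minimal plain-Python sketch of two of those steps (in this role they would run on Spark DataFrames rather than Python lists; the sample values are invented):

```python
def deduplicate(values):
    """Drop duplicate readings while preserving first-seen order."""
    seen = set()
    out = []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

def min_max_scale(values):
    """Scale values into [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

readings = deduplicate([10.0, 20.0, 20.0, 30.0])
scaled = min_max_scale(readings)
```

On Spark, the equivalents would be `DataFrame.dropDuplicates` and a column expression built from `min`/`max` aggregates.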
Desired Skills and Qualifications
· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.
· 2 years of team handling experience.
· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience in containerization (Docker, Kubernetes).
· Strong analytical and problem-solving skills with attention to detail.
· Good to have: MLOps and DevOps experience, including model lifecycle management.
· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
Job Title: Devops Engineer
Location: Delhi, Arjan Garh
Job Type: Full-Time
IMMEDIATE JOINERS REQUIRED
About Us:
Timble is a forward-thinking organization dedicated to leveraging cutting-edge technology to solve real-world problems. Our mission is to drive innovation and create impactful solutions through artificial intelligence and machine learning.
About the Role
We are looking for a high-ownership Senior DevOps Engineer to architect and maintain the mission-critical infrastructure supporting our global algorithmic trading operations. You will be the bridge between development and live trading, ensuring zero-latency performance and 100% system availability.
Key Responsibilities
- Infrastructure Architecture: Design scalable, fault-tolerant systems for high-frequency trading environments.
- Performance Optimization: Tune Linux servers and Python environments for maximum speed and efficiency.
- Incident Management: Lead real-time response for live trading systems, performing RCA and preventive fixes.
- Automation & CI/CD: Build and enhance robust pipelines using Docker, Jenkins, and Ansible.
- Proactive Monitoring: Implement advanced logging and alerting (Prometheus/Grafana) to ensure high uptime.
- Database Admin: Manage relational databases and write optimized SQL for operational reporting.
- Mentorship: Guide junior DevOps members and maintain rigorous system documentation.
Technical Requirements
- OS/Scripting: Advanced Linux Admin and expert-level Python scripting.
- IaC & Tools: Hands-on experience with Ansible, Terraform, and Docker.
- CI/CD: Proficiency in Jenkins or GitLab CI.
- Data: Strong SQL skills with experience in performance tuning.
- Education: B.Tech/M.Tech in Computer Science or related engineering field.
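The database duties above include writing optimized SQL for operational reporting. A small sketch of that pattern using the stdlib `sqlite3` module (the `deploys` table, its columns, and the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deploys (service TEXT, status TEXT, duration_s REAL)")
conn.executemany(
    "INSERT INTO deploys VALUES (?, ?, ?)",
    [
        ("pricing", "success", 42.0),
        ("pricing", "failed", 65.0),
        ("orders", "success", 30.0),
        ("orders", "success", 50.0),
    ],
)
# An index on the filter column keeps the report fast as the table grows.
conn.execute("CREATE INDEX idx_deploys_status ON deploys (status)")

report = conn.execute(
    """
    SELECT service,
           COUNT(*)        AS runs,
           AVG(duration_s) AS avg_duration_s
    FROM deploys
    WHERE status = 'success'
    GROUP BY service
    ORDER BY service
    """
).fetchall()
```

On a production RDBMS the same idea applies: index the columns the report filters on, aggregate in SQL rather than in application code, and verify with the query planner (`EXPLAIN`).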
We are looking to recruit an expert for backend software development at Webnyay. We are an enterprise SaaS startup catering to India and international markets. We are now growing fast and need a rockstar senior software developer who is an expert in Python/Django and GCP.
What we are looking for:
- At least 6 years of professional software development experience.
- At least 4 years of experience with Python & Django.
- Proficiency in Natural Language Processing (tokenization, stopword removal, lemmatization, embeddings, etc.)
- Experience in computer vision fundamentals, particularly object detection concepts and architectures (e.g., YOLO, Faster R-CNN)
- Experience in search and retrieval systems and related concepts like ranking models, vector search, or semantic search techniques
- Experience with multiple databases (relational and non-relational).
- Experience with hosting on GCP and other cloud services.
- Familiar with continuous integration and other automation.
- Focus on code quality and writing scalable code.
- Ability to learn and adopt new technologies depending on business requirements.
- Prior startup experience will be a plus!
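The NLP requirements above name tokenization and stopword removal. As a toy sketch of those two steps (the stopword set is an illustrative subset; real pipelines would typically use spaCy or NLTK, which also cover lemmatization and embeddings):

```python
import re

# Illustrative subset of a stopword list, not a real library's list.
STOPWORDS = {"the", "a", "an", "is", "to", "of"}

def preprocess(text):
    """Lowercase, tokenize on alphanumeric runs, and drop stopwords."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("The quick fox is able to jump.")
```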
Some of your responsibilities would include:
- Work closely in a highly AGILE environment with a team of engineers.
- Create and maintain technical documentation of technical design and solution.
- Build products/features that are highly scalable, secure, highly available, high performing and cost-effective.
- Help team in debugging.
- Perform code reviews.
- Understand the full feature set/ implementation and architecture of the applications.
- Analyze business goals and product requirements and contribute to application architecture design, development and delivery.
- Provide technical expertise for every phase of the project lifecycle; from concept development to solution design, implementation, optimization and support.
- Act as an interface with business teams to understand and create technical specifications for workable solutions within the project.
- Explore and work with LLM APIs and Generative AI.
- Make performance-related recommendations, identify and eliminate performance bottlenecks (hardware, software, configuration); drive performance tuning, re-design and re-factoring.
- Participate in the software development lifecycle, which includes research, new development, modification, security, reuse, re-engineering and maintenance of common component libraries.
- Participate in product definition and feature prioritization.
- Collaborate with internal teams and stakeholders across business verticals.
Highlights
- Current location: candidate should be based in Bangalore
- Total Experience – 6–12 years
- Joining Time Period – Within 30 days
- GCP BigQuery expert, GCP Certified
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
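The ETL responsibility above covers normalizing several source formats into one shape. A stdlib-only sketch of that idea for CSV and JSON inputs (the field names, target record shape, and sample data are assumptions for illustration; real pipelines here would run on Cloud Run or Dataflow with schema validation):

```python
import csv
import io
import json

def normalize(row):
    """Map one raw row to the assumed common record shape, coercing types."""
    return {"country": row["country"], "year": int(row["year"]), "value": float(row["value"])}

def from_csv(text):
    return [normalize(r) for r in csv.DictReader(io.StringIO(text))]

def from_json(text):
    return [normalize(r) for r in json.loads(text)]

# Two formats, one downstream shape.
csv_rows = from_csv("country,year,value\nIN,2023,7.2\n")
json_rows = from_json('[{"country": "IN", "year": 2023, "value": 7.2}]')
```

Keeping a single `normalize` step per target schema is what lets new source formats (XLS, SDMX) be added without touching downstream consumers.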
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
Must Have - GCP Certification
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
About Us
Wednesday is a technology consulting and engineering firm based in Pune. We specialise in helping digital-first businesses solve complex engineering problems. Our expertise lies in data engineering, applied AI, and app development. We offer our expertise through our services: Launch, Catalyse, Amplify, and Control.
We're a passionate bunch of people who take their work seriously. We deeply care about each other and are united by the cause of building teams that deliver great digital products & experiences.
Job Description
We are seeking Senior Software Engineers who can architect and ship fullstack digital products at a high bar — using AI-assisted development tools to move faster without cutting corners. This role spans platform, product, and go-to-market — you'll own backend systems, shape frontend experiences, make infrastructure decisions, and set a higher engineering standard for the team around you. The ideal candidate has designed systems they can defend, shipped products at scale, and knows what it takes to get there.
Requirements
Product & Client Ownership Be the day-to-day technical owner on engagements — understand the client's business deeply, shape the product roadmap, and translate ambiguous problems into clear engineering direction. Show up to demos and reviews with the confidence to defend tradeoffs and flag risks early.
Architecture & Judgment Make architectural decisions that hold up at scale. AI can generate code — your job is to decide what gets built, how it fits together, and when to push back. Evaluate tradeoffs, review TRDs, and set the technical direction the rest of the team executes against.
Fullstack Execution Ship backend services, APIs, database schemas, and user-facing features end-to-end. Use AI-assisted tools (Cursor, Claude Code, Antigravity) to move at the speed of a small team without cutting corners on quality.
Platform & Reliability Own cloud infrastructure, CI/CD, and production systems. Define how the team monitors, debugs, and responds to incidents. If something breaks at 2am, you've already thought about it.
AI & Automation Drive AI adoption in products — LLM APIs, RAG pipelines, agentic workflows. Push for automation across client and internal workflows. Know what these tools are good at and, more importantly, where they fail.
Raising the Bar Be the judgment layer for junior engineers who are moving fast with AI tools. Review code for what matters — not style, but correctness, scalability, and whether the author actually understood what they shipped. Run knowledge-sharing sessions. Onboard people well.
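The RAG pipelines mentioned above hinge on a retrieval step: score documents against a query and keep the top matches. A toy stdlib sketch of that step (real systems use learned embeddings and a vector store; bag-of-words cosine similarity stands in here, and the documents are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = ["invoices are emailed monthly", "the trading engine restarts nightly"]
top = retrieve("when are invoices emailed", docs)
```

In a real RAG pipeline the retrieved passages would then be inserted into the LLM prompt as grounding context.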
Must Haves
3–5 years of professional engineering experience with production systems you've owned end-to-end.
Active user of AI IDEs (Cursor, Claude Code, Antigravity, or similar).
Demonstrated system design ability — you've made architectural decisions and can evaluate trade-offs.
Good exposure to cloud platforms and deployments.
Familiarity with observability and monitoring tools — you can track down issues and identify bottlenecks.
Deep backend proficiency: API design, databases, microservices, distributed systems, event-driven architecture, and message brokers.
Worked with at least two of REST, GraphQL, or gRPC in production.
Eye for design — you care about the experiences you build for users.
High rate of learning — you figure things out fast.
Nice to Have
Cloud architecture experience (AWS, GCP, Azure) with containerisation and orchestration.
Familiarity with AI/ML: prompt engineering, embeddings, agent frameworks (LangChain, CrewAI, LangGraph).
Experience with automation and workflow tools (n8n, Make, Zapier).
Benefits
Mentorship: Work next to some of the best engineers and designers — and be one for others.
Freedom: An environment where you get to practice your craft. No micromanagement.
Comprehensive healthcare: Healthcare for you and your family.
Growth: A tailor-made program to help you achieve your career goals.
A voice that is heard: We don't claim to know the best way of doing things. We like to listen to ideas from our team.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent
● Proficient in Python programming, with expertise in data engineering and machine learning deployment
● Experience with databases, including MySQL and NoSQL stores
● Experience in developing and maintaining critical, high-availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with the AWS cloud platform.
● Strong analytical and data-driven approach to problem solving
Core Responsibilities:
- Design & Development: Architect and implement scalable backend services and APIs using Python or Golang, ensuring high performance, resilience, and extensibility.
- System Ownership: Take end-to-end ownership of critical modules, from design and development to deployment and support.
- Technical Leadership: Conduct design and code reviews, enforce best practices, and mentor junior engineers to raise the team’s technical bar.
- Collaboration: Work closely with product managers, architects, and other engineers to translate business requirements into technical solutions.
- Performance & Reliability: Troubleshoot complex issues in production systems, identify root causes, and design sustainable long-term solutions.
- Innovation: Evaluate new technologies, contribute to proof-of-concepts, and recommend tools that can improve developer productivity.
- Process Improvement: Drive initiatives to improve coding standards, CI/CD pipelines, and automated testing practices.
- Knowledge Sharing: Document designs, create technical guides, and share insights with the broader engineering team.
Experience and Expertise:
- 4–7 years of backend development experience with Python or Golang.
- Strong expertise in designing, developing, and scaling microservices and distributed systems.
- Solid understanding of concurrency, multi-threading, and performance optimization.
- Proficiency with databases (SQL/NoSQL), caching systems (Redis, Memcached), and messaging systems (Kafka, RabbitMQ, etc.).
- Hands-on experience with Linux development, Docker, and Kubernetes.
- Familiarity with cloud platforms (AWS/GCP/Azure) and related services.
- Strong debugging, profiling, and optimization skills for production-grade systems.
- Experience with AI-powered development tools is a strong plus; familiarity with concepts like 'agentic coding' for workflow automation or 'context engineering' for leveraging LLMs in system design is highly desirable.
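The concurrency expectation above is the kind of thing a backend candidate should be able to sketch from memory. A minimal thread-pool example for I/O-bound fan-out (the workload is a stand-in; `fetch` is a hypothetical name for an HTTP or DB call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    # Stand-in for an I/O-bound call (e.g. an HTTP request or DB query).
    return item * item

# map() preserves input order even though workers run concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(5)))
```

For CPU-bound work in CPython, a `ProcessPoolExecutor` (or a GIL-free runtime) would be the usual choice instead.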
Skills:
- Strong problem-solving ability, with experience handling complex technical challenges.
- Ability to lead technical initiatives and mentor junior engineers.
- Excellent communication skills to collaborate with cross-functional teams and articulate trade-offs.
- Self-motivated, proactive, and able to operate independently while aligning with team goals.
- Passionate about engineering culture, quality, and developer productivity.
Core Responsibilities:
- Design, develop, and maintain backend services and APIs using Python or Golang.
- Write high-quality, testable, and maintainable code with a focus on performance and scalability.
- Implement automated tests and contribute to CI/CD pipelines.
- Collaborate with product, QA, and DevOps teams for end-to-end feature delivery.
- Troubleshoot production issues and provide timely resolutions.
- Participate in design and architecture discussions to improve system efficiency.
- Contribute to improving development processes, coding standards, and best practices.
Experience and Expertise:
- 2–4 years of experience in backend development with Python or Golang.
- Solid understanding of RESTful APIs, microservices, and distributed systems.
- Strong knowledge of data structures, algorithms, and OOP principles.
- Hands-on experience with relational and/or NoSQL databases.
- Familiarity with Linux development, Docker, and basic cloud concepts (AWS/GCP/Azure).
- Proficiency with Git and version control workflows.
- Familiarity with AI-powered development tools or exposure to projects involving large language models (LLMs) is a plus.
Skills:
- Strong analytical and debugging skills with the ability to solve complex problems.
- Good communication and collaboration skills across teams.
- Ability to work independently with minimal supervision while being a strong team player.
- Growth mindset – eagerness to learn new technologies and improve continuously.
Core Responsibilities:
- Design, develop, and maintain backend services using Python or Golang.
- Write clean, efficient, and well-documented code following best practices.
- Build and consume RESTful APIs and microservices.
- Collaborate with QA, DevOps, and product teams for smooth feature delivery.
- Participate in peer code reviews and technical discussions.
- Debug and fix issues, ensuring system stability and performance.
- Continuously learn and apply new technologies and tools in backend development.
Experience and Expertise:
- 0–2 years of software development experience (internships or projects acceptable).
- Proficiency in at least one backend programming language (Python or Golang).
- Strong understanding of object-oriented programming and software fundamentals.
- Knowledge of data structures, algorithms, and database concepts.
- Familiarity with Linux-based development environments.
- Exposure to Git and version control workflows.
Skills:
- Strong analytical and problem-solving ability.
- Willingness to learn, adapt, and take ownership.
- Effective communication and teamwork skills.
- Curiosity for emerging technologies, including AI-driven development, backend technologies, distributed systems, and modern engineering practices.
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, DeepCode)
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Experience at product companies (B2B SaaS preferred)
Job Description – Backend Python Developer (Mid-Level)
📍 Location: Mumbai/Gurgaon | Full-time
Backend Python Developer
Role Overview
We are seeking a skilled Backend Python Developer to design, develop, and maintain backend services, APIs, and integrations that power our AI-driven automation solutions.
You will collaborate closely with senior engineers, AI/ML teams, and frontend developers to build scalable, high-performance systems. This role is ideal for professionals with solid backend experience who are eager to deepen their expertise in Python, cloud technologies, and AI-based applications.
Key Responsibilities
- Develop and maintain backend APIs, services, and system integrations using Python
- Collaborate on system design and architecture discussions with senior engineers
- Write clean, scalable, and well-documented code following best practices
- Ensure performance, scalability, and reliability in cloud environments
- Design and manage SQL/NoSQL databases for structured and unstructured data
- Support integration of AI/ML models into production workflows
- Participate in code reviews, unit testing, and debugging
- Contribute to CI/CD pipelines, containerization, and DevOps processes
Required Skills & Qualifications
- 3–5 years of experience in backend development
- Strong proficiency in Python
- Hands-on experience with frameworks such as FastAPI, Flask, or Django
- Experience building and consuming REST APIs (GraphQL is a plus)
- Strong database knowledge: PostgreSQL, MySQL, MongoDB, or Redis
- Familiarity with cloud platforms (AWS, GCP, or Azure)
- Hands-on experience with Docker and Kubernetes
- Strong understanding of OOP, data structures, algorithms, and design patterns
Preferred Skills
- Exposure to AI/ML workflows or a strong interest in learning
- Experience with message brokers such as Kafka, RabbitMQ, or Celery
- Knowledge of asynchronous programming (asyncio, Celery, etc.)
- Experience with unit testing frameworks (PyTest, unittest)
- Understanding of API security and authentication (OAuth2, JWT)
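As a rough illustration of the OAuth2/JWT authentication mentioned above, here is the HS256 signing scheme at the heart of JWT, sketched with only the standard library. A production service would use a vetted library such as PyJWT rather than hand-rolling this; the payload and secret below are toy values:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    # A JWT is header.payload.signature, each segment base64url-encoded.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sig, expected)

token = sign_hs256({"sub": "user-1"}, b"secret")
print(verify_hs256(token, b"secret"))  # True
print(verify_hs256(token, b"wrong"))   # False
```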
What We Offer
- Competitive compensation with growth opportunities
- Opportunity to work on AI-first automation products used globally
- Mentorship from experienced senior engineers
- Flexible work environment
- Continuous learning support in Python, Cloud, and AI/Automation technologies
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+ years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0–1 month (candidates currently serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL and data-platform tools such as SSIS, Azure Data Factory, Snowflake, Databricks, and stored procedures.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.
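As a toy illustration of the ETL pipelines this role centers on, the sketch below extracts raw tuples, normalizes and filters them in the transform step, and loads them into an in-memory SQLite table. The table name, columns, and cleaning rules are invented for the example:

```python
import sqlite3

def run_etl(rows):
    # Extract: rows arrive as raw (name, amount) tuples.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (name TEXT, amount REAL)")
    # Transform: normalize names, drop rows with missing or negative amounts.
    clean = [(n.strip().lower(), a) for n, a in rows if a is not None and a >= 0]
    # Load: bulk insert inside one transaction.
    with conn:
        conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    total, = conn.execute("SELECT SUM(amount) FROM sales").fetchone()
    return total

print(run_etl([(" Alice ", 10.0), ("BOB", 5.0), ("eve", -1.0)]))  # 15.0
```

Real pipelines in tools like Azure Data Factory or Databricks follow the same extract/transform/load shape, just with distributed storage and orchestration in place of an in-memory database.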
Data Engineer
Location: Bangalore
Experience: 4+ Years
Notice Period: Immediate Joiners
Key Skills
- Strong experience in Python
- Hands-on experience with Hadoop
- Experience with ELK Stack (Elasticsearch, Logstash, Kibana) – Mandatory
- Strong knowledge of SQL
- Experience in building and maintaining data pipelines
Responsibilities
- Design, build, and optimize scalable data pipelines
- Work with large-scale datasets using Hadoop ecosystem
- Implement and maintain ELK stack for data logging and monitoring
- Write efficient and optimized SQL queries
- Collaborate with engineering and analytics teams to support data-driven solutions
Job Requirements
• 3+ years of professional backend development experience with Python, and working knowledge of TypeScript.
• Solid understanding of Python frameworks (e.g., FastAPI, Django, Flask) and TypeScript backend frameworks on Node.js (e.g., NestJS, Express).
• Hands-on experience using Temporal to design and orchestrate workflows.
• Proven expertise in data extraction, normalization, and deduplication.
• Strong experience implementing proxy solutions and navigating bot-detection mechanisms (e.g., Cloudflare).
• Experience with Docker, containerized deployments, and cloud platforms such as GCP or Azure.
• Proficiency with database technologies including MongoDB and Elasticsearch.
• Demonstrated experience designing and maintaining scalable, high-performance APIs.
• Working knowledge of software testing methodologies (unit, integration, and end-to-end).
• Familiarity with CI/CD pipelines and version control systems like Git.
• Strong problem-solving abilities, attention to detail, and comfort working in agile, fast-paced environments.
• Excellent communication skills with the ability to operate effectively in ambiguous or loosely defined problem spaces.
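The extraction, normalization, and deduplication expertise listed above can be sketched in miniature: normalize a key field, then keep the first occurrence per key. The field names and the first-record-wins policy are illustrative assumptions, not this team's actual rules:

```python
def normalize(record: dict) -> dict:
    # Lowercase and trim the field used as the dedup key; fields are illustrative.
    return {
        "email": record["email"].strip().lower(),
        "name": record["name"].strip(),
    }

def deduplicate(records: list[dict]) -> list[dict]:
    # First occurrence wins; later records with the same normalized email are dropped.
    seen, out = set(), []
    for rec in map(normalize, records):
        if rec["email"] not in seen:
            seen.add(rec["email"])
            out.append(rec)
    return out

rows = [
    {"email": " A@x.com ", "name": "Ann"},
    {"email": "a@x.com", "name": "Ann B"},
    {"email": "b@x.com", "name": "Bob"},
]
print(len(deduplicate(rows)))  # 2
```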
We are seeking a skilled and detail-oriented Member of Technical Staff focusing on Network Infrastructure, Linux Administration and Automation. The role involves managing and maintaining Linux-based systems and infrastructure, automating repetitive tasks, and ensuring smooth operation.
Requirements
- In-depth experience with Linux systems (configuration, troubleshooting, networking, and administration)
- Network infrastructure management knowledge. CCNA/CCNP or an equivalent certification is a plus
- Scripting skills in at least one language (e.g., Bash, Python, Go).
- Knowledge of version control systems like Git and experience with branching, merging, and tagging workflows
- Experience with virtualization technologies such as Proxmox or VMWare, including the design, implementation, and management of virtualized infrastructures. Understanding of virtual machine provisioning, resource management, and performance optimization in virtual environments.
- Experience with containerization technologies like Docker
- Familiarity with monitoring and logging tools.
- Experience with end point security.
Responsibilities
- Network Infrastructure Management: Configure, manage, and troubleshoot routers, switches, firewalls, and wireless networks; maintain and optimize network performance to ensure reliability and security.
- Linux Administration: Manage and maintain Linux-based systems, ensuring high availability and performance.
- Infrastructure Management: Manage servers, networks, storage, and other infrastructure components, including capacity planning and disaster recovery.
- Automation: Automate operations through scripting (Bash, Python, Go, etc.) and configuration management tools (Ansible, Puppet, Chef).
- Virtualization: Design, implement, and manage virtualized environments, ensuring optimal performance and resource efficiency.
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Role: ODOO Developer
Exp: 2+ Years
Location : Chennai
Preferred : Chennai Based Candidates
Key Responsibilities
- Develop and customise Odoo modules based on business requirements.
- Design, develop, and maintain ERP applications using the Odoo framework.
- Implement and customise Odoo Manufacturing (MRP) modules including Work Orders, Bills of Materials (BoM), Routings, and Production Planning.
- Integrate third-party applications and APIs using web services.
- Work with the PostgreSQL database for data management, optimisation, and administration.
- Develop Odoo views, reports, and UI components using HTML, CSS, XML.
- Support server deployment, troubleshooting, and performance optimisation of Odoo applications.
- Understand and enhance existing Odoo functionalities and provide technical improvements.
- Collaborate with functional teams to translate business requirements into technical solutions.
- Interact with clients and functional teams to understand requirements and support project delivery.
Required Skills
- 2 years of experience in Odoo (OpenERP) development and customisation.
- Hands-on experience in Odoo Manufacturing (MRP) module implementation and customisation.
- Familiarity with Python web frameworks such as Django or Flask.
- Strong understanding of Object-Oriented Programming (OOP).
- Experience with web services and API integrations.
- Experience with PostgreSQL database management and optimisation.
- Understanding of ORM (Object Relational Mapper) frameworks.
- Knowledge of server deployment and troubleshooting.
Role Overview
We are looking for a QA Automation Engineer who can leverage AI-driven testing approaches to improve automation coverage, test reliability, and data generation.
The ideal candidate should have strong experience in backend-heavy automation testing, modern automation frameworks, and using AI tools to generate test cases, maintain test scripts, and create synthetic data for testing.
Key Responsibilities
- Design and develop automated test frameworks for backend and API-heavy applications.
- Use AI tools to generate test scripts from requirements (e.g., Gherkin/Cucumber-based test generation).
- Implement and maintain self-healing test automation frameworks that adapt to UI changes.
- Develop automated tests using Playwright, Appium, and other modern automation tools.
- Create synthetic test data using AI while ensuring PII compliance.
- Perform backend stress testing and API validation.
- Work closely with engineering teams to ensure product quality and release readiness.
- Continuously improve test coverage, test reliability, and automation efficiency.
Must-Have Skills
- 4+ years of experience in QA Automation
- Strong experience in automation testing frameworks
- Hands-on experience with Playwright for web automation
- Experience with Appium for mobile automation
- Proficiency in Python for test scripting and data generation
- Experience writing BDD-style test cases (Gherkin / Cucumber)
- Experience in API testing and backend automation
- Familiarity with AI-assisted test generation tools
- Strong knowledge of CI/CD pipelines and automated testing workflows
Relevant Skills
- Backend automation testing
- Test automation frameworks design
- AI-assisted test generation
- Synthetic test data generation
- Performance and stress testing
- API testing tools (Postman, REST clients)
- Test reporting and debugging
- Version control using Git
AI & Automation Expertise
- Using AI tools to generate test cases from requirements
- Experience with self-healing test automation frameworks such as Mabl or Testim
- Using AI to generate synthetic financial datasets for testing
- Testing AI-powered applications or AI features
Tools & Technologies
- Playwright
- Appium
- Python
- Cucumber / Gherkin
- CI/CD tools
- Git
Strong Plus
- Experience working in the Finance / FinTech sector
- Experience testing AI-powered applications
- Experience working closely with AI engineering teams
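A small sketch of the synthetic, PII-free test data generation described above: a seeded RNG yields reproducible fixtures with opaque IDs in place of real customer attributes. The field names and value ranges are invented for illustration:

```python
import random

def synthetic_transactions(n: int, seed: int = 42) -> list[dict]:
    # A seeded RNG makes fixtures reproducible across test runs;
    # opaque IDs stand in for customer names, so no real PII appears.
    rng = random.Random(seed)
    return [
        {
            "id": f"txn-{i:04d}",
            "customer": f"user-{rng.randint(1, 50):03d}",
            "amount": round(rng.uniform(1.0, 500.0), 2),
            "currency": rng.choice(["INR", "USD", "EUR"]),
        }
        for i in range(n)
    ]

data = synthetic_transactions(3)
print([row["id"] for row in data])  # ['txn-0000', 'txn-0001', 'txn-0002']
```

AI-assisted generators serve the same purpose at larger scale, but the compliance principle is identical: fixtures must be realistic in shape while containing no real customer data.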
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is one part Digital Product Studio specializing in building superior product experiences, and one part Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million-valuation companies in the US, and a handful of sister ventures for large corporations including Target, US Ventures, and Imprint Engine.
We’re a team of 100 strong from around the world, radically open-minded, who believe in excellence, respect one another, and push our boundaries further than they’ve ever been.
The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.
Responsibilities
- Act as a passionate representative of the Albert product and brand.
- Work closely with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
- Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
- Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
- Be responsible for the design and delivery of the mission-critical stack with a focus on security, resiliency, scale, and performance.
- Own end-to-end performance and operability.
- Demonstrate a clear understanding of automation and orchestration principles.
- Act as the escalation point for complex or critical issues that have not yet been documented as Standard Operating Procedures (SOPs).
- Use a deep understanding of service topology and dependencies to troubleshoot issues and define mitigations.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 1+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
- Strong experience with Infrastructure as Code (IaC), preferably using Terraform.
- Strong expertise in Python or Node.js, including designing RESTful APIs and microservices architecture.
- Strong expertise in cloud infrastructure (AWS) and platform technologies including microservices, APIs, and distributed systems.
- Hands-on experience with observability stacks including centralized log management, metrics, and tracing.
- Familiarity with CI/CD tools such as CircleCI and performance testing using K6.
- Passion for bringing more automation and engineering standards to organizations.
- Experience building high-performance APIs with low latency (<200 ms).
- Ability to work in a fast-paced environment and collaborate with peers and leaders.
- Ability to lead technically, mentor engineers, and contribute to hiring and team growth.
Good to Have
- Experience with Kubernetes and container orchestration.
- Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
- Experience building internal developer platforms (IDPs) or reusable engineering frameworks.
- Exposure to ML infrastructure or data engineering workflows.
- Experience working in compliance-heavy environments (SOC2, HIPAA, etc.).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, helping bring better products to market faster.
Why Join Albert Invent
- Work with a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- Collaborate with world-class scientists and technologists to redefine how new materials are discovered and developed.
- Culture built on curiosity, collaboration, ownership, and continuous learning.
- Opportunity to build cutting-edge AI tools that accelerate real-world R&D and solve global challenges such as sustainability and advanced manufacturing.
Mandatory Skills:
- Python (min 4 yrs)
- React.js (min 4 yrs)
- Django, FastAPI (min 4 yrs)
- Solid understanding of RESTful APIs and backend-frontend integration
- PostgreSQL / MySQL / MongoDB
We are looking for passionate and motivated Developers to join our growing technical team. The ideal candidate should have strong foundational knowledge in Python/Django or React with Django and be eager to work on real-time web development projects.
Open Positions:
Python Django Developer
React + Django Developer
Key Responsibilities:
- Develop, test, and maintain scalable web applications.
- Write clean, efficient, and reusable code using Django and/or React.
- Collaborate with UI/UX designers and backend developers to implement new features.
- Debug, troubleshoot, and optimize application performance.
- Participate in code reviews and contribute to team discussions.
- Stay updated with the latest web development trends and technologies.
Requirements:
- Basic to strong knowledge of Python and Django framework.
- Familiarity with React.js (for React + Django role).
- Understanding of REST APIs and database concepts.
- Knowledge of HTML, CSS, and JavaScript.
- Strong problem-solving and logical thinking skills.
- Good communication and teamwork abilities.
- Freshers and career restart candidates are welcome to apply.
More Info:
Company: Altos Technologies
Website: www.altostechnologies.in
Job Type: Permanent Job
Industry: IT / Web Development
Function: Software Development
Employment Type: Full-time
Location: Kochi & Chennai
We're hiring a Python Developer in Jaipur.
Not looking for someone who can recite design patterns. Looking for someone who can open a Django codebase, figure out what's broken, and fix it by end of day. 3-4 years. Django / Flask / FastAPI. REST APIs. PostgreSQL. If you've maintained production code (not just built tutorial projects) — this is your role.
Full-time | Jaipur | Industry-standard pay | Small team = real ownership
Job Title: Python Backend / GenAI Engineer (4+ Years)
Job Summary:
Looking for a Python Backend Engineer with experience in Generative AI, LangGraph workflows, data engineering, and AI evaluation using Arize AI.
Responsibilities
* Develop backend APIs using Python (FastAPI / Flask / Django)
* Build Generative AI and RAG-based applications
* Design LangGraph / agent workflows
* Create data engineering pipelines (ETL, data processing)
* Implement LLM monitoring and evaluation using Arize AI
* Integrate vector databases and AI services
* Maintain scalable and production-ready backend systems
Required Skills
* 4+ years of Python backend development
* Experience in Generative AI / LLM applications
* Knowledge of LangGraph / LangChain
* Experience in data engineering pipelines
* Familiarity with Arize AI or model evaluation tools
* Understanding of REST APIs, databases, Docker
Good to Have
* Cloud platforms (Azure / AWS)
* Vector databases (FAISS, Pinecone, Azure AI Search)
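The retrieval step of a RAG application like the one described above can be sketched without any vector database: embed documents, then rank them by cosine similarity to the query embedding. The three-dimensional vectors below are toy stand-ins for real embeddings, and the document names are invented:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, store, k=2):
    # store maps document text to its embedding; return the top-k matches.
    ranked = sorted(store, key=lambda doc: cosine(query_vec, store[doc]), reverse=True)
    return ranked[:k]

store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.0],
    "returns process": [0.8, 0.2, 0.1],
}
print(retrieve([1.0, 0.0, 0.0], store, k=2))  # ['refund policy', 'returns process']
```

Libraries such as FAISS or managed services like Azure AI Search replace the linear scan with approximate nearest-neighbor indexes, but the ranking idea is the same.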