50+ Python Jobs in India
Job Responsibilities:
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle, from requirement to release.
- Write and maintain clear technical documentation so other engineers can step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team.
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a strong understanding of web and mobile application performance.
- Mentor junior developers and foster skill development within the team.
Job Requirements:
- 1+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
Who are we:
Trendlyne is a Series-A products startup in the financial markets space, with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do:
We build powerful analytics products in the stock market space that are best in class. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API requests every month to B2B customers. Trendlyne analytics deals with hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 1-3 years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization/DevOps tooling (Docker, Kubernetes)
- Experience tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about the rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don't need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied at top-notch institutions, won intellectually demanding competitions, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
Job Responsibilities:
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle, from requirement to release.
- Write and maintain clear technical documentation so other engineers can step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team.
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a strong understanding of web and mobile application performance.
- Mentor junior developers and foster skill development within the team.
Job Requirements:
- 4+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
Who are we:
Trendlyne is a Series-A products startup in the financial markets space, with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do:
We build powerful analytics products in the stock market space that are best in class. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API requests every month to B2B customers. Trendlyne analytics deals with hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, DeepCode)
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Experience at product companies (preferably top product companies, AI-native companies, or B2B SaaS)
Mandatory (Stability): Must have at least 2 years of experience at each previous company (if less, a proper reason is expected)
Mandatory (Note): Candidates who have owned end-to-end product development or worked on app development projects during their graduation will be highly preferred.
Mandatory (Note 2): The role offers a mix of work setups, including remote, Mumbai (in-office), and Bangalore (in-office) opportunities
Role- Data Analyst
Experience- 2 to 5 years
Location-Bangalore
Job Role-
● Experience: Minimum 2+ years of professional experience in a data-heavy environment (E-commerce or Fintech experience is a plus).
● SQL Mastery: Exceptional ability to write complex joins, window functions, analytical functions, and CTEs. Experience with high-scale databases (e.g., BigQuery, Hive, or Postgres).
● Scripting: Functional knowledge of Python for data manipulation (Pandas, NumPy) and basic automation scripts.
● Systems Thinking: Ability to understand upstream data flows and how they impact downstream reporting.
● Problem-Solving: A "detective" mindset; you enjoy digging into a Rs 600Cr discrepancy until you find the root cause.
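The SQL skills listed above (complex joins, window functions, CTEs) can be illustrated with a small self-contained sketch. The `orders` table and its values are hypothetical, and this assumes Python's bundled SQLite has window-function support (SQLite 3.25+):

```python
import sqlite3

# Hypothetical orders table, used purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "a", 100.0), (2, "a", 250.0), (3, "b", 80.0), (4, "b", 40.0)],
)

# A CTE plus a window function: find each customer's largest order.
query = """
WITH ranked AS (
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
)
SELECT customer, amount FROM ranked WHERE rnk = 1 ORDER BY customer
"""
top_orders = conn.execute(query).fetchall()
print(top_orders)  # [('a', 250.0), ('b', 80.0)]
```

Swapping `RANK()` for `ROW_NUMBER()` would break ties deterministically instead of assigning equal ranks.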
We are looking for a skilled ML Engineer with 3–5 years of experience in building and deploying production-grade AI solutions, particularly around LLMs, RAG systems, and agentic AI frameworks. The role involves designing end-to-end ML architectures, optimizing models at scale, and delivering client-ready AI solutions. You will collaborate closely with stakeholders, mentor junior engineers, and drive AI projects from experimentation to production.
What will you need to be successful in this role?
Core Technical Skills
• Strong hands-on experience with Python for ML/AI (NumPy, Pandas, Scikit-learn, PyTorch/TensorFlow)
• Proven experience deploying production LLM applications with 1M+ tokens processed
• Advanced prompt engineering expertise including ReAct, meta-prompting, and function calling
• Production experience with RAG systems including hybrid search and re-ranking
• Deep understanding of embedding models and vector databases at scale
• Experience with agentic AI frameworks (LangGraph, CrewAI, or AutoGen)
• Strong knowledge of LLM evaluation frameworks (RAGAS, LLM-as-judge patterns)
• Experience implementing multi-agent systems and orchestration
• Proficiency with cloud ML platforms (AWS SageMaker, Azure ML, or Vertex AI)
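As a toy illustration of the hybrid search and re-ranking idea mentioned above: the documents and 2-D "embeddings" below are invented, and a production RAG system would use BM25, a real embedding model, and a vector database rather than this pure-Python sketch.

```python
import math

# Hypothetical corpus; vectors stand in for real embeddings.
docs = {
    "d1": ("reset your password", [0.9, 0.1]),
    "d2": ("billing and invoices", [0.1, 0.9]),
    "d3": ("password security tips", [0.8, 0.3]),
}

def keyword_score(query, text):
    # Fraction of query terms present in the document (a stand-in for BM25).
    terms = query.lower().split()
    return sum(t in text for t in terms) / len(terms)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_rank(query, query_vec, alpha=0.5):
    # Weighted blend of lexical and vector similarity, then sort (the re-rank step).
    scored = {
        doc_id: alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, vec)
        for doc_id, (text, vec) in docs.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

print(hybrid_rank("password reset", [0.95, 0.05]))  # ['d1', 'd3', 'd2']
```

The `alpha` weight controls the lexical/semantic blend; tuning it per corpus is a common first optimization.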
Advanced Capabilities
• Experience with model fine-tuning (LoRA, QLoRA, PEFT, instruction tuning)
• Knowledge of knowledge graphs and graph-based RAG implementations
• Understanding of model hosting, inference optimization, and cost management
• Experience with MLOps pipelines, CI/CD for ML, and model versioning
• Ability to architect end-to-end ML solutions from data ingestion to deployment
• Experience with data pipelines and ETL for ML workflows
• Proficiency in containerization and orchestration (Docker, Kubernetes)
Client Engagement & Delivery
• Experience presenting technical solutions to clients and stakeholders
• Ability to translate business requirements into technical ML solutions
• Track record of delivering client POCs and production implementations
• Experience creating technical documentation and implementation guides
Good to have
• Experience hosting private LLMs (7B-13B models on-premises or cloud)
• Knowledge of graph databases (Neo4j) and graph neural networks
• Experience with streaming and real-time ML inference
• Published research papers or contributions to open-source ML projects
• DeepLearning.AI certifications in Agentic AI, RAG, or Finetuning
• AWS/Azure ML certifications or working towards them
Competencies
• Excellent verbal and written communication skills
• Strong mentoring ability for junior ML engineers
• Self-driven with ability to work independently on complex problems
• Excellent problem-solving skills with systematic debugging approach
• Proactive ownership of projects from ideation to deployment
• Ability to stay current with rapidly evolving AI/ML landscape
• Excellent academic record – B.E./B.Tech or MCA
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
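The ETL/ELT fundamentals above can be sketched as a minimal extract-transform-load pass in plain Python. The CSV feed, table name, and quality rule here are all invented for illustration, standing in for a real GCS/BigQuery or dbt workflow:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in a real pipeline this would come from GCS/S3.
raw = (
    "date,region,revenue\n"
    "2024-01-01,IN,100\n"
    "2024-01-01,US,not_available\n"
    "2024-01-02,IN,250\n"
)

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with unparseable revenue and cast types (a minimal quality gate).
    clean = []
    for r in rows:
        try:
            clean.append((r["date"], r["region"], float(r["revenue"])))
        except ValueError:
            continue
    return clean

def load(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE revenue (date TEXT, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO revenue VALUES (?, ?, ?)", rows)
    return conn

conn = load(transform(extract(raw)))
total = conn.execute("SELECT SUM(amount) FROM revenue").fetchone()[0]
print(total)  # 350.0
```

In Dataform or dbt, the `transform` step would instead be expressed as SQL models with data tests attached.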
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming along with frameworks like Django/FastAPI/Flask, and Java frameworks like Spring, Hibernate, Spring Boot, etc.
● Ability to debug and resolve technical issues that arise during development or after deployment.
● Experience in databases including MySQL and NoSQL
● Experience in designing, developing and maintaining high availability systems.
● Experience in MVC pattern, Tomcat, Git, and Jira.
● Experience working with AWS cloud platform.
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Strong analytical and data driven approach to problem solving
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
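The SCD handling mentioned under "Architectural Sovereignty" can be sketched minimally. This is a hypothetical in-memory Type 2 merge (close the current row, open a new one when a tracked attribute changes), not the team's actual warehouse logic, and the table layout is invented:

```python
from datetime import date

# Hypothetical dimension table: one current row per patient.
dim = [
    {"patient_id": 1, "clinic": "North", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def scd2_upsert(dim, patient_id, clinic, as_of):
    # Find the current row for this patient, if any.
    current = next(
        (r for r in dim if r["patient_id"] == patient_id and r["is_current"]), None
    )
    if current and current["clinic"] == clinic:
        return dim  # no change: nothing to do
    if current:
        # Close out the old version of the row.
        current["valid_to"] = as_of
        current["is_current"] = False
    # Open a new current version.
    dim.append({"patient_id": patient_id, "clinic": clinic,
                "valid_from": as_of, "valid_to": None, "is_current": True})
    return dim

scd2_upsert(dim, 1, "South", date(2024, 6, 1))
print([r["clinic"] for r in dim if r["is_current"]])  # ['South']
```

In a warehouse this same logic is usually a set-based `MERGE` statement rather than row-at-a-time Python.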
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To take your resume forward to the next stage, please fill out the Google form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Leading drive specialist for machine and plant engineering
Job Details
- Job Title: Sr. Python Automation Developer
- Industry: Engineering
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
• Designation – Python Automation Developer (Sr./ Advanced Sr.)
• Experience: 7 to 9 years.
• Qualifications: B.E./MCA/M.Sc./B.Sc.
• Location: Pune (near Sangamwadi)
Skills & Technologies:
Mandatory:
• Experience using OOP in Python/Java/C++/C# (candidates with Java/C++/C# experience must be willing to learn Python and work in it)
• Good analytical, design, coding, and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate requirements into design, and design into fully functional & efficient code.
• Good English communication skills.
Desirable:
• Working experience on any defect management tool
• Working experience with Git/SVN or any code repository management tool
• Development using Eclipse or an equivalent IDE
Behaviors:
• Good team player
• Openness to learn new technologies.
• Self-motivated and proactive
• Should work with minimal supervision.
• Should be able to supervise juniors.
• Take ownership
Must-Haves
• 5.9 years of relevant experience using OOPs in Python/Java/C++/C#
(If the candidate has experience in Java/C++/C#, they must be willing to learn Python and work in it.)
• Good analytical, design, coding, and debugging skills
• Good analytical and requirement-understanding skills.
• Good design patterns, frameworks & coding skills – able to translate requirements into design, and design into fully functional & efficient code.
• English communication skills.
About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment
About NonStop:
NonStop is a software services company at the intersection of bioinformatics, genomics, and healthcare technology. We partner with biotech firms, pharma organizations, genomics labs, and clinical institutions to design and deliver production-grade bioinformatics software, AI-powered analytical platforms, and end-to-end genomic data pipelines.
We work on problems that matter: from accelerating variant interpretation workflows and building HIPAA-compliant AI platforms, to orchestrating large-scale multi-omics pipelines for disease diagnostics and pharmacogenomics. Our team blends deep domain expertise with engineering rigor, and we're growing to meet the increasing demand from the life sciences industry for smart, scalable, and compliant bioinformatics solutions.
We work this way:
- We dig deep into biological problems, not just the code. Domain knowledge is valued as much as engineering craft.
- Bioinformatics is a team sport. You'll work alongside software engineers, clinicians, and research scientists.
- You own your work end-to-end, from design to delivery. We trust you to make good decisions and learn fast.
- Life sciences move fast. We encourage continuous learning, conference participation, and staying ahead of the field.
Your role:
As a Bioinformatics Engineer at NonStop, you will be a key contributor in designing, building, and maintaining bioinformatics software solutions and analytical pipelines for our clients across genomics, clinical diagnostics, and precision medicine. You'll bring both biological insight and engineering excellence to every project, collaborating with product, engineering, and scientific teams to deliver solutions that are scalable, reproducible, and compliant.
In this role, you will:
- Build scalable bioinformatics applications and pipelines for efficient processing of genomic, transcriptomic, and multi-omics data.
- Produce high-quality, detailed documentation for all projects, pipelines, tools, APIs, and analytical methods.
- Provide technical consultation and solutions across cross-functional bioinformatics projects.
- Coach and mentor team members through knowledge sharing, code reviews, and pairing on domain-specific challenges.
- Ensure compliance with our SDLC process throughout the product development lifecycle.
- Stay current with evolving bioinformatics technologies and evangelize technical excellence within the team.
We’re looking for:
- Minimum 2 to 3 years of experience in designing, developing, and maintaining bioinformatics solutions.
- Master's degree in Bioinformatics, Computational Biology, or a closely related technical discipline.
- Strong understanding of genomic data analysis, variant calling, targeted sequencing, whole-exome (WES), and whole-genome sequencing (WGS) workflows.
- Hands-on experience with RNA-seq analysis, including differential expression and transcriptomic workflows.
- Proficiency in variant interpretation and ACMG/AMP classification workflows is a plus.
- Knowledge of algorithms and computational model development applied to biological data.
- Strong foundation in statistics and data analysis as applied to genomics and bioinformatics.
- Experience developing and debugging bioinformatics pipelines using Nextflow, Snakemake, WDL, or CWL.
- Proficiency in shell scripting and Linux/Unix environments for NGS data analysis.
- Familiarity with workflow automation and best practices in reproducible pipeline design.
- Excellent programming skills in Python (primary); familiarity with R or Java is a plus.
- Proficiency with standard bioinformatics tools (GATK, DeepVariant, VEP, ANNOVAR, MultiQC, FastQC, etc.).
- Experience with relational databases (PostgreSQL, MySQL, or Oracle) and NoSQL databases (MongoDB).
- Confident use of Git and GitHub for version control and collaborative development.
- Experience with cloud computing platforms (AWS, GCP, or Azure) for bioinformatics workloads.
- Familiarity with high-performance computing (HPC) environments is a plus.
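To illustrate the kind of NGS quality-control logic these pipelines automate, here is a minimal pure-Python sketch (toy reads and hypothetical helper names; production work would orchestrate tools like FastQC and MultiQC through Nextflow or Snakemake):

```python
# Illustrative sketch: a FastQC-style per-read QC pass over FASTQ records,
# computed in pure Python. All names and data here are invented for demonstration.

def parse_fastq(lines):
    """Yield (read_id, sequence, quality) tuples from FASTQ-formatted lines."""
    it = iter(lines)
    for header in it:
        seq = next(it)
        next(it)            # '+' separator line
        qual = next(it)
        yield header[1:].strip(), seq.strip(), qual.strip()

def gc_content(seq):
    """Fraction of G/C bases in a read."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def mean_phred(qual, offset=33):
    """Mean Phred quality score (Sanger/Illumina 1.8+ encoding, offset 33)."""
    return sum(ord(c) - offset for c in qual) / len(qual)

fastq = [
    "@read1", "GCGCATAT", "+", "IIIIHHHH",
    "@read2", "ATATATAT", "+", "!!!!!!!!",
]
for rid, seq, qual in parse_fastq(fastq):
    print(rid, round(gc_content(seq), 2), round(mean_phred(qual), 1))
# read1 0.5 39.5
# read2 0.0 0.0
```

Real pipelines would apply checks like these across millions of reads and feed the summaries into downstream variant-calling stages.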
Why join NonStop:
- Work on real-world genomics and clinical bioinformatics problems that directly impact patient care and scientific discovery.
- Collaborate with life sciences clients at the cutting edge, from rare disease diagnostics to AI-assisted bioinformatics platforms.
Who are we aka "About Us":
We are an early-stage fintech startup working on exciting fintech products for some of the top five global banks while building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker Street Fintech Pvt Ltd (our parent company) might be the place for you. We have a flat, ownership-oriented culture and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won Webby Awards for Digital Strategy. In short, a bleeding-edge team.
As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries.
What we are looking for, a.k.a. “The JD”:
We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.
Responsibilities:
- Write and maintain complex SQL queries.
- Utilize Python for data manipulation, analysis, and visualisation, using libraries such as pandas, matplotlib, and psycopg.
- Perform database optimization, indexing, and query tuning to ensure high performance.
- Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
- Design, configure, and maintain PostgreSQL databases
- Set up and manage database clusters, replication, and backups for disaster recovery
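For a flavour of the query-tuning work above, here is a minimal sketch using SQLite from Python's standard library in place of PostgreSQL (table and data are made up for illustration):

```python
# Minimal sketch of SQL query tuning: add an index, then check the query plan.
# SQLite stands in for PostgreSQL here; the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
cur.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                [("INFY", 10), ("TCS", 5), ("INFY", 7)])

# Without an index this filter is a full table scan; the index turns it into a seek.
cur.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT symbol, SUM(qty) FROM trades "
    "WHERE symbol = ? GROUP BY symbol", ("INFY",)).fetchall()
print(plan)  # the plan detail should mention idx_trades_symbol

total = cur.execute(
    "SELECT SUM(qty) FROM trades WHERE symbol = ?", ("INFY",)).fetchone()[0]
print(total)  # 17
```

In PostgreSQL the equivalent workflow uses `EXPLAIN ANALYZE`, but the habit is the same: verify that filters hit an index before shipping the query.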
Preferred Qualifications:
- Intermediate-level Excel skills for data analysis and reporting.
- Strong communication skills to present findings and recommendations effectively to both technical and non-technical stakeholders.
- Detail-oriented mindset with a commitment to data accuracy and quality.
(Only applicants who have completed their educational commitments should apply.)
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- Has worked at (0–1.5 years preferably), or is looking to work at, an early-stage startup.
- Is ready to be part of a Zero to One journey, which means you will be involved in building fintech products and processes from the ground up.
- Is comfortable working in an unstructured environment with a small team, where you decide what your day looks like and take the initiative to pick up the right piece of work, own it, and work with the founding team on it.
- Does not need someone checking in every few hours. It is up to you to schedule check-ins whenever you find the need; otherwise we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- Wants complete ownership of the role and the freedom to drive it the way you think is right.
- Is a self-starter who takes ownership of deliverables, builds consensus with the team on approach and methods, and delivers on them.
- Is looking to stick around for the long term and grow with the company.

AI & ML (45 Days – Live Hands‑On)
Program Fee: ₹25,000
Duration: 45 Days
Mode: Hybrid (Online Sessions + Live Lab Access)
Eligibility: Freshers, Final‑Year Students, and Career Switchers
About the Internship
This intensive 45‑day internship program is designed for freshers who want to build strong, industry‑relevant skills in Cloud Infrastructure, Cybersecurity, and AI/ML model development. The program offers hands‑on training in live production environments, allowing interns to work on real-world architectures, security workflows, and AI/ML deployments.
Participants will receive end‑to‑end exposure to modern cloud platforms, DevOps practices, security operations, and machine learning deployment, making them job‑ready for roles like:
- Cloud/Infra Engineer
- DevOps Engineer
- Security Analyst
- AI/ML Engineer
- Site Reliability Engineer (SRE)
Key Highlights
- 45 days of practical, mentor-led training
- Live production-style projects and deployments
- Hands‑on experience with:
- AWS / Azure Cloud
- Terraform, CI/CD, Docker, Kubernetes
- Security Hardening & IAM
- Python ML pipelines & model deployment
- Architect & deploy real systems using best practices
- Build portfolio-ready projects
- Receive an industry-recognized Internship Certificate
What You Will Learn
1️⃣ Cloud Infrastructure & DevOps
- Cloud fundamentals (AWS/Azure/GCP)
- Linux administration & scripting
- VPC, Subnets, Routing, NAT, Firewalls
- EC2 provisioning & autoscaling
- Load balancing & High Availability
- Terraform for Infrastructure as Code
- CI/CD pipelines using Jenkins / GitHub Actions
- Docker containerization & Kubernetes basics
2️⃣ Cybersecurity & Cloud Security
- IAM roles, policies, access control
- Server security & hardening
- SSL/TLS, encryption, key management
- Secure VPC & subnet design
- Threat detection & logging
- Secrets management
- Network segmentation & firewall best practices
3️⃣ AI & Machine Learning
- Python for ML
- Supervised and unsupervised algorithms
- Data preprocessing & model training
- Model evaluation & optimization
- Build ML inference APIs using FastAPI/Flask
- Containerize and deploy ML models to cloud
- Integrate monitoring for ML workflows
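To give a sense of the train/evaluate loop covered in this module, here is a toy supervised-learning sketch using a 1-nearest-neighbour classifier in pure Python; a real project would use scikit-learn or similar, and the data here is invented:

```python
# Toy supervised learning: train on labelled points, predict, measure accuracy.
# Pure Python (no external libraries); data and labels are made up.
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(train, point):
    """Label of the training sample closest to `point` (1-NN)."""
    return min(train, key=lambda s: euclidean(s[0], point))[1]

def accuracy(train, test):
    """Fraction of test samples the model labels correctly."""
    correct = sum(predict(train, x) == y for x, y in test)
    return correct / len(test)

train = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"),
         ((1.0, 1.0), "high"), ((0.9, 1.1), "high")]
test = [((0.2, 0.1), "low"), ((1.1, 0.9), "high")]

print(accuracy(train, test))  # 1.0
```

The same train/predict/evaluate shape carries over directly once the model is a scikit-learn estimator and the evaluation uses a proper held-out split.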
✅ Live Hands‑On Projects
Interns will work on real-world, production-grade projects such as:
- Deploy a secure 3‑tier web application on cloud
- Automate infra provisioning using Terraform
- Build CI/CD pipelines for automated deployments
- Harden servers & configure security groups, IAM
- Develop and deploy an ML model as a cloud API
- Create monitoring dashboards with Prometheus/Grafana
- End-to-end system deployment with logging and alerting
Each intern will complete a Capstone Project and present it during the final evaluation.
✅ Internship Deliverables
- Internship Completion Certificate
- 3+ Production‑level projects
- GitHub portfolio with all code and deployments
- Cloud & ML documentation
- Resume enhancement and guidance
- Career mentoring + interview preparation
✅ Who Should Apply
This internship is ideal for:
- Fresh graduates
- Final‑year engineering or IT students
- BSc, BCA, MCA, B.Tech learners
- Professionals switching careers to Cloud/DevOps/AI
- Anyone seeking hands‑on, real‑time industry experience
✅ Program Fee
₹25,000/- (includes training, labs, live‑project access, certificate, and mentorship)
✅ Certificate Provided
All participants will receive a verified Internship Certificate, including:
- Candidate Name
- Internship Duration & Dates
- Skills Covered
- Project Evaluation Score
- Authorized Signatory & Company Seal
We have an urgent opening for a highly skilled and passionate professional for the below role:
Quick Role Overview:
- Role: Python Automation Developer
- Location: Pune (Near Sangamwadi – Metro Connectivity)
- Working Model: Hybrid (4 Days Work from Office)
- Experience: 6 – 9 Years (Minimum 5.9+ Years in OOP Development)
- Qualification: B.E. / MCA / M.Sc. / B.Sc.
- Notice Period: Early Joiners Preferred (15–30 Days Max)
Job Description
We are looking for a strong Python Automation Developer with solid Object-Oriented Programming expertise. This role is ideal for professionals who are strong in Java / C++ / C# and are willing to transition into Python (if not already working in Python).
You will be responsible for designing, developing, and maintaining high-quality automation solutions while translating business requirements into scalable and efficient technical implementations.
This is an excellent opportunity to work in a German-based product company offering strong work-life balance and a global work culture.
Key Responsibilities
- Design and develop automation solutions using Python (preferred) or other OOP-based languages.
- Translate functional requirements into scalable technical designs.
- Apply strong design patterns and coding best practices.
- Write clean, efficient, maintainable, and well-documented code.
- Debug, troubleshoot, and optimize performance issues.
- Work closely with cross-functional teams in a global environment.
- Supervise and mentor junior team members when required.
- Take complete ownership of assigned modules.
Desired Skills & Competencies
Must-Have Skills:
- 5.9+ years of relevant experience in OOP (Python / Java / C++ / C#)
- Strong analytical, coding, debugging, and design skills
- Excellent understanding of design patterns and frameworks
- Ability to convert requirements → design → fully functional implementation
- Strong problem-solving mindset
- Good English communication skills
- Ability to work independently with minimum supervision
(Candidates with Java/C++/C# background must be willing to work in Python.)
Good to Have:
- Experience with defect management tools
- Experience with Git / SVN or other code repository management tools
- Experience with Eclipse IDE or equivalent IDE
ROLE SUMMARY
The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.
ROLE RESPONSIBILITIES
- Design and develop Python and Django applications that are scalable, secure, and maintainable.
- Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
- Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services.
- Create and optimize SQL queries, database structures, and data access logic to support application features.
- Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
- Implement accurate data mappings and transformations in accordance with business and technical requirements.
- Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
- Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
- Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
- Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
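As a sketch of the data-mapping responsibility above, the following illustrates transforming an external service payload into an internal record via a declarative field map (all field names here are hypothetical, not from any real integration):

```python
# Minimal data-mapping sketch: a declarative map drives the transformation
# from a flat external payload into a nested internal record.
# Field and section names are invented for illustration.
FIELD_MAP = {
    "cust_id":   ("customer", "id"),
    "full_name": ("customer", "name"),
    "amt":       ("order", "total"),
}

def transform(payload, field_map=FIELD_MAP):
    """Map a flat external payload into a nested internal record."""
    record = {}
    for src_key, (section, dest_key) in field_map.items():
        record.setdefault(section, {})[dest_key] = payload.get(src_key)
    return record

external = {"cust_id": 42, "full_name": "A. Rao", "amt": 1999.0}
print(transform(external))
# {'customer': {'id': 42, 'name': 'A. Rao'}, 'order': {'total': 1999.0}}
```

Keeping the mapping declarative, rather than hard-coding each field, makes it straightforward to review against the business requirements and to extend when a new integration field arrives.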
TECHNICAL QUALIFICATIONS
- 7+ years of hands‑on experience with Python and Django, including complex application development.
- 5+ years of experience with SQL development, optimization, and database design.
- 1–2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
- Deep expertise in application architecture, system integration, and service‑oriented design.
- Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
- Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.
GENERAL QUALIFICATIONS
- Exceptional verbal and written communication skills.
- Strong analytical, problem‑solving, and architectural reasoning abilities.
- Demonstrated leadership experience with the ability to guide and mentor technical teams.
- Proven ability to work effectively in fast‑paced, collaborative environments.
EDUCATION REQUIREMENTS
- Bachelor’s degree in Computer Science, MIS, or a related field.
- Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who we seek
We are looking for a Fullstack Developer Intern to join our Engineering team (freshers can apply too). You’ll build and improve internal products. This is a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive, innovative AI-based software solutions that meet our business needs.
We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.
What You Will Be Doing
- Build user-facing features using Next.js and TypeScript.
- Convert designs into responsive UI using Tailwind CSS and reusable components.
- Work with APIs to integrate frontend with backend services.
- Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
- Fix bugs, write clean code, and improve performance.
- Collaborate in a PR-based workflow on GitHub.
- Write and maintain documentation for the features you ship.
- Learn and apply best practices: component structure, state management, error handling, accessibility basics.
What We’re Looking For
- Basic to intermediate experience with JavaScript and Next.js.
- Familiarity with TypeScript basics.
- Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
- Understanding of how APIs work and how to consume them from the frontend.
- Strong Git knowledge.
- Strong learning mindset, ownership, and attention to detail.
Benefits
- Work directly with founders and the leadership team.
- Drive projects that create real business impact, not busywork.
- Gain practical skills that traditional education misses.
- Experience rapid growth as you tackle meaningful challenges.
- Fuel your career journey with continuous learning and advancement paths.
- Thrive in a workplace where collaboration powers innovation daily.
Company Overview:
Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. A recognized innovator and industry leader, Planview enables organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.
As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.
Responsibilities (What you will do):
- Guide the professional development of Engineers and support the teams to accomplish business goals
- Work closely with leaders in Israel to align on priorities and architect, deliver, and manage our products
- Build systems that are secure, scalable, and self-healing.
- Manage and improve deployment pipelines.
- Triage and remediate production issues.
- Participate in on-call rotations for escalations.
Qualifications (What you will bring):
- Bachelor's degree in CS or equivalent experience in a related field.
- 2+ years managing Engineering teams.
- 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
- 5+ years administering Linux and Windows environments.
- 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
- Strong technical knowledge of operating systems (Linux and Windows), virtualization, storage systems, networking, and firewall implementations
- Experience maintaining production environments on-premise (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
- Solid understanding of networking principles and how they apply to data flow and security.
- Experience automating deployments of cloud-based services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
- Experience managing CI/CD infrastructure, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
- Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
- Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
- Worked within Agile and Lean software development teams.
- Experience working in globally distributed teams.
- Ability to see the big picture and manage risks.
About Shopflo
At Shopflo, we're trying to change the way consumers experience brands and businesses. Our first product was a cart and checkout platform for e-commerce, that allowed marketers to personalise discounts, rewards, and payments. We are currently also working on a new product that takes it a notch higher by unlocking enterprise-grade personalization for all consumer tech businesses.
Team & Company
Shopflo was founded by three co-founders:
- Ankit Bansal (ex-IIT Kharagpur, Oracle, Gupshup)
- Ishan Rakshit (ex-IIT Bombay, Parthenon, Elevation Capital)
- Priy Ranjan (ex-IIT Madras, McKinsey, Elevation Capital)
We’re a fast-growing team of ~50 people, based in HSR Layout, Bengaluru. We raised a $3.8M seed round from Tiger Global and TQ Ventures.
What you will do
- Design and develop microservices that can work in a large-scale multi-tenant environment.
- Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability.
- Work with a cross-discipline team spanning Design, Product, Data Science, and Analytics.
- Deploy and maintain the application in a secured AWS environment.
- Take ownership from the ideation phase to deployment and maintenance.
- Actively participate in the hiring process to bring world-class programmers into the team.
You should apply if you have:
- 2-4 years of experience in server-side development
- Strong programming skills in Java, Python, Node or Golang
- Hands-on experience in API development and frameworks such as Spring, Node, or Django.
- Good Understanding of SQL and NoSQL databases.
- Experience in test-driven development (writing unit tests and API tests).
- Understanding of basic cloud computing concepts and experience in using any of the major cloud service providers(AWS/GCP/Azure).
- Ability to build and deploy the application in a containerized environment.
- Understanding of application logging and monitoring systems like Prometheus or Kibana.
- B.E./B.Tech/M.E./M.Tech/M.S. from a reputed university with a good academic record.
- Curiosity to explore cutting-edge technologies and bake them into the products.
- Zeal and drive to take end-to-end ownership.
Role: Sr. Azure Data Engineer
Experience: 8–10 Years
Work Timings: 1:30 PM – 10:30 PM IST
Location: Bellandur Bengaluru (Work from Office)
Company: Chevron
Employment Type: 6–12 Months Contract
Role Overview
We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.
Key Responsibilities
- Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
- Define and implement scalable data Lakehouse architectures aligned with OSDU standards
- Build and manage end-to-end data pipelines for batch and real-time processing using PySpark
- Establish data governance frameworks including metadata, lineage, security, and access control
- Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
- Collaborate with stakeholders to translate business needs into technical solutions
- Develop and maintain architecture documentation, solution patterns, and standards
- Provide technical leadership and mentorship to engineering teams
- Optimize solutions for performance, cost, reliability, and security
- Ensure alignment with enterprise architecture and compliance standards
- Drive adoption of modular and reusable cloud data components
Required Skills & Qualifications
Core Technical Skills
- Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
- Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
- Strong experience in Python-based data engineering
- Data pipeline development (batch + real-time)
Architecture & Advanced Skills
- Data Lakehouse architecture and distributed systems
- Microservices, APIs, and integration frameworks
- OSDU (Open Subsurface Data Universe) or similar energy data models
DevOps & Tools
- CI/CD tools: Azure Pipelines, GitHub Actions
- Infrastructure as Code: Terraform or similar
Other Skills
- Data governance, security, compliance, and cost optimization
- Strong analytical and problem-solving skills
- Excellent communication and stakeholder management
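Conceptually, the batch pipelines described above chain extract, filter, and aggregate stages; the following pure-Python sketch mirrors what a PySpark job would express with DataFrame transformations on Databricks (record fields are invented for illustration):

```python
# Conceptual batch-pipeline sketch in pure Python. A real implementation
# would use PySpark DataFrames; field names here are hypothetical.
from collections import defaultdict

def extract():
    """Stand-in for reading raw records from ADLS."""
    return [
        {"well": "W1", "depth_m": 1200, "valid": True},
        {"well": "W1", "depth_m": 1300, "valid": True},
        {"well": "W2", "depth_m": 900,  "valid": False},
        {"well": "W2", "depth_m": 950,  "valid": True},
    ]

def clean(records):
    """Filter stage: drop invalid rows (what a PySpark .filter() would do)."""
    return [r for r in records if r["valid"]]

def aggregate(records):
    """Aggregate stage: max depth per well (like .groupBy().agg(max))."""
    out = defaultdict(int)
    for r in records:
        out[r["well"]] = max(out[r["well"]], r["depth_m"])
    return dict(out)

result = aggregate(clean(extract()))
print(result)  # {'W1': 1300, 'W2': 950}
```

Structuring a pipeline as small named stages like this is what makes the "modular and reusable cloud data components" goal above practical: each stage can be tested, swapped, and monitored independently.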
What we are looking for:
We are looking for a motivated AI Developer with 1–3 years of experience to join our team and build cutting-edge applications powered by Large Language Models (LLMs). You will work on designing, developing, and optimizing intelligent systems using modern AI frameworks and tools.
Responsibilities:
- Design and develop applications leveraging Large Language Models (LLMs)
- Build and optimize RAG (Retrieval-Augmented Generation) pipelines
- Work with frameworks like LangChain, LangGraph, or similar LLM orchestration tools
- Integrate and manage vector databases (e.g., Pinecone, Weaviate, Qdrant, FAISS)
- Implement prompt engineering strategies and improve model responses
- Develop scalable and efficient AI system architectures
- Monitor, debug, and optimize LLM applications using observability tools (e.g., Langfuse or similar)
- Collaborate with backend and product teams to integrate AI features into production systems
- Stay updated with the latest advancements in AI/LLM ecosystem
Skills:
- 1–3 years of hands-on experience in AI/ML or backend development with AI exposure
- Strong understanding of LLMs and generative AI concepts
- Experience with LangChain, LangGraph, or similar frameworks
- Practical experience with RAG architectures and pipelines
- Hands-on experience with vector databases (e.g., Pinecone, Qdrant, Weaviate, FAISS)
- Familiarity with observability tools like Langfuse, Helicone, or similar
- Proficiency in Python (preferred) or Node.js
- Experience working with APIs (OpenAI, Anthropic, etc.)
- Understanding of embeddings, chunking, and retrieval strategies
- Good problem-solving and debugging skills
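The retrieval step at the heart of a RAG pipeline can be sketched with cosine similarity over embeddings; the 3-dimensional vectors below are toy values, and a production system would use model-generated embeddings stored in a vector database such as FAISS, Qdrant, or Weaviate:

```python
# Toy RAG retrieval: rank document chunks by cosine similarity to a query
# embedding. Chunk ids and vector values are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=2):
    """Return the k chunk ids most similar to the query embedding."""
    ranked = sorted(chunks.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [cid for cid, _ in ranked[:k]]

chunks = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-info": [0.1, 0.9, 0.1],
    "api-auth":      [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "how do refunds work?"
print(top_k(query, chunks))  # ['refund-policy', 'shipping-info']
```

Frameworks like LangChain wrap exactly this pattern behind retriever abstractions; the retrieved chunks are then injected into the LLM prompt as context.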
Experience:
- 1-3 years of experience in AI application development
Remuneration: Industry Standard basis experience
Job Title: Senior Linux Kernel Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Domain: Enterprise Linux / Kernel Development
Job Summary
We are seeking a highly skilled Senior Linux Kernel Engineer with deep expertise in kernel development, debugging, and performance optimization. The role involves working on enterprise-grade Linux distributions, kernel lifecycle management, security patching, and low-level hardware integration.
Key Responsibilities
1. Kernel Lifecycle & Maintenance
- Lead kernel upgrade strategies (e.g., LTS migrations such as 5.15 → 6.x) while ensuring stability and compatibility.
- Perform patch porting across kernel versions, resolving API and dependency conflicts.
- Track and mitigate security vulnerabilities by monitoring CVEs and upstream sources (e.g., LKML).
- Backport critical fixes to production kernels without impacting system stability.
2. Debugging & System Stability
- Act as an escalation point for kernel panics and system crashes.
- Perform post-mortem analysis using kdump, crash, and gdb.
- Debug early boot issues (UEFI, initramfs, kernel initialization).
- Conduct performance analysis using eBPF, ftrace, and perf to optimize system behavior.
3. Driver Development & Hardware Integration
- Design, develop, and maintain device drivers (network, storage, GPU, or character devices).
- Work closely with hardware through DMA, interrupts (MSI-X), and register-level programming.
- Maintain out-of-tree drivers using DKMS or similar frameworks.
- Ensure compatibility of drivers across kernel updates.
Required Technical Skills
- Programming: Strong expertise in C (mandatory) and C++
- Kernel Internals: Deep understanding of:
- Virtual File System (VFS)
- Memory Management (MMU, Paging)
- Process Scheduler
- Linux Networking Stack
- Debugging Tools:
- kdump, crash, gdb
- kprobes, trace-cmd, ftrace
- perf, valgrind
- Hardware debugging tools (JTAG, Serial Console)
- Build Systems:
- Kbuild, Makefiles
- Kernel packaging (RPM/Debian)
- Security:
- Experience with CVE patching and backporting
- Knowledge of SELinux/AppArmor
- Kernel hardening (FIPS, KSPP)
Preferred Skills
- Experience contributing to open-source kernel projects
- Familiarity with Linux Kernel Mailing List (LKML) workflows
- Exposure to enterprise Linux distributions (RHEL, Ubuntu, SUSE)
- Experience with performance tuning and system optimization at scale
1. Core Programming (C Language)
- Must have strong hands-on experience in C programming
- Comfortable with pointers, memory management, and low-level concepts
2. Kernel Internals Expertise
- Should have worked in at least one subsystem:
- VFS / File Systems
- Memory Management
- Scheduler / Networking
3. Debugging & Crash Analysis
- Experience handling kernel panics
- Hands-on with vmcore analysis tools
4. Security & Patching
- Understanding of CVE fixes and backporting
5. Driver Development
- Experience in writing or maintaining device drivers
6. Performance & Advanced Debugging
- Exposure to eBPF, ftrace, perf
7. Hardware-Level Understanding
- Knowledge of DMA, interrupts, hardware interaction
Soft Skills
- Strong analytical and problem-solving abilities
- Excellent communication skills
- Ability to work independently and in collaborative environments
- Quick learner with adaptability to new technologies
Job Title: Cloud Development & Linux Debugging Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Job Summary
We are looking for an experienced Cloud Development & Linux Debugging Engineer with strong expertise in Linux internals, system-level programming, and cloud technologies. The ideal candidate will have hands-on experience in developing, debugging, and optimizing Linux-based systems along with exposure to DevOps tools and containerized environments.
Key Responsibilities
- Develop and debug software at the Linux system level (kernel/user space).
- Work on Linux internals, low-level system components, and performance optimization.
- Design, develop, and maintain applications using Python and C/C++.
- Troubleshoot complex issues in Linux and cloud-based environments.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Contribute to automation and infrastructure using DevOps tools.
- Work with containerized and cloud platforms such as Kubernetes and OpenStack.
Required Skills
- Strong experience in Linux software development (Linux internals, system-level programming).
- Proficiency in Python and C/C++.
- Solid debugging and analytical skills.
- Hands-on experience with Ansible, Puppet, and DevOps practices.
- Experience working with OpenStack and Kubernetes.
- Good understanding of Agile/Scrum methodologies.
- Excellent communication and teamwork skills.
Preferred Skills (Good to Have)
- Experience with Go / Golang and Go templating.
- Knowledge of Kubernetes Operators and Helm.
- Exposure to containerization technologies (Docker, Kubernetes).
- Contributions to open-source projects.
- Experience with cloud-native architectures.
Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
- Self-driven individual with a strong learning mindset.
- Ability to work independently and in collaborative team environments.
About Evatt AI
Evatt AI is a scale‑up on a mission to make advanced legal reasoning and document understanding accessible through natural language. Over the past two years we’ve combined vector search and large language models to give lawyers instant access to case law and legislation. We’re entering a new expansion phase: building an all‑in‑one legal workplace platform that unifies a searchable casebase (like AustLII/Jade.io), agentic workflows, practice management tools and Microsoft Word integrations—delivering the AI assistance analogous to Harvey with the casebase power of Lexis AI+. To achieve this, we’re growing our development team in Bangalore and seeking a Head of Engineering who can own the technical vision and build out our team in India.

Tech Lead (India) — Help Build WebLager’s Next Engineering Hub
Location: India
Team: Product & Development
Reporting to: Head of Product & Development (Denmark)
Why this role exists
WebLager is scaling fast, and 2026 will be a breakout year. We’re building an Indian IT office that’s not an outsourced extension of Denmark.
This is a real “build it right from day one” leadership role.
You’ll be our right hand in India — shaping the team, culture, and delivery. If you want to build something meaningful that’s expected to grow a lot next year, keep reading.
What you’ll do
You’re not here to babysit Jira. You’re here to ship, lead, and raise the bar.
● Build and lead our India engineering team from early stage into real scale in 2026.
● Set standards for quality and delivery — clean code, stable systems, smart execution.
● Coach and grow people across levels: students, juniors, mid-levels, seniors.
● Create a local WebLager community that feels like one company, not two offices.
● Work tightly with Denmark on product, architecture, and delivery — as a partner, not a follower.
● Stay hands-on: design, code, review, refactor, deploy.
● Scale enterprise systems: performance, reliability, maintainability, observability.
● Improve how we work: CI/CD, engineering rituals, docs that matter, fewer surprises.
● Be the technical anchor when things are complex, messy, or moving fast.
What you bring
We don’t care about buzzwords. We care about proof you can build and lead.
Must-haves:
● 5+ years as a developer, with real production systems behind you.
● Strong backend skills: ideally Python or another scripting language, plus Java/C# or similar, along with extensive knowledge of both relational and non-relational databases.
● Frontend experience with a reactive framework like Angular, React, Vue, etc.
● Experience scaling enterprise-grade systems and making architecture tradeoffs that hold up.
● You’ve led people before (formally or naturally) and enjoy helping others grow.
● Excellent problem-solving skills — you don’t freeze when things are unclear; you untangle them.
● Near-perfect English (spoken and written). This is non-negotiable — you’ll work daily across countries and levels.
● You take ownership by default and don’t need a map for every step.
Nice-to-haves:
● You’ve helped build or grow a team from scratch.
● Cloud + DevOps experience.
● Product-minded engineering: you care about outcomes, not just tasks.
The kind of person who’ll thrive here
Let’s be direct:
● You’re driven to create real results, not just “do your part.”
● You want to build something from the ground up and shape the future of a company.
● You lead with calm, clarity, and high standards.
● You’re motivated beyond the norm — you don’t settle for “good enough.”
● You know a Tech Lead is someone who steps up, helps others win, and keeps shipping.
● You’re hungry to learn, and confident enough to challenge weak solutions.
The kind of person who won’t
Also direct:
● If you expect everything to be built around you, look for another job.
● If you want Denmark to hand you tasks, this isn’t it.
● If you avoid responsibility or hard conversations, this will hurt.
● If “average and comfortable” is your goal, don’t apply.
We’re building an exceptional team. Mediocre doesn’t survive here.
What you get
● A rare chance to build an office, a culture, and a high-performing team in India from scratch.
● Direct partnership with Danish leadership and product org.
● Real influence over architecture, standards, and execution.
● A company that values ownership and speed over politics.
● Massive growth opportunity as the India office scales in 2026.
● Competitive salary + benefits.
How to apply
Only reach out if you genuinely believe you’re the right fit and you’re motivated to build something one-of-a-kind.
Send (this is mandatory):
● A short page about you and what you’ve built.
● CV/LinkedIn/GitHub/portfolio.
● 2–3 projects you’re proud of, and why.
Requirements:
- Bachelor's degree in engineering with a specialization in computer science or a related field.
- 5+ years of experience as a software engineer in a product development setting.
- Love of technology and experience with one or more programming languages, such as Python or Go.
- Experience in full-stack development, including designing APIs and integration patterns, implementing security, and implementing frameworks for unit and end-to-end testing.
- Experience with microservices architecture.
- Experience in one or more frameworks like FastAPI, Spring, GRPC, Flask, etc.
- Extensive experience in a test-driven development environment.
- Understanding of CI/CD practices, including code check-in policies, automated unit tests, automated code deployments, etc.
- Ability to grasp new technologies and use them effectively to create industrial-strength software.
- Good communication skills. You can communicate well in the English language with product managers, your team members, and external stakeholders to understand their needs and convey yours in a clear, precise manner, verbally or in writing.
- Strong collaboration skills. You have demonstrated the ability to work with both senior and junior technical professionals and get work done. You quickly earn the trust of the people you work with. People enjoy and have fun working with you.
- Deadline-oriented. You understand that deadlines are meant to be met. Challenges will surface, and obstacles and roadblocks will cause delays, but you plan for them in advance and still ship your features on time to meet your commitments.
- Bias for action. Your default setting is to take action and not wait for things to happen. You love to learn about new technologies and advancements in the software industry.
Benefits at 314e Corporation:
- Medical Benefits
- Office Game space
- Referral Program
- Holiday parties
Role Overview
We are looking for an Automated QA Test Engineer (3–4 years of experience) to design and implement automated testing frameworks that ensure the quality and reliability of Hosted.ai’s core platform. The ideal candidate will have hands-on experience with Pytest, Python scripting, and test automation systems, along with the ability to architect test harnesses, plan test coverage, and triage bugs effectively.
Key Responsibilities
● Design and develop automated test frameworks and test harness logic.
● Implement end-to-end, integration, and regression tests using Pytest and Python.
● Define and execute test coverage plans for critical components of the platform.
● Conduct bug analysis, triage, and root cause identification in collaboration with engineering teams.
● Ensure tests are reliable, repeatable, and integrated into CI/CD pipelines.
● Continuously improve test automation practices and tooling.
● Document test strategies, results, and defect reports clearly.
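A minimal sketch of the Pytest-style regression tests this role involves. `FakeClient` and the response shapes here are illustrative assumptions, not Hosted.ai's real API; Pytest would auto-discover the `test_*` functions without any extra wiring.

```python
# In-memory stand-in for the platform API under test (hypothetical).
class FakeClient:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value
        return {"status": "ok"}

    def get(self, key):
        if key not in self._store:
            return {"status": "missing"}
        return {"status": "ok", "value": self._store[key]}


def make_client():
    # Fresh client per test keeps runs repeatable and order-independent.
    return FakeClient()


def test_put_then_get_roundtrip():
    client = make_client()
    assert client.put("k", 42)["status"] == "ok"
    assert client.get("k") == {"status": "ok", "value": 42}


def test_missing_key_reports_status():
    assert make_client().get("absent")["status"] == "missing"
```

In a real harness the stand-in would be replaced by a fixture that provisions and tears down platform resources, so the same tests run unchanged in CI/CD.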
Requirements
● 3–4 years of experience in software QA, with a focus on test automation.
● Strong background in manual testing.
● Strong hands-on experience with Pytest for UI and end-to-end testing.
● Proficiency in Python coding for test development and scripting.
● Experience architecting test harnesses and automation frameworks.
● Familiarity with CI/CD pipelines and version control systems (Git).
● Solid understanding of QA methodologies, test planning, and coverage strategies.
● Strong debugging, analytical, and problem-solving skills.
Nice to Have
● Experience testing distributed systems, APIs, or cloud-native platforms.
● Exposure to performance/load testing tools.
● Familiarity with Kubernetes, containers, or GPU-based workloads.
🚀 Hiring: Data Engineer ( Azure ) at Deqode
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer ( Azure ) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
✅ Implement declarative pipelines using Lakeflow
✅ Design idempotent, replayable pipelines with safe backfills
✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
✅ Ensure governance using Unity Catalog and embedded data quality checks
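The "idempotent, replayable pipelines with safe backfills" requirement can be sketched in plain Python: merge each batch by key so that replaying the same batch (after a failure or during a backfill) leaves the target in the same state. The function and record shapes are illustrative, not the actual Delta Lake MERGE or Lakeflow API.

```python
def upsert_batch(target: dict, batch: list, key: str = "id") -> dict:
    """Merge records into `target` keyed by `key`; last write wins.

    Because the merge is keyed, replaying a batch is a no-op: the
    pipeline is idempotent and safe to restart or backfill.
    """
    for record in batch:
        target[record[key]] = record
    return target


state = {}
batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
upsert_batch(state, batch)
upsert_batch(state, batch)  # replay: same end state
assert len(state) == 2
```

In Databricks the same idea is expressed with a keyed `MERGE INTO` against a Delta table plus streaming checkpoints, rather than append-only writes.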
✅ Mandatory Skills (Must Have)
👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
👉 Structured Streaming & Autoloader Implementation
👉 Databricks SQL (DBSQL) & Data Modeling for Analytics
🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft —
Except here, you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
This is: AI as the control plane for data infrastructure
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces:
- Natural language → SQL / pipelines / transformations
- Design semantic data layers:
- Embeddings, vector search, knowledge graphs
- Develop AI copilots:
- For data engineers, analysts, and platform users
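The semantic-layer bullet (embeddings, vector search) reduces to a small core: score documents against a query by cosine similarity and return the best matches. This toy version uses plain lists as embeddings; a production system would use a vector database and a learned embedding model.

```python
from math import sqrt


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def top_k(query, index, k=2):
    """index: {doc_id: embedding}; return the k best-matching doc ids."""
    ranked = sorted(index, key=lambda d: cosine(query, index[d]), reverse=True)
    return ranked[:k]


index = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
print(top_k([1.0, 0.1], index, k=1))  # "a" is the nearest document
```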
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that:
- Detect anomalies in real time
- Automatically debug failures
- Dynamically optimize transformations
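The real-time anomaly detection step above can be sketched with a rolling z-score: flag a value when it sits far outside the sliding-window distribution. This is a deliberately simple stand-in for the learned detectors a production pipeline would use; window size and threshold are illustrative.

```python
from collections import deque
from statistics import mean, pstdev


class RollingAnomalyDetector:
    """Flag values more than `threshold` standard deviations from the
    mean of the last `window` observations."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        anomalous = False
        if len(self.values) >= 2:
            mu, sigma = mean(self.values), pstdev(self.values)
            if sigma > 0 and abs(x - mu) / sigma > self.threshold:
                anomalous = True
        self.values.append(x)  # the window adapts to drift over time
        return anomalous
```

A self-healing pipeline would route flagged records to quarantine and trigger an automated retry or debugging agent rather than failing the run.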
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to:
- Query planning and execution
- Cost-based optimization using learned models
- Workload prediction and scheduling
- Build systems that:
- Learn from query patterns
- Continuously improve performance and cost efficiency
4. Distributed Data + AI Infrastructure
- Architect systems operating at:
- Billions of events per day
- Petabyte-scale data
- Work with:
- Distributed compute engines (Spark / Flink / Ray class systems)
- Streaming systems (Kafka-class infra)
- Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems:
- Execution → feedback → model updates
- Develop:
- Continual learning pipelines
- Online learning systems for infra optimization
- Experimentation frameworks (A/B, bandits, eval pipelines)
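The bandit-based experimentation mentioned above can be illustrated with epsilon-greedy arm selection: explore a random arm with probability `eps`, otherwise exploit the best observed arm. A hypothetical minimal version (a real framework would add confidence intervals or Thompson sampling):

```python
import random


class EpsilonGreedyBandit:
    """Toy epsilon-greedy multi-armed bandit for online experiments."""

    def __init__(self, n_arms: int, eps: float = 0.1, seed: int = 0):
        self.rng = random.Random(seed)
        self.eps = eps
        self.counts = [0] * n_arms      # pulls per arm
        self.totals = [0.0] * n_arms    # cumulative reward per arm

    def select(self) -> int:
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.counts))  # explore
        # Exploit: untried arms get +inf so each arm is tried once.
        means = [t / c if c else float("inf")
                 for t, c in zip(self.totals, self.counts)]
        return max(range(len(means)), key=means.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.totals[arm] += reward
```

Run against a simulated workload where arm 1 always pays off, the bandit concentrates traffic on arm 1 while reserving ~10% of pulls for exploration.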
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
- Autonomous pipeline debugging
- Root cause analysis for infra failures
- Intelligent orchestration of data workflows
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
- Machine Learning, Deep Learning, NLP
- Statistics, optimization, probabilistic systems
- Distributed systems fundamentals
- Deep understanding of:
- Transformer architectures
- Modern LLM ecosystems
Hands-On Expertise
- Experience building:
- LLM / GenAI systems (RAG, fine-tuning, embeddings)
- Data platforms (warehouse, lake, lakehouse architectures)
- Distributed pipelines and compute systems
- Strong programming skills:
- Python (ML/AI stack)
- SQL (deep understanding — query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
- Large-scale data pipelines
- High-throughput distributed systems
- Low-latency, high-concurrency architectures
- Understand:
- Query optimization and execution
- Data partitioning, indexing, caching
- Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
- Query engines / optimizers
- Data observability / lineage systems
- AI-driven infra or AIOps platforms
- Experience with:
- Multi-modal AI (logs, metrics, traces, text)
- Agentic AI systems
- Autonomous infrastructure
- Worked on systems at scale comparable to:
- Google (BigQuery-like systems)
- Meta (real-time analytics infra)
- Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by 1000s of engineers
- Systems serving millions of users / high throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Background: We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
- That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
🧭 Tech Lead (Backend / Fullstack | 7–10 Years)
Location: Bangalore (On-Site, Hybrid)
Company Type: Public-Listed Product Company
We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta —
Except you get to build it from day 0 → scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Lead execution of mission-critical systems while staying hands-on — bridging architecture and delivery.
🧩 What You’ll Do
- Own end-to-end delivery of complex engineering initiatives (0→1, 1→N)
- Design systems across backend + frontend (if fullstack)
- Translate ambiguous problems into structured technical solutions
- Drive engineering best practices, code quality, and velocity
- Mentor engineers and elevate team performance
- Collaborate with stakeholders on roadmap and execution strategy
🧠 What We’re Looking For
- Strong experience in backend systems + optional frontend frameworks
- Proven ability to lead projects and deliver at scale
- Solid understanding of system design and architecture patterns
- Ability to balance speed vs quality vs scalability trade-offs
- Strong communication and leadership without authority
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Experience leading platform builds or major system rewrites
- Exposure to AI systems, LLM integrations, or intelligent workflows
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Background: We Commonly See (But Not Limited To)
- Our team often includes engineers from top-tier institutions and strong research, product, DeepTech, or AI product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
- That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
Role: ML Engineer
Location: Remote
Experience: 5+ Years
𝗞𝗲𝘆 𝗦𝗸𝗶𝗹𝗹𝘀 Required:
• Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines
• Model deployment & versioning via Azure ML
• MLflow for experiment tracking & model lifecycle management
• MLOps best practices — orchestration, CI/CD, model monitoring
• Strong Python skills (Linting, Black, dependency management)
• Drift detection & performance monitoring
• Docker-based deployment (good to have)
Job Details
- Job Title: Director of Engineering
- Industry: SAAS
- Function – Information Technology
- Experience Required: 9-14 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, AWS, NodeJS, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes
Criteria
Candidates must have 9+ years of engineering experience, with 3–4 years in technical leadership
Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
Ability to design scalable architectures for high-performance systems.
Should have AI/ML deployment experience
Strong 3D graphics/WebGL/Three.js knowledge.
Candidates should be from SAAS/Software/IT Services based startups or scaleup companies only
Job Description
The Role:
Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.
This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.
What You’ll Own:
1. Technical Leadership & Architecture
● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.
● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
● Make decisions on stack, scalability patterns, architecture, and technical debt.
● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
● Lead architectural discussions, design reviews, and set engineering standards.
2. Hands-On Development
● Write production-grade code across frontend, backend, APIs, and cloud infra.
● Build critical features and core system components independently.
● Debug complex systems and optimize performance end-to-end.
● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
● Build scalable backend services for large-scale asset processing and real-time pipelines.
● Develop WebGL/Three.js rendering and AR workflows.
3. Team Building & Engineering Management
● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
● Establish engineering culture, values, and best practices.
● Build career frameworks, performance systems, and growth plans.
● Conduct 1:1s, mentor engineers, and drive continuous improvement.
● Set up processes for agile execution, deployments, and incident response.
4. Product & Cross-Functional Collaboration
● Work with the founder and product team on roadmap, feasibility, and prioritization.
● Translate product requirements into technical execution plans.
● Collaborate with design for UX quality and technical alignment.
● Support sales and customer success with integrations and technical discussions.
● Contribute technical inputs to product strategy and customer-facing initiatives.
5. Engineering Operations & Infrastructure
● Own CI/CD, testing frameworks, deployments, and automation.
● Create monitoring, logging, and alerting setups for reliability.
● Manage AWS infrastructure with a focus on cost and performance.
● Build internal tools, documentation, and developer workflows.
● Ensure enterprise-grade security, compliance, and reliability.
Tech Stack:
1. Frontend
React.js, Next.js, TypeScript, WebGL, Three.js
2. Backend
Node.js, Python, Express/FastAPI, REST, GraphQL
3. AI/ML
PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines
4. 3D & Graphics
Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization
5. Databases
PostgreSQL, MongoDB, Redis, vector databases
6. Cloud & Infra
AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes
CI/CD: GitHub Actions
Monitoring: Datadog, Sentry
What We’re Looking For:
1. Must-Haves
● 9+ years of engineering experience, with 3–4 years in technical leadership.
● Deep full-stack experience with strong system design fundamentals.
● Proven success building products from 0→1 in fast-paced environments.
● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
● Ability to design scalable architectures for high-performance systems.
● Strong people leadership with experience hiring and mentoring teams.
● Ready to code, review, design, and lead from the front.
● Startup mindset: fast execution, problem-solving, ownership.
2. Highly Desirable
● AI/ML deployment experience (CV, generative AI, 3D reconstruction).
● Strong 3D graphics/WebGL/Three.js knowledge.
● Experience with real-time systems, rendering optimizations, or large-scale pipelines.
● Background in B2B SaaS, XR, gaming, or immersive tech.
● Experience scaling engineering teams from 5 → 20+.
● Open-source contributions or technical content creation.
● Experience working closely with founders or executive leadership.
Why Company:
● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
● Build from day zero – architecture, team, and culture.
● Path to CTO as the company scales.
● High autonomy to drive technical decisions.
● Direct founder collaboration on product vision.
● High ownership, high-growth environment.
● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.
Location & Work Culture:
● Location: HSR Layout, Bengaluru
● Schedule: 6 days a week (5 days in-office, Saturdays WFH)
● Culture: High-intensity, high-integrity, engineering-first
● Team: Young, ambitious, technically strong
Job Details
- Job Title: Senior Backend Engineer
- Industry: SAAS
- Function – Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in-office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
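The event-driven bullet above can be sketched as a minimal in-process pub/sub bus: publishers emit events to a topic, and any number of subscribers react independently. Topic names here are illustrative; in production this role would be played by Kafka, RabbitMQ, or SQS with asynchronous consumers.

```python
from collections import defaultdict


class EventBus:
    """Minimal synchronous in-process pub/sub bus (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> [handlers]

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan out to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
bus.subscribe("asset.uploaded", lambda e: print("thumbnail job:", e["id"]))
bus.subscribe("asset.uploaded", lambda e: print("index job:", e["id"]))
bus.publish("asset.uploaded", {"id": 1})
```

Decoupling producers from consumers this way is what lets upload handling, 3D conversion, and analytics scale and fail independently.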
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
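The RBAC bullet above boils down to mapping roles to permission sets and checking membership. A hedged sketch with hypothetical role and permission names (a real system would load these from a policy store and scope them per tenant):

```python
# Illustrative role -> permission mapping; names are assumptions.
ROLE_PERMISSIONS = {
    "viewer": {"asset:read"},
    "editor": {"asset:read", "asset:write"},
    "admin": {"asset:read", "asset:write", "billing:manage"},
}


def is_allowed(roles, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)


print(is_allowed({"viewer"}, "asset:write"))            # denied
print(is_allowed({"viewer", "editor"}, "asset:write"))  # granted
```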
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
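The Redis caching bullet usually means the cache-aside pattern: try the cache, and on a miss load from the source of truth and populate with a TTL. In this sketch a plain dict stands in for Redis (in production the same shape maps onto redis-py `GET`/`SETEX`); the loader and TTL are illustrative.

```python
import time


class CacheAside:
    """Cache-aside with TTL; a dict stands in for Redis here."""

    def __init__(self, loader, ttl: float = 60.0):
        self.loader = loader   # fetches from the source of truth
        self.ttl = ttl
        self._store = {}       # key -> (value, expires_at)

    def get(self, key):
        hit = self._store.get(key)
        now = time.monotonic()
        if hit and hit[1] > now:
            return hit[0]                      # fresh cache hit
        value = self.loader(key)               # miss or expired: reload
        self._store[key] = (value, now + self.ttl)
        return value
```

The TTL bounds staleness; invalidation on write (delete-then-reload) is the usual companion strategy for mutable data.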
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
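The rate-limiting and backpressure bullet is most often a token bucket: each request spends a token, tokens refill at a steady rate, and the bucket capacity caps the burst. A minimal sketch (rate and capacity values are illustrative):

```python
import time


class TokenBucket:
    """Token-bucket limiter: bursts up to `capacity`, refilled at
    `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should throttle or shed load
```

Per-tenant buckets (e.g. keyed in Redis) extend the same idea to multi-tenant API throttling.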
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in-office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
About the Company
EaseMyTrip is one of India’s leading online travel platforms, offering a wide range of travel services including flights, hotels, and holiday packages. We are driven by innovation, technology, and customer-centric solutions.
Location- Gurugram
Job Summary
We are looking for a passionate Full Stack Developer who is proficient in both front-end and back-end development. The ideal candidate will be responsible for building scalable web applications, improving performance, and collaborating with cross-functional teams to deliver seamless user experiences.
Key Responsibilities
- Develop and maintain scalable web applications from front-end to back-end
- Design responsive and user-friendly UI using modern frameworks
- Build robust APIs and integrate third-party services
- Optimize applications for maximum speed and scalability
- Collaborate with product managers, designers, and other developers
- Troubleshoot, debug, and upgrade existing systems
- Ensure code quality, security, and best practices
Required Skills & Technologies
- Frontend: HTML, CSS, JavaScript, React.js / Angular / Vue.js
- Backend: Node.js / Java / Python
- Database: MySQL / PostgreSQL / MongoDB
- Version Control: Git
- Experience with RESTful APIs and microservices architecture
- Familiarity with cloud platforms like AWS / Azure (preferred)
Job Title: AI/ML Engineer
Work Location: U.S Complex, Adjacent to Jasola Apollo Metro Station, Mathura Road New Delhi-110076
We, Infinity Assurance Solutions, specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection, and a wide range of service products under our own brand “InfyShield.”
Our offerings cover Mobile Phones, Home Appliances, Consumer Electronics, Kitchen Appliances, IT Equipment, Office Automation, AV Solutions, Classroom and Conference Room Technologies, and more.
· We have a very extensive, enterprise-grade, end-to-end business management software application that is unmatched in the industry.
· The application has multiple sub-applications and functionalities, including Sales, Insurance Claims, Warranty Claims, Payments, Collections, Approvals, Billing/Invoicing, Payment/Tax/Bank Reconciliations, Partner Management, HRMS, Client Management, etc., to suit the end-to-end business needs of any enterprise.
· The application also has multiple integrations for payment gateways, voice calls, video calls, SMS, email, WhatsApp, client applications, courier, maps, databases, etc.
· To fuel our growth, we are inviting a Computer Vision Engineer to join us as we build our software development team to execute new business growth plans and a fresh product roadmap.
· This position requires multi-skilled talent with hands-on experience, able to work independently as well as in teams.
· Ideal candidates will be responsible for designing, modifying, developing, writing, and implementing software applications and components.
· Our technology processes documents and images across warranty, insurance, claims, and identity workflows—where trust, precision, and fraud prevention are paramount.
Detailed Role Description
· Assist in developing a secure, AI-powered platform for verification and assurance.
· Contribute to developing and improving computer vision models for image forgery detection, replay detection, and advanced fraud analysis.
· Implement and experiment with image processing techniques such as noise analysis, Error Level Analysis (ELA), blur detection, and frequency-domain features.
· Support OCR pipelines using tools like PaddleOCR or Azure AI Vision to extract text from photos of identity documents.
· Help prepare and clean real-world image datasets, including handling low-quality, noisy, and partially occluded images.
· Integrate trained models into Python-based APIs (FastAPI) for internal testing and production use.
· Collaborate with senior engineers to test, debug, and optimize model performance.
· Document experiments, findings, and implementation details clearly.
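As a flavour of the image-processing work described above, here is a minimal, NumPy-only sketch of the blur-detection idea (variance of the Laplacian); the function name, image sizes, and random test data are purely illustrative, not part of our stack:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian response; low values suggest a blurry image."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # 'valid' 3x3 correlation, no padding
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64)).astype(float)
# Circular 3x3 box blur of the same image for comparison
blurred = sum(
    np.roll(np.roll(sharp, di, 0), dj, 1)
    for di in (-1, 0, 1) for dj in (-1, 0, 1)
) / 9.0
# The blurred copy scores markedly lower than the sharp original
```

In practice one would typically apply OpenCV's `cv2.Laplacian` to real images rather than a hand-rolled convolution; the principle is the same.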
Required Skills
· 2–4 years of experience in machine learning, computer vision, image processing, and AI
· Bachelor’s or Master’s degree in AI & Machine Learning, Computer Science, Data Science, or related fields
· Proficiency in Machine Learning, Python
· Core experience with OpenCV, NumPy and core image processing concepts
· Hands-on experience with PyTorch and TensorFlow
· Understanding of Convolutional Neural Networks (CNNs) fundamentals
· Hands-on experience with REST APIs or FastAPI
· Exposure to OCR, document processing, or facial analysis
Desired Candidate Profile
· Prior experience related to image forensics, fraud detection, and biometrics
· Comfortable working with imperfect real-world data
· Good communication skills and team-oriented mindset
Important Notes & Perks:
· Attractive pay structure as per the Market Standards
· Huge career growth opportunity
· Preference will be given to candidates who can join early
· Should have worked in small teams with multi-skilled resources
· This is a full-time, work-from-office opportunity (preference will be given to candidates comfortable with a 6-day week, Monday to Saturday)
· Applications may be submitted via the Google Form at this link: https://forms.gle/TC8kypz3SwN256sP6
About us:
We, Infinity Assurance Solutions Private Limited, a New Delhi-based portfolio company of Indian Angel Network, Aegis Centre for Entrepreneurship, Artha Venture Fund, eVista Venture, and other marquee industry veterans, specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection, and various other service products for a wide range of Mobile Phones, Home Appliances, Consumer Electronics, AV Solutions, Classroom/Conference-room Solutions, Kitchen Appliances, IT, Office Automation, Personal Gadgets, etc.
Incorporated in January 2014, we are debt-free and operationally profitable with positive net retained earnings, and we have grown rapidly. Going forward, we are looking to grow multi-fold with newer areas of business expansion.
Our success is attributed to a very agile and technologically driven unique service delivery model, loyal long-term clients, in-house application, and lean organization structure.
More about us:
https://www.infinityassurance.com
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI / Django (Python), Spring (Java), Express (Node.js)).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
We at Hookux are a forward-thinking company seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
Benefits:
- Competitive salary and performance bonuses.
- Opportunities for career growth and learning.
- Flexible working hours and remote working options.
Mail your CV and portfolio to hr@hookux.com
We are hiring for a Python Developer at Wissen Technology!
📍 Location: Pune (Hybrid)
💼 Experience: 3–6 Years
⏱️ Notice Period: Immediate / 15 days preferred
🔧 Key Skills:
• Strong experience in Python
• Hands-on with Pandas & NumPy
• Experience with AWS (S3, Lambda preferred)
• Good understanding of data processing & APIs
• SQL knowledge
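For candidates wondering what "hands-on with Pandas & NumPy" means day to day, here is a representative snippet; the column names and figures are made up purely for illustration:

```python
import pandas as pd

# Toy dataset standing in for the kind of tabular data this role processes
orders = pd.DataFrame({
    "region": ["N", "S", "N", "S"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# Group-and-aggregate: total and average order amount per region
summary = orders.groupby("region")["amount"].agg(["sum", "mean"])
```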
🏢 About Wissen Technology:
Wissen Technology, part of the Wissen Group (est. 2000), is a fast-growing technology company specializing in high-end consulting across Banking, Finance, Telecom, and Healthcare domains.
✔️ Global presence – US, India, UK, Australia, Mexico & Canada
✔️ Certified Great Place to Work®
✔️ Trusted by Fortune 500 clients like Morgan Stanley, Goldman Sachs, and more
✔️ Strong growth with 400% revenue increase in recent years
🌐 Website: www.wissen.com
🔗 LinkedIn: https://www.linkedin.com/company/wissen-technology/
If you’re interested or have relevant candidates, please share your resume at [your email].
#Hiring #PythonDeveloper #PuneJobs #AWS #ImmediateJoiner
While you may already know about Wissen and the company history, here is a quick rundown for you.
About Wissen Technology:
· The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
· Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
· Our workforce consists of highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League universities like Wharton, MIT, IITs, IIMs, and NITs and bring rich work experience from some of the biggest companies in the world.
· Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
· Globally present, with offices in the US, India, UK, Australia, Mexico, and Canada.
· We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
· Wissen Technology has been certified as a Great Place to Work®.
· Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
· Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
· We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, Goldman Sachs, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE.
Job Title: Application Development Engineer (Python – Backtesting & Index Platforms)
Role Overview
Key Responsibilities
Engine Development: Design and implement modular, reusable Python components for index construction, rebalancing, and backtesting.
Large-Scale Simulation: Use Pandas, NumPy, and PySpark to run historical calculations across long time horizons and multiple index variants.
Workflow Integration: Integrate engines with orchestrators such as Airflow or Temporal using parameterized, config-driven execution.
Reference Data Consumption: Query and utilize pricing, security master, and corporate action data from Snowflake.
Quality & Reconciliation: Build automated test harnesses to validate outputs, compare against benchmarks, and guarantee reproducibility.
Performance Optimization: Improve runtime efficiency through vectorization, caching, and distributed computing patterns.
Cross-Team Collaboration: Partner with Business, Index Ops, and Platform teams to accelerate research-to-production onboarding.
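To make the engine-development and simulation bullets concrete, here is a toy equal-weight index backtest in Pandas; the tickers, prices, and base level of 1000 are all invented for illustration, and a real engine would of course handle rebalancing schedules, corporate actions, and far larger data:

```python
import pandas as pd

# Hypothetical daily close prices for three constituents
prices = pd.DataFrame(
    {"AAA": [100, 102, 101, 105], "BBB": [50, 49, 51, 52], "CCC": [200, 204, 202, 210]},
    index=pd.date_range("2024-01-01", periods=4, freq="B"),
)

def equal_weight_index(prices, base=1000.0):
    """Backtest an equal-weight index: average constituent returns, compound daily."""
    rets = prices.pct_change(fill_method=None).fillna(0.0)
    index_rets = rets.mean(axis=1)              # equal weights, rebalanced daily
    return base * (1.0 + index_rets).cumprod()  # index level series

level = equal_weight_index(prices)
```

The vectorized `pct_change`/`cumprod` style scales to long histories and many index variants far better than row-by-row loops, which is the point of the "vectorization" bullet under performance optimization.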
Required Technical Capabilities
Python Expertise: Strong proficiency in Python application development with emphasis on clean architecture and maintainable design.
Data & Numerical Libraries: Deep experience with Pandas and NumPy; working knowledge of PySpark for distributed workloads.
Financial Computation: Ability to implement portfolio mathematics, weighting algorithms, and time-series transformations.
Config-Driven Systems: Experience building rule-based or metadata-driven processing frameworks.
Database Skills: Strong SQL and experience consuming structured data from Snowflake.
Testing Discipline: Expertise in unit testing, regression testing, and deterministic replay of calculations.
Orchestration Integration: Familiarity with Airflow, Temporal, or similar workflow engines.
Cloud Infrastructure: Solid understanding of AWS ecosystem services (S3, Lambda, IAM) and how they integrate with the Snowflake Data Cloud.
Department
Product & Technology
Location
On-site | Prabhat Road, Pune
Experience
3-5 Years in a Data Engineering or Analytics Role
Domain
Fintech / Wealth Management — non-negotiable
Compensation
11-12 LPA Fixed + Performance Bonus
Growth
Title upgrade + salary revision at 12–18 months for strong performers
Why this role is different from most Data Engineer postings
You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.
About Cambridge Wealth
Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals and have received multiple awards from major Mutual Fund houses and BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.
What You Will Be Doing
This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.
We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.
Key Responsibilities:
Data Engineering & Pipelines
- Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
- Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
- Write advanced SQL — window functions, stored procedures, query optimization, index design.
- Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
- Monitor AWS RDS workloads and troubleshoot performance issues proactively.
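The advanced-SQL bullet above can be illustrated with a running-total window function. The example below uses an in-memory SQLite table purely for portability (the role itself targets PostgreSQL, where the same `OVER` clause applies), and the schema and figures are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (client TEXT, dt TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [("A", "2024-01-01", 1000.0), ("A", "2024-02-01", 500.0),
     ("B", "2024-01-15", 2000.0), ("A", "2024-03-01", -300.0)],
)

# Running net invested amount per client via a window function
rows = conn.execute(
    """
    SELECT client, dt, amount,
           SUM(amount) OVER (PARTITION BY client ORDER BY dt) AS running_total
    FROM txns
    ORDER BY client, dt
    """
).fetchall()
```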
Financial Analytics & Modelling
- Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
- Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
- Create materialized views and derived tables optimized for dashboards and internal reporting tools.
- Analyse client transaction history to surface patterns in investment behaviour and financial discipline.
Applied ML & AI-Driven Development
- Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
- Implement classification or regression models to support financial pattern detection.
- Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
- Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
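As a toy illustration of the trend-analysis and forecasting bullet, here is a least-squares linear trend fitted with NumPy; the monthly SIP inflow figures are synthetic, and a production model would use richer features and proper validation:

```python
import numpy as np

# Synthetic monthly SIP inflows (in lakhs) with a built-in upward trend
months = np.arange(12)
inflows = 10 + 0.8 * months + np.random.default_rng(1).normal(0, 0.3, 12)

# Fit a degree-1 polynomial (straight line) and extrapolate one month ahead
slope, intercept = np.polyfit(months, inflows, 1)
forecast_next = slope * 12 + intercept
```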
Data Quality & Governance
- Own data integrity end-to-end in a live, high-stakes financial environment.
- Build and maintain validation and cleaning protocols across all financial datasets.
- Maintain Excel models, Power Query workflows, and structured reporting outputs.
Collaboration & Junior Mentorship
- Work directly with Product, Investment Research, and Wealth Advisory teams.
- Translate open-ended business questions into structured queries and measurable outputs.
- Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
- Present findings clearly to non-technical stakeholders — no jargon, just clarity.
Skills — What We Need vs. What Helps
Must-Haves:
- SQL & PostgreSQL (window functions, stored procedures, optimization)
- Python — Pandas, NumPy for data processing and automation
- ML fundamentals — classification or regression (Scikit-learn)
- AWS RDS or equivalent cloud database experience
- Financial domain knowledge — mutual funds, SIPs, portfolio concepts
- Python data visualization — Matplotlib, Seaborn, or Plotly
Strong Advantage:
- Excel — Power Query, advanced modelling
- Materialized views, query planning, index optimization
- Experience with BI/dashboard tools
Good to Have:
- NoSQL databases
- Prior fintech or wealth management startup experience
Financial Domain — Non-Negotiable
This is a wealth management platform. You must come in with a working understanding of:
- Mutual fund structures, scheme types, and NAV-based transactions
- Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
- Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
- How HNI/NRI clients interact with financial products differently from retail investors
You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.
The Culture Fit — Read This Carefully
We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:
- Has worked at a small startup before and is used to wearing multiple hats
- Finds broken or slow data systems genuinely irritating and fixes them without being asked
- Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
- Is comfortable saying 'I don't know but I'll find out' and follows through independently
- Wants visibility and ownership, not just a well-defined job description
- Is looking for a role where strong performance is directly visible and rewarded
Growth Path — What Happens If You Perform
This is not a vague 'growth opportunity' pitch.
If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.
Preferred Background
- 2–4 years in a data engineering or analytics role at a startup or small Fintech
- Experience in a live product environment where data errors have real consequences
- Exposure to portfolio analytics, investment research, or wealth management platforms
- Has mentored or reviewed code for at least one junior team member
Hiring Process
We respect your time. The process is direct and moves fast.
- Screening Questions — 5 minutes online
- Online Challenge — MCQs (data, SQL, AWS, etc.) plus one applied ML or analytics problem, with a communication and personality component (focused, not trick questions)
- People Round — 30-minute video call, culture and communication
- Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
- Founder's Interview — 1 hour in person, growth conversation and mutual fit
- Offer & Background Verification
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Key Responsibilities:
- Development & Customization: Develop and support client-specific customizations, integration, and automation under guidance.
- Ownership: Deliver assigned development tasks with quality, within estimated effort and timelines
- Established Tools and Processes: Follow established tools, coding standards, SDLC, CI/CD, and security practices.
- Collaboration: Partner effectively across a global team, including Team Lead/Senior Developers, consultants, project managers, Deltek partners and subcontractors, and cloud operations.
- Quality Assurance: Follow established security, quality, and testing protocols. Support testing activities, fix defects and rework items under guidance to maintain customer satisfaction and governance standards.
- AI-First Methodology: Leverage AI-powered tools throughout the project lifecycle to design, develop, and maintain scalable technical solutions.
- Continuous Improvement: Actively engage in learning new tools, technologies, and Deltek product capabilities.
Qualifications :
- Required Skills:
- Academic qualification: Bachelor’s degree (2025/2026 pass-out) in Computer Science, IT, E&C, or MCA, with a minimum of 70% throughout academics.
- Job Location: Bangalore (candidates based in Bangalore only)
- Project experience: Entry‑level experience through academic projects, internships, labs, or personal/open‑source projects.
- Development & Engineering Practices: Knowledge of object-oriented programming, core software development principles, and computer science fundamentals such as data structures, algorithms, and logical problem solving.
- Analytical and Problem‑Solving Skills: Strong analytical and problem‑solving skills, with the ability to learn and apply new concepts quickly
- Communication Skills: Good verbal and written communication skills in English, with the ability to participate in technical discussions and explain ideas clearly.
- Learning Mindset: Eagerness to pick up new tools, technologies, and concepts quickly and apply them on the job
- Technical Skills
- Programming Fundamentals: Basic proficiency in at least one programming language such as Python, JavaScript (Node.js preferred), Java, or C/C++, with understanding of object‑oriented programming concepts.
- Computer Science Foundations: Knowledge of data structures, algorithms, and basic software design principles gained through academic or project work.
- Web & Integration (Exposure): Introductory experience with web applications, APIs, integrations, or automation through coursework or hands‑on projects.
- Testing & Debugging: Basic Understanding of unit testing, debugging, and defect fixing as part of the development lifecycle.
- Tools & Platforms (Exposure): Familiarity with development tools such as IDEs, version control (Git), and basic build or deployment concepts.
- AI Tools (Plus): Hands‑on experience or foundational knowledge of AI/LLM‑based tools (such as AI assistants or copilots) and prompt engineering.
- Success Criteria for the Role
- Requirement Clarity: Quickly grasp and clarify assigned requirements or technical specifications, ensuring tasks are well-defined and minimizing the need for rework.
- Execution: Consistently completes development tasks and project assignments within agreed timelines, proactively communicating risks or blockers to avoid delays or scope drift.
- Quality: Delivers code with low defect rates by following coding standards and thorough testing, leading to successful QA/UAT outcomes with minimal rework or iterations.
- Collaboration & Communication: Receives positive feedback from team leads/Senior developer, peers, and stakeholders for clear communication, teamwork, and reliable technical contributions.
- AI Adoption: Demonstrate efficiency gains through AI usage including faster specification writing, improved code quality, automated testing.
- Why Join Deltek?
- At Deltek, you'll be part of a forward-thinking team dedicated to delivering innovative ERP solutions that empower organizations to achieve their goals. Our culture values collaboration, professional growth, and flexibility, providing you with opportunities to work on impactful projects and advance your career. You'll benefit from our commitment to leveraging cutting-edge AI capabilities, enabling you to design more innovative, more efficient solutions for our clients. Join us to make a difference in a supportive environment where your expertise is valued and your contributions drive real business success.
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
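To ground the RAG terminology above, here is a deliberately tiny sketch of the retrieval step. The documents and basis-vector "embeddings" are stand-ins: a real pipeline would embed chunked text with a model, store the vectors in a vector database, and stuff the retrieved hits into the LLM prompt as context:

```python
import numpy as np

# Toy in-memory vector store: one orthogonal unit vector per document.
# Real embeddings would come from an embedding model, not an identity matrix.
docs = ["refund policy", "shipping times", "account deletion"]
doc_vecs = np.eye(3, 8)

def retrieve(query_vec, k=1):
    """Return the k documents with the highest cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ q                       # cosine sim (doc vecs are unit-norm)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# A query vector pointing mostly at the first embedding retrieves the first doc
hits = retrieve(doc_vecs[0] + 0.2 * doc_vecs[1])
```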
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
AI Lead (Backend Systems & Architecture)
This is not a feature-delivery role. This is an architecture, ownership, and AI systems leadership role.
At Techjays, we build production-grade AI platforms for global clients. We are looking for an AI Lead with strong backend engineering expertise—someone who can design, scale, and take complete ownership of intelligent systems end-to-end.
You will operate at the intersection of backend engineering, distributed systems, and applied AI, driving both technical direction and execution.
What You’ll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement AI workflows such as RAG pipelines, agents, and LLM integrations
- Own systems end-to-end: architecture, development, deployment, and scaling
- Build reliable, high-performance distributed systems
- Integrate and optimize LLMs (Claude, GPT, etc.) for real-world use cases
- Lead backend and AI initiatives with strong technical ownership
- Ensure performance, scalability, observability, and cost efficiency
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to build AI-native solutions
What We’re Looking For
- Proven experience in architecting and scaling backend systems end-to-end
- Strong expertise in Python (Django / Flask / FastAPI)
- Deep understanding of distributed systems and system design
- Hands-on experience with AWS or GCP in production environments
- Solid experience working with LLMs (Claude, GPT, etc.)
- Strong knowledge of:
- Retrieval-Augmented Generation (RAG)
- Vector databases (Pinecone, FAISS, Weaviate, etc.)
- Experience in building and managing microservices architectures
- Ability to lead teams, mentor engineers, and drive technical excellence
- Strong problem-solving skills with an ownership mindset
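The RAG knowledge asked for above centers on one core step: packing retrieved passages into the prompt without blowing the model's context window. A minimal, assumption-laden sketch (character budget instead of tokens, hypothetical passage list):

```python
def build_rag_prompt(question: str, retrieved: list[str],
                     budget_chars: int = 500) -> str:
    """Pack retrieved passages into the prompt until a context budget is
    hit. Real systems budget in tokens and rank passages by relevance."""
    context: list[str] = []
    used = 0
    for passage in retrieved:
        if used + len(passage) > budget_chars:
            break  # stop before exceeding the context budget
        context.append(passage)
        used += len(passage)
    joined = "\n---\n".join(context)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{joined}\n\nQuestion: {question}")
```

The retrieval step itself would come from a vector database query; this sketch only shows the prompt-assembly half of the pipeline.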
Nice to Have
- Experience building AI agents or autonomous systems
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- Understanding of MLOps and AI system lifecycle
- Experience optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable in fast-moving, ambiguous environments
- You stay updated with the latest advancements in AI and backend technologies
This role is ideal for someone who wants to lead, build, and scale AI-powered backend systems in production while driving real-world impact.
Job Summary
We are seeking a highly motivated Technical Product Manager with strong exposure to software development to drive product strategy, roadmap execution, and cross-functional collaboration. The ideal candidate will bridge the gap between business stakeholders and engineering teams, ensuring delivery of scalable, high-quality software products.
Key Responsibilities
- Drive product strategy and roadmap execution, drawing on prior exposure to software development.
- Collaborate closely with engineering, design, QA, and business teams to deliver end-to-end product solutions.
- Translate business requirements into technical specifications and user stories.
- Work with development teams to ensure timely delivery of features and releases.
- Prioritize the product backlog based on business value, customer needs, and technical feasibility.
- Participate in Agile/Scrum ceremonies such as sprint planning, stand-ups, and retrospectives.
- Analyze product performance and user feedback to drive continuous improvement.
- Ensure product scalability, performance, and technical feasibility.
- Coordinate with stakeholders on product launches and go-to-market strategies.
- Maintain product documentation, including PRDs, technical documents, and release notes.
Required Skills
- Strong understanding of the software development lifecycle (SDLC).
- Hands-on experience with or exposure to programming languages (Python preferred).
- Experience working with Agile/Scrum methodologies.
- Strong knowledge of APIs, microservices, and system design concepts.
- Ability to work closely with engineering teams and understand technical challenges.
- Excellent analytical, problem-solving, and communication skills.
- Experience with product management tools (JIRA, Confluence, etc.).
Job Title: Software Developer
Location: Remote
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
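One recurring idea behind "cryptographic algorithm implementations and performance tuning" is that correctness includes timing behavior: a naive equality check exits at the first differing byte and leaks information an attacker can measure. The role itself is C/C++, but the concept is easy to illustrate in Python with the standard library's constant-time comparison:

```python
import hmac

def verify_tag_naive(expected: bytes, received: bytes) -> bool:
    # Timing leak: == short-circuits at the first mismatching byte,
    # so response time reveals how many leading bytes were correct.
    return expected == received

def verify_tag(expected: bytes, received: bytes) -> bool:
    """Compare MAC tags in constant time, independent of where they differ."""
    return hmac.compare_digest(expected, received)
```

In C/C++ the same property is achieved by OR-accumulating byte differences over the full length rather than returning early.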
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Blue Owls Solutions is looking for a mid-level Azure Data Engineer with approximately 4 years of hands-on experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and architectures that power business-critical analytics and reporting. You'll work closely with cross-functional teams to transform raw data into reliable, high-quality datasets that drive decision-making across the organization.
Required Skills
- 4+ years of professional experience as a Data Engineer or in a similar data-focused role
- Strong proficiency in SQL for data manipulation, querying, and performance optimization
- Hands-on experience with PySpark for large-scale data processing and transformation
- Solid working knowledge of the Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.)
- Experience with Microsoft Fabric for end-to-end data analytics workflows
- Ability to design and implement robust data architectures including data warehouses, lakehouses, and ETL/ELT frameworks
- Strong coding and scripting skills with Python
- Proven problem-solving ability with a knack for debugging complex data issues and optimizing pipeline performance
- Understanding of data modeling concepts, dimensional modeling, and data governance best practices
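A typical transform in the pipelines described above casts types, drops malformed records, and deduplicates on a business key. In PySpark this would be DataFrame operations; here is the same logic as a pure-Python sketch (hypothetical `order_id`/`amount`/`order_date` schema) so the shape of the work is visible:

```python
from datetime import date

def clean_rows(raw: list[dict]) -> list[dict]:
    """Cast types, drop malformed rows, and deduplicate on order_id,
    keeping the latest record per key."""
    latest: dict[str, dict] = {}
    for row in raw:
        try:
            rec = {
                "order_id": str(row["order_id"]),
                "amount": float(row["amount"]),
                "order_date": date.fromisoformat(row["order_date"]),
            }
        except (KeyError, ValueError, TypeError):
            continue  # a real pipeline would quarantine these rows
        key = rec["order_id"]
        if key not in latest or rec["order_date"] >= latest[key]["order_date"]:
            latest[key] = rec
    return list(latest.values())
```

In Azure Data Factory or Fabric, this kind of step usually sits between the raw (bronze) and cleaned (silver) layers of a lakehouse.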
Interview Process
- Take-Home Assessment
- 60-Minute Technical Interview
- Culture Fit Round
Preferred Skills & Certifications
- Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)
- Microsoft Certified: Fabric Data Engineer Associate (DP-700)
- Experience with CI/CD practices for data pipelines
- Familiarity with version control systems such as Git
- Exposure to real-time streaming data solutions
- Experience working in Agile or Scrum environments
- Strong communication skills with the ability to translate technical concepts for non-technical stakeholders
What We Offer
- Competitive salary and performance-based bonuses
- Flexible hybrid options
- Opportunities for professional development, training, and certification sponsorship
- A collaborative, innovation-driven team culture
- Paid time off and company holidays
Description
Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
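The "robust logging, alerting, and performance tuning" responsibility often starts with emitting one structured record per request. A minimal sketch, using a decorator around a hypothetical handler; real services would attach request IDs and ship these JSON lines to a log aggregator that alerting rules consume:

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

def instrumented(fn):
    """Log each call's duration and outcome as a single JSON line."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            log.info(json.dumps({
                "endpoint": fn.__name__,
                "status": status,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            }))
    return wrapper

@instrumented
def get_user(user_id: int) -> dict:
    # Hypothetical handler standing in for a real API endpoint.
    return {"id": user_id, "name": "demo"}
```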
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
JOB DESCRIPTION: C++ Developer
Experience: 4–7 Years
Location: Pune
No. of Positions: 1
We are seeking an experienced C++ Developer with 4–7 years of experience to work on financial systems. The role involves mission-critical applications such as trading platforms, market data systems, risk engines, or payment processing systems, where performance, stability, and correctness are paramount.
1. General Requirements
• 4–7 years of professional C++ experience in performance-critical systems
• Expert knowledge of modern C++ (C++11/14/17)
• Strong understanding of data structures, algorithms, and memory models
• Deep experience with multithreading, atomics, lock-free programming, and CPU cache behavior
• Excellent knowledge of Linux internals and system-level programming
• Experience with low-level debugging and profiling (gdb, perf, valgrind, flamegraphs)
• Proficiency with CMake/Make and Git
2. Trading Systems Experience (Highly Preferred)
• Hands-on experience with order management systems (OMS) and execution engines
• Knowledge of exchange protocols: FIX, ITCH, OUCH, FAST
• Experience handling market data feeds (L1/L2, multicast, UDP)
• Understanding of latency measurement, clock synchronization, and time stamping
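The latency-measurement point above usually means reporting tail percentiles (p50/p99), since averages hide the spikes that matter in trading. A simple nearest-rank percentile sketch in Python; production trading systems would instead use HDR histograms and hardware-assisted timestamping in C++:

```python
def percentile(samples_ns: list[int], p: float) -> int:
    """Nearest-rank percentile over latency samples (nanoseconds)."""
    ranked = sorted(samples_ns)
    # Nearest-rank: ceil-style index into the sorted samples, clamped.
    idx = max(0, min(len(ranked) - 1, round(p / 100 * len(ranked)) - 1))
    return ranked[idx]
```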
Backend Engineer III – Senior Python Developer (LLM & AI)
Location: Gurgaon, India (Hybrid)
Positions: 1
Experience: 6 to 9 Years
About the Role
We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.
Key Responsibilities
- Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
- Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
- Write clean, modular, efficient, and testable code following industry best practices and coding standards
- Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
- Debug, profile, and optimize applications to improve performance, reliability, and scalability
- Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
- Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
- Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
- Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
- Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes
Required Skills & Experience
- 6 to 9 years of strong hands-on experience in Python development
- Solid understanding of Python software design, architecture patterns, and testing best practices
- Proven experience working on AI, Machine Learning, or LLM-based projects
- Strong experience in building and consuming RESTful APIs and microservices architectures
- Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
- Strong debugging, performance profiling, and optimization skills
- Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
- Working knowledge of Docker and Kubernetes is a strong plus
- Excellent analytical, problem-solving, and communication skills
- Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members
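One of the simplest latency and cost optimizations when serving LLM components, of the kind this role describes, is caching identical requests in front of the model call. A minimal sketch with a hypothetical `fake_model` standing in for the real inference call (e.g., an HTTP request to a model endpoint):

```python
from functools import lru_cache

def fake_model(prompt: str) -> str:
    # Placeholder for the real inference call.
    return prompt.upper()

@lru_cache(maxsize=1024)
def cached_predict(prompt: str) -> str:
    """Serve repeated identical prompts from cache instead of
    re-invoking the model, saving latency and per-call cost."""
    return fake_model(prompt)
```

In a real service the cache would typically be shared (e.g., Redis) and keyed on a normalized prompt plus model parameters, since `lru_cache` is per-process only.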
Education & Certifications
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
- AWS or other relevant cloud certifications are preferred but not mandatory
Why Join Us?
- Work on cutting-edge AI and LLM platforms
- Collaborate with top-tier engineering and data science teams
- Opportunity to influence system architecture and technical direction
- Competitive compensation and career growth opportunities