50+ Python Jobs in India

About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform.
Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are looking for a Software Developer Intern (Zoho Ecosystem) to join our HR Tech and Automation team at MyOperator’s Noida office. This role is ideal for passionate coders who are eager to explore the Zoho platform and learn how to automate business workflows, integrate APIs, and build internal tools that enhance organizational efficiency.
You will work directly with our Zoho Developer and Engineering Operations team, gaining hands-on experience in Deluge scripting, API integrations, and system automation within one of the fastest-growing SaaS environments.
Key Responsibilities
- Develop and test API integrations between Zoho applications and third-party platforms (see the sketch after this list).
- Learn and apply Deluge scripting (Zoho’s proprietary language) to automate workflows.
- Assist in creating custom functions, dashboards, and workflow logic within Zoho apps.
- Debug and document automation setups to ensure smooth internal operations.
- Collaborate with HR Tech and cross-functional teams to bring automation ideas to life.
- Support ongoing enhancement and optimization of existing Zoho systems.
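To make the API-integration bullet concrete, here is a minimal, hedged Python sketch of pulling records from a Zoho module over REST. The base URL, module name, and token variable are assumptions for illustration, not a documented MyOperator setup (inside Zoho itself, Deluge custom functions often play this role):

```python
# Hedged sketch: fetch records from a Zoho CRM module via its REST API.
# The endpoint, module, and token variable are illustrative assumptions.
import os
import requests

ZOHO_API_BASE = "https://www.zohoapis.com/crm/v2"  # assumed data-center base URL

def fetch_records(module: str) -> list[dict]:
    """Fetch records from a Zoho module using an OAuth access token."""
    token = os.environ["ZOHO_ACCESS_TOKEN"]  # obtained via Zoho's OAuth flow
    resp = requests.get(
        f"{ZOHO_API_BASE}/{module}",
        headers={"Authorization": f"Zoho-oauthtoken {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    for record in fetch_records("Leads")[:5]:
        print(record.get("id"))
```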
Required Skills & Qualifications
- Strong understanding of at least one programming language (JavaScript or Python).
- Basic knowledge of APIs, JSON, and REST.
- Logical and analytical problem-solving mindset.
- Eagerness to explore Zoho applications (People, Recruit, Creator, CRM, etc.).
- Excellent communication and documentation skills.
Good to Have (Optional)
- Exposure to HTML, CSS, or SQL.
- Experience with workflow automation or no-code platforms.
- Familiarity with SaaS ecosystems or business process automation tools.
Internship Details
- Location: 91Springboard, Plot No. D-107, Sector 2, Noida, Uttar Pradesh – 201301
- Duration: 6 Months (Full-time, Office-based)
- Working Days: Monday to Friday
- Conversion: Strong possibility of a Full-Time Offer based on performance
Why Join Us
At MyOperator, you’ll gain hands-on experience with one of the largest SaaS ecosystems, working on real-world automations, API integrations, and workflow engineering. You’ll learn directly from experienced developers, gain exposure to internal business systems, and contribute to automating operations for a fast-scaling AI-led company.
This internship provides a strong foundation to grow into roles such as Zoho Developer, Automation Engineer, or Internal Tools Engineer, along with an opportunity for full-time employment upon successful completion.

We are seeking a talented Full Stack Developer to design, build, and maintain scalable web and mobile applications. The ideal candidate should have hands-on experience in frontend (React.js, Flutter), backend (Node.js, Express), databases (PostgreSQL, MongoDB), and Python for AI/ML integration. You will work closely with the engineering team to deliver secure, high-performance, and user-friendly products.
Key Responsibilities
- Develop responsive and dynamic web applications using React.js and modern UI frameworks.
- Build and optimize REST APIs and backend services with Node.js and Express.js.
- Design and manage PostgreSQL and MongoDB databases, ensuring optimized queries and data modeling.
- Implement state management using Redux/Context API.
- Ensure API security with JWT, OAuth2, Helmet.js, and rate-limiting.
- Integrate Google Cloud services (GCP) for hosting, storage, and serverless functions.
- Deploy and maintain applications using CI/CD pipelines, Docker, and Kubernetes.
- Use Redis for caching, sessions, and job queues.
- Optimize frontend performance (lazy loading, code splitting, caching strategies).
- Collaborate with design, QA, and product teams to deliver high-quality features.
- Maintain clear documentation and follow coding standards.

We are looking for a highly skilled Senior Full Stack Developer / Tech Lead to drive end-to-end development of scalable, secure, and high-performance applications. The ideal candidate will have strong expertise in React.js, Node.js, PostgreSQL, MongoDB, Python, AI/ML, and Google Cloud platforms (GCP). You will play a key role in architecture design, mentoring developers, ensuring best coding practices, and integrating AI/ML solutions into our products.
This role requires a balance of hands-on coding, system design, cloud deployment, and leadership.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using React.js, Node.js, PostgreSQL, and MongoDB.
- Build, consume, and optimize REST APIs and GraphQL services.
- Develop AI/ML models with Python and integrate them into production systems.
- Implement CI/CD pipelines, containerization (Docker, Kubernetes), and cloud deployments (GCP/AWS).
- Manage security, authentication (JWT, OAuth2), and performance optimization.
- Use Redis for caching, session management, and queue handling (see the sketch after this list).
- Lead and mentor junior developers, conduct code reviews, and enforce coding standards.
- Collaborate with cross-functional teams (product, design, QA) for feature delivery.
- Monitor and optimize system performance, scalability, and cost-efficiency.
- Own technical decisions and contribute to long-term architecture strategy.
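As a concrete illustration of the Redis bullet above, a minimal cache-aside sketch with redis-py; the key scheme, TTL, and `load_from_db` loader are assumptions:

```python
# Minimal cache-aside pattern with redis-py; assumes a local Redis instance.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_profile(user_id: int, load_from_db) -> dict:
    """Return a cached profile, falling back to the database on a miss."""
    key = f"user:{user_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit
    profile = load_from_db(user_id)         # e.g., a PostgreSQL query
    r.setex(key, 300, json.dumps(profile))  # cache for 5 minutes
    return profile
```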
About Verix
Verix is a platform for verification, engagement, and trust in the age of AI. Powered by blockchain and agentic AI, Verix enables global organizations—such as Netflix, Amdocs, The Stevie Awards, Room to Read, and UNICEF—to seamlessly design, issue, and manage digital credentials for learning, enterprise skilling, continuing education, compliance, membership, and events.
With dynamic credentials that reflect recipient growth over time, modern design templates, and attached rewards, Verix empowers enterprises to drive engagement while building trust and community.
Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), Verix is backed by Polygon Ventures, Micron Ventures, FalconX, and leading angels including Randi Zuckerberg and Harsh Jain.
What is OptimizeGEO?
OptimizeGEO is Verix’s flagship product that helps brands stay visible and discoverable in AI-powered answers.
Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes AEO/GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).
Role Overview
We are hiring a Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting.
This role partners closely with our SEO/AEO domain experts and data teams to translate frameworks—gap analysis, share-of-voice, entity/knowledge-graph coverage, trust signals—into scalable backend systems and APIs.
You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.
Key Responsibilities
- Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
- Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms.
- Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
- Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling (see the sketch after this list).
- Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
- Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
- Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
- Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
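To ground the pipelines bullet above, a small, hedged sketch of retries with exponential backoff plus an idempotency key; the in-memory dedupe set stands in for a durable store (Redis or a database), and `fetch_page` is a placeholder:

```python
# Sketch of retry-with-backoff and idempotency-key patterns for ingestion.
# `fetch_page` and the dedupe store are illustrative assumptions.
import hashlib
import random
import time

_seen: set[str] = set()  # stand-in for a durable dedupe store

def idempotency_key(source: str, payload: bytes) -> str:
    return hashlib.sha256(source.encode() + payload).hexdigest()

def ingest_with_retry(source: str, fetch_page, max_attempts: int = 5) -> bytes | None:
    for attempt in range(max_attempts):
        try:
            payload = fetch_page(source)
        except (ConnectionError, TimeoutError):
            # Exponential backoff with jitter to avoid thundering herds.
            time.sleep((2 ** attempt) + random.random())
            continue
        key = idempotency_key(source, payload)
        if key in _seen:       # already processed: safe to re-run the job
            return None
        _seen.add(key)
        return payload
    raise RuntimeError(f"giving up on {source} after {max_attempts} attempts")
```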
Minimum Qualifications
- 4–8 years of experience building backend systems in production (startups or high-growth product teams preferred).
- Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
- Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
- Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
- Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.
Preferred Qualifications (Nice to Have)
- Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
- Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
- Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
- Background in digital PR/reputation signals and local/international SEO considerations.
- Comfort working with analysts to co-define KPIs and build executive-level reporting.
What Success Looks Like (First 6 Months)
- Ship a reliable data ingestion and scoring service with clear SLAs and automated validation.
- Stand up share-of-voice and entity-coverage metrics that correlate with customer outcomes.
- Deliver exportable executive reports and dashboard APIs consumed by the product team.
- Establish observability baselines (dashboards & alerts) and a lightweight on-call rotation.
Tooling & Stack (Illustrative)
- Runtime: Python / TypeScript / Go
- Data: Postgres / BigQuery + object storage (S3 / GCS)
- Pipelines: Airflow / Prefect, Kafka / PubSub
- Infra: AWS / GCP, Docker, Kubernetes, Terraform
- Observability: OpenTelemetry, Prometheus / Grafana, ELK / Cloud Logging
- Collab: GitHub, Linear / Jira, Notion, Looker / Metabase
Working Model
- Hybrid-remote within India with periodic in-person collaboration (Bengaluru or mutually agreed hubs).
- Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.
Equal Opportunity
Verix is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Join us to reimagine how businesses integrate data and automate processes – with AI at the core.
About FloData
FloData is re-imagining the iPaaS and Business Process Automation (BPA) space for a new era - one where business teams, not just IT, can integrate data, run automations, and solve ops bottlenecks using intuitive, AI-driven interfaces. We're a small, hands-on team with a deep technical foundation and strong industry connections. Backed by real-world learnings from our earlier platform version, we're now going all-in on building a generative AI-first experience.
The Opportunity
We’re looking for a GenAI Engineer to help build the intelligence layer of our new platform. From designing LLM-powered orchestration flows with LangGraph to building frameworks for evaluation and monitoring with LangSmith, you’ll shape how AI powers real-world enterprise workflows.
If you thrive on working at the frontier of LLM systems engineering, enjoy scaling prototypes into production-grade systems, and want to make AI reliable, explainable, and enterprise-ready - this is your chance to define a category-defining product.
What You'll Do
- Spend ~70% of your time architecting, prototyping, and productionizing AI systems (LLM orchestration, agents, evaluation, observability)
- Develop AI frameworks: orchestration (LangGraph), evaluation/monitoring (LangSmith), vector/graph DBs, and other GenAI infra (see the sketch after this list)
- Work with product engineers to seamlessly integrate AI services into frontend and backend workflows
- Build systems for AI evaluation, monitoring, and reliability to ensure trustworthy performance at scale
- Translate product needs into AI-first solutions, balancing rapid prototyping with enterprise-grade robustness
- Stay ahead of the curve by exploring emerging GenAI frameworks, tools, and research for practical application
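For orientation, a minimal sketch of the kind of LangGraph orchestration named above, assuming the `langgraph` package's `StateGraph` API; the state shape and node bodies are illustrative stubs, not FloData's actual design:

```python
# Hedged sketch of a two-node LangGraph flow; node bodies are stubs.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class FlowState(TypedDict):
    question: str
    context: str
    answer: str

def retrieve(state: FlowState) -> dict:
    # Stub: look up relevant documents (a vector/graph DB in practice).
    return {"context": f"docs relevant to: {state['question']}"}

def generate(state: FlowState) -> dict:
    # Stub: call an LLM with the retrieved context.
    return {"answer": f"answer based on [{state['context']}]"}

graph = StateGraph(FlowState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)

app = graph.compile()
print(app.invoke({"question": "Which invoices failed to sync?"}))
```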
Must Have
- 3–5 years of engineering experience, with at least 1-2 years in GenAI systems
- Hands-on experience with LangGraph, LangSmith, LangChain, or similar frameworks for orchestration/evaluation
- Deep understanding of LLM workflows: prompt engineering, fine-tuning, RAG, evaluation, monitoring, and observability
- A strong product mindset—comfortable bridging research-level concepts with production-ready business use cases
- Startup mindset: resourceful, pragmatic, and outcome-driven
Good To Have
- Experience integrating AI pipelines with enterprise applications and hybrid infra setups (AWS, on-prem, VPCs)
- Experience building AI-native user experiences (assistants, copilots, intelligent automation flows)
- Familiarity with enterprise SaaS/IT ecosystems (Salesforce, Oracle ERP, Netsuite, etc.)
Why Join Us
- Own the AI backbone of a generational product at the intersection of AI, automation, and enterprise data
- Work closely with founders and leadership with no layers of bureaucracy
- End-to-end ownership of AI systems you design and ship
- Be a thought partner in setting AI-first principles for both tech and culture
- Onsite in Hyderabad, with flexibility when needed
Sounds like you?
We'd love to talk. Apply now or reach out directly to explore this opportunity.

MANDATORY:
- Top-quality Data Architect / Data Engineering Manager / Director profile
- Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
- Must have 7+ YOE in hands-on tech development with Java (highly preferred), Python, Node.js, or GoLang
- Must have strong experience with large-scale data technologies and tools like HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
- Strong expertise in HLD and LLD to design scalable, maintainable data architectures.
- Must have managed a team of at least 5 Data Engineers (the leadership role should be evident in the CV)
- Must come from product companies (high-scale, data-heavy companies preferred)
PREFERRED:
- Tier-1 college background preferred, ideally IIT
- Candidates should have spent a minimum of 3 years in each company.
- Must have 4+ recent YOE with high-growth product startups, and should have implemented data engineering systems from an early stage in the company
ROLES & RESPONSIBILITIES:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency (see the Airflow sketch after this list).
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.
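As a reference point for the workflow-optimization bullet above, a skeletal Airflow DAG; the task bodies, IDs, and schedule are illustrative assumptions:

```python
# Skeletal Airflow DAG for a daily ETL workflow; names are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull raw events from Kafka / HDFS")

def transform(**context):
    print("run the Spark job and validate output")

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```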
IDEAL CANDIDATE:
- 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- Expertise in backend development with languages such as Java, PHP, Python, Node.js, or GoLang, along with JavaScript, HTML, and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
- Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.
PREFERRED QUALIFICATIONS:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.

Job Title: Full Stack Engineer (Real-Time Audio Systems) – Voice AI
Location: Remote
Experience: 4+ Years
Employment Type: Full-time
About the Role
We’re looking for an experienced engineer to lead the development of a real-time Voice AI platform. This role blends deep expertise in conversational AI, audio infrastructure, and full-stack systems, making you a key contributor to building natural, low-latency voice-driven agents for complex healthcare workflows and beyond.
You’ll work directly with the founding team to build and deploy production-grade voice AI systems.
If you love working with WebRTC, WebSockets, and streaming pipelines, this is the place to build something impactful.
Key Responsibilities
- Build and optimize voice-driven AI systems integrating ASR (speech recognition), TTS (speech synthesis), and LLM inference with WebRTC and WebSocket infrastructure (see the sketch after this list).
- Orchestrate multi-turn conversations using frameworks like Pipecat with memory and context management.
- Develop scalable backends and APIs to support streaming audio pipelines, stateful agents, and secure healthcare workflows.
- Implement real-time communication features with low-latency audio streaming pipelines.
- Collaborate closely with research, engineering, and product teams to ship experiments and deploy into production rapidly.
- Monitor, optimize, and maintain deployed voice agents for high reliability, safety, and performance.
- Translate experimental AI audio models into production-ready services.
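A minimal, hedged sketch of the streaming-audio backend work above, using FastAPI's WebSocket support; the transcriber class is a stand-in for a real streaming ASR client, and the frame sizes are assumptions:

```python
# WebSocket audio-ingress endpoint sketch with FastAPI.
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

class FakeTranscriber:
    """Placeholder for a real streaming ASR client."""
    async def feed(self, chunk: bytes) -> str | None:
        await asyncio.sleep(0)  # pretend to do streaming ASR work
        return None if len(chunk) < 3200 else "partial transcript"

@app.websocket("/ws/audio")
async def audio_stream(ws: WebSocket):
    await ws.accept()
    asr = FakeTranscriber()
    try:
        while True:
            chunk = await ws.receive_bytes()  # e.g., 20 ms PCM frames
            if text := await asr.feed(chunk):
                await ws.send_json({"transcript": text})
    except WebSocketDisconnect:
        pass  # client hung up; release ASR resources here
```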
Requirements
- 4+ years of software engineering experience with a focus on real-time systems, streaming, or conversational AI.
- Proven experience building and deploying voice AI, audio/video, or low-latency communication systems.
- Strong proficiency in Python (FastAPI, async frameworks, LangChain or similar).
- Working knowledge of modern front-end frameworks like Next.js (preferred).
- Hands-on experience with WebRTC, WebSockets, Redis, Kafka, Docker, and AWS.
- Exposure to healthcare tech, RCM, or regulated environments (highly valued).
Bonus Points
- Contributions to open-source audio/media projects.
- Experience with DSP, live streaming, or media infrastructure.
- Familiarity with observability tools (e.g., Grafana, Prometheus).
- Passion for reading research papers and discussing the future of voice communication.


Key Responsibilities
- Design, develop, and maintain scalable microservices and RESTful APIs using Python (Flask, FastAPI, or Django); see the sketch after this list.
- Architect data models for SQL and NoSQL databases (PostgreSQL, ClickHouse, MongoDB, DynamoDB) to optimize performance and reliability.
- Implement efficient and secure data access layers, caching, and indexing strategies.
- Collaborate closely with product and frontend teams to deliver seamless user experiences.
- Build responsive UI components using HTML, CSS, JavaScript, and frameworks like React or Angular.
- Ensure system reliability, observability, and fault tolerance across services.
- Lead code reviews, mentor junior engineers, and promote engineering best practices.
- Contribute to DevOps and CI/CD workflows for smooth deployments and testing automation.
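For illustration of the first responsibility above, a compact FastAPI microservice endpoint; the model and the in-memory store (standing in for a real PostgreSQL/MongoDB data access layer) are assumptions:

```python
# Minimal FastAPI microservice sketch with a Pydantic model.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    id: int
    sku: str
    quantity: int

_DB: dict[int, Order] = {}  # stand-in for a real data access layer

@app.post("/orders", status_code=201)
async def create_order(order: Order) -> Order:
    if order.id in _DB:
        raise HTTPException(status_code=409, detail="order already exists")
    _DB[order.id] = order
    return order

@app.get("/orders/{order_id}")
async def read_order(order_id: int) -> Order:
    if order_id not in _DB:
        raise HTTPException(status_code=404, detail="order not found")
    return _DB[order_id]
```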
Required Skills & Experience
- 10+ years of professional software development experience.
- Strong proficiency in Python, with deep understanding of OOP, asynchronous programming, and performance optimization.
- Proven expertise in building FastAPI-based microservices architectures.
- Solid understanding of SQL and NoSQL data modeling, query optimization, and schema design.
- Excellent hands-on frontend proficiency with HTML, CSS, JavaScript, and a modern framework (React, Angular, or Vue).
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes).
- Familiarity with distributed systems, event-driven architectures, and messaging queues (Kafka, RabbitMQ).
- Excellent problem-solving, communication, and system design skills.

Required Skills:
· 8+ years as a practitioner in data engineering or a related field.
· Proficiency in Python programming
· Experience with data processing frameworks like Apache Spark or Hadoop.
· Experience working on Databricks.
· Familiarity with cloud platforms (AWS, Azure) and their data services.
· Experience with data warehousing concepts and technologies.
· Experience with message queues and streaming platforms (e.g., Kafka).
· Excellent communication and collaboration skills.
· Ability to work independently and as part of a geographically distributed team.

Role Overview:
We’re looking for an exceptionally passionate, logical, and smart Backend Developer to join our core tech team. This role goes beyond writing code — you’ll help shape the architecture, lead the entire backend team if needed, and be deeply involved in designing scalable systems almost from scratch.
This is a high-impact opportunity to work directly with the founders and play a pivotal role in building the core product. If you’re looking to grow alongside a fast-growing startup, take complete ownership, and influence the direction of the technology and product, this role is made for you.
Why Clink?
Clink is a fast-growing product startup building innovative solutions in the food-tech space. We’re on a mission to revolutionize how restaurants connect with customers and manage offers seamlessly. Our platform is a customer-facing app that needs to scale rapidly as we grow. We also aim to leverage Generative AI to enhance user experiences and drive personalization at scale.
Key Responsibilities:
- Design, develop, and completely own high-performance backend systems.
- Architect scalable, secure, and efficient system designs.
- Own database schema design and optimization for performance and reliability.
- Collaborate closely with frontend engineers, product managers, and designers.
- Guide and mentor junior team members.
- Explore and experiment with Generative AI capabilities for product innovation.
- Participate in code reviews and ensure high engineering standards.
Must-Have Skills:
- 2–5 years of experience in backend development at a product-based company.
- Strong expertise in database design and system architecture.
- Hands-on experience building multiple production-grade applications.
- Solid programming fundamentals and logical problem-solving skills.
- Experience with Python or Ruby on Rails (one is mandatory).
- Experience integrating third-party APIs and services.
Good-to-Have Skills:
- Familiarity with Generative AI tools, APIs, or projects.
- Contributions to open-source projects or personal side projects.
- Exposure to frontend basics (React, Vue, or similar) is a plus.
- Exposure to containerization, cloud deployment, or CI/CD pipelines.
What We’re Looking For:
- Extremely high aptitude and ability to solve tough technical problems.
- Passion for building products from scratch and shipping fast.
- A hacker mindset — someone who builds cool stuff even in their spare time.
- Team player who can lead when required and work independently when needed.

Profile: Big Data Engineer (System Design)
Experience: 5+ years
Location: Bangalore
Work Mode: Hybrid
About the Role
We're looking for an experienced Big Data Engineer with system design expertise to architect and build scalable data pipelines and optimize big data solutions.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using Python, Hive, and Spark (see the sketch after this list)
- Architect scalable big data solutions with strong system design principles
- Build and optimize workflows using Apache Airflow
- Implement data modeling, integration, and warehousing solutions
- Collaborate with cross-functional teams to deliver data solutions
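A small, hedged PySpark sketch of the pipeline work described above; the paths and column names are assumptions:

```python
# Illustrative PySpark aggregation job; paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_aggregation").getOrCreate()

events = spark.read.parquet("s3://bucket/events/dt=2024-01-01/")

daily = (
    events
    .filter(F.col("status") == "ok")
    .groupBy("customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Overwrite the day's aggregate output for idempotent re-runs.
daily.write.mode("overwrite").parquet("s3://bucket/aggregates/daily/")
```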
Must-Have Skills
- 5+ years as a Data Engineer with Python, Hive, and Spark
- Strong hands-on experience with Java
- Advanced SQL and Hadoop experience
- Expertise in Apache Airflow
- Strong understanding of data modeling, integration, and warehousing
- Experience with relational databases (PostgreSQL, MySQL)
- System design knowledge
- Excellent problem-solving and communication skills
Good to Have
- Docker and containerization experience
- Knowledge of Apache Beam, Apache Flink, or similar frameworks
- Cloud platform experience.

A global technology-driven performance apparel retailer

Core Focus:
- Operate with a full DevOps mindset, owning the software lifecycle from development through production support.
- Participate in Agile ceremonies and global team collaboration, including on-call support.
Mandatory/Strong Technical Skills (6–8+ years of relevant experience required):
- Java: 4.5 to 6.5 years of experience
- AWS: strong knowledge of cloud technologies with a minimum of 2 years of working experience
- Kafka: 2+ years of strong knowledge and working experience with data integration technologies
- Databases: Experience with SQL/NoSQL databases (e.g., Postgres, MongoDB).
Other Key Technologies & Practices:
- Python, Spring Boot, and API-based system design.
- Containers/Orchestration (Kubernetes).
- CI/CD and observability tools (GitLab, Splunk, Datadog).
- Familiarity with Terraform and Airflow.
- Experience in Agile methodology (Jira, Confluence).

Supercharge Your Career as a Technical Lead - Python at Technoidentity!
Are you ready to solve the challenges that fuel business growth? At Technoidentity, we’re a Data + AI product engineering company that has been building cutting-edge solutions in the FinTech domain for over 13 years—and we’re expanding globally. It’s the perfect time to join our team of tech innovators and leave your mark!
Trusted to deliver scalable and modern enterprise solutions, we’re hiring a Senior Python Developer and Technical Lead. You'll guide high-performing engineering teams, design complex systems, and deliver clean, scalable backend solutions using Python and modern data technologies. Your leadership will directly shape the architecture and execution of enterprise projects, with added strength in understanding database logic, including PL/SQL and PostgreSQL/AlloyDB.
What’s in it for You?
• Modern Python Stack – Python 3.x, FastAPI, Pandas, NumPy, SQLAlchemy, PostgreSQL/AlloyDB, PL/pgSQL.
• Tech Leadership – Drive technical decision-making, mentor developers, and ensure code quality and scalability.
• Scalable Projects – Architect and optimize data-intensive backend services for high-throughput and distributed systems.
• Engineering Best Practices – Enforce clean architecture, code reviews, testing strategies, and SDLC alignment.
• Cross-Functional Collaboration – Lead conversations across engineering, QA, product, and DevOps to ensure delivery excellence.
What Will You Be Doing?
Technical Leadership
• Lead a team of developers through design, code reviews, and technical mentorship.
• Set architectural direction and ensure scalability, modularity, and code quality.
• Work with stakeholders to translate business goals into robust technical solutions.
Backend Development & Data Engineering
• Design and build clean, high-performance backend services using FastAPI and Python best practices.
• Handle row- and column-level data transformation using Pandas and NumPy.
• Apply data wrangling, cleansing, and preprocessing techniques across microservices and pipelines (see the sketch below).
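A brief sketch of that row- and column-level wrangling with Pandas and NumPy; the input columns are invented for illustration:

```python
# Column-level cleansing and row-level logic with Pandas/NumPy.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "account": ["A", "B", "C"],
    "balance": ["1,200.50", "n/a", "980"],
})

# Column-level cleansing: coerce messy strings to floats.
df["balance"] = pd.to_numeric(
    df["balance"].str.replace(",", "", regex=False), errors="coerce"
)

# Row-level logic: fill unparseable balances, then flag large accounts.
df["balance"] = df["balance"].fillna(0.0)
df["flagged"] = np.where(df["balance"] > 1_000, "review", "ok")
print(df)
```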
Database & Performance Optimization
• Write performant queries, procedures, and triggers using PostgreSQL and PL/pgSQL.
• Understand legacy logic in PL/SQL and participate in rewriting or modernizing it for PostgreSQL-based systems.
• Tune both backend and database performance, including memory, indexing, and query optimization.
Parallelism & Communication
• Implement multithreading, multiprocessing, and parallel data flows in Python.
• Integrate Kafka, RabbitMQ, or Pub/Sub systems for real-time and async message processing.
Engineering Excellence
• Drive adherence to Agile, Git-based workflows, CI/CD, and DevOps pipelines.
• Promote testing (unit/integration), monitoring, and observability for all backend systems.
• Stay current with Python ecosystem evolution and introduce tools that improve productivity and performance.
What Makes You the Perfect Fit?
• 6–10 years of proven experience in Python development, with strong expertise in designing and delivering scalable backend solutions

Key Responsibilities
- Develop and maintain Python-based applications.
- Design and optimize SQL queries and databases (see the sketch after this list).
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code.
- Troubleshoot and debug applications.
- Participate in code reviews and contribute to team knowledge sharing.
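To ground the SQL bullet above, a short sketch of parameterized, indexed queries from Python, using the standard library's sqlite3 so it runs anywhere; the table and column names are illustrative:

```python
# Safe, indexed SQL access from Python with the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users(email)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))

# Parameterized queries avoid SQL injection and keep queries plan-friendly.
row = conn.execute(
    "SELECT id, email FROM users WHERE email = ?", ("a@example.com",)
).fetchone()
print(row)
```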
Qualifications and Required Skills
- Strong proficiency in Python programming.
- Experience with SQL and database management.
- Experience with web frameworks such as Django or Flask.
- Knowledge of front-end technologies like HTML, CSS, and JavaScript.
- Familiarity with version control systems like Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Good to Have Skills
- Experience with cloud platforms like AWS or Azure.
- Knowledge of containerization technologies like Docker.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines

Senior Data Engineer
Experience: 5+ years
Chennai/ Trichy (Hybrid)
Type: Fulltime
Skills: GCP + Airflow + Bigquery + Python + Docker
The Role
As a Senior Data Engineer, you own new initiatives, design and build world-class platforms to measure and optimize ad performance. You ensure industry-leading scalability and reliability of mission-critical systems processing billions of real-time transactions a day. You apply state-of-the-art technologies, frameworks, and strategies to address complex challenges with Big Data processing and analytics. You work closely with the talented engineers across different time zones in building industry-first solutions to measure and optimize ad performance.
What you’ll do
● Write solid code with a focus on high performance for services supporting high throughput and low latency
● Architect, design, and build big data processing platforms that handle tens of TBs per day, serve thousands of clients, and support advanced analytic workloads
● Provide meaningful and relevant feedback to junior developers and stay up-to-date with system changes
● Explore the technological landscape for new ways of producing, processing, and analyzing data to gain insights into both our users and our product features
● Design, develop, and test data-driven products, features, and APIs that scale
● Continuously improve the quality of deliverables and SDLC processes
● Operate production environments, investigate issues, assess their impact, and develop feasible solutions.
● Understand business needs and work with product owners to establish priorities
● Bridge the gap between Business / Product requirements and technical details
● Work in multi-functional agile teams with end-to-end responsibility for product development and delivery
Who you are
● 3-5+ years of programming experience in coding, object-oriented design, and/or functional programming, including Python or a related language
● Love what you do, are passionate about crafting clean code, and have a solid foundation
● Deep understanding of distributed system technologies, standards, and protocols, with 2+ years of experience working in distributed systems like Airflow, BigQuery, Spark, and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis), and building data pipelines at scale
● Excellent SQL and dbt query-writing abilities, and strong data understanding
● Care about agile software processes, data-driven development, reliability, and responsible experimentation
● Genuine desire to automate decision-making, processes, and workflows
● Experience working with orchestration tools like Airflow
● Good understanding of semantic layers and experience with tools like LookML and Kube
● Excellent communication skills and a team player
● Experience with Google BigQuery or Snowflake (see the sketch after this list)
● Experience with cloud environments, particularly Google Cloud Platform
● Container technologies - Docker / Kubernetes
● Knowledge of ad-serving technologies and standards
● Familiarity with AI tools like Cursor AI and Copilot
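As a concrete example of the BigQuery line above, a hedged sketch using the google-cloud-bigquery client; the project, dataset, table, and query are assumptions, and credentials are resolved from the environment:

```python
# Parameterized BigQuery query from Python; names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
    SELECT advertiser_id, COUNT(*) AS impressions
    FROM `project.dataset.ad_events`
    WHERE event_date = @day
    GROUP BY advertiser_id
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")
        ]
    ),
)
for row in job.result():
    print(row.advertiser_id, row.impressions)
```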

Job Title : Full Stack Engineer (Real-Time Audio Systems) – Voice AI
Experience : 4+ Years
Location : Gurgaon (Hybrid)
About the Role :
We’re looking for a Voice AI / Full Stack Engineer to build our real-time Voice AI platform for low-latency, intelligent voice-driven agents in healthcare and beyond.
You’ll work closely with the founding team, combining audio infrastructure, AI, and full stack development to deliver natural, production-grade voice experiences.
Hands-on experience with WebRTC, WebSocket, and streaming services is required.
Experience with TTS (Text-to-Speech) and STT (Speech-to-Text) modules is a strong plus.
Mandatory Skills :
Python (FastAPI, Async frameworks, LangChain), WebRTC, WebSockets, Redis, Kafka, Docker, AWS, real-time streaming systems, TTS/STT modules.
Responsibilities :
- Build and optimize voice-driven AI systems integrating ASR, TTS, and LLM inference with WebRTC & WebSocket.
- Develop scalable backend APIs and streaming pipelines for real-time communication.
- Translate AI audio models into reliable, production-ready services.
- Collaborate across teams for rapid prototyping and deployment.
- Monitor and improve system performance, latency, and reliability.
Requirements :
- 4+ years of experience in real-time systems, streaming, or conversational AI.
- Strong in Python (FastAPI, Async frameworks, LangChain).
- Hands-on with WebRTC, WebSockets, Redis, Kafka, Docker, AWS.
- Familiarity with Next.js or similar frontend frameworks is a plus.
- Experience in healthcare tech or regulated domains preferred.
Bonus Skills :
- Contributions to open-source audio/media projects.
- Background in DSP, live streaming, or media infrastructure.
- Familiarity with Grafana, Prometheus, or other observability tools.
Why Join Us :
Be part of a team working at the intersection of AI research and product engineering, shaping next-gen voice intelligence for real-world applications.
Own your systems end-to-end, innovate fast, and make a direct impact in healthcare AI.
Interview Process :
- Screening & Technical Task
- Technical Discussion
- HR/Leadership Round

Big companies are like giant boats with a thousand rowers — you can’t feel your pull move the boat. Shoppin isn’t that boat. We’re a 10-person crew rowing like our lives depend on it — each one the best at what they do, each stroke moving the product forward every single day. If you believe small, fast, obsessive teams can beat giants, read on.
🔍 What You’ll Do
- Build and optimize Shoppin’s vibe, image, and inspiration search, powering both text and image-based discovery.
- Work on vector embeddings, retrieval pipelines, and semantic search using ElasticSearch, Redis caching, and LLM APIs.
- Design and ship high-performance Python microservices that move fast and scale beautifully.
- Experiment with prompt engineering, ranking models, and multimodal retrieval.
- Collaborate directly with the founder — moving from idea → prototype → production in hours, not weeks.
⚙️ Tech You’ll Work With
- Languages & Frameworks: Python, FastAPI
- Search & Infra: ElasticSearch, Redis, PostgreSQL
- AI Stack: Vector Databases, Embeddings, LLM APIs (OpenAI, Gemini, etc.)
- Dev Tools: Cursor, Docker, Kubernetes
- Infra: AWS / GCP
📈 What We’re Looking For
- Strong mathematical intuition — you understand cosine similarity, normalization, and ranking functions (see the sketch after this list).
- Experience or deep curiosity in text + image search.
- Comfort with Python, data structures, and system design.
- Speed-obsessed — you optimize for velocity, not bureaucracy.
- Hungry to go all-in, ship hard things, and make a dent.
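The mathematical-intuition bullet above reduces to a few lines of NumPy: after L2 normalization, cosine similarity is a dot product, and ranking is an argsort. The embedding dimensions here are illustrative:

```python
# Cosine-similarity ranking over normalized embeddings.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

query = normalize(np.random.rand(384))            # e.g., a text embedding
catalog = normalize(np.random.rand(10_000, 384))  # product image embeddings

scores = catalog @ query          # cosine similarity for every item
top_k = np.argsort(-scores)[:5]   # rank: highest similarity first
print(top_k, scores[top_k])
```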
🧩 Bonus Points
- Experience with LLM prompting or orchestration.
- Exposure to recommendation systems, fashion/culture AI, or multimodal embeddings.
- You’ve built or scaled something end-to-end yourself.

🚀 We’re Hiring: AI Engineer (2+ Yrs, Generative AI/LLMs)
🌍 Remote | ⚡ Immediate Joiner (within 15 days)
Work on cutting-edge GenAI apps, LLM fine-tuning & integrations (OpenAI, Gemini, Anthropic).
Exciting projects across industries in a service-company environment.
🔹 Skills: Python | LLMs | Fine-tuning | Vector DBs | GenAI Apps
🔸Apply: https://lnkd.in/dVQwSMBD
✨ Let’s build the future of AI together!
#AIJobs #Hiring #GenAI #LLM #Python #RemoteJobs


Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); see the sketch after this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
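An illustrative boto3 snippet for the AWS responsibilities above; the bucket and function names are assumptions:

```python
# Basic S3 upload and synchronous Lambda invocation with boto3.
import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Upload an artifact to S3 (bucket/key are illustrative).
s3.upload_file("report.csv", "my-data-bucket", "reports/report.csv")

# Invoke a Lambda function synchronously with a JSON payload.
resp = lam.invoke(
    FunctionName="nightly-etl",
    Payload=json.dumps({"date": "2024-01-01"}).encode(),
)
print(json.load(resp["Payload"]))
```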
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.


We are looking for a Full Stack Developer with strong experience in TypeScript-based frontend frameworks (Svelte/React/Angular) and at least two backend stacks (FastAPI, Python, PHP, Java). You’ll work across the full development cycle, from designing architecture to deploying scalable applications.
Responsibilities:
- Collaborate with product managers and engineers to design and build scalable solutions
- Build robust, responsive front-end applications in TypeScript
- Develop well-structured back-end services and APIs
- Manage databases and integrations for performance and security
- Troubleshoot, debug, and optimize applications
- Ensure mobile responsiveness and data protection standards
- Document code and processes clearly
Technical Skills:
- Proficiency with TypeScript and modern frontend frameworks (Svelte, React, Angular)
- Hands-on experience with any 2 backend stacks (FastAPI, Python, PHP, Java)
- Familiarity with databases (PostgreSQL, MySQL, MongoDB) and web servers (Apache)
- Experience developing APIs and integrating with third-party services
Experience & Education:
- B.Tech/BE in Computer Science or related field
- Minimum 2 years of experience as a Full Stack Developer
Soft Skills:
- Strong problem-solving and analytical skills
- Clear communication and teamwork abilities
- Attention to detail and an ownership mindset


Position: Senior Data Engineer
Overview:
We are seeking an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and infrastructure to support cross-functional teams and next-generation data initiatives. The ideal candidate is a hands-on data expert with strong technical proficiency in Big Data technologies and a passion for developing efficient, reliable, and future-ready data systems.
Reporting: Reports to the CEO or designated Lead as assigned by management.
Employment Type: Full-time, Permanent
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Key Responsibilities:
- Design and develop scalable data pipeline architectures for data extraction, transformation, and loading (ETL) using modern Big Data frameworks.
- Identify and implement process improvements such as automation, optimization, and infrastructure re-design for scalability and performance.
- Collaborate closely with Engineering, Product, Data Science, and Design teams to resolve data-related challenges and meet infrastructure needs.
- Partner with machine learning and analytics experts to enhance system accuracy, functionality, and innovation.
- Maintain and extend robust data workflows and ensure consistent delivery across multiple products and systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 10+ years of hands-on experience in Data Engineering.
- 5+ years of recent experience with Apache Spark, with a strong grasp of distributed systems and Big Data fundamentals.
- Proficiency in Scala, Python, Java, or similar languages, with the ability to work across multiple programming environments.
- Strong SQL expertise and experience working with relational databases such as PostgreSQL or MySQL.
- Proven experience with Databricks and cloud-based data ecosystems.
- Familiarity with diverse data formats such as Delta Tables, Parquet, CSV, and JSON.
- Skilled in Linux environments and shell scripting for automation and system tasks.
- Experience working within Agile teams.
- Knowledge of Machine Learning concepts is an added advantage.
- Demonstrated ability to work independently and deliver efficient, stable, and reliable software solutions.
- Excellent communication and collaboration skills in English.
About the Organization:
We are a leading B2B data and intelligence platform specializing in high-accuracy contact and company data to empower revenue teams. Our technology combines human verification and automation to ensure exceptional data quality and scalability, helping businesses make informed, data-driven decisions.
What We Offer:
Our workplace embraces diversity, inclusion, and continuous learning. With a fast-paced and evolving environment, we provide opportunities for growth through competitive benefits including:
- Paid Holidays and Leaves
- Performance Bonuses and Incentives
- Comprehensive Medical Policy
- Company-Sponsored Training Programs
We are an Equal Opportunity Employer, committed to maintaining a workplace free from discrimination and harassment. All employment decisions are made based on merit, competence, and business needs.

Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-4 years of relevant experience as a Zoho Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively


Focus Areas:
- Build applications and solutions that process and analyze large-scale data.
- Develop data-driven applications and analytical tools.
- Implement business logic, algorithms, and backend services.
- Design and build APIs for secure and efficient data exchange.
Key Responsibilities:
- Develop and maintain data processing applications using Apache Spark and Hadoop.
- Write MapReduce jobs and complex data transformation logic.
- Implement machine learning models and analytics solutions for business use cases.
- Optimize code for performance and scalability; perform debugging and troubleshooting.
- Work hands-on with Databricks for data engineering and analysis.
- Design and manage Airflow DAGs for orchestration and automation.
- Integrate and maintain CI/CD pipelines (preferably using Jenkins).
Primary Skills & Qualifications:
- Strong programming skills in Scala and Python.
- Expertise in Apache Spark for large-scale data processing.
- Solid understanding of data structures and algorithms.
- Proven experience in application development and software engineering best practices.
- Experience working in agile and collaborative environments.



Key Responsibilities:
· Develop responsive and dynamic web applications using React.js (components, hooks, state management).
· Build and maintain backend APIs and services using Python (Flask/Django/FastAPI).
· Integrate frontend and backend systems with RESTful APIs.
· Work with databases (PostgreSQL/MySQL/NoSQL) for data modeling and query optimization.
Required Skills:
· 3–5 years of experience in full-stack development.
· Proficiency in React.js (functional components, hooks, Redux/Context API).
· Strong backend development experience with Python (Flask/Django/FastAPI).
· Solid understanding of REST APIs, JSON, and web security principles.
· Hands-on experience with SQL/NoSQL databases.
· Familiarity with Git, CI/CD pipelines, and containerization (Docker).
· Strong problem-solving and communication skills.
Budget : 18 – 24 LPA
If interested, kindly share your resume at 82008 31681.

Dear Candidate,
Greetings from Wissen Technology.
We have an exciting job opportunity for SRE professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from premier universities such as Wharton, MIT, IITs, IIMs, and NITs, and who bring rich work experience from some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Job Description:
Please find below details:
Experience - 6+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team, which provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Building software to help operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).
• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master’s degree a plus
• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skillsets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have

Job Title: Gen AI / Azure Technical Lead & Architect
Experience: 8–12 years of total IT experience
Location: Hybrid
Employment Type: Permanent
Department: Technology / Engineering
Role Overview
We are seeking a highly skilled Gen AI / Azure Technical Lead & Architect with strong expertise in leading technical teams and designing scalable, cloud-based AI solutions. The ideal candidate will possess hands-on experience in programming, cloud architecture, and GenAI solution development using LLMs, along with proven leadership capabilities.
Key Responsibilities
- Lead, mentor, and manage a team of developers to deliver complex IT solutions.
- Design, architect, and implement cloud-based AI applications leveraging Azure and related technologies.
- Collaborate with cross-functional teams and customers to understand business requirements and translate them into technical designs.
- Develop, integrate, and deploy GenAI solutions using LLMs and Retrieval-Augmented Generation (RAG).
- Troubleshoot, debug, and optimize applications for performance and scalability.
- Ensure timely delivery of high-quality, maintainable, and scalable software solutions.
- Provide technical direction and thought leadership to both the development team and stakeholders.
Required Skills and Experience
- Total Experience: 8–12 years in IT, with at least 2+ years as a Technical Lead or Architect.
- Programming Expertise in one of the following technology areas:
- Python: Experience with FastAPI or Flask, NumPy, and Pandas.
- .NET: Experience with ASP.NET and Web API development.
- Cloud Expertise: Hands-on experience with Microsoft Azure, including but not limited to:
- Azure App Service / Azure Functions
- Azure Storage
- Azure AI Foundry
- Azure OpenAI
- Database Experience: Proficiency in at least one of the following:
- Azure SQL, SQL Server, Cosmos DB, MySQL, PostgreSQL, or MongoDB.
- Experience in integration with LLMs and building RAG-based architectures.
- Minimum of 3 months of hands-on experience in developing and deploying GenAI solutions on cloud platforms.
- Strong problem-solving and analytical skills with attention to detail.
- Excellent communication and interpersonal skills to engage effectively with team members and customers.
Preferred Qualifications
- Azure certifications (e.g., Azure Solutions Architect, Azure AI Engineer) are a plus.
- Prior experience in leading AI-focused projects or enterprise-scale cloud implementations.

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.


Job Position: Senior Technical Lead / Architect
Desired Skills: Python, Django, Flask, MySQL, PostgreSQL, Amazon Web Services, JavaScript, Identity Security, IGA, OAuth
Experience Range: 7 – 10 Years
Type: Full Time
Location: Pune, India
Job Description:
Tech Prescient is looking for an experienced and proven Technical Lead / Architect (Python/Django/Flask/FastAPI, React, and AWS/Azure Cloud) who has worked across the modern full stack to deliver scalable, secure software products and solutions. The ideal candidate should have experience leading from the front — handling customer interactions, mentoring teams, owning technical delivery, and ensuring the highest quality standards.
Key Responsibilities:
- Lead the end-to-end design and development of applications using the Python stack (Django, Flask, FastAPI).
- Architect and implement secure, scalable, and cloud-native solutions on AWS or Azure.
- Drive technical discussions, architecture reviews, and ensure adherence to design and code quality standards.
- Work closely with customers to translate business requirements into robust technical solutions.
- Oversee development teams, manage delivery timelines, and guide sprint execution.
- Design and implement microservices-based architectures and serverless deployments.
- Build and integrate RESTful APIs and backend services; experience with Django Rest Framework (DRF) is a plus.
- Responsible for infrastructure planning, deployment, and automation on AWS (ECS, Lambda, EC2, S3, RDS, CloudFormation, etc.).
- Collaborate with cross-functional teams to ensure seamless delivery and continuous improvement.
- Champion best practices in software security, CI/CD, and DevOps.
- Provide technical mentorship to developers and lead project communications with clients and internal stakeholders.
Identity & Security Expertise:
- Strong understanding of Identity and Access Management (IAM) principles and best practices.
- Experience in implementing Identity Governance and Administration (IGA) solutions for user lifecycle management, access provisioning, and compliance.
- Hands-on experience with OAuth 2.0, OpenID Connect, SAML, and related identity protocols for securing APIs and services (see the token-validation sketch after this list).
- Experience integrating authentication and authorization mechanisms within web and cloud applications.
- Familiarity with Single Sign-On (SSO), MFA, and role-based access control (RBAC).
- Exposure to AWS IAM, Cognito, or other cloud-based identity providers.
- Ability to assess and enhance application security posture, ensuring compliance with enterprise identity and security standards.
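For illustration, a minimal sketch of bearer-token validation in a FastAPI service using PyJWT, assuming an OIDC identity provider that publishes a JWKS endpoint. JWKS_URL, ISSUER, and AUDIENCE are hypothetical placeholder values, not a specific vendor's configuration.

```python
# Sketch: validating an OAuth 2.0 / OIDC bearer token in a FastAPI service.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # hypothetical IdP
ISSUER = "https://idp.example.com/"
AUDIENCE = "my-api"

app = FastAPI()
bearer = HTTPBearer()
jwk_client = jwt.PyJWKClient(JWKS_URL)

def current_claims(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    token = creds.credentials
    try:
        # Resolve the signing key from the IdP's JWKS, then verify the token.
        key = jwk_client.get_signing_key_from_jwt(token).key
        return jwt.decode(token, key, algorithms=["RS256"],
                          audience=AUDIENCE, issuer=ISSUER)
    except jwt.PyJWTError as exc:
        raise HTTPException(status_code=401, detail=str(exc))

@app.get("/me")
def me(claims: dict = Depends(current_claims)):
    return {"sub": claims["sub"], "scope": claims.get("scope")}
```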
Skills and Experience:
- 7–10 years of hands-on experience in software design, development, and delivery.
- Strong foundation in Python and related frameworks (Django, Flask, FastAPI).
- Experience designing secure, scalable microservices and API architectures.
- Good understanding of relational databases (MySQL, PostgreSQL).
- Proven leadership, communication, and customer engagement skills.
- Knowledge of Kubernetes is an added advantage.
- Excellent problem-solving skills and passion for learning new technologies.



JOB DESCRIPTION/PREFERRED QUALIFICATIONS:
REQUIRED SKILLS/COMPETENCIES:
Programming Languages:
- Strong in Python, data structures, and algorithms.
- Hands-on with NumPy, Pandas, Scikit-learn for ML prototyping.
Machine Learning Frameworks:
- Understanding of supervised/unsupervised learning, regularization, feature engineering, model selection, cross-validation, ensemble methods (XGBoost, LightGBM).
Deep Learning Techniques:
- Proficiency with PyTorch or TensorFlow/Keras
- Knowledge of CNNs, RNNs, LSTMs, Transformers, Attention mechanisms.
- Familiarity with optimization (Adam, SGD), dropout, batch norm.
LLMs & RAG:
- Hugging Face Transformers (tokenizers, embeddings, model fine-tuning); see the embedding sketch after this list.
- Vector databases (Milvus, FAISS, Pinecone, ElasticSearch).
- Prompt engineering, function/tool calling, JSON schema outputs.
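For illustration, a minimal embedding sketch with Hugging Face Transformers using mean pooling over token states; the checkpoint named here is a common public sentence-embedding model chosen only as an example. Vectors like these are what get stored in Milvus, FAISS, Pinecone, or ElasticSearch.

```python
# Sentence embeddings via mean pooling over the final hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

NAME = "sentence-transformers/all-MiniLM-L6-v2"  # example public checkpoint
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModel.from_pretrained(NAME)

def embed(texts: list[str]) -> torch.Tensor:
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # average real tokens

vecs = embed(["vector databases", "prompt engineering"])
print(vecs.shape)  # torch.Size([2, 384]) for this checkpoint
```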
Data & Tools:
- SQL fundamentals; exposure to data wrangling and pipelines.
- Git/GitHub, Jupyter, basic Docker.
WHAT ARE WE LOOKING FOR?
- Solid academic foundation with strong applied ML/DL exposure.
- Curiosity to learn cutting-edge AI and willingness to experiment.
- Clear communicator who can explain ML/LLM trade-offs simply.
- Strong problem-solving and ownership mindset.
MINIMUM QUALIFICATIONS:
- Doctorate (Academic) Degree and 2 years of related work experience; or Master's Level Degree and 5 years of related work experience; or Bachelor's Level Degree and 7 years of related work experience in building AI systems/solutions with Machine Learning, Deep Learning, and LLMs.
MUST-HAVES:
- Education/qualification: Preferably from a premier institute such as IIT, IISc, IIIT, NIT, or BITS; regional Tier 1 colleges are also considered.
- Doctorate (Academic) Degree and 2 years of related work experience; or Master's Level Degree and 5 years of related work experience; or Bachelor's Level Degree and 7 years of related work experience.
- Minimum 5 years of experience in the mandatory skills: Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing.
- 3.5 to 4 years of proficiency with PyTorch or TensorFlow/Keras.
- Candidates from engineering product companies (current or past experience) have a higher chance of being shortlisted.
QUESTIONNAIRE:
Do you have at least 5 years of experience with Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing? Please mention the skills and years of experience:
Do you have experience with PyTorch or TensorFlow / Keras?
- PyTorch
- TensorFlow / Keras
- Both
How many years of experience do you have with PyTorch or TensorFlow / Keras?
- Less than 3 years
- 3 to 3.5 years
- 3.5 to 4 years
- More than 4 years
Is the candidate willing to relocate to Chennai?
- Ready to relocate
- Based in Chennai
What type of company have you worked for in your career?
- Service-based IT company
- Product company
- Semiconductor company
- Hardware manufacturing company
- None of the above


Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 28 LPA
Technical Skills:
o Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
o Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows (see the MLflow sketch below).
o Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
o Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
o Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
o Exposure to natural language processing (NLP) techniques is a plus.
Cloud & Infrastructure:
o Strong expertise in the Azure cloud ecosystem.
o Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.
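For illustration, a minimal sketch of MLflow experiment tracking of the kind referenced above; the experiment path, parameters, and metric value are hypothetical. The same calls work in Databricks notebooks, where runs are logged to the workspace tracking server.

```python
# Minimal MLflow tracking sketch: log params and metrics for a training run.
import mlflow

mlflow.set_experiment("/Shared/defect-detection")  # hypothetical experiment

with mlflow.start_run(run_name="baseline-cnn"):
    mlflow.log_param("lr", 1e-3)
    mlflow.log_param("epochs", 10)
    # ... model training would happen here ...
    mlflow.log_metric("val_accuracy", 0.91)  # placeholder value
```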
If interested, kindly share your updated resume at 82008 31681.



Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 25 LPA
Technical Skills:
● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
● Exposure to natural language processing (NLP) techniques is a plus.
● Educational Qualifications:
- B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
- A master's degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.
If interested, share your resume at 82008 31681.




JOB DESCRIPTION/PREFERRED QUALIFICATIONS:
KEY RESPONSIBILITIES:
- Lead and mentor a team of algorithm engineers, providing guidance and support to ensure their professional growth and success.
- Develop and maintain the infrastructure required for the deployment and execution of algorithms at scale.
- Collaborate with data scientists, software engineers, and product managers to design and implement robust and scalable algorithmic solutions.
- Optimize algorithm performance and resource utilization to meet business objectives.
- Stay up to date with the latest advancements in algorithm engineering and infrastructure technologies and apply them to improve our systems.
- Drive continuous improvement in development processes, tools, and methodologies.
QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in developing computer vision and image processing algorithms and ML/DL algorithms.
- Familiar with high-performance computing, parallel programming, and distributed systems.
- Strong leadership and team management skills, with a track record of successfully leading engineering teams.
- Proficiency in programming languages such as Python, C++ and CUDA.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
PREFERRED QUALIFICATIONS:
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with GPU architecture and algorithm-development toolkits like Docker and Apptainer.
MINIMUM QUALIFICATIONS:
- Bachelor's or Master's degree plus 8+ years of experience.
- Familiar with high-performance computing, parallel programming, and distributed systems.
MUST-HAVE SKILLS:
- PhD with 6 years of industry experience, or M.Tech with 8 years, or B.Tech with 10 years of experience.
- 14 years of experience if applying for an individual contributor (IC) role.
- Minimum 1 year of experience working as a Manager/Lead.
- 8 years of experience in programming languages such as Python, C++, or CUDA.
- 8 years of experience in Machine Learning, Artificial Intelligence, and Deep Learning.
- 2 to 3 years of experience in Image Processing & Computer Vision is a MUST.
- Product / Semiconductor / Hardware Manufacturing company experience is a MUST; candidates should be from engineering product companies.
- Candidates from Tier 1 colleges (IIT, IIIT, VIT, NIT) preferred.
- Relocation to Chennai is mandatory.
NICE TO HAVE SKILLS:
- Candidates from Semicon or manufacturing companies
- Candidates with a CGPA above 8.

Helping SaaS brands rank not just on Google, but also in AI tools

About Client
AI-first visibility agency. We take B2B SaaS brands beyond traditional SEO, optimising their presence so they’re the answer inside ChatGPT, Perplexity, Claude, Bing, and every emerging discovery layer. We’re a lean, profitable, and growing team with clients across the US, EU, and India.
The Role
You’ll be the first dedicated Technical Writer, owning all external and internal docs that explain our LLM workflows, prompt libraries and API automations. Whether you’re a recent graduate who’s hacked together GPT agents or a seasoned tech writer ready to specialise in AI, if you can translate complexity into clarity, we want to talk.
Core focus areas
- LLM Workflows & Prompt Mapping: Document step-by-step guides, reference architectures and “why it works” explainers.
- API & Automation Docs: Turn raw JSON, Python and REST snippets into copy a SaaS PM (or CTO) instantly understands.
- Knowledge-base Ownership: Maintain a single source of truth (Markdown / docs-as-code) with version control, style guidelines and release notes.
- Continuous Improvement: Track engagement metrics, collect feedback from engineers & clients and iterate quickly, in days, not sprints.
You’ll Thrive Here If You
- Have a genuine knack for LLMs: you’ve played with prompt engineering, agent frameworks or fine-tuning models.
- Read code like a recipe and explain it like you’re teaching a friend.
- Prefer async work, minimal meetings and clear ownership.
- Enjoy early-stage pace: decisions in hours, experiments in days.
- Care about the details: terminology, consistency, edge cases.
Must-haves
- Solid written English with a bias for brevity and structure.
- Technical literacy (JSON, Python, REST basics).
- Portfolio or sample showing you can distil a hairy concept into a clean narrative.
Nice-to-haves (zero deal-breakers)
- Docs-as-code tooling (Docusaurus, Hugo, MkDocs, etc.).
- Prior SaaS, DevTools or API product experience.
- Familiarity with SEO or structured data for AI search.
What We Offer
- Hybrid Opportunity (2-3 days a week): Work from Bangalore office.
- Performance bonus tied to company growth: everyone shares the upside.
- Direct access to founders, zero bureaucracy, real impact from week one.

A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.

Requirements:
- MS or PhD in Statistics or a related field, with relevant work experience, preferably in a laboratory science environment.
- Demonstrated application of statistical methods, including descriptive and inferential statistics, general linear and mixed models and statistical design of experiments.
- Strong computational/programming skills: proficient in programming with HTML, R, Python, SQL, or equivalent; knowledge of a variety of commercial statistical/computational packages; familiar with Linux, cloud computing, and high-performance computing environments.
- Ability to independently develop and deploy statistical/computational applets using Python and R.
- Strong oral and written communication skills, with an ability to clearly explain statistical concepts in the context of experimental goals.
- Good team player with demonstrated ability to build effective working relationships.
- The ability to be independent and learn new skills quickly.
- Flexibility to manage multiple projects and stakeholders concurrently.
Desired:
- Coursework in a scientific discipline (chemistry, biology, engineering).
- Prior experience working in pharmaceutical research and development.
- Knowledge of data visualization tools such as Spotfire or Tableau.
- Working knowledge of applying statistics to process characterization, analytical method transfer, validation, stability, and shelf-life predictions.
- Experience with databases.

Job Description: Data Engineer
Location: Ahmedabad
Experience: 7+ years
Employment Type: Full-Time
We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.
Responsibilities
● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modelling solutions and data validation mechanisms
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team
Skills Required:
● Minimum 5 years of experience in data engineering or related field
● Proficient in designing and optimizing data pipelines and data modeling
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive (a small Spark sketch follows this list)
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities
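For illustration, a small PySpark batch job of the kind this role builds: read raw files, aggregate, and write a partitioned Parquet output. The paths, columns, and aggregation are hypothetical.

```python
# Sketch of a PySpark batch job: raw CSV in, partitioned Parquet out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

orders = (spark.read
          .option("header", True)
          .csv("s3a://raw-zone/orders/2024-01-01/"))  # hypothetical source

daily = (orders
         .withColumn("amount", F.col("amount").cast("double"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("buyers")))

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://curated-zone/daily_revenue/"))  # hypothetical sink
```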
Qualifications
● Bachelor's degree in Computer Science, Data Science, or a related field

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
- Experience with orchestration tools like Airflow (or similar); a minimal DAG sketch follows this list.
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
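As a small illustration of the orchestration item above, here is a minimal Airflow 2.x DAG wiring an extract task into a transform task; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow DAG: extract -> transform, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source API / database")  # placeholder task body

def transform():
    print("clean and model the extracted batch")  # placeholder task body

with DAG(
    dag_id="orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # run transform after extract succeeds
```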
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.


Backend Developer
About the Role:
We’re looking for a skilled Backend Developer (2–6 years of experience) who can design and build scalable backend systems, ensure high-performance APIs, and contribute to the technology powering a fast-growing jewelry platform. You’ll work closely with product and frontend teams to deliver robust, scalable, and reliable solutions.
Key Responsibilities:
- Design, build, and maintain scalable backend services and APIs.
- Write clean, secure, and well-documented backend code.
- Work with relational databases, ensuring optimized queries and data integrity.
- Collaborate with frontend engineers and product managers to deliver end-to-end features.
- Debug, troubleshoot, and optimize backend systems for performance and reliability.
- Contribute to architectural discussions and technology choices as the platform scales.
Requirements:
- 3–5 years of professional experience as a Backend Developer.
- Strong proficiency in Python with frameworks like Django, FastAPI, or Flask.
- Solid understanding of relational databases, schema design, and query optimization.
- Experience building and consuming REST or GraphQL APIs.
- Knowledge of version control (Git).
- Familiarity with Docker.
- Understanding of microservices architecture (good to have).
- Knowledge of CI/CD pipelines (good to have).
- Strong problem-solving, debugging, and optimization skills.
- Experience working in e-commerce or marketplace environments.
What’s in It for You:
- High ownership role in a fast-paced environment.
- Opportunity to work closely with the founding team and passionate professionals.
- Competitive salary with fast career growth and appraisals.
- Dynamic, collaborative, and politics-free work culture.
- Health insurance coverage.
Additional Details:
- Early-stage startup environment where meaningful achievements require dedication and passion.
- 6-day work week.
- Location: HSR Layout, Bangalore.



Senior Full Stack Developer
About the Role :
We’re looking for a skilled Full Stack Engineer (5-10 years of experience) who can design and build scalable web applications, own end-to-end features, and contribute to the technology powering India’s fastest-growing jewelry platform.
You’ll work closely with product and design teams to bring seamless user experiences to life while ensuring robust backend systems.
Key Responsibilities :
- Design, develop, and maintain scalable web applications and APIs.
- Work across the stack - from front-end interfaces to backend logic and database layers.
- Collaborate with product managers and designers to translate requirements into high-quality code.
- Optimize application performance, security, and scalability.
- Drive code quality via reviews, best practices, and mentoring junior engineers.
- Improve system performance and reliability with monitoring, testing, and CI/CD practices.
- Contribute to architectural decisions and technology choices as the platform scales.
- Stay hands-on with coding while helping guide the technical direction of the team.
Qualifications :
- 5 - 8 years of professional experience as a Full Stack Developer.
- Strong expertise in JavaScript/TypeScript, with experience in frameworks like React.js or Next.js.
- Solid experience with Python (Django, FastAPI, or Flask).
- Solid understanding of relational databases (schema design, query optimization).
- Experience building and consuming REST/GraphQL APIs.
- Familiarity with Docker and containerized deployments.
- Hands-on experience with cloud platforms (AWS preferred).
- Understanding of microservices architecture.
- Knowledge of DevOps practices (CI/CD pipelines, monitoring, observability).
- Strong understanding of web performance, scalability, and security best practices.
- Excellent problem-solving, debugging, and optimization skills.
- Strong communication skills and ability to mentor other engineers.
Nice to Have :
- Background in Statistics, Computer Science, Economics, or related field.
- Exposure to Python or R for advanced analytics.
What’s in It for You :
- High ownership role in a fast-paced environment.
- Opportunity to work closely with the founding team and passionate professionals.
- Competitive salary with fast career growth and appraisals.
- Dynamic, collaborative, and politics-free work environment.
- Health insurance coverage.
Additional Details :
- Early-stage startup environment where achievements require effort, dedication, and passion.
- 6-day work week.
- Location : HSR Layout, Bangalore.



Job Title: AI / Machine Learning Engineer
Company: Apprication Pvt Ltd
Location: Goregaon East
Employment Type: Full-time
Experience: 4 Years
About the Role
We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.
This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.
Key Responsibilities
- Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
- Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
- Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
- Build APIs and microservices for model serving, monitoring, and scaling (see the serving sketch after this list).
- Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
- Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
- Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
- Ensure solutions meet security, compliance, and performance standards.
- Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
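For illustration, a hedged sketch of model serving behind a FastAPI microservice, as described above. The request/response schema is hypothetical, and the inference line is a stand-in for a real model.predict call.

```python
# Sketch: a tiny model-serving microservice with FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving")

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

# model = joblib.load("model.pkl")  # a real service would load a model here

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Stand-in inference: replace with model.predict(...) in practice.
    score = sum(req.features) / max(len(req.features), 1)
    return PredictResponse(score=score)
```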
Skills & Qualifications
- Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
- Proven experience of 4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
- Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
- Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
- Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
- Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
- Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
- Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
- Strong understanding of data structures, algorithms, APIs, and distributed systems.
- Excellent problem-solving, analytical, and communication skills.


Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.


Are you looking to explore what is possible in a collaborative and innovative work environment? Is your goal to work with a team of talented professionals who are keenly focused on solving complex business problems and supporting product innovation with technology?
If so, you might be our next Senior DevOps Engineer. You will be involved in building out systems for our rapidly expanding team, enabling the whole group to operate more effectively and iterate at top speed in an open, collaborative environment.
Systems management and automation are the name of the game here – in development, testing, staging, and production. If you are passionate about building innovative and complex software, are comfortable in an “all hands on deck” environment, and can thrive in an Insurtech culture, we want to meet you!
What You’ll Be Doing:
- You will collaborate with our development team to support ongoing projects, manage software releases, and ensure smooth updates to QA and production environments. This includes handling configuration updates and meeting all release requirements.
- You will work closely with your team members to enhance the company’s engineering tools, systems, procedures, and data security practices.
- Provide technical guidance and educate team members and coworkers on development and operations.
- Monitor metrics and develop ways to improve.
- Conduct systems tests for security, performance, and availability.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
What We’re Looking For:
- You have a working knowledge of technologies like:
- Docker, Kubernetes
- Jenkins (alt. Bamboo, TeamCity, Travis CI, BuildMaster)
- Ansible, Terraform, Pulumi
- Python
- You have experience with GitHub Actions, Version Control, CI/CD/CT, shell scripting, and database change management
- You have working experience with Microsoft Azure, Amazon AWS, Google Cloud, or other cloud providers
- You have experience with cloud security management
- You can configure assigned applications and troubleshoot most configuration issues without assistance
- You can write accurate, concise, and formatted documentation that can be reused and read by others
- You know scripting tools like bash or PowerShell
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 05+ Years
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.




Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.

Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models: our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent, and our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate to 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas (see the sketch after this list).
- Implement and manage workflows using Airflow.
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
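For illustration, a minimal Pandas cleaning-and-aggregation step of the kind such pipelines contain; the file names and columns are hypothetical (the Parquet output assumes pyarrow or fastparquet is installed).

```python
# Sketch: clean raw events and aggregate daily counts with Pandas.
import pandas as pd

raw = pd.read_csv("events.csv", parse_dates=["event_time"])

clean = (raw
         .dropna(subset=["user_id"])                          # drop broken rows
         .assign(event_date=lambda df: df["event_time"].dt.date)
         .drop_duplicates(subset=["user_id", "event_time"]))  # dedupe replays

daily_counts = (clean.groupby(["event_date", "event_type"])
                     .size()
                     .reset_index(name="events"))
daily_counts.to_parquet("daily_events.parquet", index=False)
```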
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/

About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one

SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables (an upsert sketch follows this list).
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
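For illustration, a hedged sketch of an upsert into a Delta Lake table with the delta-spark Python API, one building block of the SCD strategies mentioned above (shown here with simple Type 1 overwrite semantics; Type 2 would additionally close out and insert versioned rows). The paths and join key are hypothetical.

```python
# Sketch: merge a staging batch into a Delta table (SCD Type 1 behavior).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

updates = spark.read.parquet("s3a://staging/customers/")    # hypothetical batch
target = DeltaTable.forPath(spark, "s3a://lake/customers")  # hypothetical table

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()      # overwrite matching rows in place
       .whenNotMatchedInsertAll()   # insert brand-new customers
       .execute())
```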
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive Terraform practice; 10+ years of CI/CD (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.


Simstar.co is a billion-dollar diamond enterprise, a joint venture between Sim Gems Group and Star Gems Group.
We combine decades of expertise in jewelry manufacturing and trading with modern technology and data-driven operations.
Our 25-member Tech, Product, and ERP team is the engine behind digital innovation at Simstar. We’re now looking to expand this team with a Senior Software Engineer (SSE) who can help us scale faster.
Role Overview
As a Senior Software Engineer, you’ll be responsible for building and maintaining high-performance web applications across the stack. You’ll collaborate with product managers, designers, and business stakeholders to translate complex business needs into reliable digital systems.
Key Responsibilities
- Design, build, and maintain scalable web applications end-to-end.
- Work closely with product and design teams to deliver user-centric, high-performance interfaces.
- Develop and optimize backend APIs, database queries, and integrations.
- Write clean, maintainable, and testable code following best practices.
- Mentor junior developers and contribute to team-wide tech decisions.
Requirements
- Experience: 5+ years of hands-on full-stack development experience.
- Backend: Proficiency in Python (Java, RoR, or Node.js experience is fine if you can ramp up on Python quickly).
- Frontend: Experience with React, Angular, or Vue.js.
- Database: Strong knowledge of SQL databases (MySQL, PostgreSQL, or Oracle).
- Communication: Comfortable in English or Hindi.
- Location: Bangalore, 5 days a week (Work from Office).
- Availability: Immediate joiners preferred.
Why Join Us
- Be part of a fast-growing global diamond brand backed by two industry leaders.
- Collaborate with a sharp, experienced tech and product team solving real-world business challenges.
- Work at the intersection of luxury, data, and innovation — building systems that directly impact global operations.

Experience: 3–7 Years
Locations: Pune / Bangalore / Mumbai
Notice Period: Immediate joiners only
Employment Type: Full-time
🛠️ Key Skills (Mandatory):
- Python: Strong coding skills for data manipulation and automation.
- PySpark: Experience with distributed data processing using Spark.
- SQL: Proficient in writing complex queries for data extraction and transformation.
- Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow
Interested candidates, please share your resume along with the details below.
Total Experience -
Relevant Experience in Python, PySpark, SQL, Azure Databricks -
Current CTC -
Expected CTC -
Notice period -
Current Location -
Desired Location -

Experience: 7+ Years
Must-Have:
o Python (Pandas, PySpark)
o Data engineering & workflow optimization
o Delta Tables, Parquet
Good-to-Have:
o Databricks
o Apache Spark, DBT, Airflow
o Advanced Pandas optimizations (see the sketch below)
o PyTest/DBT testing frameworks
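For illustration, two common Pandas optimizations of the kind referenced above: categorical dtypes for low-cardinality columns and vectorized arithmetic instead of row-wise apply. The columns and sizes are hypothetical.

```python
# Sketch: categorical dtypes and vectorized math as Pandas optimizations.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "region": np.random.choice(["north", "south", "east", "west"], 1_000_000),
    "price": np.random.rand(1_000_000),
    "qty": np.random.randint(1, 10, 1_000_000),
})

df["region"] = df["region"].astype("category")  # shrinks memory footprint

# Vectorized (fast) rather than df.apply(lambda r: r.price * r.qty, axis=1)
df["revenue"] = df["price"] * df["qty"]

print(df.memory_usage(deep=True).sum())  # bytes used after optimization
```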
Interested candidates can reply with the details below.
Total Experience -
Relevant Experience in Python, Pandas, Data Engineering, Workflow Optimization, Delta Tables -
Current CTC -
Expected CTC -
Notice Period / LWD -
Current location -
Desired location -