50+ Python Jobs in India
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain: Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum (Thiruvananthapuram)
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
8+ years of experience in backend software development
Strong proficiency in Java and Python
Proven experience building scalable APIs and data-driven applications
Hands-on experience with cloud services and distributed systems
Solid understanding of databases, microservices, and API performance optimization
Mandatory Skills: Java, API development, and AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
Job Details
- Job Title: Lead I - Data Engineering (Python, AWS Glue, PySpark, Terraform)
- Industry: Global digital transformation solutions provider
- Domain: Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Job Description
Data Engineer with AWS, Python, Glue, Terraform, Step Functions, and Spark
Skills: Python, AWS Glue, PySpark, Terraform (all mandatory)
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Job Description: Data Analyst Intern
Location: On-site, Bangalore
Duration: 6 months (Full-time)
About us:
- Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
- Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making). As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What we offer:
- Join our dynamic startup team and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit features analytics, collections, and portfolio management.
- The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
- This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment.
- We believe that the freedom and accountability to make decisions in analytics and technology brings out the best in you and helps us build the best for the company.
- This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What we look for:
- We are looking for individuals with a strong analytical mindset, high levels of initiative and ownership, the ability to drive tasks independently, clear communication, and comfort working across teams.
- We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team.
- We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems.
Key Responsibilities:
- Conduct analytical deep-dives such as funnel analysis, cohort tracking, branch-wise performance reviews, TAT analysis, portfolio diagnostics, and credit risk analytics that lead to clear actions.
- Work closely with stakeholders to convert business questions into measurable analyses and clearly communicated outputs.
- Support digital underwriting initiatives, including assisting in the development and analysis of underwriting APIs that enable decisioning on borrower eligibility (“whom to lend”) and exposure sizing (“how much to lend”).
- Develop and maintain periodic MIS and KPI reporting for key business functions (e.g., pipeline, disbursals, TAT, conversion, collections performance, portfolio trends).
- Use Python (pandas, numpy) to clean, transform, and analyse datasets; automate recurring reports and data workflows.
- Perform basic scripting to support data validation, extraction, and lightweight automation.
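The pandas work described above can be sketched concretely. This is a minimal, illustrative example (the column names and sample data are invented, not Optimo's actual schema) of turning raw loan records into a branch-level MIS summary:

```python
import pandas as pd

def disbursal_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw loan records into a branch-level MIS table.

    Expects columns: branch, disbursed_amount (both hypothetical).
    """
    # Data cleaning: drop rows missing key fields, coerce amounts to numbers
    clean = df.dropna(subset=["branch", "disbursed_amount"]).copy()
    clean["disbursed_amount"] = pd.to_numeric(clean["disbursed_amount"], errors="coerce")
    return (
        clean.groupby("branch")
        .agg(loans=("disbursed_amount", "size"),
             total_disbursed=("disbursed_amount", "sum"))
        .reset_index()
    )

raw = pd.DataFrame({
    "branch": ["Hosur", "Hosur", "Mysuru", None],
    "disbursed_amount": [250000, 150000, 300000, 100000],
})
print(disbursal_summary(raw))
```

Wrapping a report like this in a function is what makes it automatable: the same code can be scheduled daily against a fresh extract.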
Required Skills and Qualifications:
- Strong proficiency in Excel, including pivots, lookup functions, data cleaning, and structured analysis.
- Strong working knowledge of SQL, including joins, aggregations, CTEs, and window functions.
- Proficiency in Python for data analysis (pandas, numpy); ability to write clean, maintainable scripts/notebooks.
- Strong logical reasoning and attention to detail, including the ability to identify errors and validate results rigorously.
- Ability to work with ambiguous requirements and imperfect datasets while maintaining output quality.
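The SQL skills listed above include window functions, which are worth practising. As a self-contained sketch using Python's bundled SQLite (the table and figures are invented for illustration), a per-branch running total looks like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE loans (branch TEXT, month TEXT, amount INTEGER);
INSERT INTO loans VALUES
  ('Pune', '2024-01', 100), ('Pune', '2024-02', 150),
  ('Agra', '2024-01', 80),  ('Agra', '2024-02', 60);
""")
# Window function: cumulative disbursal per branch, ordered by month.
rows = con.execute("""
    SELECT branch, month, amount,
           SUM(amount) OVER (PARTITION BY branch ORDER BY month) AS running_total
    FROM loans
    ORDER BY branch, month
""").fetchall()
for r in rows:
    print(r)
```

Unlike a GROUP BY, the window function keeps every row while adding the aggregate alongside it, which is exactly what cohort and funnel analyses tend to need.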
Preferred (Good to Have):
- REST APIs: A fundamental understanding of APIs and previous experience or projects related to API development/integrations.
- Familiarity with basic AWS tools/services (S3, Lambda, EC2, Glue jobs).
- Experience with Git and basic engineering practices.
- Any experience with the lending/finance industry.
🚀 Job Title : Backend Engineer (Go / Python / Java)
Experience : 3+ Years
Location : Bangalore (Client Location – Work From Office)
Notice Period : Immediate to 15 Days
Open Positions : 4
Working Days : 6 Days a Week
🧠 Job Summary :
We are looking for a highly skilled Backend Engineer to build scalable, reliable, and high-performance systems in a fast-paced product environment.
You will own large features end-to-end — from design and development to deployment and monitoring — while collaborating closely with product, frontend, and infrastructure teams.
This role requires strong backend fundamentals, distributed systems exposure, and a mindset of operational ownership.
⭐ Mandatory Skills :
Strong backend development experience in Go / Python (FastAPI) / Java (Spring Boot) with hands-on expertise in Microservices, REST APIs, PostgreSQL, Redis, Kafka/SQS, AWS/GCP, Docker, Kubernetes, CI/CD, and strong DSA & System Design fundamentals.
🔧 Key Responsibilities :
- Design, develop, test, and deploy backend services end-to-end.
- Build scalable, modular, and production-grade microservices.
- Develop and maintain RESTful APIs.
- Architect reliable distributed systems with performance and fault tolerance in mind.
- Debug complex cross-system production issues.
- Implement secure development practices (authentication, authorization, data integrity).
- Work with monitoring dashboards, alerts, and performance metrics.
- Participate in code reviews and enforce engineering best practices.
- Contribute to CI/CD pipelines and release processes.
- Collaborate with product, frontend, and DevOps teams.
✅ Required Skills :
- Strong proficiency in Go OR Python (FastAPI) OR Java (Spring Boot).
- Hands-on experience building Microservices-based architectures.
- Strong understanding of REST APIs & distributed systems.
- Experience with PostgreSQL and Redis.
- Exposure to Kafka / SQS or other messaging systems.
- Hands-on experience with AWS or GCP.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD pipelines.
- Strong knowledge of Data Structures & System Design.
- Ability to independently own features and solve ambiguous engineering problems.
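Since queues such as Kafka and SQS generally deliver messages at least once, backend services are expected to tolerate duplicates. Below is a minimal in-memory sketch of consumer-side idempotency (the class and storage strategy are illustrative only; production code would persist seen keys in Redis or a database, not in process memory):

```python
import hashlib
import json

class IdempotentConsumer:
    """Drops duplicate messages by hashing their payload."""

    def __init__(self):
        self._seen: set[str] = set()
        self.processed: list[dict] = []

    def handle(self, message: dict) -> bool:
        # Stable key: hash the canonical JSON form of the payload
        key = hashlib.sha256(json.dumps(message, sort_keys=True).encode()).hexdigest()
        if key in self._seen:
            return False          # duplicate delivery: skip
        self._seen.add(key)
        self.processed.append(message)
        return True

consumer = IdempotentConsumer()
consumer.handle({"order_id": 1, "amount": 99})
consumer.handle({"order_id": 1, "amount": 99})   # redelivered duplicate
consumer.handle({"order_id": 2, "amount": 45})
print(len(consumer.processed))  # 2
```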
⭐ Preferred Background :
- Experience in product-based companies.
- Exposure to high-throughput or event-driven systems.
- Strong focus on code quality, observability, and reliability.
- Comfortable working in high-growth, fast-paced environments.
🧑💻 Interview Process :
- 1 Internal Screening Round
- HR Discussion (Project & Communication Evaluation)
- 3 Technical Rounds with Client
This is a fresh requirement, and interviews will be scheduled immediately.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
- Strong understanding of Core Python, data structures, OOPs, exception handling, and logical problem-solving.
- Experience in at least one Python framework (FastAPI preferred, Flask/Django acceptable).
- Good knowledge of REST API development and API authentication (JWT/OAuth).
- Experience with SQL databases (MySQL/PostgreSQL) & NoSQL databases (MongoDB/Firestore).
- Basic understanding of cloud platforms (GCP or AWS).
- Experience with Git, branching strategies, and code reviews.
- Solid understanding of performance optimization and writing clean, efficient code.
- Develop, test, and maintain high-quality Python applications using FastAPI (or Flask/Django).
- Design and implement RESTful APIs with strong understanding of request/response cycles, data validation, and authentication.
- Work with SQL (MySQL/PostgreSQL) and NoSQL (MongoDB/Firestore) databases, including schema design and query optimization.
- Experience with Google Cloud (BigQuery, Dataflow, Notebooks) will be a strong plus.
- Work with cloud environments (GCP/AWS) for deployments, storage, logging, etc.
- Use version control tools such as Git/BitBucket for collaborative development.
- Support and build data pipelines using Dataflow/Beam and BigQuery if required.
- Experience with GCP services like BigQuery, Dataflow (Apache Beam), Cloud Functions, and Notebooks.
- Good to have: exposure to microservices architecture.
- Familiarity with Redis, Elasticsearch, or message queues (Pub/Sub, RabbitMQ, Kafka).
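The API-authentication requirement above (JWT/OAuth) comes down to signing and verifying tokens. The standard-library sketch below shows the HS256 signing scheme that JWT is built on; in a real FastAPI service you would use a maintained library such as PyJWT rather than rolling your own, and the secret here is a placeholder:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # placeholder; load from configuration in practice

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(mac)}"

def verify(token: str) -> bool:
    header, body, sig = token.split(".")
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison guards against timing attacks
    return hmac.compare_digest(b64url(mac), sig)

legit = sign({"sub": "user-42"})
# Splice a signature from a different payload onto the legitimate body
forged = legit.rsplit(".", 1)[0] + "." + sign({"sub": "attacker"}).rsplit(".", 1)[1]
print(verify(legit))   # True
print(verify(forged))  # False
```

The point of the exercise: the server never trusts a token's contents until the signature over header and body checks out.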
Salesforce Developer
Location: Onsite (Mumbai and Bangalore)
Resources should have banking domain experience.
1. Salesforce development Engineer (1 - 3 Years)
2. Salesforce development Engineer (3 - 5 Years)
3. Salesforce development Engineer (5 - 8 Years)
Job description.
----------------------------------------------------------------------------
Technical Skills:
Strong hands-on frontend development using JavaScript and LWC
Expertise in backend development using Apex, Flows, Async Apex
Understanding of Database concepts: SOQL, SOSL and SQL
Hands-on experience in API integration using SOAP, REST, and GraphQL APIs
Experience with ETL tools, data migration, and data governance
Experience with Apex Design Patterns, Integration Patterns and Apex testing framework
Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, or Bitbucket
Should have worked with at least one programming language (Java, Python, or C++) and have a good understanding of data structures
Preferred qualifications
Graduate degree in engineering
Experience developing with India stack
Experience in fintech or banking domain
----------------------------------------------------------------------------
Skill details.
1. Salesforce Fundamentals
Strong understanding of Salesforce core architecture
Objects (Standard vs Custom)
Fields, relationships (Lookup, Master-Detail)
Data model basics and record lifecycle
Awareness of declarative vs programmatic capabilities and when to use each
2. Salesforce Security Model
End-to-end understanding of Salesforce security layers, especially:
Record visibility when a record is created
Org-Wide Defaults (OWD) and their impact
Role Hierarchy and how it enables upward data access
Difference between Profiles, Permission Sets, and Sharing Rules
Ability to explain how Salesforce ensures that records are not visible to unauthorized users by default and how access is extended
3. Apex Triggers
Clear distinction between:
Before Triggers (before insert, before update)
Use cases such as validation and field updates
After Triggers (after insert, after update)
Use cases such as related record updates or integrations
Understanding of trigger context variables and best practices (bulkification, avoiding recursion)
4. Platform Events / Event-Driven Architecture
Knowledge of Platform Events and their use in decoupled, event-driven solutions
Understanding of real-time or near real-time notification use cases (e.g., UI alerts, pop-up style notifications)
Ability to position Platform Events versus alternatives (Streaming API, Change Data Capture)
5. Lightning Data Access (Wire Method)
Understanding of the @wire mechanism in Lightning Web Components (LWC)
Discussion point:
Whether records (e.g., AppX records) can be updated using the wire method
Awareness that @wire is primarily read/reactive and updates typically require imperative Apex calls
Clear articulation of reactive vs imperative data handling
6. Integrations Experience
Ability to articulate hands-on integration experience, including:
REST/SOAP API integrations
Inbound vs outbound integrations
Authentication mechanisms (OAuth, Named Credentials)
Use of Apex callouts, Platform Events, or middleware
Clarity on integration patterns and error handling approaches
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
About TradeLab
TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions. With a proven track record of successful deployments at top-tier brokerages and financial institutions, TradeLab combines scalability, regulatory alignment, and innovation to redefine digital broking and empower clients in the capital markets ecosystem.
Key Responsibilities
• Design, develop, and execute detailed automation & manual test cases based on functional and technical requirements.
• Develop, maintain, and execute automated test scripts using industry-standard tools and frameworks.
• Identify, document, and track software defects, collaborating with developers to ensure timely resolution.
• Conduct regression, integration, performance, and security testing as needed.
• Participate in the planning and review of test strategies, test plans, and test scenarios.
• Ensure comprehensive test coverage and maintain accurate test documentation and reports.
• Integrate automated tests into CI/CD pipelines for continuous quality assurance.
• Collaborate with cross-functional teams to understand product requirements and deliver high-quality releases.
• Participate in code and test case reviews, providing feedback to improve quality standards.
• Stay updated with emerging testing tools, techniques, and best practices.
Must-Have Qualifications
• Proven experience in software testing.
• Strong knowledge of QA methodologies, SDLC, and STLC.
• Proficiency in at least one programming/scripting language used for automation (e.g., Java, Python, JavaScript).
• Experience with automation tools such as Selenium, Appium, or similar.
• Ability to write and execute complex SQL queries for data validation.
• Familiarity with Agile/Scrum methodologies.
• Excellent analytical, problem-solving, and communication skills.
• Experience with bug tracking and test management tools (e.g., JIRA, TestRail).
• Bachelor’s degree in Computer Science, Engineering, or a related field.
Why Join TradeLab?
• Innovative Environment: Join a fast-growing fintech leader at the forefront of transforming the Indian and global brokerage ecosystem with cutting-edge technology.
• Ownership & Impact: Take full ownership of a high-potential territory (Western India) with direct visibility to senior leadership and the opportunity to shape regional growth.
• Cutting-Edge Solutions: Gain hands-on experience with next-generation trading infrastructure, AI-driven platforms, and compliance-focused solutions.
• Growth Opportunities: Thrive in an entrepreneurial role with significant learning potential, professional development, and a steep growth trajectory.
Responsibilities:
• End-to-end design, development, and deployment of enterprise-grade AI solutions leveraging Azure AI, Google Vertex AI, or comparable cloud platforms.
• Architect and implement advanced AI systems, including agentic workflows, LLM integrations, MCP-based solutions, RAG pipelines, and scalable microservices.
• Oversee the development of Python-based applications, RESTful APIs, data processing pipelines, and complex system integrations.
• Define and uphold engineering best practices, including CI/CD automation, testing frameworks, model evaluation procedures, observability, and operational monitoring.
• Partner closely with product owners and business stakeholders to translate requirements into actionable technical designs, delivery plans, and execution roadmaps.
• Provide hands-on technical leadership, conducting code reviews, offering architectural guidance, and ensuring adherence to security, governance, and compliance standards.
• Communicate technical decisions, delivery risks, and mitigation strategies effectively to senior leadership and cross-functional teams.
What you will be working on?
- Driving product implementation from conceptualisation to delivery. This would involve planning and breaking down projects, leading architectural discussions and decisions, building high quality documentation and architecture diagrams, and driving the execution end to end.
- Own the development practices, processes, and standards for your team
- Own the technical architecture, drive engineering design, and shoulder critical decisions
- Understand, prioritize and deliver the feature roadmap while chipping away at the technical debt
- Work effectively with a cross-functional team of product managers, designers, developers, and QA
- Own the communication of the team’s progress and perception of the team itself
- Collaborate with the Support team to keep track of and triage technical issues and track them through to resolution
- Collaborate with Talent Acquisition to drive sourcing, screening, interviewing, and recruitment of the right talent for your team
- Continuously improve the productivity of your team by identifying investments in technology, process, and continuous delivery
- Own the morale of your team, unblock them at critical junctures, and break ties in a timely manner
- Own the careers of your team members, deliver regular and timely feedback, represent your team for annual reviews and reward your performers
- You will nurture and grow the team in order to deliver path-breaking solutions, as outlined above, for the business in the coming years
What we are looking for?
- 7+ years of total relevant experience with a minimum of one year of actively managing and owning the delivery of a high-performing engineering team.
- Bachelor's Degree in a technical field
- Ability to work in a very fast-paced environment with a high degree of ambiguity.
- Excellent database knowledge and data modeling skills
- Excellent leadership skills to manage and mentor teams
- Experience designing and implementing distributed systems
- Superior management skills to manage multi-engineer projects and experience in delivering high-quality projects on time
- Track record of individual technical achievement
- Excellent command of CS fundamentals and of at least one interpreted language (PHP / Python / RoR)
- Experience developing software in a commercial software product development environment
- Experience leading teams that built software products for scale
- Excellent communication skills, open, collaborative, and proven team player
- Experience working with global customers and experience with agile processes and Serverless Architecture is a plus
About the Job :
We are looking for a passionate and driven AI Intern to join our dynamic team. As an intern, you will have the opportunity to work on real-world projects, develop AI models, and collaborate with experienced professionals in the field. This internship is designed to provide hands-on experience in AI and machine learning, offering you the chance to contribute to impactful projects while enhancing your skills.
Job Description:
We are seeking a talented Artificial Intelligence Specialist to join our dynamic team. As an AI Specialist, you will be responsible for developing, implementing, and optimizing AI models and algorithms. You will collaborate closely with cross-functional teams to integrate AI capabilities into our products and services. The ideal candidate should have a strong background in machine learning, deep learning, and natural language processing, with a passion for applying AI to real-world problems.
Responsibilities:
- Design, develop, and deploy AI models and algorithms.
- Conduct data analysis and pre-processing to prepare data for modeling.
- Implement and optimize machine learning algorithms.
- Collaborate with software engineers to integrate AI models into production systems.
- Evaluate and improve the performance of existing AI models.
- Stay updated with the latest advancements in AI research and apply them to enhance our products.
- Provide technical guidance and mentorship to junior team members.
Requirements:
- Any Graduate / Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; Master's degree preferred.
- Proven experience in developing and implementing machine learning models and algorithms.
- Strong programming skills in languages such as Python, R, or Java.
Benefits :
- Internship Certificate
- Letter of Recommendation
- Performance-Based Stipend
- Part-time work from home (2-3 hours per day)
- 5 days a week, fully flexible shift

This is for one of our reputed entertainment organisations.
Key Responsibilities
· Advanced ML & Deep Learning: Design, develop, and deploy end-to-end Machine Learning models for Content Recommendation Engines, Churn Prediction, and Customer Lifetime Value (CLV).
· Generative AI Implementation: Prototype and integrate GenAI solutions (using LLMs like Gemini/GPT) for automated Metadata Tagging, Script Summarization, or AI-driven Chatbots for viewer engagement.
· Video Processing Pipelines: Develop and maintain high-scale video processing pipelines using Python, OpenCV, and FFmpeg to automate scene detection, ad-break identification, and visual feature extraction for content enrichment.
· Cloud Orchestration: Utilize GCP (Vertex AI, BigQuery, Dataflow) to build scalable data pipelines and manage the full ML lifecycle (MLOps).
· Business Intelligence & Storytelling: Create high-impact, automated dashboards to track KPIs for data-driven decision making.
· Cross-functional Collaboration: Work closely with Product, Design, Engineering, Content, and Marketing teams to translate "viewership data" into "strategic growth."
Preferred Qualifications
· Experience in Media/OTT: Prior experience working with large-scale data from broadcast channels, videos, streaming platforms, or digital ad-tech.
· Education: Master’s/Bachelor’s degree in a quantitative field (Computer Science, Statistics, Mathematics, or Data Science).
· Product Mindset: Ability to not just build a model, but to understand the business implications of the solution.
· Communication: Exceptional ability to explain "Neural Network outputs" to a "Creative Content Producer" in simple terms.
We are looking for a skilled and self-driven Data Engineer to strengthen our data platform and improve overall data quality, reliability, and usability for customers and internal stakeholders. This role is critical to building and maintaining scalable data pipelines, well-structured data models, and analytics-ready systems. The ideal candidate has startup experience, enjoys building systems from scratch, and takes ownership end to end.
Roles and Responsibilities
● Design, build, and maintain scalable data pipelines and ETL workflows to support analytics and product use cases.
● Develop and manage data warehousing solutions using platforms like Snowflake, Redshift, or ClickHouse.
● Ensure data quality, consistency, and reliability across all data sources and downstream systems.
● Collaborate closely with product, analytics, and engineering teams to understand data requirements.
● Build and optimize data models for reporting, dashboards, and analytics.
● Support and enable BI tools such as Power BI, Tableau, or Metabase.
● Monitor pipelines, troubleshoot issues, and continuously improve performance.
● Document data workflows, schemas, and processes for clarity and maintainability.
Skills and Qualifications
● Strong proficiency in SQL
● Experience with data warehousing platforms (e.g., Snowflake, Redshift, ClickHouse)
● Hands-on experience with ETL orchestration tools (e.g., Airflow, dbt, Dagster)
● Experience with dashboarding tools (Power BI, Tableau, Metabase)
● Strong programming skills in Python
● AWS experience is preferred
● Self-starter mindset with startup experience
● Strong problem-solving abilities
● Highly organized with a systems-thinking approach
● Ownership-driven and accountable
● Clear and effective communicator
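The pipeline work described above can be illustrated end to end with the standard library alone. This toy extract-transform-load pass (the CSV content and schema are invented) drops malformed rows and loads the rest; a production pipeline would run under an orchestrator like Airflow or dbt against a warehouse such as Snowflake:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract; row 2 is missing its amount
RAW_CSV = "order_id,amount\n1,100\n2,\n3,250\n"

def run_pipeline(raw: str, con: sqlite3.Connection) -> int:
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount INTEGER)")
    rows = [
        (int(r["order_id"]), int(r["amount"]))
        for r in csv.DictReader(io.StringIO(raw))
        if r["amount"]                      # transform: drop rows missing amount
    ]
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)  # load
    return len(rows)

con = sqlite3.connect(":memory:")
loaded = run_pipeline(RAW_CSV, con)
total = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(loaded, total)  # 2 350
```

The structure is the same at any scale: a clearly separated extract, a transform that enforces data quality, and an idempotent load step.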
AccioJob is conducting a Walk-In Hiring Drive with Sceniuz IT Pvt. Ltd. for the position of Data Engineer.
To apply, register and select your slot here: https://go.acciojob.com/kzxn79
Required Skills: Python, SQL, Azure
Eligibility:
Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
Branch: All
Graduation Year: All
CTC: ₹3 LPA to ₹6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Technical Interview 1, HR Discussion
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/sUrMKd
AI Automation Engineer – Intelligent Systems
(AI Generalist – Automation & Intelligent Systems)
📍 Location: Bengaluru (Onsite)
🏢 Company: Learners Point Academy
📊 Reporting To: Head
🕒 Employment Type: Full-Time
🎯 Role Summary
Learners Point Academy is seeking a hands-on AI Automation Engineer to architect, deploy, and scale intelligent automation systems across Sales, Marketing, Academics, Operations, Finance, and Customer Experience.
🧠 What This Role Requires:
- A Systems Thinker
- A Hands-on Builder
- An Automation Architect
- An AI Deployment Specialist
Core Responsibilities
1️⃣ Operational Workflow Automation
- Automate CRM workflows (Bitrix24 / Zoho or similar)
- Build intelligent lead scoring systems
- Auto-generate proposals from structured CRM inputs
- Deploy WhatsApp automation with tiered logic
- Design cross-functional task routing systems
- Implement automated follow-up sequences
- Build cross-department reporting pipelines
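As a rough illustration of the "intelligent lead scoring" item above, a sensible first cut is a transparent rule-based score before any ML is involved. The event names and weights below are invented for the sketch, not drawn from any real CRM:

```python
# Hypothetical engagement signals and their point values
WEIGHTS = {"opened_email": 10, "visited_pricing": 25, "requested_demo": 40}

def score_lead(events: list[str], budget_confirmed: bool) -> int:
    """Sum per-event weights, add a budget bonus, cap at 100."""
    score = sum(WEIGHTS.get(e, 0) for e in events)
    if budget_confirmed:
        score += 25
    return min(score, 100)

hot = score_lead(["opened_email", "visited_pricing", "requested_demo"], True)
cold = score_lead(["opened_email"], False)
print(hot, cold)  # 100 10
```

A rule table like this is easy for sales leadership to audit and adjust, and it provides labelled history that a predictive model can later be trained against.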
2️⃣ AI Agents & Intelligence Systems
- Build internal AI Sales Assistant (copilot model)
- Develop Academic AI Assistant (summaries, grading support)
- Create AI-powered reporting dashboards
- Build centralized AI knowledge base
- Develop customer segmentation intelligence
- Implement predictive closure timeline models
3️⃣ LMS & Assessment Automation
- Design AI-powered quiz generation systems
- Implement auto-grading frameworks
- Integrate Zoom attendance with LMS tracking
- Automate certification workflows
- Build student performance dashboards
- Ensure seamless LMS–CRM synchronization
4️⃣ Revenue & Growth Intelligence
- Develop pipeline scoring engines
- Deploy sales copilot (email drafting, objection handling)
- Build AI-driven pricing optimization tools
- Design churn prediction logic
- Automate ad spend tracking systems
- Create performance intelligence dashboards
5️⃣ AI Architecture & Governance
- Define AI usage SOPs
- Maintain structured prompt libraries
- Document system architecture & workflows
- Ensure scalable, secure system design
- Build reusable frameworks — avoid patchwork automation
🔧 Required Technical Skills
Mandatory:
- Workflow Automation: Zapier / Make / n8n
- CRM Automation (Bitrix24 / Zoho / similar)
- LLM API Integration (OpenAI, Claude, etc.)
- REST APIs & Webhook Integrations
- Python or JavaScript scripting
- Google Workspace Automation
- Business Process Automation Design
Good to Have
- LangChain or AI Agent Frameworks
- Vector Databases & RAG Systems
- WhatsApp Business API Integration
- Workflow Orchestration Tools
- BI Tools (Power BI / Looker)
- LMS Integration Experience
🎓 Qualifications
- Bachelor’s / Master’s in Engineering, Computer Science, AI, or related field
- 3–6 years experience in AI deployment, automation, or systems integration
- Demonstrated experience implementing automation in business environments
- Portfolio of deployed AI systems (production-grade, not academic-only)
📈 Ideal Candidate Profile
You:
- Think in systems, not scripts
- Understand real-world business workflows
- Have deployed AI agents in production
- Can connect CRM + LMS + Communication tools seamlessly
- Can explain technical architecture clearly to leadership
- Prefer measurable business impact over experimental prototypes
🚫 This Role Is NOT For
- Pure ML researchers
- Academic AI model developers
- Candidates without business automation exposure
- Candidates without real deployment experience
Role & Responsibilities
As a Founding Engineer, you'll join the engineering team during an exciting growth phase, contributing to a platform that handles complex financial operations for B2B companies. You'll work on building scalable systems that automate billing, usage metering, revenue recognition, and financial reporting—directly impacting how businesses manage their revenue operations.
This role is ideal for someone who thrives in a dynamic startup environment where requirements evolve quickly and problems require creative solutions. You'll work on diverse technical challenges, from API development to external integrations, while collaborating with senior engineers, product managers, and customer success teams.
Key Responsibilities
- Build core platform features: Develop robust APIs, services, and integrations that power billing automation and revenue recognition capabilities.
- Work across the full stack: Contribute to backend services and frontend interfaces to ensure seamless user experiences.
- Implement critical integrations: Connect the platform with external systems including CRMs, data warehouses, ERPs, and payment processors.
- Optimize for scale: Design systems that handle complex pricing models, high-volume usage data, and real-time financial calculations.
- Drive quality and best practices: Write clean, maintainable code and participate in code reviews and architectural discussions.
- Solve complex problems: Debug issues across the stack and collaborate with cross-functional teams to address evolving client needs.
The Impact You'll Make
- Power business growth: Enable fast-growing B2B companies to scale billing and revenue operations efficiently.
- Build critical financial infrastructure: Contribute to systems handling high-value transactions with accuracy and compliance.
- Shape product direction: Join during a scaling phase where your contributions directly impact product evolution and customer success.
- Accelerate your expertise: Gain deep exposure to financial systems, B2B SaaS operations, and enterprise-grade software development.
- Drive the future of B2B commerce: Help build infrastructure supporting next-generation pricing models, from usage-based to value-based billing.
Ideal Candidate Profile
Experience
- 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems.
- Strong backend development experience using one or more frameworks: FastAPI / Django (Python), Spring (Java), or Express (Node.js).
- Deep understanding of relevant libraries, tools, and best practices within the chosen backend framework.
- Strong experience with databases (SQL & NoSQL), including efficient data modeling and performance optimization.
- Proven experience designing, building, and maintaining APIs, services, and backend systems with solid system design and clean code practices.
Domain
- Experience with financial systems, billing platforms, or fintech applications is highly preferred.
Company Background
- Experience working in product companies or startups (preferably Series A to Series D).
Education
- Candidates from Tier 1 engineering institutes (IITs, BITS, etc.) are highly preferred.
Job description
Job Title: React JS Developer - (Core Skill - React JS)
Core Skills -
- Minimum of 6 months of experience in frontend development using React JS (excluding internships and training programs)
The Company
Our mission is to enable and empower engineering teams to build world-class solutions and release them faster than ever. We strongly believe engineers are the building blocks of a great society: we love building, we love solving problems, and we love the unique challenges faced by the engineering community. Our DNA stems from Mohit's passion for building technology products that solve problems with a big impact.
We are a bootstrapped company largely and aspire to become the next household name in the engineering community and leave a signature on all the great technological products being built across the globe.
Who would be your customers? We are going to shoulder the responsibility of solving the small but persistent problems that you, as an engineer, have faced over the years.
The Opportunity
An exciting opportunity to be part of a story, making an impact on how domain solutions will be built in the years to come.
Do you wish to lead the Engineering vertical, build your own fort, and shine through the journey of building the next-generation platform?
Blaash is looking to hire a problem solver with strong technical expertise in building large applications. You will build the next-generation AI solution for the Engineering Team - including backend and frontend.
Responsibility
Own front-end and back-end development in all aspects. Propose high-level design solutions and POCs to arrive at the right solution. Mentor junior developers and interns.
What makes you an ideal team member we are eagerly waiting to meet:
- Demonstrate strong architecture and design skills in building high-performance APIs using AWS services.
- Design and implement highly scalable, interactive web applications with high usability
- Collaborate with product teams to iterate ideas on data monetization products/services and define feasibility
- Rapidly iterate on product ideas, build prototypes, and participate in proof of concepts
- Collaborate with internal and external teams in troubleshooting functional and performance issues
- Work with DevOps Engineers to integrate any new code into existing CI/CD pipelines
- 6+ months of experience in frontend development using React JS
- 6+ months of hands-on experience developing high-performance APIs & web applications
Salary -
- First 4 months (training and probation period): 15K-20K INR per month
- On successful completion of the probation period: 3-3.5 LPA INR
- Equity Benefits for deserving candidates
How we will set you up for success: You will work closely with the Founding team to understand what we are building.
You will be given comprehensive training on the tech stack, with an opportunity to avail virtual training as well. You will have a monthly one-on-one with the founders to discuss feedback.
If you’ve made it this far, then maybe you’re interested in joining us to build something pivotal, carving a unique story for you - Get in touch with us, or apply now!
We are looking for a Machine Learning Engineer to design, build, and operate production-grade ML systems powering scalable, data-driven applications.
You will be responsible for developing end-to-end machine learning pipelines, ensuring seamless consistency between development and production environments while building reliable and scalable ML infrastructure.
This role focuses on production ML engineering, not experimentation-only data science. You will work closely with backend, data, and product teams to deploy and operate predictive systems at scale.
Requirements
- Strong coding skills in Python, with the ability to build reliable, production-quality systems.
- Experience developing end-to-end machine learning pipelines, ensuring consistency between development, training, and production environments.
- Ability to design and implement scalable ML architectures tailored to site traffic, system scale, and predictive feature complexity.
- Familiarity with model and data versioning, resource allocation, system scaling, and structured logging practices.
- Experience building systems that monitor, detect, and respond to failures across infrastructure resources, data pipelines, and model predictions.
- Hands-on expertise with MLOps tools and workflows for scalable, production-level model deployment and lifecycle management.
- Strong problem-solving abilities and comfort working in a fast-paced, high-ownership environment.
Hiring: Full Stack Developer (Next.js + Python/Node.js) – 4+ Years Experience
We are looking for a skilled Full Stack Developer with 4+ years of experience in building scalable web applications using Next.js and either Python or Node.js on the backend.
🔹 Key Responsibilities:
- Develop and maintain web applications using Next.js (React framework)
- Build and manage RESTful APIs using Node.js (Express/NestJS) or Python (Django/FastAPI)
- Work on end-to-end feature development (frontend + backend)
- Integrate third-party APIs and services
- Optimize applications for performance and scalability
- Collaborate with cross-functional teams in an agile environment
🔹 Required Skills:
- 4+ years of full-stack development experience
- Strong hands-on experience with Next.js
- Backend expertise in Node.js or Python
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
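The A/B testing and hypothesis-testing skills above can be illustrated with a two-proportion z-test on conversion rates, here in pure Python with hypothetical traffic numbers (real analyses would typically use `scipy.stats` or `statsmodels`):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-statistic and two-sided p-value for a conversion-rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 120/1000 vs A's 100/1000 -- significant at alpha = 0.05?
z, p = two_proportion_ztest(100, 1000, 120, 1000)
print(round(z, 2), round(p, 3))
```

With these illustrative numbers the p-value exceeds 0.05, a reminder that a visible lift can still be statistical noise at this sample size.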
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)
We are looking for a visionary and hands-on Head of Data Science and AI with at least 6 years of experience to lead our data strategy and analytics initiatives. In this pivotal role, you will take full ownership of the end-to-end technology stack, driving a data-analytics-driven business roadmap that delivers tangible ROI. You will not only guide high-level strategy but also remain hands-on in model design and deployment, ensuring our data capabilities directly empower executive decision-making.
If you are passionate about leveraging AI and Data to transform financial services, we invite you to lead our data transformation journey.
Key Responsibilities
Strategic Leadership & Roadmap
- End-to-End Tech Stack Ownership: Define, own, and evolve the complete data science and analytics technology stack to ensure scalability and performance.
- Business Roadmap & ROI: Develop and execute a data analytics-driven business roadmap, ensuring every initiative is aligned with organizational goals and delivers measurable Return on Investment (ROI).
- Executive Decision Support: Create and present high-impact executive decision packs, providing actionable insights that drive key business strategies.
Model Design & Deployment (Hands-on)
- Hands-on Development: Lead by example with hands-on involvement in AI modeling, machine learning model design, and algorithm development using Python.
- Deployment & Ops: Oversee and execute the deployment of models into production environments, ensuring reliability, scalability, and seamless integration with existing systems.
- Leverage expert-level knowledge of Google Cloud Agentic AI, Vertex AI and BigQuery to build advanced predictive models and data pipelines.
- Develop business dashboards for various sales channels and drive data-driven decision-making to improve sales and reduce costs.
Governance & Quality
- Data Governance: Establish and enforce robust data governance frameworks, ensuring data accuracy, security, consistency, and compliance across the organization.
- Best Practices: Champion best practices in coding, testing, and documentation to build a world-class data engineering culture.
Collaboration & Innovation
- Work closely with Product, Engineering, and Business leadership to identify opportunities for AI/ML intervention.
- Stay ahead of industry trends in AI, Generative AI, and financial modeling to keep Bajaj Capital at the forefront of innovation.
Must-Have Skills & Experience
Experience:
- At least 7 years of industry experience in Data Science, Machine Learning, or a related field.
- Proven track record of applying AI and leading data science teams or initiatives that resulted in significant business impact.
Technical Proficiency:
- Core Languages: Proficiency in Python is mandatory, with strong capabilities in libraries such as Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
- Cloud Data Stack: Expert-level command of Google Cloud Platform (GCP), specifically Agentic AI, Vertex AI and BigQuery.
- AI & Analytics Stack: Deep understanding of the modern AI and Data Analytics stack, including data warehousing, ETL/ELT pipelines, and MLOps.
- Visualization: PowerBI in combination with custom web/mobile applications.
Leadership & Soft Skills:
- Ability to translate complex technical concepts into clear business value for stakeholders.
- Strong ownership mindset with the ability to manage end-to-end project lifecycles.
- Experience in creating governance structures and executive-level reporting.
Good-to-Have / Plus
- Domain Expertise: Prior experience in the BFSI domain (Wealth Management, Insurance, Mutual Funds, or Fintech).
- Certifications: Google Professional Data Engineer or Google Professional Machine Learning Engineer certifications.
- Advanced AI: Experience with Generative AI (LLMs), RAG architectures, and real-time analytics.
In this role, you'll be responsible for building machine learning-based systems and conducting data analysis that improve the quality of our large geospatial data. You'll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.
Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time
Must-have Qualifications
- 4+ years' experience
- Senior data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from HuggingFace
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills
Extra Merit Qualifications
- Knowledge in at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and building predictive models
- Programming skills and experience with Scala and/or Java
About CloudThat:-
At CloudThat, we are driven by our mission to empower professionals and businesses to harness the full potential of cloud technologies. As a leader in cloud training and consulting services in India, our core values guide every decision we make and every customer interaction we have.
Role Overview:-
We are looking for a passionate and experienced Technical Trainer to join our expert team and help drive knowledge adoption across our customers, partners, and internal teams.
Key Responsibilities:
• Deliver high-quality, engaging technical training sessions both in-person and virtually to customers, partners, and internal teams.
• Design and develop training content, labs, and assessments based on business and technology requirements.
• Collaborate with internal and external SMEs to draft course proposals aligned with customer needs and current market trends.
• Assist in training and onboarding of other trainers and subject matter experts to ensure quality delivery of training programs.
• Create immersive lab-based sessions using diagrams, real-world scenarios, videos, and interactive exercises.
• Develop instructor guides, certification frameworks, learner assessments, and delivery aids to support end-to-end training delivery.
• Integrate hands-on project-based learning into courses to simulate practical environments and deepen understanding.
• Support the interpersonal and facilitation aspects of training fostering an inclusive, engaging, and productive learning environment
Skills & Qualifications:
• Experience developing content for professional certifications or enterprise skilling programs.
• Familiarity with emerging technology areas such as cloud computing, AI/ML, DevOps, or data engineering.
Technical Competencies:
- Expertise in languages like C, C++, Python, Java
- Understanding of algorithms and data structures
- Expertise on SQL
Or Directly Apply-https://cloudthat.keka.com/careers/jobdetails/95441
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in a solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
Domain: Digital Health | EHR & Care Management Platforms
Professional Summary
Full Stack Software Engineer with hands-on experience in building and scaling healthcare technology platforms. Strong backend expertise in Python and Django, with mandatory proficiency in JavaScript and working knowledge of React for frontend development. Experienced in developing APIs, managing databases, and collaborating with cross functional teams to deliver reliable, production-grade solutions for patient care and clinical operations.
Technical Skills
Backend:
• Python, Django, Django-based frameworks (Zango or similar meta-frameworks)
• RESTful API development
• PostgreSQL and relational database design
Frontend (Mandatory):
• JavaScript (ES6+)
• HTML, CSS
• React.js (working knowledge)
• API integration with frontend components
Tools & Platforms:
• Git and version control
• CI/CD fundamentals
• Docker (basic exposure)
• Cloud platforms (AWS/GCP – exposure)
Professional Experience:
• Designed, developed, and maintained backend services using Python and Django
• Built and optimized RESTful APIs for patient management, scheduling, and care workflows
• Developed frontend components using JavaScript and integrated APIs with React-based interfaces
• Collaborated with product, clinical, and operations teams
• Integrated external systems such as labs and communication services
• Contributed to data modeling aligned with clinical workflows
• Debugged production issues and improved platform performance
• Maintained internal documentation
Key Contributions:
• Enabled end-to-end patient journeys through backend and frontend integrations
• Improved operational efficiency via workflow automation
• Delivered production-ready features in a regulated healthcare environment
Shortened version:
- 5+ years experience in Python with strong core concepts (OOP, data structures, exception handling, problem-solving)
- Experience with FastAPI (preferred) or Flask/Django
- Strong REST API development & authentication (JWT/OAuth)
- SQL (MySQL/PostgreSQL) & NoSQL (MongoDB/Firestore) experience
- Basic cloud knowledge (GCP or AWS)
- Git, code reviews, clean coding & performance optimization
- Good communication and teamwork skills
Responsibilities:
- Build and maintain scalable backend apps and REST APIs
- Work with databases, third-party integrations, and cloud deployments
- Write clean, testable, optimized code
- Debug, troubleshoot, and improve performance
- Collaborate with team on technical solutions
Good to have:
- GCP (BigQuery, Dataflow, Cloud Functions)
- Microservices, Redis/Kafka/RabbitMQ
- Docker, CI/CD
- Basic Pandas/Numpy for data handling
Perks & Benefits
- 5 Days Working
- Family Health Insurance
- Relaxation Area
- Affordable Lunch
- Free Snacks & Drinks
- Open Work Culture
- Competitive Salary & Benefits
- Festival Celebrations
- International Exposure Opportunities
- 20 Paid Leaves per Year
- Marriage Leave & Parental Leave Policy
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of company, a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
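The RAG architectures referenced above all hinge on one retrieval step: rank documents by embedding similarity, then feed the top hits to the LLM as grounding context. The toy sketch below hand-rolls cosine similarity over made-up vectors; in practice the embeddings would come from a model and live in a vector database such as Pinecone, FAISS, or Weaviate:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- in a real RAG system these come from an embedding model.
docs = {
    "discharge summary": [0.9, 0.1, 0.0],
    "billing codes":     [0.1, 0.9, 0.2],
    "visit transcript":  [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    """Top-k documents by cosine similarity; their text would then be
    injected into the LLM prompt as grounding context."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.0]))
```

Frameworks like LangChain wrap exactly this loop (embed query, search index, stuff prompt), adding chunking, metadata filtering, and prompt templates on top.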
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Python API Connector Developer
Peliqan is a highly scalable and secure cloud solution for data collaboration in the modern data stack. We are on a mission to reinvent how data is shared in companies.
We are looking for a Python Developer to join our team and help us build robust and reliable connectors that connect with existing REST APIs. The ideal candidate has a strong background in consuming REST and GraphQL APIs, using Postman, and general development skills in Python.
In this role, you will be responsible for developing data connectors: Python wrappers that consume existing APIs from various data sources such as SaaS CRM systems, accounting software, ERP systems, and eCommerce platforms. You will become an expert in handling APIs to perform ETL data extraction from these sources into the Peliqan data warehouse.
You will also maintain documentation and provide technical support to our internal and external clients. Additionally, you will collaborate with other teams such as Product, Engineering, and Design to ensure the successful implementation of our connectors. If you have an eye for detail and a passion for technology, we want to hear from you!
Your responsibilities
- As a Python API Connector Developer at Peliqan.io, you are responsible for developing high-quality ETL connectors to extract data from SaaS data sources by consuming REST APIs and GraphQL APIs.
- Develop and maintain Connector documentation, including code samples and usage guidelines
- Troubleshoot and debug complex technical problems related to APIs and connectors
- Collaborate with other developers to ensure the quality and performance of our connectors
What makes you a great candidate
- Expert knowledge of technologies such as RESTful APIs, GraphQL, JSON, XML, OAuth2 flows, HTTP, SSL/TLS, Webhooks, API authentication methods, rate limiting, paging strategies in APIs, headers, response codes
- Basic knowledge of web services technologies, including SOAP and WSDL
- Proficiency in database technologies such as MySQL, MongoDB, etc.
- Experience with designing, building, and maintaining public and private APIs
- Excellent understanding of REST APIs (consuming APIs in Postman)
- Coding in Python
- Experienced in working with JSON, JSON parsing in Python and JSON path
- Good understanding of SaaS software, CRM, Marketing Automation, Accounting, and ERP systems (as they will be the main source of data)
- Analytical mindset: you are capable of discussing technical requirements with customers and implementing these in the Peliqan platform
- Customer-driven, great communication skills
- You are motivated and proactive, with an eye for detail
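To make the connector work described above concrete, here is a minimal sketch of offset-based pagination when extracting records from a REST API. The `fetch_page` function is a stand-in for a real HTTP call (e.g. via `requests`); the endpoint shape, page size, and field names are illustrative assumptions, not Peliqan's actual API.

```python
# Minimal sketch of offset-based pagination in an ETL extractor.
# fetch_page stubs out a real HTTP GET; the response shape is assumed.

PAGE_SIZE = 2

# Stubbed "API": five fake CRM contacts served in pages.
_FAKE_RECORDS = [{"id": i, "name": f"contact-{i}"} for i in range(5)]

def fetch_page(offset, limit):
    """Pretend GET /contacts?offset=..&limit=.. returning a JSON page."""
    chunk = _FAKE_RECORDS[offset:offset + limit]
    return {"data": chunk, "has_more": offset + limit < len(_FAKE_RECORDS)}

def extract_all():
    """Walk every page and yield each record, as an ETL extractor would."""
    offset = 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)
        yield from page["data"]
        if not page["has_more"]:
            break
        offset += PAGE_SIZE

records = list(extract_all())
print(len(records))  # 5
```

A production connector would layer rate-limit backoff, OAuth2 token refresh, and error handling around the same loop.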
About RapidClaims
RapidClaims is a leader in AI-driven revenue cycle management, transforming medical coding and revenue operations with cutting-edge technology.
The company has raised $11 million in total funding from top investors, including Accel and Together Fund.
Join us as we scale a cloud-native platform that runs transformer-based Large Language Models rigorously fine-tuned on millions of clinical notes and claims every month. You'll engineer autonomous coding pipelines that parse ICD-10-CM, CPT® and HCPCS at lightning speed, deliver reimbursement insights with sub-second latency and >99% accuracy, and tackle the deep-domain challenges that make clinical AI one of the hardest, and most rewarding, problems in tech.
Engineering Manager – Job Overview
We are looking for an Engineering Manager who can lead a team of engineers while staying deeply involved in technical decisions. This role requires a strong mix of people leadership, system design expertise, and execution focus to deliver high-quality product features at speed. You will work closely with Product and Leadership to translate requirements into scalable technical solutions and build a high-performance engineering culture.
Key Responsibilities:
● Lead, mentor, and grow a team of software engineers
● Drive end-to-end ownership of product features from design to deployment
● Work closely with Product to translate requirements into technical solutions
● Define architecture and ensure scalability, reliability, and performance
● Establish engineering best practices, code quality, and review standards
● Improve development velocity, sprint planning, and execution discipline
● Hire strong engineering talent and build a solid team
● Create a culture of accountability, ownership, and problem-solving
● Ensure timely releases without compromising quality
● Stay hands-on with critical technical decisions and reviews
Requirements:
● 5+ years of software engineering experience, with 2+ years in team leadership
● Strong experience in building and scaling backend systems and APIs
● Experience working in a product/startup environment
● Good understanding of system design, architecture, and databases
● Ability to manage engineers while remaining technically credible
● Strong problem-solving and decision-making skills
● Experience working closely with Product teams
● High ownership mindset and bias for action
Good to Have
● Experience in healthcare tech / automation / RPA / AI tools
● Experience building internal tools and workflow systems
● Exposure to cloud infrastructure (AWS/GCP/Azure)
Role Overview
We are hiring a Principal Datacenter Backend Developer to architect and build highly scalable, reliable backend platforms for modern data centers. This role owns control-plane and data-plane services powering orchestration, monitoring, automation, and operational intelligence across large-scale on-prem, hybrid, and cloud data center environments.
This is a hands-on principal IC role with strong architectural ownership and technical leadership responsibilities.
Key Responsibilities
- Own end-to-end backend architecture for datacenter platforms (orchestration, monitoring, DCIM, automation).
- Design and build high-availability distributed systems at scale.
- Develop backend services using Java (Spring Boot / Micronaut / Quarkus) and/or Python (FastAPI / Flask / Django).
- Build microservices for resource orchestration, telemetry ingestion, capacity and asset management.
- Design REST/gRPC APIs and event-driven systems.
- Drive performance optimization, scalability, and reliability best practices.
- Embed SRE principles, observability, and security-by-design.
- Mentor senior engineers and influence technical roadmap decisions.
Required Skills
- Strong hands-on experience in Java and/or Python.
- Deep understanding of distributed systems and microservices.
- Experience with Kubernetes, Docker, CI/CD, and cloud-native deployments.
- Databases: PostgreSQL/MySQL, NoSQL, time-series data.
- Messaging systems: Kafka / Pulsar / RabbitMQ.
- Observability tools: Prometheus, Grafana, ELK/OpenSearch.
- Secure backend design (OAuth2, RBAC, audit logging).
Nice to Have
- Experience with DCIM, NMS, or infrastructure automation platforms.
- Exposure to hyperscale or colocation data centers.
- AI/ML-based monitoring or capacity planning experience.
Why Join
- Architect mission-critical platforms for large-scale data centers.
- High-impact principal role with deep technical ownership.
- Work on complex, real-world distributed systems problems.
Title: Team Lead – Software Development
(Lead a team of developers to deliver applications in line with product strategy and growth)
Experience: 8–10 years
Department: Information Technology
Classification: Full-Time
Location: Hybrid in Hyderabad, India (3 days onsite and 2 days remote)
Job Description:
Looking for a full-time Software Development Team Lead to lead our high-performing Information Technology team. This person will play a key role in Clarity's business by overseeing a development team, focusing on existing systems and long-term growth. This person will serve as the technical leader, able to discuss data structures, new technologies, and methods of achieving system goals. This person will be crucial in facilitating collaboration among team members and providing mentoring.
Reporting to the Director, Software Development, this person will be responsible for the day-to-day operations of their team and be the first point of escalation and technical contact for the team.
Job Responsibilities:
- Manages all activities of their software development team and sets goals for each team member to ensure timely project delivery.
- Performs code reviews and writes code if needed.
- Collaborates with the Information Technology department and business management team to establish priorities for the team's plan and manage team performance.
- Provides guidance on project requirements, developer processes, and end-user documentation.
- Supports an excellent customer experience by being proactive in assessing escalations and working with the team to respond appropriately.
- Uses technical expertise to contribute towards building best-in-class products. Analyzes business needs and develops a mix of internal and external software systems that work well together.
- Using Clarity platforms, writes, reviews, and revises product requirements and specifications. Analyzes software requirements, implements design plans, and reviews unit tests. Participates in other areas of the software development process.
Required Skills:
- A Bachelor's degree in Computer Science, Information Technology, Engineering, or a related discipline.
- Excellent written and verbal communication skills.
- Experience with .NET Framework, Web Applications, Windows Applications, and Web Services.
- Experience in developing and maintaining applications using C# .NET Core, ASP.NET MVC, and Entity Framework.
- Experience in building responsive front ends using React.js, Angular.js, HTML5, CSS3, and JavaScript.
- Experience in creating and managing databases, stored procedures, and complex queries with SQL Server.
- Experience with Azure cloud infrastructure.
- 8+ years of experience in designing and coding software in the above technology stack.
- 3+ years of managing a team within a development organization.
- 3+ years of experience in Agile methodologies.
Preferred Skills:
- Experience in Python, WordPress, PHP.
- Experience in using Azure DevOps.
- Experience working with Salesforce or any other comparable ticketing system.
- Experience in insurance/consumer benefits/file processing (EDI).
About the role:
We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.
At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.
Key responsibilities:
- Own and drive reliability and infrastructure strategy across multiple products or client engagements
- Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
- Lead architecture discussions around observability, scalability, availability, and cost efficiency.
- Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
- Build and review production-grade CI/CD and IaC systems used across teams
- Act as an escalation point for complex production issues and incident retrospectives.
- Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
- Mentor junior engineers through design reviews, technical guidance, and best practices.
- Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
- Help teams mature their on-call processes, reliability culture, and operational ownership.
- Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
About you:
- 9+ years of experience in SRE, DevOps, or software engineering roles
- Strong experience designing and operating Kubernetes-based systems on AWS at scale
- Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
- Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
- Strong understanding of distributed systems, microservices, and containerized workloads.
- Ability to write and review production-quality code (Golang, Python, Java, or similar)
- Solid Linux fundamentals and experience debugging complex system-level issues
- Experience driving cross-team technical initiatives.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Nice to have:
- Experience working in consulting or multi-client environments.
- Exposure to cost optimization, or large-scale AWS account management
- Experience building internal platforms or shared infrastructure used by multiple teams.
- Prior experience influencing or defining engineering standards across organizations.
8+ years backend engineering experience in production systems
Proven experience architecting large-scale distributed systems (high throughput, low latency, high availability)
Deep expertise in system design including scalability, fault tolerance, and performance optimization
Experience leading cross-team technical initiatives in complex systems
Strong understanding of security, privacy, compliance, and secure coding practices
Python Software Engineer (3–5 Years Experience)
Location: Pune
Role Overview
We are seeking skilled Python engineers to join our core product team. You will work on backend services, API development, and system integrations, contributing to a codebase of over 250,000 lines of Python and collaborating with frontend, DevOps, and native code teams.
Key Responsibilities
· Design, develop, and maintain scalable Python backend services and APIs
· Optimize performance and reliability of large, distributed systems
· Collaborate with frontend (JS/HTML/CSS) and native (C/C++/C#) teams
· Write unit/integration tests and participate in code reviews
· Troubleshoot production issues and implement robust solutions
Required Skills
· 3–5 years of professional Python development experience
· Strong understanding of OOP, design patterns, and modular code structure
· Experience with MongoDB (PyMongo), Mako, RESTful APIs, and asynchronous programming
· Familiarity with code quality tools (flake8, pylint) and test frameworks (pytest, unittest)
· Experience with Git and collaborative development workflows
· Ability to read and refactor large, multi-module codebases
Nice to Have
· Experience with web frameworks (web.py, Flask, Django)
· Knowledge of C/C++ or C# for cross-platform integrations
· Familiarity with CI/CD, Docker, and cloud deployment
· Exposure to security, encryption, or enterprise SaaS products
What We Offer
· Opportunity to work on a mission-critical, enterprise-scale product
· Collaborative, growth-oriented engineering culture
· Flexible work arrangements (remote/hybrid)
· Competitive compensation and benefits
We are seeking a mature, proactive, and highly capable Senior Full Stack Engineer with over 5 years of experience in Python, React, Cloud Services, and Generative AI (LLM, RAG, Agentic AI). The ideal candidate can handle multiple challenges independently, think smartly, and build scalable end-to-end applications while also owning architecture and deployment.
Must Have Skills
- Python (FastAPI, Django REST Framework, Flask)
- React JS
- Cloud Services (VMs, Storage, Authentication and Authorization, Functions, and Deployments)
- Microservices , Serverless Architecture
- Docker, Container orchestration (Kubernetes)
- API Development & Integration
- Bitbucket or Git-based version control
- Agile/Kanban working model
- Familiarity with AI-powered coding assistants such as GitHub Copilot, Cursor AI, or Lovable AI.
- Basic understanding of Generative AI concepts and prompt engineering.
Good to Have Skills
- Experience with AI orchestration tools (LangChain, LlamaIndex, Semantic Kernel)
- Generative AI (LLMs, RAG Framework, Vector DB, AI Chatbots, Agentic AI)
- API Testing Tools (Postman)
- CI/CD Pipelines
- Advanced Cloud Networking & Security
- Automation Testing (Playwright, Selenium)
Preferred Personal Attributes
- Highly proactive and self-driven
- Smart problem solver with strong analytical ability
- Ability to work independently in ambiguous and complex scenarios
- Strong communication & stakeholder management skills
- Ownership mindset and willingness to handle multiple challenges at once
Key Responsibilities
Full Stack Development
- Build and maintain production-grade applications using Python (FastAPI/Django/Flask) and React / Next.js.
- Develop reusable frontend components and optimized backend services/microservices.
- Ensure clean architecture, maintainability, and code quality.
- Own development across the lifecycle—design, build, testing, deployment.
- Develop AI-driven applications using LLMs (OpenAI, LLaMA, Claude, Gemini, etc.).
- Build and optimize RAG pipelines, vector searches, embeddings, and agent workflows.
- Integrate vector databases: Pinecone, FAISS, Chroma, MongoDB Atlas Vector Search.
- Build AI chatbots, automation agents, and intelligent assistants.
- Apply prompt engineering, fine-tuning, and model evaluation best practices.
- Deploy, manage, and monitor cloud workloads on AWS/Azure/GCP.
- Design and implement serverless architectures, microservices, event-driven flows.
- Use Docker, CI/CD, and best DevOps practices.
- Ensure scalability, security, cost optimization, and reliability.
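The retrieval step at the heart of the RAG pipelines mentioned above can be sketched in a few lines: rank document chunks by cosine similarity to a query embedding, then feed the top hit into the LLM prompt. Real systems use learned embeddings and a vector database (Pinecone, FAISS, Chroma, etc.); the 3-dimensional vectors and chunk names here are hand-made stand-ins.

```python
# Toy illustration of RAG retrieval: cosine-similarity ranking of chunks.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings for three document chunks (assumed, not real vectors).
chunks = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "how do refunds work?"

best = max(chunks, key=lambda name: cosine(query_vec, chunks[name]))
prompt = f"Answer using this context: {best}"
print(best)  # refund policy
```

In production the same ranking is delegated to the vector database's nearest-neighbour search rather than computed in Python.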
Collaboration & Leadership
- Comfortably handle ambiguity, break down problems, and deliver with ownership.
- Lead technical initiatives and mentor junior team members.
- Work closely with cross-functional teams in Agile/Kanban environments.
Role Overview:
We are seeking a backend-focused Software Engineer with deep expertise in REST APIs,
real-time integrations, and cloud-based application services. The ideal candidate will build
scalable backend systems, integrate real-time data flows, and contribute to system design
and documentation. This is a hands-on role working with global teams in a fast-paced, Agile
environment.
Key Responsibilities:
• Design, develop, and maintain REST APIs and backend services using Python, FastAPI,
and SQLAlchemy.
• Build and support real-time integrations using AWS Lambda, API Gateway, and
EventBridge.
• Develop and maintain Operational Data Stores (ODS) for real-time data access.
• Write performant SQL queries and work with dimensional data models in PostgreSQL.
• Contribute to cloud-based application logic and data orchestration.
• Containerize services using Docker and deploy via CI/CD pipelines.
• Implement automated testing using pytest, pydantic, and related tools.
• Collaborate with cross-functional Agile teams using tools like Jira.
• Document technical workflows, APIs, and system integrations with clarity and
consistency.
• Should have experience in team management
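As a flavour of the dimensional SQL work described above, here is a self-contained sketch of a fact table joined to a dimension and aggregated per attribute. The posting's stack is PostgreSQL with SQLAlchemy; stdlib `sqlite3` is used here only so the example runs anywhere, and the table and column names are invented.

```python
# Dimensional-model sketch: fact table joined to a dimension, then aggregated.
# Uses stdlib sqlite3 for portability; production would target PostgreSQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 15.0), ('games', 7.5)]
```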
Required Skills & Experience:
• 8+ years of backend or integrations engineering experience.
• Expert-level knowledge of REST API development and real-time system design.
• Strong experience with: Python (FastAPI preferred), SQLAlchemy.
• PostgreSQL and advanced SQL.
• AWS Lambda, API Gateway, EventBridge.
• Operational Data Stores (ODS) and distributed system integration.
• Experience with Docker, Git, CI/CD tools, and automated testing frameworks.
• Experience working in Agile environments and collaborating with cross-functional
teams.
• Comfortable producing and maintaining clear technical documentation.
• Working knowledge of React is acceptable but not a focus.
• Hands-on experience working with Databricks or similar data platforms.
Education & Certifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field (required).
• Master’s degree is a plus.
• Certifications in AWS (e.g., Developer Associate, Solutions Architect) or Python
frameworks are highly preferred.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
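One step in the responsibilities above, identifying files over GitHub's 100 MB limit so they can be moved to Git LFS, can be sketched as a short Python scan. The paths and demo files are illustrative; in practice you would point `find_lfs_candidates` at a real working copy and use its output to drive `git lfs track`.

```python
# Sketch of the "identify large files" step before a Perforce-to-GitHub
# migration: files above GitHub's 100 MB hard limit must move to Git LFS.
import os
import pathlib
import tempfile

LIMIT = 100 * 1024 * 1024  # GitHub rejects single files above 100 MB

def find_lfs_candidates(root, limit=LIMIT):
    """Return repo-relative paths of files whose size exceeds `limit`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                hits.append(os.path.relpath(path, root))
    return sorted(hits)

# Demo on a throwaway directory with a 1 KB threshold instead of 100 MB.
with tempfile.TemporaryDirectory() as repo:
    pathlib.Path(repo, "small.txt").write_bytes(b"x" * 10)
    pathlib.Path(repo, "big.bin").write_bytes(b"x" * 2048)
    candidates = find_lfs_candidates(repo, limit=1024)
print(candidates)  # ['big.bin']
```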
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs, performance metrics, and alerts to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Azure, Terraform, GitLab CI, Splunk (Mandatory) Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, Gitlab Ci
******
Notice period: 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
Principal Electrical & Electronics Engineer – Robotics
About the Role
Octobotics Tech is seeking a Principal Electrical & Electronics Engineer to lead the design, development, and validation of high-reliability electronic systems for next-generation robotic platforms. This is a core engineering leadership position for professionals with strong multidisciplinary experience in power electronics, embedded systems, EMI/EMC compliance, and safety-critical design, ideally within marine, industrial, or hazardous environments.
Key Responsibilities
· System Architecture & Leadership
· Architect and supervise electrical and electronic systems for autonomous and remotely operated robotic platforms, ensuring reliability under challenging industrial conditions.
· Lead end-to-end product development — from design, prototyping, and integration to validation and deployment.
· Develop modular, scalable architectures enabling payload integration, sensor fusion, and AI-based control.
· Collaborate with firmware, AI/ML, and mechanical teams to achieve seamless system-level integration.
· Power & Safety Systems
· Design robust, stable, and redundant power supply architectures for FPGA-based controllers, sensors, and high-value electronics.
· Engineer surge-protected, isolated, and fail-safe electrical systems compliant with MIL-STD, ISO, and IEC safety standards.
· Implement redundancy, grounding, and safety strategies for operations in Oil & Gas, Offshore, and Hazardous Zones.
· Compliance & Validation
· Ensure adherence to EMI/EMC, ISO 13485, IEC 60601, and industrial safety standards.
· Conduct design simulations, PCB design reviews, and validation testing using tools such as KiCAD, OrCAD, MATLAB/Simulink, and LabVIEW.
· Lead certification, quality, and documentation processes for mission-critical subsystems.
· Innovation & Mentorship
· Apply AI-enhanced signal processing, fuzzy control, and advanced filtering methods to embedded hardware.
· Mentor and guide junior engineers and technicians to foster a knowledge-driven, hands-on culture.
· Contribute to R&D strategy, product roadmaps, and technical proposals supporting innovation and fundraising.
Required Technical Skills
· Hardware & PCB Design: KiCAD, OrCAD, EAGLE; 2–6 layer mixed-signal and power boards.
· Embedded Systems: ARM Cortex A/M, STM32, ESP32, Nordic NRF52, NVIDIA Jetson, Raspberry Pi.
· Programming: Embedded C, Python, MATLAB, PyQt, C/C++.
· Simulation & Control: MATLAB/Simulink, LabVIEW; PID, fuzzy logic, and ML-based control.
· Compliance Expertise: EMI/EMC, MIL-STD, ISO 13485, IEC 60601, and industrial safety standards.
· Hands-On Skills: Soldering, circuit debugging, power testing, and system integration.
Qualifications
· Education: B.Tech/M.S. in Electrical & Electronics Engineering (NIT/IIT preferred).
· Experience: Minimum 5+ years in electronics system design, hardware-firmware integration, or robotics/industrial systems.
· Preferred Background: Experience in R&D leadership, system compliance, and end-to-end hardware development.
Compensation & Growth
CTC: Competitive and flexible for exceptional candidates (aligned with ₹12–20 LPA range).
Engagement: Full-time, 5.5-day work week with fast-paced project timelines.
Rewards: Accelerated growth opportunities in a deep-tech robotics environment driving innovation in inspection and NDT automation.
About the Role
We are seeking a hands-on Tech Lead to design, build, and integrate AI-driven systems that automate and enhance real-world business workflows. This is a high-impact role for someone who enjoys full-stack ownership — from backend AI architecture to frontend user experiences — and can align engineering decisions with measurable product outcomes.
You will begin as a strong individual contributor, independently architecting and deploying AI-powered solutions. As the product portfolio scales, you will lead a distributed team across India and Australia, acting as a System Integrator to align engineering, data, and AI contributions into cohesive production systems.
Example Project
Design and deploy a multi-agent AI system to automate critical stages of a company’s sales cycle, including:
- Generating client proposals using historical SharePoint data and CRM insights
- Summarizing meeting transcripts
- Drafting follow-up communications
- Feeding structured insights into dashboards and workflow tools
The solution will combine RAG pipelines, LLM reasoning, and React-based interfaces to deliver measurable productivity gains.
Key Responsibilities
- Architect and implement AI workflows using LLMs, vector databases, and automation frameworks
- Act as a System Integrator, coordinating deliverables across distributed engineering and AI teams
- Develop frontend interfaces using React/JavaScript to enable seamless human-AI collaboration
- Design APIs and microservices integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure)
- Drive architecture decisions balancing scalability, performance, and security
- Collaborate with product managers, clients, and data teams to translate business use cases into production-ready systems
- Mentor junior engineers and evolve into a broader leadership role as the team grows
Ideal Candidate Profile
Experience Requirements
- 5+ years in full-stack development (Python backend + React/JavaScript frontend)
- Strong experience in API and microservice integration
- 2+ years leading technical teams and coordinating distributed engineering efforts
- 1+ year of hands-on AI project experience (LLMs, Transformers, LangChain, OpenAI/Azure AI frameworks)
- Prior experience in B2B SaaS environments, particularly in AI, automation, or enterprise productivity solutions
Technical Expertise
- Designing and implementing AI workflows including RAG pipelines, vector databases, and prompt orchestration
- Ensuring backend and AI systems are scalable, reliable, observable, and secure
- Familiarity with enterprise integrations (SharePoint, Teams, Databricks, Azure)
- Experience building production-grade AI systems within enterprise SaaS ecosystems
Database Programmer / Developer (SQL, Python, Healthcare)
Job Summary
We are seeking a skilled and experienced Database Programmer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining our database systems, with a strong focus on data integrity, performance, and security. The role requires expertise in SQL, strong programming skills in Python, and prior experience working within the healthcare domain to handle sensitive data and complex regulatory requirements.
Key Responsibilities
- Design, implement, and maintain scalable and efficient database schemas and systems.
- Develop and optimize complex SQL queries, stored procedures, and triggers for data manipulation and reporting.
- Write and maintain Python scripts to automate data pipelines, ETL processes, and database tasks.
- Collaborate with data analysts, software developers, and other stakeholders to understand data requirements and deliver robust solutions.
- Ensure data quality, integrity, and security, adhering to industry standards and regulations such as HIPAA.
- Troubleshoot and resolve database performance issues, including query tuning and indexing.
- Create and maintain technical documentation for database architecture, processes, and applications.
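A minimal sketch of the transform step in the ETL pipelines described above: normalize raw claim rows and mask the patient identifier before loading. The field names and the masking rule are illustrative assumptions, not a compliance implementation.

```python
# Illustrative ETL transform: type-normalize fields and mask patient IDs.
def transform(row):
    """Clean one raw claim row; keeps only the last 4 chars of the patient ID."""
    return {
        "claim_id": int(row["claim_id"]),
        "patient_id": "***" + row["patient_id"][-4:],  # mask the identifier
        "amount": round(float(row["amount"]), 2),
    }

raw = [
    {"claim_id": "101", "patient_id": "MRN0012345", "amount": "199.999"},
    {"claim_id": "102", "patient_id": "MRN0067890", "amount": "50"},
]
clean = [transform(r) for r in raw]
print(clean[0])  # {'claim_id': 101, 'patient_id': '***2345', 'amount': 200.0}
```

In a real pipeline the same transform would sit between an extract (SQL or API) and a load into the warehouse, with validation and audit logging around it.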
Required Qualifications
- Experience:
- Proven experience as a Database Programmer, SQL Developer, or a similar role.
- Demonstrable experience working with database systems, including data modeling and design.
- Strong background in developing and maintaining applications and scripts using Python.
- Direct experience within the healthcare domain is mandatory, including familiarity with medical data (e.g., patient records, claims data) and related regulatory compliance (e.g., HIPAA).
- Technical Skills:
- Expert-level proficiency in Structured Query Language (SQL) and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Solid programming skills in Python, including experience with relevant libraries for data handling (e.g., Pandas, SQLAlchemy).
- Experience with data warehousing concepts and ETL (Extract, Transform, Load) processes.
- Familiarity with version control systems, such as Git.
Preferred Qualifications
- Experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Knowledge of cloud-based data platforms (e.g., AWS, GCP, Azure).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with other programming languages relevant to data science or application development.
Education
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
To process your resume for the next process, please fill out the Google form with your updated resume.
Hi,
Greetings from Ampera!
We are looking for a Data Scientist with strong Python & Forecasting experience.
Title : Data Scientist – Python & Forecasting
Experience : 4 to 7 Yrs
Location : Chennai/Bengaluru
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Working hours : 09:00 a.m. to 06:00 p.m.
Workdays : Mon - Fri
Job Description:
We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.
Key Responsibilities
- Develop and implement forecasting models (time-series and machine learning based).
- Perform exploratory data analysis (EDA), feature engineering, and model validation.
- Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
- Design, train, validate, and optimize machine learning models for real-world business use cases.
- Apply appropriate ML algorithms based on business problems and data characteristics.
- Write clean, modular, and production-ready Python code.
- Work extensively with Python Packages & libraries for data processing and modelling.
- Collaborate with Data Engineers and stakeholders to deploy models into production.
- Monitor model performance and improve accuracy through continuous tuning.
- Document methodologies, assumptions, and results clearly for business teams.
Technical Skills Required:
Programming
- Strong proficiency in Python
- Experience with Pandas, NumPy, Scikit-learn
Forecasting & Modelling
- Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
- Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
- Understanding of seasonality, trend decomposition, and statistical modeling
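ARIMA and SARIMA themselves require a statistics library such as statsmodels, but the smoothing idea underneath this family of models can be sketched in a few lines of pure Python. Simple exponential smoothing, shown here as an illustrative stand-in rather than a full ARIMA implementation:

```python
def exponential_smoothing_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing.

    Each new level is a weighted blend of the latest observation
    and the previous level; the final level is the forecast.
    """
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Monthly demand drifting upward: recent points get more weight.
demand = [100, 102, 104, 108, 110]
forecast = exponential_smoothing_forecast(demand, alpha=0.5)
# forecast == 107.625
```

The smoothing factor `alpha` controls how quickly the forecast reacts to new data: `alpha=1.0` reduces to a naive last-value forecast, while small values average over a long history.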
Data & Deployment
- Experience handling structured and large datasets
- SQL proficiency
- Exposure to model deployment (API-based deployment preferred)
- Knowledge of MLOps concepts is an added advantage
Tools (Preferred)
- TensorFlow / PyTorch (optional)
- Airflow / MLflow
- Cloud platforms (AWS / Azure / GCP)
Educational Qualification
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.
Key Competencies
- Strong analytical and problem-solving skills
- Ability to communicate insights to technical and non-technical stakeholders
- Experience working in agile or fast-paced environments
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Strong AI & Full-Stack Tech Lead
Mandatory (Experience 1): Must have 5+ years of experience in full-stack development, including Python for backend development and React/JavaScript for frontend, along with API/microservice integration.
Mandatory (Experience 2): Must have 2+ years of experience in leading technical teams, coordinating engineers, and acting as a system integrator across distributed teams.
Mandatory (Experience 3): Must have 1+ year of hands-on experience in AI projects, including LLMs, Transformers, LangChain, or OpenAI/Azure AI frameworks.
Mandatory (Tech Skills 1): Must have experience in designing and implementing AI workflows, including RAG pipelines, vector databases, and prompt orchestration.
Mandatory (Tech Skills 2): Must ensure backend and AI system scalability, reliability, observability, and security best practices.
Mandatory (Company): Must have experience working in B2B SaaS companies delivering AI, automation, or enterprise productivity solutions
Tech Skills (Familiarity): Should be familiar with integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure) and enterprise SaaS environments.
Mandatory (Note): Both founders are based in Australia; the design team (2) and developer team (4) are in India. Indian shift timings apply.
Position: Assistant Professor
Department: CSE / IT
Experience: 0 – 15 Years
Joining: Immediate / Within 1 Month
Salary: As per norms and experience
🎓 Qualification:
ME / M.Tech in Computer Science Engineering / Information Technology
Ph.D. (Preferred but not mandatory)
First Class in UG & PG as per AICTE norms
🔍 Roles & Responsibilities:
Deliver high-quality lectures for UG / PG programs
Prepare lesson plans, course materials, and academic content
Guide student projects and internships
Participate in curriculum development and academic planning
Conduct internal assessments, evaluations, and result analysis
Mentor students for academic and career growth
Participate in departmental research activities
Publish research papers in reputed journals (Scopus/SCI preferred)
Attend Faculty Development Programs (FDPs), workshops, and conferences
Contribute to NAAC / NBA accreditation processes
Support institutional administrative responsibilities
💡 Required Skills:
Strong subject knowledge in CSE / IT domains
Programming proficiency (Python, Java, C++, Data Structures, AI/ML, Cloud, etc.)
Excellent communication and presentation skills
Research orientation and academic enthusiasm
Team collaboration and mentoring ability
* Python (3 to 6 years): Strong expertise in data workflows and automation
* Spark (PySpark): Hands-on experience with large-scale data processing
* Pandas: For detailed data analysis and validation
* Delta Lake: Managing structured and semi-structured datasets at scale
* SQL: Querying and performing operations on Delta tables
* Azure Cloud: Compute and storage services
* Orchestrator: Good experience with either ADF or Airflow
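The Pandas-based analysis and validation named in the skills above can be sketched as a batch of data-quality checks run before a downstream write; the column names here are illustrative, and pandas is assumed to be available:

```python
import pandas as pd

def validate_batch(df):
    """Basic data-quality checks before writing a batch downstream.

    Returns a dict of issue counts: a simplified stand-in for the
    kind of validation a pipeline would run before a Delta table write.
    """
    return {
        "missing_ids": int(df["order_id"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
        "duplicate_ids": int(df["order_id"].duplicated().sum()),
    }

batch = pd.DataFrame({
    "order_id": [1, 2, 2, None],
    "amount":   [10.0, -5.0, 7.5, 3.0],
})
issues = validate_batch(batch)
# {'missing_ids': 1, 'negative_amounts': 1, 'duplicate_ids': 1}
```

In a Spark pipeline the same checks would run as PySpark aggregations over the full dataset, with a non-zero issue count blocking the write or routing rows to quarantine.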
🚀 Hiring: C++ Content Writer Intern
📍 Remote | ⏳ 3 Months | 💼 Internship
We’re looking for someone who has strong proficiency in C++, DSA and maths (probability, statistics).
You should be comfortable with:
1. Modern C++ (RAII, memory management, move semantics)
2. Concurrency & low-latency concepts (threads, atomics, cache behavior)
3. OS fundamentals (threads vs processes, virtual memory)
4. Strong Maths (probability, stats)
5. Writing, Reading and explaining real code
What you’ll do:
1. Write deep technical content on C++ and coding.
2. Break down core computer science, HFT-style, low-latency concepts
3. Create articles, code deep dives, and explainers
What you get:
1. Good Pay as per industry standards
2. Exposure to real C++ applied in quant engineering
3. Mentorship from top engineering minds.
4. A strong public technical portfolio
5. Clear signal for Quant Developer / SDE/ Low-latency C++ roles.
Role Overview:
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Ability to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience in a high-level language such as Python, Java, or Go.
- Bachelor's (B.E., B.Tech.) or Master's degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment (Google Cloud and/or AWS) using technologies such as Kubernetes
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
About Us:
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Job Summary:
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities:
- ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
- Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
- Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
- Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
- API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
- Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
- Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
- Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
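The validation and quality checks described in the responsibilities above can be sketched as a per-record schema and anomaly check; the field names and rules below are illustrative, not the project's actual schema:

```python
# Expected schema for an ingested record: field name -> required type.
SCHEMA = {"country": str, "year": int, "population": int}

def validate_record(record):
    """Return a list of schema violations for one ingested record."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    # Simple anomaly check: population must be non-negative.
    if isinstance(record.get("population"), int) and record["population"] < 0:
        errors.append("anomaly: negative population")
    return errors

ok = validate_record({"country": "IN", "year": 2023, "population": 1428627663})
bad = validate_record({"country": "IN", "year": "2023", "population": -5})
# ok == []; bad flags a bad year type and a negative population.
```

In the pipeline itself, checks like these would run per batch inside Dataflow or Cloud Run, with failing records routed to a dead-letter store for troubleshooting rather than silently dropped.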
Qualifications and Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
- Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
- Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
- Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
- Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)
- Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
- Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
- Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
- Experience with data validation techniques and tools.
- Familiarity with CI/CD practices and the ability to work in an Agile framework.
- Strong problem-solving skills and keen attention to detail.
Preferred Qualifications:
- Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
- Familiarity with similar large-scale public dataset integration initiatives.
- Experience with multilingual data integration.
About the job:
Job Title: QA Lead
Location: Teynampet, Chennai
Job Type: Full-time
Experience Level: 8+ Years
Company: Gigadesk Technologies Pvt. Ltd. [Greatify]
Website: www.greatify.ai
Company Description:
At Greatify.ai, we lead the transformation of educational institutions with state-of-the-art AI-driven solutions. Serving 100+ institutions globally, our mission is to unlock their full potential by enhancing learning experiences and streamlining operations. Join us to empower the future of education with innovative technology.
Job Description:
We are looking for a QA Lead to own and drive the quality strategy across our product suite. This role combines hands-on automation expertise with team leadership, process ownership, and cross-functional collaboration.
As a QA Lead, you will define testing standards, guide the QA team, ensure high test coverage across web and mobile platforms, and act as the quality gatekeeper for all releases.
Key Responsibilities:
● Leadership & Ownership
- Lead and mentor the QA team, including automation and manual testers
- Define QA strategy, test plans, and quality benchmarks across products
- Own release quality and provide go/no-go decisions for deployments
- Collaborate closely with Engineering, Product, and DevOps teams
● Automation & Testing
- Oversee and contribute to automation using Playwright (Python) for web applications
- Guide mobile automation efforts using Appium (iOS & Android)
- Ensure comprehensive functional, regression, integration, and smoke test coverage
- Review automation code for scalability, maintainability, and best practices
● Process & Quality Improvement
- Establish and improve QA processes, documentation, and reporting
- Drive shift-left testing and early defect detection
- Ensure API testing coverage and support performance/load testing initiatives
- Track QA metrics, defect trends, and release health
● Stakeholder Collaboration
- Work with Product Managers to understand requirements and define acceptance criteria
- Communicate quality risks, timelines, and test results clearly to stakeholders
- Act as the single point of accountability for QA deliverables
Skills & Qualifications:
● Required Skills
- Strong experience in QA leadership or senior QA roles
- Proficiency in Python-based test automation
- Hands-on experience with Playwright for web automation
- Experience with Appium for mobile automation
- Strong understanding of REST API testing (Postman / automation scripts)
- Experience integrating tests into CI/CD pipelines (GitLab CI, Jenkins, etc.)
- Solid understanding of SDLC, STLC, and Agile methodologies
● Good to Have
- Exposure to performance/load testing tools (Locust, JMeter, k6)
- Experience in EdTech or large-scale transactional systems
- Knowledge of cloud-based environments and release workflows.