50+ Python Jobs in India


Only candidates currently based in Bihar or open to relocating to Bihar should apply.
Job Description:
This is an exciting opportunity for an experienced industry professional with strong analytical and technical skills to join and add value to a dedicated and friendly team. We are looking for a Data Analyst who is passionate about data-driven decision-making and insights. As a core member of the Analytics Team, the candidate will take ownership of data analysis projects, working independently with little supervision.
The ideal candidate is a highly resourceful and innovative professional with extensive experience in data analysis, statistical modeling, and data visualization. The candidate must have a strong command of data analysis tools like SAS/SPSS, Power BI/Tableau, or R, along with expertise in MS Excel and MS PowerPoint. The role requires optimizing data collection procedures, generating reports, and applying statistical techniques for hypothesis testing and data interpretation.
Key Responsibilities:
• Perform data analysis using tools like SAS, SPSS, Power BI, Tableau, or R.
• Optimize data collection procedures and generate reports on a weekly, monthly, and quarterly basis.
• Utilize statistical techniques for hypothesis testing to validate data and interpretations.
• Apply data mining techniques and OLAP methodologies for in-depth insights.
• Develop dashboards and data visualizations to present findings effectively.
• Collaborate with cross-functional teams to define, design, and execute data-driven strategies.
• Ensure the accuracy and integrity of data used for analysis and reporting.
• Utilize advanced Excel skills to manipulate and analyze large datasets.
• Prepare technical documentation and presentations for stakeholders.
Candidate Profile:
Required Qualifications:
• Qualification: MCA, Graduate/Post Graduate in Statistics, or BE/B.Tech in Computer Science & Engineering, Information Technology, or Electronics.
• A minimum of 2 years' experience in data analysis using SAS/SPSS, Power BI/Tableau, or R.
• Proficiency in MS Office with expertise in MS Excel & MS PowerPoint.
• Strong analytical skills with attention to detail.
• Experience in data mining and OLAP methodologies.
• Ability to generate insights and reports based on data trends.
• Excellent communication and presentation skills.
Desired Qualifications:
• Experience in predictive analytics and machine learning techniques.
• Knowledge of SQL and database management.
• Familiarity with Python for data analysis.
• Experience in automating reporting processes.
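Since the posting pairs statistical hypothesis testing with Python familiarity, here is a minimal sketch of the idea using only the standard library: a two-sample permutation test on made-up weekly conversion counts (all numbers and names are illustrative; production work would typically reach for SciPy or statsmodels).

```python
import random
import statistics

def permutation_test(a, b, n_resamples=10_000, seed=0):
    """Two-sample permutation test for a difference in means.

    Returns the two-sided p-value: the fraction of label shufflings whose
    mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / n_resamples

# Illustrative data: weekly conversions for two landing pages.
page_a = [12, 15, 14, 16, 13, 15, 14]
page_b = [10, 11, 12, 10, 11, 13, 11]
p = permutation_test(page_a, page_b)
```

A small p-value here would support rejecting the hypothesis that the two pages convert equally well.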



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of the time), performing peer code reviews, handling pull requests, and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and the SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products.
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
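The TDD practice named above (write the failing test first, then the minimal code that makes it pass) can be sketched in plain Python. `slugify` is a hypothetical example function, and a real project would run the test under pytest or unittest:

```python
import re

# Step 1 (TDD): specify the behaviour in a test before implementing it.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
    assert slugify("") == ""

# Step 2: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    """Lowercase, strip punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # all assertions pass
```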
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached. Understanding OAuth 2.0, JWT, and SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines, is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog will be considered an advantage.
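Of the design patterns listed, Strategy is perhaps the easiest to illustrate in Python, since behaviour can be injected as a plain callable. A hedged sketch with made-up discount rules:

```python
from dataclasses import dataclass
from typing import Callable

# Strategy pattern: the discount behaviour is injected, not hard-coded.
def flat_discount(price: float) -> float:
    return max(price - 10.0, 0.0)

def percent_discount(price: float) -> float:
    return price * 0.9

@dataclass
class Checkout:
    discount: Callable[[float], float]

    def total(self, price: float) -> float:
        return round(self.discount(price), 2)
```

`Checkout(flat_discount).total(100.0)` and `Checkout(percent_discount).total(100.0)` both evaluate to 90.0; new pricing strategies can be added without touching `Checkout`.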
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.); equivalent work experience in a technology-related area may substitute.
- 2+ years of relevant experience required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel.
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.

A cloud tech firm offering secure, scalable hybrid storage s

Role: Python Developer (Immediate Joiner)
Location: Gurugram, India (Onsite)
Experience: 4+ years
Company: Mizzle Cloud Pvt Ltd
Working Days: 6 days (5 days WFO, Saturday WFH)
Job Summary
We are seeking a skilled Python Django Developer with expertise in building robust, scalable, and efficient web applications, and at least 4 years of core work experience. The ideal candidate will have hands-on experience with RabbitMQ, Redis, Celery, and PostgreSQL to ensure seamless background task management, caching, and database performance.
Key Responsibilities
- Develop, maintain, and enhance Django-based web applications and APIs.
- Design and implement message broker solutions using RabbitMQ to manage asynchronous communication.
- Integrate Redis for caching and session storage to optimize performance.
- Implement and manage task queues using Celery for background processes.
- Work with PostgreSQL for database design, optimization, and query tuning.
- Collaborate with front-end developers, DevOps engineers, and stakeholders to deliver high-quality software solutions.
- Write clean, modular, and well-documented code following best practices and standards.
- Debug, troubleshoot, and resolve issues across the application stack.
- Participate in code reviews, system design discussions, and team meetings.
- Ensure scalability, reliability, and security of applications.
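Celery backed by a broker such as RabbitMQ implements the classic producer/worker pattern. As a rough standard-library illustration of the same idea (an in-process queue stands in for the broker, and `send_welcome_email` is a made-up task):

```python
import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results = []

def worker() -> None:
    # Workers pull (function, args) pairs off the queue and run them.
    while True:
        func, args = tasks.get()
        if func is None:            # sentinel: stop the worker
            tasks.task_done()
            break
        results.append(func(*args))
        tasks.task_done()

def send_welcome_email(user_id: int) -> str:
    # Stand-in for a slow, I/O-bound job (a real task would hit SMTP).
    return f"welcome email queued for user {user_id}"

threading.Thread(target=worker, daemon=True).start()
tasks.put((send_welcome_email, (42,)))   # the "producer" side
tasks.put((None, ()))                    # shut the worker down
tasks.join()                             # block until the queue drains
```

Celery adds what this sketch lacks: persistence via the broker, retries, scheduling, and workers distributed across machines.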
Technical Skills:
- Must have minimum 4+ years of relevant work experience.
- Strong proficiency in Python and Django framework.
- Experience with message brokers, particularly RabbitMQ.
- Familiarity with Redis for caching and session management.
- Hands-on experience with Celery for distributed task queues.
- Proficiency in PostgreSQL, including database design and optimization.
- Knowledge of RESTful API design and development.
- Understanding of Docker and containerized applications.
Preferred Skills:
- Experience with CI/CD pipelines.
- Familiarity with cloud platforms (AWS, GCP).
- Knowledge of Django ORM and its optimizations.
- Basic understanding of front-end technologies (HTML, CSS, JavaScript).
Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Ability to work in an agile environment and adapt to changing requirements.
Educational Requirements
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.

Job Title: Full Stack Developer (MERN + Python)
Location: Bangalore
Job Type: Full-time
Experience: 4–8 years
About Miror
Miror is a pioneering FemTech company transforming how midlife women navigate perimenopause and menopause. We offer medically-backed solutions, expert consultations, community engagement, and wellness products to empower women in their health journey. Join us to make a meaningful difference through technology.
Role Overview
· We are seeking a passionate and experienced Full Stack Developer skilled in MERN stack and Python (Django/Flask) to build and scale high-impact features across our web and mobile platforms. You will collaborate with cross-functional teams to deliver seamless user experiences and robust backend systems.
Key Responsibilities
· Design, develop, and maintain scalable web applications using MySQL/Postgres, MongoDB, Express.js, React.js, and Node.js
· Build and manage RESTful APIs and microservices using Python (Django/Flask/FastAPI)
· Integrate with third-party platforms like OpenAI, WhatsApp APIs (Whapi), Interakt, and Zoho
· Optimize performance across the frontend and backend
· Collaborate with product managers, designers, and other developers to deliver high-quality features
· Ensure security, scalability, and maintainability of code
· Write clean, reusable, and well-documented code
· Contribute to DevOps, CI/CD, and server deployment workflows (AWS/Lightsail)
· Participate in code reviews and mentor junior developers if needed
Required Skills
· Strong experience with MERN Stack: MongoDB, Express.js, React.js, Node.js
· Proficiency in Python and web frameworks like Django, Flask, or FastAPI
· Experience working with REST APIs, JWT/Auth, and WebSockets
· Good understanding of frontend design systems, state management (Redux/Context), and responsive UI
· Familiarity with database design and queries (MongoDB, PostgreSQL/MySQL)
· Experience with Git, Docker, and deployment pipelines
· Comfortable working in Linux-based environments (e.g., Ubuntu on AWS)
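The JWT/Auth requirement above reduces to verifying an HMAC-signed payload. A minimal HS256-flavoured sign/verify sketch (illustrative only; production code should use a maintained library such as PyJWT, and the secret would come from configuration, not source):

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"demo-secret"  # illustrative; never hard-code real keys

def _b64(data: bytes) -> str:
    # URL-safe base64 without padding, as JWTs use.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    body = _b64(json.dumps(payload, separators=(",", ":")).encode())
    sig = _b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify(token: str) -> Optional[dict]:
    body, sig = token.rsplit(".", 1)
    expected = _b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was tampered with
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"sub": "user-1"})
```

Real JWTs also carry a header segment and registered claims such as `exp`, which libraries validate for you.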
Bonus Skills
· Experience with AI integrations (e.g., OpenAI, LangChain)
· Familiarity with WooCommerce, WordPress APIs
· Experience in chatbot development or WhatsApp API integration
Who You Are
· You are a problem-solver with a product-first mindset
· You care about user experience and performance
· You enjoy working in a fast-paced, collaborative environment
· You have a growth mindset and are open to learning new technologies
Why Join Us?
· Work at the intersection of healthcare, community, and technology
· Directly impact the lives of women across India and beyond
· Flexible work environment and collaborative team
· Opportunity to grow with a purpose-driven startup
If you are interested, please apply here and drop me a message on CutShort.

Job Title : Python Developer (Immediate Joiner)
Experience Required : 3+ Years
Employment Type : Full-time
Location : Gurugram, India (Onsite)
Working Days : 6 Days (5 Days WFO + 1 Day WFH)
Job Summary :
We are seeking a talented and experienced Python Developer with a strong background in Django and a proven ability to build scalable, secure, and high-performance web applications. The ideal candidate will have hands-on experience with RabbitMQ, Redis, Celery, and PostgreSQL, and will play a key role in developing and maintaining robust backend systems. This is an onsite opportunity for immediate joiners.
Mandatory Skills : Python, Django, RabbitMQ, Redis, Celery, PostgreSQL, RESTful APIs, Docker.
Key Responsibilities :
- Design, develop, and maintain Django-based web applications and APIs.
- Implement asynchronous task handling using RabbitMQ and Celery.
- Optimize application performance using Redis for caching and session storage.
- Manage database operations, including schema design and query optimization, using PostgreSQL.
- Collaborate with front-end developers, DevOps teams, and stakeholders to deliver full-featured solutions.
- Write clean, modular, and well-documented code aligned with industry best practices.
- Troubleshoot, debug, and resolve issues across the application stack.
- Participate in architecture discussions, code reviews, and sprint planning.
- Ensure the scalability, performance, and security of backend services.
Required Technical Skills :
- Minimum 3 Years of experience in Python development.
- Strong hands-on experience with the Django framework.
- Proficiency in RabbitMQ for message brokering.
- Practical experience with Redis for caching and session management.
- Experience using Celery for background job/task queue management.
- Solid knowledge of PostgreSQL (database design, indexing, and optimization).
- Understanding of RESTful API development and integration.
- Familiarity with Docker and containerization.
Preferred Skills :
- Exposure to CI/CD tools and pipelines.
- Experience working with cloud platforms such as AWS or GCP.
- Knowledge of Django ORM optimization techniques.
- Basic familiarity with front-end technologies like HTML, CSS, and JavaScript.
Soft Skills :
- Strong analytical and problem-solving capabilities.
- Effective communication and interpersonal skills.
- Ability to thrive in an agile, fast-paced development environment.


About Us
DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.
Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.
Role Overview
We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.
What You’ll Do
• Design, develop, and deploy backend services, APIs, and microservices powering our platform.
• Build responsive, user-friendly frontend applications tailored for factory and brand users.
• Integrate AI/ML models and agentic workflows into seamless production environments.
• Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
• Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
• Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
• Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
• Write clear documentation and maintain clean, maintainable code.
Required Skills
• 3–5 years of professional Full-Stack development experience
• Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
• Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
• Solid knowledge and experience with relational databases (PostgreSQL, MySQL, or serverless Postgres offerings like Neon) and NoSQL databases (MongoDB, Redis)
• Strong API design skills (REST mandatory; GraphQL a plus)
• Containerization expertise with Docker
• Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
• Cloud deployment and infrastructure experience on AWS, GCP or Azure
• Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP or Azure) with scalable infrastructure and monitoring.
• Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
• Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
• Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), or Helicone
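A small concrete example related to the observability tooling above: stacks like ELK ingest structured (JSON) logs most cleanly, and emitting them from Python takes only a custom formatter (the field choices here are illustrative):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one machine-parseable JSON object."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Build a record directly to show the output shape; in an application
# you would attach the formatter to a StreamHandler instead.
record = logging.LogRecord(
    "orders", logging.INFO, __file__, 0, "order %s parsed", ("PO-123",), None
)
line = JsonFormatter().format(record)
```

Shippers such as Logstash or Fluent Bit can then index these lines without fragile regex parsing.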
Nice-to-have
• Experience with TypeScript for full-stack AI SaaS development
• Use of modern UI frameworks and tooling like Tailwind CSS
• Familiarity with modern AI-first SaaS concepts, such as vector databases for fast ML data retrieval, prompt engineering for LLM integration, and integration with OpenRouter or similar LLM orchestration frameworks
• Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management.
• Background in building data pipelines, real-time analytics, and predictive modeling.
• Knowledge of AI-driven security tools and best practices for SaaS compliance.
• Proficiency in cloud automation, cost optimization, and DevOps for AI workflows.
• Ability to design and implement hyper-personalized, adaptive user experiences.
What We Value
• Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
• Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
• Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
• Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
• Customer Focus: You build with empathy for end users and real-world usability.
• Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
• Quality Mindset: You write clean, maintainable, and well-tested code.
Why Join DAITA?
• Be part of a mission-driven startup transforming a $1+ Trillion global industry.
• Work closely with founders and AI experts on cutting-edge technology.
• Directly impact real-world supply chains and sustainability.
• Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.

🔍 We’re Hiring: Solution Engineer (Observability)
📍 Location: Remote/Onsite
🏢 Company: Product-Based Client
🕒 Experience: 5+ Years
💼 Employment Type: Full-Time
About the Role:
We are hiring for a Solution Engineer – Observability role with one of our fast-scaling product-based clients. This position is ideal for someone with strong technical acumen and exceptional communication skills who enjoys working at the intersection of engineering and customer success.
As a Solution Engineer, you will lead technical conversations with a range of personas—from DevOps teams to C-suite executives—while delivering innovative observability solutions that showcase real value.
Key Responsibilities:
- 🤝 Collaborate closely with Account Executives on technical sales strategy and execution for complex deals.
- 🎤 Deliver engaging product demos and technical presentations tailored to various stakeholder levels.
- 🛠️ Manage technical sales activities including discovery, sizing, architecture planning, and Proof of Concepts (POCs).
- 🔧 Design and implement custom solutions to bridge product gaps and extend core functionality.
- 💡 Provide expert guidance on observability best practices, tools, and frameworks that align with customer needs.
- 📈 Stay current with industry trends, tools, and the evolving Observability ecosystem.
Requirements:
- ✅ 5+ years in a customer-facing technical role such as Pre-Sales Engineer, Solutions Architect, or Technical Consultant.
- ✅ Strong communication, interpersonal, and presentation skills—able to convey complex topics clearly and persuasively.
- ✅ Proven experience with technical integration, conducting POCs, and building tailored observability solutions.
- ✅ Proficiency in one or more programming languages: Java, Go, or Python.
- ✅ Solid understanding of Observability, Monitoring, Log Management, and SIEM tools and methodologies.
- ✅ Familiarity with observability-related platforms such as APM, RUM, and Log Analytics is desirable.
- ✅ Strong hands-on expertise in:
- Cloud platforms: AWS, Azure, GCP
- Containerization & Orchestration: Docker, Kubernetes
- Monitoring stacks: Prometheus, OpenTelemetry
Bonus Points For:
- 🧠 Previous experience in technical sales within APM, Logging, Monitoring, or SIEM platforms
Why Join?
- Work with a cutting-edge product solving complex observability challenges
- Be a key voice in the pre-sales and solutioning cycle
- Partner with cross-functional teams and engage directly with top-tier clients
- Enjoy a collaborative, high-growth environment focused on innovation and performance
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact, as businesses drive 25-80% of their revenues using the AiSensy platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
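As a concrete flavour of the internal health-monitoring tools mentioned, a standard-library check that flags a mount point running low on free space (the path and threshold are illustrative):

```python
import shutil

def disk_alert(path: str = "/", threshold: float = 0.10) -> bool:
    """Return True when free space on `path` falls below `threshold`.

    E.g. threshold=0.10 alerts once less than 10% of the disk is free.
    """
    usage = shutil.disk_usage(path)
    return (usage.free / usage.total) < threshold
```

Paired with cron or a monitoring agent, the boolean result can drive a pager or Slack alert.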
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
Employment type: Contract basis
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
- Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses.
- Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
- Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
- Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
- Maintain documentation and implement best practices for data architecture, governance, and security.
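At its heart, the transform stage of such a pipeline is a grouped aggregation. A toy standard-library version of what a PySpark `groupBy`/`sum` performs at scale (field names and figures are made up):

```python
from collections import defaultdict

# Illustrative raw rows, as they might arrive from an ETL extract step.
raw_rows = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 80.0},
    {"region": "south", "amount": 30.0},
]

def revenue_by_region(rows):
    """Roll raw order rows up into per-region revenue totals."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

summary = revenue_by_region(raw_rows)
```

PySpark expresses the same rollup declaratively and partitions the work across a cluster, which is what makes it suitable for data that no longer fits one machine.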
⚙️ Required Skills
- Programming: Proficient in PySpark, Python, and SQL.
- Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
- Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
- Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
- Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
- CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.
🧰 Preferred Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- Certifications in Azure/AWS are highly desirable.
- Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.


· 3 to 5 years of full-stack development experience implementing applications using Python and React.js
· In-depth knowledge of Python: data analytics, NLP, and Flask APIs
· Experience working with SQL databases (MySQL/Postgres, min. 2 years)
· Ability to use Gen AI tools for productivity
· Gen AI for natural language processing use cases, using GPT-4, Gemini Flash, or other cutting-edge tools
· Hands-on exposure to messaging systems like RabbitMQ
· Experience with end-to-end and unit testing frameworks (Jest/Cypress)
· Experience working with NoSQL databases like MongoDB
· Understanding of the differences between delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform
· Architectural knowledge of the Azure cloud
· Proficient understanding of code versioning tools, such as Git and SVN
· Knowledge of CI/CD (Jenkins/Hudson)
· Self-organizing, with experience working in an Agile/Scrum culture
Good to have:
· Experience working with Angular, Elasticsearch, and Redis
· Understanding of accessibility and security compliance
· Understanding of UI/UX


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have 2-5 years of hands-on experience in the data domain and a flair for teaching and mentoring students or working professionals.


We are looking for a dynamic and skilled Data Science and Data Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in data analytics, data science, and business intelligence tools.


Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus

Supply Wisdom: Full Stack Developer
Location: Hybrid Position based in Bangalore
Reporting to: Tech Lead Manager
Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objective: We are seeking a skilled Full Stack Developer to design and build scalable software solutions. You will be part of a cross-functional team responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be proficient in both front-end and back-end technologies, development frameworks, and third-party libraries. We’re looking for a team player with strong problem-solving abilities, attention to visual design, and a focus on utility. Familiarity with Agile methodologies, including Scrum and Kanban, is essential.
Responsibilities
- Collaborate with the development team and product manager to ideate software solutions.
- Write effective and secure REST APIs.
- Integrate third-party libraries for product enhancement.
- Design and implement client-side and server-side architecture.
- Work with data scientists and analysts to enhance software using RPA and AI/ML techniques.
- Develop and manage well-functioning databases and applications.
- Ensure software responsiveness and efficiency through testing.
- Troubleshoot, debug, and upgrade software as needed.
- Implement security and data protection settings.
- Create features and applications with mobile-responsive design.
- Write clear, maintainable technical documentation.
- Build front-end applications with appealing, responsive visual design.
Requirements
- Degree in Computer Science (or related field) with 4+ years of hands-on experience in Python development, with strong expertise in the Django framework and Django REST Framework (DRF).
- Proven experience in designing and building RESTful APIs, with a solid understanding of API versioning, authentication (JWT/OAuth2), and best practices.
- Experience with relational databases such as PostgreSQL or MySQL; familiarity with query optimization and database migrations.
- Basic front-end development skills using HTML, CSS, and JavaScript; experience with any JavaScript framework (like React or Next.js) is a plus.
- Good understanding of Object-Oriented Programming (OOP) and design patterns in Python.
- Familiarity with Git and collaborative development workflows (e.g., GitHub, GitLab).
- Knowledge of Docker and CI/CD pipelines.
- Hands-on experience with AWS services, Nginx web server, RabbitMQ (or similar message brokers), event handling, and synchronization.
- Familiarity with Postgres, SSO implementation (desirable), and integration of third-party libraries.
- Experience with unit testing, debugging, and code reviews.
- Experience using tools like Jira and Confluence.
- Ability to work in Agile/Scrum teams with good communication and problem-solving skills.
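For context on the JWT authentication called out above: a token is just two base64url-encoded JSON segments plus an HMAC signature over them. A minimal stdlib-only sketch of the idea (illustrative only; production Django/DRF code should use a vetted library such as PyJWT or djangorestframework-simplejwt):

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    # constant-time comparison guards against timing attacks
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = encode_jwt({"sub": "user-1", "role": "admin"}, "secret")
print(verify_jwt(token, "secret")["sub"])  # user-1
```

Real tokens also carry expiry (`exp`) and issuer claims, which a library validates for you.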
Our Commitment to You:
We offer a competitive salary and generous benefits. In addition, we offer a vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you.
You can learn more at supplywisdom.com and on LinkedIn.
- 2+ years in a DevOps/SRE/System Engineering role
- Hands-on experience with Linux-based systems
- Experience with cloud platforms like AWS, GCP, or Azure
- Proficient in scripting (Bash, Python, or Go)
- Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.)
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with Git and CI/CD tools (Jenkins, GitLab CI, etc.)
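As a rough illustration of the scripting side of such a role, the sketch below parses Prometheus-style exposition text and flags threshold breaches; the metric names and thresholds are invented for the example:

```python
def parse_metrics(text: str) -> dict:
    """Parse simple Prometheus-style 'name value' lines into a dict."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blanks
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

def breached(metrics: dict, thresholds: dict) -> list:
    """Return names of metrics that exceed their alert threshold."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]

sample = """
# HELP node_load1 1m load average.
node_load1 3.5
node_filesystem_free_percent 12.0
"""
print(breached(parse_metrics(sample), {"node_load1": 2.0}))  # ['node_load1']
```

Real exposition lines can carry `{label="..."}` selectors, which this toy parser ignores.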


Job Title: Senior SDET - Test
Location- Bangalore / Hybrid
Desired skills- Java / Python / JavaScript / Go, Docker, Kubernetes, Playwright, Jenkins, Postman, JMeter, REST API
Exp range- 10-12 yrs
Your responsibilities:
● Execute manual and automated tests throughout all stages of the product lifecycle.
● Work collaboratively within an Agile team and help define or improve processes as needed.
● Build and maintain automation tools and utilities to improve testing efficiency and coverage.
● Integrate automated tests into the CI/CD pipeline; triage test failures and support in resolving production issues.
● Execute performance tests (load, stress, etc.).
● Continuously enhance your skills and actively support the growth of your teammates.
Key qualifications:
● 10-12 years of experience as an SDET.
● Provide subject-matter expertise, best practices, and strategic direction for quality assurance technology solutions in the commercial software/services space.
● Collaborate, architect, and execute on deploying industry-standard quality testing, metrics-gathering, and reporting capabilities for all commercially facing products.
● Plan, develop, and execute various quality tests that will shape understanding of, and adjustments to, the application portfolio.
● Ability to perform manual tests, create test documents, and build a regression suite.
● Proven experience designing and implementing UI and API test automation frameworks from scratch.
● Hands-on experience with tools and frameworks such as Playwright, Jenkins, GitHub, Postman, JMeter, etc.
● Proficiency in at least one programming or scripting language, such as Python, Java, Go, or JavaScript.
● Good grasp of OOP and SOLID principles to design reusable, modular, and maintainable automated test code.
● Strong understanding of REST APIs, JSON, OAuth, and related web technologies.
● Ability to work effectively both as an individual contributor and within a collaborative team environment.
● Excellent communication skills, with the ability to collaborate effectively across cross-functional and distributed teams.
● Project management experience is a plus.

Company Description
I Vision Infotech is a full-fledged IT company delivering high-quality, cost-effective, and reliable web and e-commerce solutions. Founded in 2011 and based in India, the company serves both domestic and international clients from countries including the USA, Malaysia, Australia, Canada, and the United Kingdom. We specialize in web design, development, e-commerce, and mobile app services across various platforms such as Android, iOS, Windows, and BlackBerry.
Role Description
This is a full-time role for an AI/ML Developer, based on-site in Ahmedabad. The AI/ML Developer will work on designing and implementing machine learning models and algorithms. Day-to-day tasks include developing software applications, conducting research on pattern recognition and neural networks, and performing tasks related to Natural Language Processing (NLP). The developer will collaborate closely with other team members to ensure the successful deployment and integration of AI/ML solutions.
Key Responsibilities:
- Develop and deploy machine learning models to solve real-world problems.
- Work on data preprocessing, feature engineering, and model optimization.
- Collaborate with cross-functional teams (UI/UX, Backend) to integrate AI/ML solutions.
- Conduct experiments and performance tuning of ML models.
- Stay updated with the latest research and development in AI/ML.
Required Skills & Experience:
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, Scikit-learn).
- Strong knowledge of Supervised and Unsupervised Learning algorithms.
- Experience with data visualization and tools like Pandas, NumPy, Matplotlib.
- Experience in NLP, Computer Vision, or Recommendation Systems is a plus.
- Familiarity with Jupyter, Google Colab, or similar platforms.
- Understanding of model deployment and REST API integration is an advantage.
Qualifications:
- Bachelor’s or Master’s in Computer Science, IT, Data Science, or related field.
- Minimum 2 years of relevant AI/ML development experience.
- Strong problem-solving and analytical thinking.
- Good communication and team collaboration skills.
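To give a flavour of the supervised-learning requirement without assuming scikit-learn is installed, here is a toy nearest-centroid classifier in plain Python (the data is invented for the example):

```python
from collections import defaultdict
import math

def fit_centroids(X, y):
    """Training step: average the feature vectors of each class."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in zip(X, y):
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Inference step: assign the class whose centroid is nearest."""
    return min(centroids, key=lambda c: math.dist(features, centroids[c]))

X = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.5, 8.5]]
y = ["small", "small", "large", "large"]
model = fit_centroids(X, y)
print(predict(model, [1.1, 0.9]))  # small
```

scikit-learn's `NearestCentroid` implements the same idea with a fit/predict API.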
Perks:
- Fixed working hours
- Friendly team & work culture
- Exposure to real projects
- Growth in AI/ML domain
Note: This is an urgent requirement. Only serious, immediate joiners should apply, and only local candidates will be considered for this role.

Position Overview
We are a UAE-based company looking for a talented AI Applications Engineer based in India on a work-from-home basis to join our team and help build cutting-edge AI-powered applications. You will be responsible for developing, implementing, and optimizing AI solutions using large language models and multi-agent systems to solve real-world business problems.
Key Responsibilities
- Design and develop AI applications using large language models (LLMs)
- Implement and manage AI agent systems using CrewAI framework
- Fine-tune language models for specific use cases and domain requirements
- Build robust APIs and backend services using Python and FastAPI
- Collaborate with cross-functional teams to integrate AI solutions into existing systems
- Optimize model performance and implement best practices for AI application deployment
- Research and evaluate new AI technologies and methodologies
- Monitor and maintain AI systems in production environments
- Document AI workflows, model architectures, and implementation details
Required Skills & Experience
- Strong proficiency in Python programming
- Hands-on experience with large language models (LLMs) and their practical applications
- Experience with model fine-tuning techniques and frameworks
- Knowledge of AI agent systems, particularly CrewAI framework
- Understanding of machine learning workflows and model lifecycle management
- Experience with API development and backend services
- Strong problem-solving skills and ability to work with complex AI systems
- Familiarity with AI/ML libraries and frameworks (transformers, langchain, etc.)
Nice to Have
- Experience with FastAPI for building high-performance APIs
- Docker containerization knowledge for AI application deployment
- Understanding of prompt engineering and optimization techniques
- Experience with vector databases and semantic search
- Knowledge of MLOps practices and tools
- Familiarity with cloud platforms (AWS, GCP, Azure) for AI workloads
- Experience with other AI agent frameworks and orchestration tools
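The agent-system requirement can be illustrated with a generic tool-calling loop. This is not CrewAI's actual API, just a framework-agnostic sketch in which the model is any callable that either requests a tool or returns a final answer (here stubbed for determinism):

```python
def run_agent(model, tools: dict, task: str, max_steps: int = 5):
    """Generic agent loop: the model inspects the history and either
    calls a named tool or returns a final answer."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)
        if action["type"] == "final":
            return action["content"]
        result = tools[action["tool"]](**action["args"])  # dispatch tool call
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent did not finish")

# Stub model: asks for an addition, then answers with the tool result.
def stub_model(history):
    if history[-1]["role"] == "user":
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "content": f"The sum is {history[-1]['content']}"}

print(run_agent(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?"))
```

Frameworks like CrewAI layer roles, memory, and multi-agent hand-offs on top of this same loop.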
What We Offer
- Opportunity to work on cutting-edge AI technologies
- Collaborative environment with experienced AI practitioners
- Access to latest AI tools and computational resources
- Competitive salary and comprehensive benefits
- Professional development and conference attendance opportunities
- Flexible work arrangements
How to Apply
Please submit your resume along with examples of AI projects you've worked on, particularly those involving LLMs, fine-tuning, or AI agents.


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This also includes troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
Requirements
Education- B. Tech (Comp. Sc, IT) or equivalent
Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Dockers and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.
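A small illustration of the SQL/data-migration side of this stack, using stdlib `sqlite3` as a stand-in database; the table and column names are invented for the example:

```python
import sqlite3

# In-memory stand-ins for the source and target tables of a migration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE dst (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO dst VALUES (1, 10.0), (2, 20.0);
""")

def migration_gaps(conn) -> list:
    """Rows present in the source but missing from the target,
    a typical post-migration reconciliation check."""
    rows = conn.execute("""
        SELECT s.id FROM src s
        LEFT JOIN dst d ON d.id = s.id
        WHERE d.id IS NULL
        ORDER BY s.id
    """).fetchall()
    return [r[0] for r in rows]

print(migration_gaps(conn))  # [3]
```

The same anti-join runs unchanged on PostgreSQL or via the Django ORM's `exclude()`.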


Job Title: Data Science Intern
Location: 6th Sector HSR Layout, Bangalore - Work from Office 5.5 Days
Duration: 3 Months | Stipend: Up to ₹12,000 per month
Post-Internship Offer (PPO): Available based on performance
🧑💻 About the Role
We are looking for a passionate and proactive Data Science Assistant Intern who is equally excited about mentoring learners and gaining hands-on experience with real-world data operations.
This is a 50% technical + 50% mentorship role that blends classroom support with practical data work. Ideal for those looking to build a career in EdTech and Applied Data Science.
🚀 What You'll Do
Technical Responsibilities (50%)
- Create and manage dashboards using Python or BI tools like Power BI/Tableau
- Write and optimize SQL queries to extract and analyze backend data
- Support in data gathering, cleaning, and basic analysis
- Contribute to building data pipelines to assist internal decision-making and analytics
🚀Mentorship & Support (50%)
- Assist instructors during live Data Science sessions
- Solve doubts related to Python, Machine Learning, and Statistics
- Create and review quizzes, assignments, and other content
- Provide one-on-one academic support and mentoring
- Foster a positive and interactive learning environment
✅ Requirements
- Bachelor’s degree in Data Science, Computer Science, Statistics, or a related field
- Strong knowledge of:
- Python (Data Structures, Functions, OOP, Debugging)
- Pandas, NumPy, Matplotlib
- Machine Learning algorithms (scikit-learn)
- SQL and basic data wrangling
- APIs, Web Scraping, and Time-Series basics
- Advanced Excel: Lookup & reference (VLOOKUP, INDEX+MATCH, XLOOKUP, SUMIF), Logical functions (IF, AND, OR), Statistical & Aggregate Functions: (COUNTIFS, STDEV, PERCENTILE), Text cleanup (TRIM, SUBSTITUTE), Time functions (DATEDIF, NETWORKDAYS), Pivot Tables, Power Query, Conditional Formatting, Data Validation, What-If Analysis, and dynamic dashboards using charts & slicers.
- Excellent communication and interpersonal skills
- Prior mentoring, teaching, or tutoring experience is a big plus
- Passion for helping others learn and grow
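Since the role bridges Excel and Python, a few of the Excel functions listed above map directly onto plain-Python idioms (sample data invented for the example):

```python
rows = [
    {"region": "North", "product": "A", "sales": 120},
    {"region": "North", "product": "B", "sales": 80},
    {"region": "South", "product": "A", "sales": 200},
]

# COUNTIFS(region="North") equivalent: count rows matching a condition
countifs = sum(1 for r in rows if r["region"] == "North")

# SUMIF(product="A", sales) equivalent: sum a column over matching rows
sumif = sum(r["sales"] for r in rows if r["product"] == "A")

# VLOOKUP equivalent: build a keyed index once, then look up
by_product = {r["product"]: r for r in rows}  # last match wins
vlookup = by_product["B"]["sales"]

print(countifs, sumif, vlookup)  # 2 320 80
```

In pandas the same operations become boolean masks, `groupby`, and `merge`.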



About the role
We are looking for a Senior Automation Engineer to architect and implement automated testing frameworks that validate the runtime behavior of code generated by our AI platform. This role is critical in ensuring that our platform's output performs correctly in production environments. You'll work at the intersection of AI and quality assurance, creating innovative testing solutions that can validate AI-generated applications during actual execution.
What Success Looks Like
- You architect and implement automated testing frameworks that validate the runtime behavior and performance of AI-generated applications
- You develop intelligent test suites that can automatically assess application functionality in production environments
- You create testing frameworks that can validate runtime behavior across multiple languages and frameworks
- You establish quality metrics and testing protocols that measure real-world performance of generated applications
- You build systems to automatically detect and flag runtime issues in deployed applications
- You collaborate with our AI team to improve the platform based on runtime performance data
- You implement automated integration and end-to-end testing that ensures generated applications work as intended in production
- You develop metrics and monitoring systems to track runtime performance across different customer deployments
Areas of Ownership
Our hiring process is designed for you to demonstrate deep expertise in automation testing with a focus on AI-powered systems.
Required Technical Experience:
- 4+ years of experience with Selenium and automated testing frameworks
- Strong expertise in Python (our primary automation language)
- Experience with CI/CD tools (Jenkins, CircleCI, or similar)
- Proficiency in version control systems (Git)
- Experience testing distributed systems
- Understanding of modern software development practices
- Experience working with cloud platforms (GCP preferred)
Ways to stand out
- Experience with runtime monitoring and testing of distributed systems
- Knowledge of performance testing and APM (Application Performance Monitoring)
- Experience with end-to-end testing of complex applications
- Background in developing testing systems for enterprise-grade applications
- Understanding of distributed tracing and monitoring
- Experience with chaos engineering and resilience testing
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect and develop robust applications.
You will work closely with business and product teams to build applications that provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.
Develop systems that are scalable, secure, highly resilient, and low-latency by applying software development principles and clean-code practices.
Should be open to working in a start-up environment and have the confidence to deal with complex issues while keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of Database concepts: SOQL, SOSL and SQL
● Hands-on experience in API integration using SOAP, REST API, and GraphQL
● Experience with ETL tools, data migration, and data governance
● Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, or Bitbucket
● Should have worked with at least one programming language (Java, Python, or C++) and have a good understanding of data structures
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to over 40,000 businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify, document, and collaborate with developers to resolve defects and issues.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

We’re seeking a passionate and skilled Technical Trainer to deliver engaging, hands-on training in HTML, CSS, and Python-based front-end development. You’ll mentor learners, design curriculum, and guide them through real-world projects to build strong foundational and practical skills.


Brandzzy is a forward-thinking technology company dedicated to building innovative and scalable Software-as-a-Service (SaaS) solutions. We are a passionate team focused on creating products that solve real-world problems and deliver exceptional user experiences. Join us as we scale our platform to new heights.
Role Summary:
We are seeking an experienced and visionary Senior Full Stack Developer to lead the technical design and development of our core SaaS platform. In this role, you will be responsible for making critical architectural decisions, mentoring other engineers, and ensuring our application is built for massive scale and high performance. You are not just a coder; you are a technical leader who will shape the future of our product and drive our engineering culture forward.
Key Responsibilities:
- Lead the architecture and design of highly scalable, secure, and resilient full-stack web applications.
- Take ownership of major features and system components, from technical strategy through to deployment and long-term maintenance.
- Mentor and guide junior and mid-level developers, conducting code reviews and fostering a culture of technical excellence.
- Drive technical strategy and make key decisions on technology stacks, frameworks, and infrastructure.
- Engineer and implement solutions specifically for SaaS scalability, including microservices, containerization (Docker, Kubernetes), and efficient cloud resource management.
- Establish and champion best practices for code quality, automated testing, and robust CI/CD pipelines.
- Collaborate with product leadership to translate business requirements into concrete technical roadmaps.
Skills & Qualifications:
- 5+ years of professional experience in full-stack development, with a proven track record of building and launching complex SaaS products.
- Deep expertise in both front-end (React, Angular, Vue.js) and back-end (Node.js, Python, Java, Go) technologies.
- Expert-level knowledge of designing and scaling applications on a major cloud platform (AWS, Azure, or GCP).
- Proven, hands-on experience architecting for scale, including deep knowledge of microservices architecture, message queues, and database scaling strategies (e.g., sharding, replication).
- In-depth understanding of database technologies (both SQL and NoSQL) and how to choose the right one for the job.
- Expertise in implementing and managing CI/CD pipelines and advocating for DevOps principles.
- Strong leadership and communication skills, with the ability to articulate complex technical ideas to both technical and non-technical stakeholders.
- A passion for solving complex problems and a proactive, self-starter attitude.

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.
Responsibilities:
Ensure the quality of product feature development.
Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.
Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python.
Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.
Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.
Collaboration: Participate in project planning and execution with the team for efficient project delivery.
Requirements:
A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.
2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using defect tracking systems, code repositories, and IDEs.
A good grasp of programming languages like Python/Java/JavaScript. Must be able to understand and write code.
Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).
Good team player with a proactive approach to continuous learning.
Sound understanding of the Agile software development methodology.
Experience in a SaaS-based product company or a fast-paced startup environment is a plus.


Job Title : Senior Python Developer
Experience : 7+ Years
Location : Remote or Hybrid (Gurgaon / Coimbatore / Hyderabad)
Job Summary :
We are looking for a highly skilled and motivated Senior Python Developer to join our dynamic engineering team.
The ideal candidate will have a strong foundation in web application development using Python and related frameworks. A passion for writing clean, scalable code and solving complex technical challenges is essential for success in this role.
Mandatory Skills : Python (3.x), FastAPI or Flask, PostgreSQL or Oracle, ORM, API Microservices, Agile Methodologies, Clean Code Practices.
Required Skills and Qualifications :
- 7+ Years of hands-on experience in Python (3.x) development.
- Strong proficiency in FastAPI or Flask frameworks.
- Experience with relational databases like PostgreSQL, Oracle, or similar, along with ORM tools.
- Demonstrated experience in building and maintaining API-based microservices.
- Solid grasp of Agile development methodologies and version control practices.
- Strong analytical and problem-solving skills.
- Ability to write clean, maintainable, and well-documented code.
Nice to Have :
- Experience with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Kubernetes and container orchestration tools.
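Flask and FastAPI both build on the same request/response contract, so as a framework-neutral sketch of the API-microservice requirement, here is a minimal WSGI endpoint exercised directly, the way framework test clients do (the path and payload are illustrative):

```python
import json

def app(environ, start_response):
    """Minimal WSGI microservice: one JSON health endpoint."""
    if environ["PATH_INFO"] == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Invoke the callable directly instead of binding a socket.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

body = b"".join(app({"PATH_INFO": "/health"}, fake_start_response))
print(captured["status"], body)  # 200 OK b'{"status": "ok"}'
```

FastAPI adds type-driven validation and async handling on top of the equivalent ASGI contract.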


Desired Competencies (Technical/Behavioral Competency)
Must-Have 1. Experience in working with various ML libraries and packages like scikit-learn, NumPy, Pandas, TensorFlow, Matplotlib, Caffe, etc.
2. Deep Learning Frameworks: PyTorch, spaCy, Keras
3. Deep Learning Architectures: LSTM, CNN, Self-Attention and Transformers
4. Experience in working with Image processing, computer vision is must
5. Designing data science applications, Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, prompt engineering.
6. Programming experience in Python
7. Strong written and verbal communications
8. Excellent interpersonal and collaboration skills.
Good-to-Have 1. Experience working with vector databases and graph representations of documents.
2. Experience with building or maintaining MLOps pipelines.
3. Experience with cloud computing infrastructures like AWS SageMaker or Azure ML for implementing ML solutions is preferred.
4. Exposure to Docker, Kubernetes
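The Self-Attention item above can be made concrete with a stdlib-only scaled dot-product attention over small lists (toy Q/K/V values; real implementations use PyTorch or TensorFlow tensors):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention on plain lists of vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        # similarity of the query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # output = attention-weighted mix of the values
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# The query aligns with the second key, so the output leans toward V[1].
Q = [[0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Transformers stack many such heads with learned projections of Q, K, and V.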


About Us:
We are a UK-based conveyancing firm dedicated to transforming property transactions through cutting-edge artificial intelligence. We are seeking a talented Machine Learning Engineer with 1–2 years of experience to join our growing AI team. This role offers a unique opportunity to work on scalable ML systems and Generative AI applications in a dynamic and impactful environment.
Responsibilities:
Design, Build, and Deploy Scalable ML Models
You will be responsible for end-to-end development of machine learning and deep learning models that can be scaled to handle real-world data and use cases. This includes training, testing, validating, and deploying models efficiently in production environments.
Develop NLP-Based Automation Solutions
You'll create natural language processing pipelines that automate tasks such as document understanding, text classification, and summarisation, enabling intelligent handling of property-related documents.
Prototype and Implement Generative AI Tools
Work closely with AI researchers and developers to experiment with and implement Generative AI techniques for tasks like content generation, intelligent suggestions, and workflow automation.
Integrate ML Models with APIs and Tools
Integrate machine learning models with external APIs and internal systems to support business operations and enhance customer service workflows.
Maintain CI/CD for ML Features
Collaborate with DevOps teams to manage CI/CD pipelines that automate testing, validation, and deployment of ML features and updates.
Review, Debug, and Optimise Models
Participate in thorough code reviews and model debugging sessions. Continuously monitor and fine-tune deployed models to improve their performance and reliability.
Cross-Team Communication
Communicate technical concepts effectively across teams, translating complex ML ideas into actionable business value.
Essentials From Day 1:
Security and Compliance:
• Ensure ML systems are built with GDPR compliance in mind.
• Adhere to RBAC policies and maintain secure handling of personal and property data.
Sandboxing and Risk Management:
• Use sandboxed environments for testing and developing new ML features.
• Conduct basic risk analysis for model performance and data bias.
• Evaluate and mitigate potential risks in model behavior and data pipelines.
Qualifications:
· 1–2 years of professional experience in Machine Learning and Deep Learning projects.
· Proficient in Python, Object-Oriented Programming (OOPs), and Data Structures & Algorithms (DSA).
· Strong understanding of NLP and its real-world applications.
· Exposure to building scalable ML systems and deploying models into production.
· Basic working knowledge of Generative AI techniques and frameworks.
· Familiarity with CI/CD tools and experience with API-based integration.
· Excellent analytical thinking and debugging capabilities.
· Strong interpersonal and communication skills for effective team collaboration.

Hybrid work mode
(Azure) EDW: Experience loading Star-schema data warehouses using framework architectures, including loading Type 2 dimensions. Ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).
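Type 2 dimension loading, mentioned above, keeps history by versioning rows instead of overwriting them: when a tracked attribute changes, the current row is closed out and a new row is inserted. A minimal sketch using stdlib `sqlite3` (table and column names are hypothetical, not from any specific warehouse):

```python
import sqlite3

# Hypothetical Type 2 slowly changing dimension: valid_to = NULL marks
# the current version of each customer row.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id TEXT,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL means "current" row
        is_current  INTEGER
    )
""")

def upsert_scd2(conn, customer_id, city, load_date):
    """Close the current row if the tracked attribute changed, then insert a new version."""
    cur = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id = ? AND is_current = 1",
        (customer_id,),
    ).fetchone()
    if cur is not None and cur[0] == city:
        return  # no change, nothing to do
    if cur is not None:
        conn.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (load_date, customer_id),
        )
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, city, load_date),
    )

upsert_scd2(conn, "C1", "Pune", "2024-01-01")
upsert_scd2(conn, "C1", "Mumbai", "2024-06-01")   # attribute change -> new version
rows = conn.execute(
    "SELECT city, valid_from, valid_to, is_current FROM dim_customer ORDER BY valid_from"
).fetchall()
# rows: [('Pune', '2024-01-01', '2024-06-01', 0), ('Mumbai', '2024-06-01', None, 1)]
```

Production frameworks (Databricks, Data Factory mapping flows) implement the same close-and-insert pattern with merge statements.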

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (based on Apache Airflow)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
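The ingest-validate-cleanse flow described above can be sketched with the stdlib `csv` module standing in for the GCP services; the field names and the validation rule here are hypothetical:

```python
import csv, io

# Hypothetical ETL transform step with a data-quality check:
# rows failing validation are routed to a reject list for monitoring.
raw = io.StringIO(
    "order_id,amount,country\n"
    "1001,250.00,IN\n"
    "1002,,IN\n"          # missing amount -> rejected
    "1003,99.50,US\n"
)

def transform(reader):
    good, rejected = [], []
    for row in reader:
        # validation rule: amount must be present
        if not row["amount"]:
            rejected.append(row)
            continue
        row["amount"] = float(row["amount"])           # type coercion
        row["country"] = row["country"].strip().upper()  # cleansing
        good.append(row)
    return good, rejected

good, rejected = transform(csv.DictReader(raw))
print(len(good), len(rejected))  # 2 1
```

In a real pipeline the same logic would live in a Dataflow transform or a Composer task, with rejects written to a quarantine table.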

Job Description:
- The candidate must possess a strong technology background with advanced knowledge of Java- and Python-based technology stacks.
- Java, JEE, Spring MVC, Python, JPA, Spring Boot, REST API, Database, Playwright, CI/CD pipelines
- At least 3 years of hands-on Java EE and Core Java experience with strong leadership qualities.
- Experience with Web Service development, REST, and Service-Oriented Architecture.
- Expertise in Object-Oriented Design, design patterns, architecture, and application integration.
- Working knowledge of databases, including design and SQL proficiency.
- Strong experience with frameworks used for development and automated testing, such as Spring Boot, JUnit, BDD, etc.
- Experience with the Unix/Linux operating system and basic Linux commands.
- Strong development skills with the ability to understand a technical design and translate it into a workable solution.
- Basic knowledge of Python and hands-on experience with Python scripting.
- Build, deploy, and monitor applications using CI/CD pipelines.
- Experience with agile development methodology.
- Good to Have: Elasticsearch, MongoDB or other NoSQL databases, Docker deployments, cloud deployments, any AI/ML exposure, Snowflake experience.

Job Overview:
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related field.
- 5+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT.
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
- LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
- NLP: SpaCy, NLTK.
- Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
- MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
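As an illustration of the retrieval step in RAG mentioned above: candidate documents are ranked by cosine similarity between their embeddings and the query embedding, and the top hits are passed to the LLM as context. The vectors below are hand-made stand-ins for real embedding-model output:

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity of embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "return window": [0.8, 0.2, 0.1],
}
query_vec = [0.85, 0.15, 0.05]   # pretend embedding of "how do refunds work?"

ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]), reverse=True)
print(ranked[0])  # the best-matching context chunk
```

A vector database (Pinecone, FAISS, Weaviate) does the same ranking at scale with approximate nearest-neighbour indexes.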
Preferred Qualifications:
- Experience building multimodal systems (text + image, etc.)
- Practical experience with agent frameworks for autonomous or goal-directed AI.
- Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Key Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.


Job Title: Software Engineer (Node.js)
Experience: 4+ Years
Location:Pune
About the Role:
We are looking for a talented and experienced Node.js Developer with a minimum of 4 years of hands-on experience to join our dynamic team. In this role, you will design, develop, and maintain high-performance applications. You should be passionate about writing clean, efficient, and scalable code.
Key Responsibilities:
- Develop and maintain secure, scalable, and high-performance server-side applications.
- Implement authentication, authorization, and data protection measures across the backend.
- Follow backend best practices in code structure, error handling, and system design.
- Stay up to date with backend security trends and evolving best practices.
Mandatory skills:
- Strong hands-on experience in Node.js development (4+ years).
- Knowledge of security best practices in backend development (e.g., input validation and sanitization, secure data storage).
- Familiarity with authentication and authorization methods such as JWT, OAuth2, or session-based auth.
Good to Have Skills:
- Experience with React.js for building dynamic user interfaces.
- Working knowledge of Python for scripting or backend tasks.
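Since the role lists JWT-based auth as a mandatory skill and Python as a good-to-have, here is a minimal sketch of how an HS256 JWT is signed and verified using only the standard library. This is an illustration of the mechanism only; production code should use a vetted library such as PyJWT or jsonwebtoken rather than hand-rolled crypto:

```python
import base64, hashlib, hmac, json

# Sketch of HS256 JWT signing/verification: header.payload.signature,
# each part base64url-encoded without padding.
def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": "user-42"}, b"server-secret")
print(verify(token, b"server-secret"))   # True
print(verify(token, b"wrong-secret"))    # False
```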
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Required Soft Skills:
• Verbal Communication
• Written Communication
• Cooperation, Teamwork & Interpersonal Skills
• Customer Focus & Business Acumen
• Critical Thinking
• Initiative, Accountability & Result Orientation
• Learning and Continuous Improvement



Data Scientist
Job Id: QX003
About Us:
QX Impact was launched with a mission to make AI accessible and affordable, delivering AI products and solutions at scale for enterprises by bringing together the power of data, AI, and engineering to drive digital transformation. We believe that without insights, businesses will struggle to understand their customers and may even lose them; that without insights, businesses cannot deliver differentiated products and services; and that without insights, businesses cannot achieve the new level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.
Position Overview:
We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.
Key Responsibilities:
- Collaborate with stakeholders to define and translate business challenges into data science solutions.
- Conduct in-depth data analysis on structured and unstructured datasets.
- Build, validate, and deploy machine learning models to solve real-world problems.
- Develop clear visualizations and presentations to communicate insights.
- Drive end-to-end project delivery, from exploration to production.
- Contribute to team knowledge sharing and mentorship activities.
Must-Have Skills:
- 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
- Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost.
- Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
- Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
- Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
- Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
- Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
- Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
- Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
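RFM segmentation, named above, scores each customer on recency, frequency, and monetary value. A toy sketch with hypothetical thresholds (real pipelines usually bin by quantiles over a full transaction table):

```python
from datetime import date

# Hypothetical order data: (customer, order_date, amount).
orders = [
    ("alice", date(2024, 6, 20), 120.0),
    ("alice", date(2024, 6, 1), 80.0),
    ("bob",   date(2024, 1, 5), 500.0),
]
today = date(2024, 7, 1)

def rfm(customer):
    """Score a customer 1-3 on each of recency, frequency, monetary."""
    mine = [o for o in orders if o[0] == customer]
    recency = min((today - d).days for _, d, _ in mine)   # days since last order
    frequency = len(mine)
    monetary = sum(amt for _, _, amt in mine)
    score = lambda v, lo, hi: 3 if v >= hi else (2 if v >= lo else 1)
    return (
        score(-recency, -90, -30),   # more recent -> higher score
        score(frequency, 2, 5),
        score(monetary, 100, 1000),
    )

print(rfm("alice"))  # (3, 2, 2)
print(rfm("bob"))    # (1, 1, 2)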
Good-to-Have Skills:
- Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
- Familiarity with big data processing tools like Apache Spark or Hadoop.
- Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
- Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
- Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
- Experience working within Agile project delivery methodologies.
Competencies:
· Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
· Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
· Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
· Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
· Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.

Role overview:
As a founding senior software engineer, you will play a key role in shaping our AI-powered visual search engine for fashion and e-commerce. Responsibilities include solving complex deep-tech challenges to build scalable AI/ML solutions, leading backend development for performance and scalability, and architecting and integrating software aligned with product strategy and innovation goals. You will collaborate with cross-functional teams to address real consumer problems and build robust AI/ML pipelines to drive product innovation.
What we’re looking for:
- 3–5 years of Python experience (Golang is a plus), with expertise in concurrency, FastAPI, RESTful APIs, and microservices.
- Proficiency in PostgreSQL/MongoDB, cloud platforms (AWS/GCP/Azure), and containerization tools like Docker/Kubernetes.
- Strong experience in asynchronous programming, CI/CD pipelines, and version control (Git).
- Excellent problem-solving and communication skills are essential.
What we offer:
- Competitive salary and ESOPs, along with HackerHouse living: live and work with a Gen Z team in a 7BHK house on MG Road, Gurgaon.
- Hands-on experience in shipping world-class products, professional development opportunities, flexible hours, and a collaborative, supportive culture.

As a senior software engineer (AI/ML), you will play a key role in shaping our AI-powered visual search engine for fashion and e-commerce. Responsibilities include solving complex deep-tech challenges to build scalable AI/ML solutions, leading backend development for performance and scalability, and architecting and integrating software aligned with product strategy and innovation goals. You will collaborate with cross-functional teams to address real consumer problems and build robust AI/ML pipelines to drive product innovation.
What we’re looking for:
- Design and deploy advanced machine learning models in computer vision, including object detection & similarity matching
- Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready with MLOps
- 3–5 years of Python experience (Golang is a plus), with expertise in concurrency, FastAPI, RESTful APIs, and microservices.
- Proficiency in PostgreSQL/MongoDB, cloud platforms (AWS/GCP/Azure), and containerization tools like Docker/Kubernetes.
- Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices.
- Strong experience in asynchronous programming, CI/CD pipelines, and version control (Git). Excellent problem-solving and communication skills are essential.
What we offer:
- Competitive salary and ESOPs, along with HackerHouse living: live and work with a Gen Z team in a 7BHK house on MG Road, Gurgaon.
- Hands-on experience in shipping world-class products, professional development opportunities, flexible hours, and a collaborative, supportive culture.



Role : AIML Engineer
Location : Madurai
Experience : 5 to 10 Yrs
Mandatory Skills : AIML, Python, SQL, ML Models, PyTorch, Pandas, Docker, AWS
Language: Python
DBs : SQL
Core Libraries:
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
SOTA ML: ML models, boosting & ensemble models, etc.
Explainability: SHAP / LIME
Required skills:
- Deep Learning: PyTorch, PyTorch Forecasting,
- Data Processing: Pandas, NumPy, Polars (optional), PySpark
- Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning
- Deployment & MLOps: Batch & real-time with API endpoints, MLflow
- Serving: TorchServe, SageMaker endpoints / batch
- Containerization: Docker
- Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines
AWS Services:
- SageMaker (Training, Inference, Tuning)
- S3 (Data Storage)
- CloudWatch (Monitoring)
- Lambda (Trigger-based Inference)
- ECR, ECS or Fargate (Container Hosting)
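A seasonal-naive baseline is the simplest member of the forecasting family listed above (pmdarima, Prophet, GluonTS, etc.): predict that each future period repeats the value from one season ago. It is the benchmark any trained model should beat, and the data below is made up for illustration:

```python
def seasonal_naive(history, season_length, horizon):
    """Forecast `horizon` steps by repeating the last full season."""
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

monthly_sales = [10, 12, 14, 11, 13, 15, 12, 14, 16, 13, 15, 17,   # year 1
                 11, 13, 15, 12, 14, 16, 13, 15, 17, 14, 16, 18]   # year 2
forecast = seasonal_naive(monthly_sales, season_length=12, horizon=3)
print(forecast)  # [11, 13, 15]
```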

About the Role:
We are looking for an experienced Senior Data Scientist for a short-term contract role to work on real-world problems in the autonomous driving and mobility domain. You will work on large-scale sensor and telemetry datasets from truck fleets to drive key insights, develop ML models, and improve operational intelligence.
Key Responsibilities:
- Analyze multi-source vehicle data: GPS, CAN, LiDAR, camera, IMU, radar.
- Build ML/statistical models for anomaly detection, predictive maintenance, and driver behaviour analysis.
- Develop scalable data pipelines for structured/unstructured fleet data.
- Apply time-series analysis, clustering, and predictive modeling on driving patterns.
- Collaborate with cross-functional teams to enhance autonomy performance.
- Visualize insights through dashboards (Tableau/PowerBI/Plotly) for business impact.
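A z-score rule on a single telemetry channel (say, engine temperature decoded from CAN data) is one minimal starting point for the anomaly-detection work described above; it is a sketch, not a production model, and the readings below are invented:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    std = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

engine_temp = [88, 90, 89, 91, 90, 88, 135, 89, 90, 91]  # one spike at index 6
print(zscore_anomalies(engine_temp))  # [6]
```

Real fleet pipelines typically replace this with rolling-window statistics or learned models, but the idea of scoring deviation from an expected baseline is the same.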
Required Skills:
- Strong Python skills with Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
- Experience in data engineering tools (Spark, AWS/GCP pipelines).
- Deep understanding of time-series and spatiotemporal data.
- Proficiency in SQL and data visualization tools.
- Background in automotive/connected vehicle data is a strong plus.
Nice to Have:
- Familiarity with vehicle dynamics, CAN decoding, or driver modeling.
- Exposure to sensor fusion, edge AI, or model optimization.
- Experience with Docker, version control, and CI/CD tools.
Why Join?
- Work on cutting-edge mobility and autonomy problems.
- Collaborate with a dynamic, data-driven engineering team.
- Flexibility to work remotely with impactful product teams.
💡 Note: Immediate joiners or candidates with <15 days' notice preferred.

Job Title: Software Development Intern
Company: KGIST Microcollege (MGC)
Location: [Insert Location]
Job Type: Internship
Job Description:
- Assist in developing software applications using various programming languages
- Collaborate with the development team to design, develop, and test software solutions
- Develop knowledge and skills in software development practices and procedures
Responsibilities:
- Develop software applications using JavaScript, Java, HTML, Python, and C++
- Assist in debugging and troubleshooting code
- Participate in code reviews and contribute to improving code quality
- Collaborate with the team to meet project deadlines and goals
Requirements:
- Currently pursuing a degree in Computer Science or a related field
- Strong knowledge of programming languages, including:
- JavaScript
- Java
- HTML
- Python
- C++
- Female candidates only
- Strong problem-solving skills and attention to detail
- Ability to work in a team environment
What We Offer:
- Opportunity to gain hands-on experience in software development
- Collaborative and dynamic work environment
- Flexible work hours and remote work options (if applicable)
- Chance to develop skills and knowledge in software development practices

Position Overview:
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
Soft Skills:
- Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
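The two NATS messaging patterns named in the skills list can be illustrated with a toy in-process broker: plain pub/sub fans a message out to every subscriber, while queue-group members share the load round-robin. This only mimics the semantics; the real `nats.py` client talks to a NATS server:

```python
from collections import defaultdict

class ToyBroker:
    """In-memory stand-in sketching NATS pub/sub and queue-group semantics."""
    def __init__(self):
        self.subs = defaultdict(list)      # subject -> [callback]
        self.queues = defaultdict(dict)    # subject -> {group: [callbacks]}

    def subscribe(self, subject, cb, queue=None):
        if queue is None:
            self.subs[subject].append(cb)
        else:
            self.queues[subject].setdefault(queue, []).append(cb)

    def publish(self, subject, msg):
        for cb in self.subs[subject]:       # fan-out to all plain subscribers
            cb(msg)
        for members in self.queues[subject].values():
            members[0](msg)                 # exactly one member per group gets it
            members.append(members.pop(0))  # rotate for round-robin delivery

log = []
broker = ToyBroker()
broker.subscribe("orders", lambda m: log.append(("audit", m)))                    # pub/sub
broker.subscribe("orders", lambda m: log.append(("worker-a", m)), queue="workers")
broker.subscribe("orders", lambda m: log.append(("worker-b", m)), queue="workers")
broker.publish("orders", "o1")
broker.publish("orders", "o2")
print(log)
# [('audit', 'o1'), ('worker-a', 'o1'), ('audit', 'o2'), ('worker-b', 'o2')]
```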
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
# Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.


We are building an advanced, AI-driven multi-agent software system designed to revolutionize task automation and code generation. This is a futuristic AI platform capable of:
✅ Real-time self-coding based on tasks
✅ Autonomous multi-agent collaboration
✅ AI-powered decision-making
✅ Cross-platform compatibility (Desktop, Web, Mobile)
We are hiring a highly skilled **AI Engineer & Full-Stack Developer** based in India, with a strong background in AI/ML, multi-agent architecture, and scalable, production-grade software development.
### Responsibilities:
- Build and maintain a multi-agent AI system (AutoGPT, BabyAGI, MetaGPT concepts)
- Integrate large language models (GPT-4o, Claude, open-source LLMs)
- Develop full-stack components (Backend: Python, FastAPI/Flask, Frontend: React/Next.js)
- Work on real-time task execution pipelines
- Build cross-platform apps using Electron or Flutter
- Implement Redis, Vector databases, scalable APIs
- Guide the architecture of autonomous, self-coding AI systems
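The multi-agent orchestration idea above (AutoGPT/BabyAGI/MetaGPT-style) boils down to a planner routing tasks to specialist agents. A toy sketch with hypothetical agent names; a real system would call LLMs and execute generated code at each step:

```python
class Agent:
    """A specialist that handles tasks matching its skill."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill
    def run(self, task):
        return f"{self.name} handled '{task}'"

class Orchestrator:
    """Routes each planned (skill, task) pair to the registered specialist."""
    def __init__(self):
        self.registry = {}
    def register(self, agent):
        self.registry[agent.skill] = agent
    def execute(self, plan):
        return [self.registry[skill].run(task) for skill, task in plan]

orch = Orchestrator()
orch.register(Agent("Coder", "code"))
orch.register(Agent("Tester", "test"))
results = orch.execute([("code", "write parser"), ("test", "cover edge cases")])
print(results)
# ["Coder handled 'write parser'", "Tester handled 'cover edge cases'"]
```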
### Must-Have Skills:
- Python (advanced, AI applications)
- AI/ML experience, including multi-agent orchestration
- LLM integration knowledge
- Full-stack development: React or Next.js
- Redis, Vector Databases (e.g., Pinecone, FAISS)
- Real-time applications (websockets, event-driven)
- Cloud deployment (AWS, GCP)
### Good to Have:
- Experience with code-generation AI models (Codex, GPT-4o coding abilities)
- Microservices and secure system design
- Knowledge of AI for workflow automation and productivity tools
Join us to work on cutting-edge AI technology that builds the future of autonomous software.
A BIT ABOUT US
Appknox is a leading mobile application security platform that helps enterprises automate security testing across their mobile apps, APIs, and DevSecOps pipelines. Trusted by global banks, fintechs, and government agencies, we enable secure mobile experiences with speed and confidence.
About the Role:
We're looking for a Jr. Technical Support Engineer to join our global support team and provide world-class assistance to customers in the US time zones from 8pm to 5am IST. You will troubleshoot, triage, and resolve technical issues related to Appknox’s mobile app security platform, working closely with Engineering, Product, and Customer Success teams.
Key Responsibilities:
- Respond to customer issues via email, chat, and voice/voip calls during US business hours.
- Diagnose, replicate, and resolve issues related to DAST, SAST, and API security modules.
- Troubleshoot integration issues across CI/CD pipelines, API connections, SDKs, and mobile app builds.
- Document known issues and solutions in the internal knowledge base and help center.
- Escalate critical bugs to engineering with full context, reproduction steps, and logs.
- Guide customers on secure implementation best practices and platform usage.
- Collaborate with product and QA teams to suggest feature improvements based on customer feedback.
- Participate in on-call support rotations if needed.
Requirements:
- 1–4 years of experience in technical support, delivery, or QA roles at a SaaS or cybersecurity company.
- Excellent communication and documentation skills in English.
- Comfortable working independently and handling complex technical conversations with customers.
- Basic understanding of mobile platforms (Android, iOS), REST APIs, Networking Architecture, and security concepts (OWASP, CVEs, etc.).
- Familiarity with command-line tools, mobile build systems (Gradle/Xcode), and HTTP proxies (Burp).
- Ability to work full-time within US time zones. Ensure that you have a stable internet connection and work station.
Good to have skills:
- Experience working in a product-led cybersecurity company.
- Knowledge of scripting languages (Python, Bash) or log analysis tools.
- Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI) is a plus.
- Familiarity with ticketing and support tools like Freshdesk, Jira, Postman, and Slack.
Compensation
- As per Industry Standards
Interview Process:
- Application- Submit your resume and complete your application via our job portal.
- Screening- We’ll review your background and fit, and typically invite you to a Profile Evaluation call on Cutshort (15 mins).
- Assignment Round- You'll receive a real-world take-home task to complete within 48 hours.
- Panel Interview- Meet with a cross-functional interview panel to assess technical skills, problem-solving, and collaboration.
- Stakeholder Interview- A focused discussion with the Director to evaluate strategic alignment and high-level fit.
- HR Round- Final chat to discuss cultural fit, compensation, notice period, and next steps.
Personality Traits We Admire:
- A confident and dynamic working persona that brings fun to the team; a sense of humour is an added advantage.
- A great attitude towards asking questions, learning, and suggesting process improvements.
- Attention to detail and the ability to help identify edge cases.
- Highly motivated, bringing fresh ideas and perspectives to help us move towards our goals faster.
- Follows timelines and shows absolute commitment to deadlines.
Why Join Us:
- Freedom & Responsibility: If you are a person who enjoys challenging work & pushing your boundaries, then this is the right place for you. We appreciate new ideas & ownership as well as flexibility with working hours.
- Great Salary & Equity: We keep up with the market standards & provide pay packages considering updated standards. Also as Appknox continues to grow, you’ll have a great opportunity to earn more & grow with us. Moreover, we also provide equity options for our top performers.
- Holistic Growth: We foster a culture of continuous learning and take a much more holistic approach to train and develop our assets: the employees. We shall also support you all on that journey of yours.
- Transparency: Being a part of a start-up is an amazing experience, one of the reasons being open communication & transparency at multiple levels. Working with Appknox will give you the opportunity to experience it all first-hand.


About the Role
Join our dynamic DTS team and collaborate with a top-tier global hedge fund on data-driven development initiatives. You'll focus on cloud migration, automation, application development, and implementing DevOps best practices, especially on Google Cloud Platform (GCP).
Key Responsibilities
✅ Design and develop scalable and secure software solutions
✅ Build and maintain apps using C#, MSSQL, Python, GCP/BigQuery
✅ Conduct code reviews and enforce best practices
✅ Troubleshoot issues and ensure smooth application performance
✅ Collaborate with cross-functional teams to drive project success
✅ Mentor and support junior developers
What We’re Looking For
- 3+ years of experience in C#, MSSQL, Python, and GCP/BigQuery
- Solid understanding of software development and DevOps in multi-cloud setups
- Strong problem-solving mindset with attention to detail
- Excellent communication and stakeholder management skills
- Proven ability to work collaboratively and guide junior team members
Good to Have
- Experience with DevOps pipelines/tools
- Exposure to agile methodologies


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client's infrastructure, which could be on-premises or cloud (Azure, AWS, or GCP).
* Development: You will be responsible for the development of the front end and back end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. Set up CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This also includes troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
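The data-migration responsibility above can be sketched as a batched copy loop. This is a minimal, stdlib-only illustration in which `read_rows` and `write_batch` are hypothetical stand-ins for real ORM or database-driver calls (e.g. a Django queryset iterator and a bulk insert):

```python
from typing import Callable, Iterable, Iterator, List, Sequence

def batched(rows: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Yield rows in fixed-size chunks so each write stays small."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def migrate(read_rows: Callable[[], Iterable[dict]],
            write_batch: Callable[[Sequence[dict]], None],
            batch_size: int = 500) -> int:
    """Copy every row from the source to the target in batches.

    Batching keeps transactions short and memory use bounded, which is
    the usual concern when migrating large tables.
    """
    copied = 0
    for batch in batched(read_rows(), batch_size):
        write_batch(batch)
        copied += len(batch)
    return copied
```

In a real migration the batch size and the ordering/checkpointing strategy would depend on the source and target databases.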
**Requirements**
Education - B.Tech (Computer Science, IT) or equivalent
Experience - 3+ years of experience developing applications with Django, Angular/React, HTML, and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.
About Sun King
Sun King is the world’s leading off-grid solar energy company, delivering energy access to the 1.8 billion people who lack reliable grid connections through innovative product design, fintech solutions, and field operations.
Key highlights:
- Connected over 20 million homes to solar power across Africa and Asia, adding 200,000 homes monthly.
- Affordable ‘pay-as-you-go’ financing model; after 1-2 years, customers own their solar equipment.
- Saved customers over $4 billion to date.
- Collect 650,000 daily payments via 28,000 field agents using mobile money systems.
- Products range from home lighting to high-energy appliances, with expansion into clean cooking, electric mobility, and entertainment.
With 2,800 staff across 12 countries, our team includes experts in various fields, all passionate about serving off-grid communities.
Diversity Commitment:
44% of our workforce are women, reflecting our commitment to gender diversity.
About the role:
Sun King is looking for a self-driven Infrastructure Engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.
What you would be expected to do:
- Work with engineering, automation, and data teams on various infrastructure requirements.
- Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
- Managing AWS services for multiple teams.
- Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
- Deployment and management of Kubernetes resources.
- Deploy and manage custom metrics exporters, trace data, and application metrics; design dashboards and query metrics from multiple sources as an end-to-end observability solution.
- Set up incident response services and design effective processes.
- Deployment and management of critical platform services like OPA and Keycloak for IAM.
- Advocate best practices for high availability and scalability when designing AWS infrastructure, building observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.
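The custom-metrics-exporter work described above can be illustrated with a minimal sketch that renders gauges in the Prometheus text exposition format using only the standard library. In practice you would likely use the official `prometheus_client` package; the metric names here are invented examples:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(metrics: dict) -> str:
    """Render {name: (help_text, value)} as Prometheus exposition text."""
    lines = []
    for name, (help_text, value) in sorted(metrics.items()):
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    # Hypothetical application metric; a real exporter would read this
    # from the system it instruments.
    metrics = {"app_payments_pending": ("Payments awaiting processing", 0)}

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics(self.metrics).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Prometheus would then scrape `/metrics` on a schedule, and Grafana dashboards would query the resulting series.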
You might be a strong candidate if you have/are:
- Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks.
- Experience working with web servers (nginx, apache) and cloud providers (preferably AWS).
- Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments.
- Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters).
- Knowledge of web architecture, distributed systems, and single points of failure.
- Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
- Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.
Good to have:
- Experience with backend development and setting up databases and performance tuning using parameter groups.
- Working experience in Kubernetes cluster administration and Kubernetes deployments.
- Experience working alongside SecOps engineers.
- Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
- Experience setting up and using OpenTelemetry, centralized logging, and monitoring systems.
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry.
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.

We're hiring a skilled AI Developer to join our growing team at Geo Wave Pvt Ltd, based in Malé, Maldives. You’ll lead the development of smart automation tools, reporting dashboards, and AI systems to optimize our internal operations across fuel, logistics, and finance.
Responsibilities:
- Build and deploy AI-powered business tools
- Implement OCR/NLP solutions for document automation
- Create custom dashboards and backend APIs
- Automate workflows for reporting, inventory, and sales tracking
Must-Have Skills:
Python, Flask/FastAPI, TensorFlow/PyTorch, OCR/NLP, API development
Good-to-Have:
React.js, PostgreSQL, AWS/GCP, Docker
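As one illustration of the OCR/NLP document-automation skill listed above, the post-OCR step of pulling structured fields out of recognized text can be sketched with plain regular expressions. The field names and formats below are hypothetical examples, not an actual Geo Wave schema, and a real pipeline would first run an OCR engine such as Tesseract to produce the text:

```python
import re

# Patterns for fields commonly found on an invoice (illustrative only).
INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)", re.I)
TOTAL = re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I)

def extract_fields(ocr_text: str) -> dict:
    """Return whichever invoice fields can be found in the OCR text."""
    fields = {}
    m = INVOICE_NO.search(ocr_text)
    if m:
        fields["invoice_no"] = m.group(1)
    m = TOTAL.search(ocr_text)
    if m:
        # Strip thousands separators before converting to a number.
        fields["total"] = float(m.group(1).replace(",", ""))
    return fields
```

Returning only the fields that were actually found lets downstream code flag incomplete documents for manual review instead of failing outright.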

Job Overview
We are seeking an agile AI Engineer with a strong focus on both AI engineering and SaaS product development in a 0-1 product environment. This role is perfect for a candidate skilled in building and iterating quickly, embracing a fail-fast approach to bring innovative AI solutions to market rapidly. You will be responsible for designing, developing, and deploying SaaS products using advanced Large Language Models (LLMs) such as Meta, Azure OpenAI, Claude, and Mistral, while ensuring a secure, scalable, and high-performance architecture. Your ability to adapt, iterate, and deliver in fast-paced environments is critical.
Responsibilities
- Lead the design, development, and deployment of SaaS products leveraging LLMs, including platforms like Meta, Azure OpenAI, Claude, and Mistral.
- Support the product lifecycle from conceptualization to deployment, ensuring seamless integration of AI models with business requirements and user needs.
- Build secure, scalable, and efficient SaaS products that embody robust data management and comply with security and governance standards.
- Collaborate closely with product management and other stakeholders to align AI-driven SaaS solutions with business strategies and customer expectations.
- Fine-tune AI models using custom instructions to tailor them to specific use cases, and optimize performance through techniques like quantization and model tuning.
- Architect AI deployment strategies on cloud-agnostic platforms (AWS, Azure, Google Cloud), ensuring cost optimization while maintaining performance and scalability.
- Apply retrieval-augmented generation (RAG) techniques to build AI models that provide contextually accurate and relevant outputs.
- Build integrations of APIs and third-party services into the SaaS ecosystem, ensuring a robust and flexible product architecture.
- Monitor product performance post-launch, iterating on and improving models and infrastructure to enhance user experience and scalability.
- Stay current with AI advancements, SaaS development trends, and cloud technology to apply innovative solutions in product development.
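The RAG responsibility above can be illustrated with a minimal retrieval sketch: rank documents by cosine similarity of bag-of-words vectors and prepend the best matches to the prompt. Production systems would use dense embeddings and a vector store; this stdlib-only version only shows the shape of the technique:

```python
import math
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Return the top-k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda p: p[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

def build_prompt(query: str, docs: List[str]) -> str:
    """Prepend retrieved context so the LLM answers from the documents."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The generation half of RAG then sends `build_prompt(...)` to whichever LLM the product uses, so answers are grounded in the retrieved context rather than the model's parametric memory alone.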
Qualifications
- Bachelor’s degree or equivalent in Information Systems, Computer Science, or a related field.
- 6+ years of experience in product development, with at least 2 years focused on AI-based SaaS products.
- Demonstrated experience leading the development of SaaS products, from ideation to deployment, with a focus on AI-driven features.
- Hands-on experience with LLMs (Meta, Azure OpenAI, Claude, Mistral) and SaaS platforms.
- Proven ability to build secure, scalable, and compliant SaaS solutions, integrating AI with cloud-based services (AWS, Azure, Google Cloud).
- Strong experience with RAG techniques and fine-tuning AI models for business-specific needs.
- Proficiency in AI engineering, including machine learning algorithms, deep learning architectures (e.g., CNNs, RNNs, Transformers), and integrating models into SaaS environments.
- Solid understanding of SaaS product lifecycle management, including customer-focused design, product-market fit, and post-launch optimization.
- Excellent communication and collaboration skills, with the ability to work cross-functionally and drive SaaS product success.
- Knowledge of cost-optimized AI deployment and cloud infrastructure, focusing on scalability and performance.