50+ Python Jobs in India
Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!




About the Role:
We are seeking a Technical Architect with proven expertise in full-stack web development, cloud infrastructure, and system design. You will lead the design and delivery of scalable enterprise applications, drive technical decision-making, and mentor a cross-functional development team. The ideal candidate has a strong foundation in .NET-based architecture, modern front-end frameworks, and cloud-native technologies.
Key Responsibilities:
- Lead the technical architecture, system design, and full-stack development of enterprise-grade web applications.
- Design and develop robust backend systems and APIs using .NET Core / C# / Python, following TDD/BDD principles.
- Build modern frontends using React.js, TypeScript, and optionally Angular, ensuring responsive and accessible UI.
- Architect scalable, secure, and highly available solutions using cloud platforms such as Azure, AWS, or GCP.
- Guide and review CI/CD pipeline creation and DevOps practices, leveraging tools like Azure DevOps, Git, Docker, etc.
- Oversee database design and optimization for relational and NoSQL systems like MSSQL, PostgreSQL, MongoDB, CosmosDB.
- Mentor developers and collaborate with cross-functional teams including Product Owners, QA, and DevOps.
- Ensure best practices in code quality, security, performance, and compliance.
- Lead application monitoring, error tracking, and infrastructure tuning for production-grade deployments.
Required Skills:
- 10+ years of experience in software development, with 3+ years in architectural or technical leadership roles.
- Strong expertise in .NET Core, C#, React.js, TypeScript, HTML5, CSS3, and JavaScript.
- Good exposure to Python for backend services or data pipelines.
- Cloud platform experience with at least one of Azure, AWS, or Google Cloud Platform (GCP).
- Proficient in designing and consuming RESTful APIs, and working with metadata-driven and microservices architecture.
- Strong understanding of DevOps, CI/CD, and deployment strategies using tools like Git, Docker, Azure DevOps.
- Familiarity with frontend frameworks like Angular or Vue.js is a plus.
- Proficient with databases: MSSQL, PostgreSQL, MySQL, MongoDB, CosmosDB.
- Comfortable working on Linux/UNIX and Windows-based servers, along with web servers like Nginx, Apache, IIS.
Good to Have:
- Experience in CRM, ERP, or E-commerce platforms.
- Familiarity with AI/ML integration and working with data science teams.
- Exposure to mobile development using React Native.
- Experience integrating third-party tools like Slack, Microsoft Teams, etc.
Soft Skills:
- Strong problem-solving mindset with a proactive and innovative approach.
- Excellent communication and leadership abilities.
- Capability to mentor junior engineers and drive a high-performance team culture.
- Adaptability to work in fast-paced, Agile environments.
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- Microsoft / Cloud certifications are a plus.

Location: Bengaluru/Mangaluru
Experience required: 2-5 years
Key skills: Odoo Development, Python, Frontend Technologies
Designation: SE L1/L2/L3/ ATL
Job Summary:
We are seeking a skilled and proactive Odoo Developer to join our dynamic team. The ideal candidate will have hands-on experience in customizing, developing, and maintaining Odoo modules, with a deep understanding of Python and business processes. You will play a key role in requirement gathering, technical design, development, testing, and deployment.
Key Responsibilities:
- Develop, customize, and maintain Odoo modules as per business requirements.
- Analyze, design, and develop new modules and features in Odoo ERP.
- Troubleshoot, debug, and upgrade existing Odoo modules.
- Integrate Odoo with third-party platforms using APIs/web services.
- Provide technical support and training to end-users.
- Collaborate with functional consultants and stakeholders to gather requirements and deliver scalable ERP solutions.
- Write clean, reusable, and efficient Python code and maintain technical documentation.
Required Skills & Qualifications:
- 2-5 years of proven experience as an Odoo Developer.
- Strong knowledge of Python, PostgreSQL, and Odoo framework (ORM, QWeb, XML).
- Experience in Odoo custom module development and Odoo standard modules
- Good understanding of Odoo backend and frontend (JavaScript, HTML, CSS).
- Experience with Odoo APIs and web services (REST/SOAP).
- Familiarity with Linux environments, Git version control.
- Ability to work independently and in a team with minimal supervision.
- Good analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Knowledge of Odoo deployment (Linux, Docker, Nginx, Odoo.sh) is a plus.
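For orientation only, here is a minimal sketch of the kind of custom Odoo ORM model implied by the skills above; the module, model, and field names are hypothetical, and Odoo version specifics may differ:

```python
# Minimal sketch of a custom Odoo model (hypothetical module/model/field names; Odoo 14+ style).
from odoo import api, fields, models


class ServiceRequest(models.Model):
    _name = "custom.service.request"          # hypothetical technical name
    _description = "Customer Service Request"

    name = fields.Char(required=True)
    partner_id = fields.Many2one("res.partner", string="Customer")
    line_ids = fields.One2many("custom.service.request.line", "request_id")
    total_hours = fields.Float(compute="_compute_total_hours", store=True)

    @api.depends("line_ids.hours")
    def _compute_total_hours(self):
        # ORM-level aggregation, recomputed whenever a line's hours change
        for request in self:
            request.total_hours = sum(request.line_ids.mapped("hours"))


class ServiceRequestLine(models.Model):
    _name = "custom.service.request.line"
    _description = "Service Request Line"

    request_id = fields.Many2one("custom.service.request", ondelete="cascade")
    hours = fields.Float(default=0.0)
```

In a real addon this would sit alongside XML views, QWeb reports, security rules, and a module manifest.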
About the Company:
Pace Wisdom Solutions is a deep-tech product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that solve niche business problems.
We engage with our clients at various stages:
- Right from the idea stage to scope out business requirements.
- Design & architect the right solution and define tangible milestones.
- Setup dedicated and on-demand tech teams for agile delivery.
- Take accountability for successful deployments to ensure efficient go-to-market implementations.




We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.
Required Skills:
• Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
• Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
• Solid understanding of microservices architecture, service discovery, and load balancing.
• Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
• Proficient in Docker and container-based application deployments.
• Experience with CI/CD tools such as Jenkins or GitLab CI.
• Basic working knowledge of Kubernetes for container orchestration.
• Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
Good-to-Have Skills:
• Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
• Experience with scripting languages like Bash or PowerShell for automation tasks.
• Exposure to database scripting and data integration pipelines.
Experience & Qualifications:
• 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
• At least 2+ years of hands-on experience working on ML/AI systems in production settings.
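As a hedged illustration of the MLflow-style experiment tracking mentioned in the required skills, the sketch below logs parameters, a metric, and a model artifact; the dataset, parameters, and run name are placeholders rather than anything prescribed by this role:

```python
# Minimal MLflow tracking sketch (illustrative only; assumes `pip install mlflow scikit-learn`).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):            # run name is arbitrary
    params = {"n_estimators": 100, "max_depth": 4}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                              # hyperparameters
    mlflow.log_metric("accuracy", accuracy)                # evaluation metric
    mlflow.sklearn.log_model(model, "model")               # versioned model artifact
```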
Job Summary:
We are hiring a Data Scientist – Gen AI with hands-on experience in developing Agentic AI applications using frameworks like LangChain, LangGraph, Semantic Kernel, or Microsoft Copilot. The ideal candidate will be proficient in Python, LLMs, and prompt engineering techniques such as RAG and Chain-of-Thought prompting.
Key Responsibilities:
- Build and deploy Agent AI applications using LLM frameworks.
- Apply advanced prompt engineering (Zero-Shot, Few-Shot, CoT).
- Integrate Retrieval-Augmented Generation (RAG).
- Develop scalable solutions in Python using NumPy, Pandas, TensorFlow/PyTorch.
- Collaborate with teams to deliver business-aligned Gen AI solutions.
Must-Have Skills:
- Experience with LangChain, LangGraph, or similar (priority given).
- Strong understanding of LLMs, RAG, and prompt engineering.
- Proficiency in Python and relevant ML libraries.
Nice-to-Have:
- Wrapper API development for LLMs.
- REST API integration within Agentic workflows.
Qualifications:
- Bachelor’s/Master’s in CS, Data Science, AI, or related.
- 4–7 years in AI/ML/Data Science, with 1–2 years in Gen AI/LLMs.
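To make the RAG and prompt-engineering terms above concrete without prescribing a framework, here is a minimal retrieval-then-prompt sketch in plain Python; embed() is a stand-in for whatever embedding model or API is actually used, and the prompt template is illustrative only:

```python
# Hypothetical RAG retrieval step: embed(), the documents, and the prompt template are placeholders.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model or API call."""
    raise NotImplementedError


def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3) -> list[str]:
    q = embed(query)
    # cosine similarity between the query and each pre-embedded document chunk
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [docs[i] for i in top]


def build_prompt(query: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(context_chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

A production system would swap the placeholder for a real embedding model or vector store and pass the assembled prompt to an LLM.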


About the CryptoXpress Partner Program
Earn lifetime income just by liking posts, posting memes, art and simple threads, engaging on Twitter, Quora, Reddit, or Instagram, driving referral signups, and earning commission from transactions such as flights, hotels, trades, and gift cards.
(Apply link at the bottom)
More Details:
- Student Partner Program - https://cryptoxpress.com/student-partner-program
- Ambassador Program - https://cryptoxpressambassadors.com
CryptoXpress has built two powerful tracks to help students gain experience, earn income, and launch real careers:
🌱 Growth Partner: Bring in new users, grow the network, and earn lifetime income from your referrals' transactions like trades, investments, flight/hotel/gift card purchases.
🎯 CX Ambassador: Complete creative tasks, support the brand, and get paid by liking posts, creating simple threads, memes, art, sharing your experience, and engaging on Twitter, Quora, Reddit, or Instagram.
Participants will be rewarded with payments, internship certificates, mentorship, certified Web3 learning and career opportunities.
About the Role
CryptoXpress is looking for a skilled Backend Engineer to build the core logic powering our Partner Program reward engines, task pipelines, and content validation systems. Your work will directly impact how we scale fair, fast, and fraud-proof systems for global Student Partners and CX Ambassadors.
Key Responsibilities
- Design APIs to handle submission, review, and payout logic
- Develop XP, karma, and level-up algorithms with fraud resistance
- Create content verification checkpoints (e.g., metadata checks, submission throttles)
- Handle rate limits, caching, retries, and fallback for reward processing
- Collaborate with AI and frontend engineers for seamless data flow
- Debug reward or submission logic
- Fix issues in task flows or XP systems
- Patch verification bugs or payout edge cases
- Optimize performance and API stability
Skills & Qualifications
- Proficient in Node.js, Python (Flask/FastAPI), or Go
- Solid understanding of PostgreSQL, Firebase, or equivalent databases
- Strong grasp of authentication, role-based permissions, and API security
- Bonus: Experience with reward engines, affiliate logic, or task-based platforms
- Bonus: Familiarity with moderation tooling or content scoring
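Purely as an illustration of the submission and payout-style APIs described above (the endpoint, fields, and throttle rule are hypothetical, not CryptoXpress internals), a minimal FastAPI sketch:

```python
# Hypothetical submission endpoint sketch (FastAPI); names and limits are illustrative only.
from collections import defaultdict
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
SUBMISSIONS_PER_DAY = 5                                   # example throttle, not a real product rule
submission_counts: dict[str, int] = defaultdict(int)      # in-memory; a real system would use Redis/DB


class TaskSubmission(BaseModel):
    user_id: str
    task_id: str
    content_url: str


@app.post("/submissions")
def create_submission(sub: TaskSubmission):
    if submission_counts[sub.user_id] >= SUBMISSIONS_PER_DAY:
        raise HTTPException(status_code=429, detail="Daily submission limit reached")
    submission_counts[sub.user_id] += 1
    # A real implementation would persist the submission and queue it for review/verification.
    return {"status": "queued", "task_id": sub.task_id}
```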
Join us and play a key role in driving the growth of CryptoXpress in the cryptocurrency space!
Pro Tip: Tips for Application Success
- Please fill out the application below
- Explore CryptoXpress before applying, take 2 minutes to download and try the app so you understand what we're building
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don't know - we value growth mindsets
How to Apply:
Interested candidates must complete the application form at

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect.
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of Cloud Platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards


We’re building a powerful, AI-driven communication platform — a next-generation alternative to RingCentral or 8x8 — powered by OpenAI, LangChain, and SIP/WebRTC. We're looking for a Full-Stack Software Developer who’s passionate about building real-time, AI-enabled voice infrastructure and who’s excited to work in a fast-moving, founder-led environment.
This is an opportunity to build from scratch, take ownership of core systems, and innovate on the edge of VoIP + AI.
What You’ll Do
- Design and build AI-driven voice and messaging features (e.g. smart IVRs, call transcription, virtual agents)
- Develop backend services using Python, Node.js, or Golang
- Integrate OpenAI, Whisper, and LangChain with real-time VoIP systems like Twilio, SIP, or WebRTC
- Create scalable APIs, handle call logic, and build AI pipelines
- Collaborate with the founder and early team on product strategy and infrastructure
- Participate in occasional in-person strategy meetings (Delhi, Bangalore, or nearby)
Must-Have Skills
- Strong programming experience in Python, Node.js, or Go
- Hands-on experience with VoIP/SIP, WebRTC, or tools like Twilio, Asterisk, Plivo
- Experience integrating with LLM APIs, OpenAI, or speech-to-text models
- Solid understanding of backend design, Docker, Redis, PostgreSQL
- Ability to work independently and deliver production-grade code
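For flavour of the speech-to-text integration mentioned above, here is a sketch using the open-source openai-whisper package; the audio file is a placeholder, and the production pipeline may well differ:

```python
# Illustrative local transcription with the open-source openai-whisper package; file path is a placeholder.
import whisper

model = whisper.load_model("base")               # small multilingual model; larger models trade speed for accuracy
result = model.transcribe("recorded_call.wav")   # placeholder recording pulled from the VoIP stack

print(result["text"])                            # full transcript
for segment in result["segments"]:               # per-segment timestamps, useful for IVR/call analytics
    print(f'{segment["start"]:.1f}s - {segment["end"]:.1f}s: {segment["text"]}')
```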
Nice to Have
- Familiarity with LangChain or agent-based AI systems
- Knowledge of call routing logic, STUN/TURN, or media servers (e.g. FreeSWITCH)
- Interest in building scalable cloud-first SaaS products
Work Setup
- 🏠 Remote work (India-based, must be reachable for meetings)
- 🕐 Full-time role
- 💼 Direct collaboration with founder (technical)
- 🧘♂️ Flexible hours, strong ownership culture

Greetings from Edstellar!
We are looking for a Vibe Coder at entry level.
Position Overview
We're seeking passionate fresh graduates who are natural Vibe Coders - developers who code with intuition, creativity, and genuine enthusiasm for building amazing applications. Perfect for recent grads who bring fresh energy and innovative thinking to development.
Key Responsibilities
Build dynamic web and mobile applications with creative flair
Code with passion and embrace experimental approaches
Learn and implement emerging technologies rapidly
Collaborate in our innovation-friendly environment
Prototype ideas and iterate with speed and creativity
Bring fresh perspectives to development challenges
Required Qualifications
Education: Bachelor's in Computer Science/IT or related field
Experience: Fresh graduate (0-1 years)
Technical Skills:
Solid programming fundamentals (any language)
Basic web development (HTML, CSS, JavaScript)
Understanding of application development concepts
Familiarity with Git/version control
Creative problem-solving mindset
Preferred:
Good understanding in Python, JavaScript frameworks, or modern tech stack
AI tool familiarity
Mobile development interest
Open source contributions
Vibe Coder DNA
Passionate about coding and building innovative apps
Thrives with creative freedom and flexible approaches
Loves experimenting with new technologies
Values innovation and thinking outside the box
Natural curiosity and eagerness to learn
Collaborative spirit with independent drive
Resilient and adaptable to change

Role descriptions / Expectations from the Role
· 6-7 years of IT development experience, with a minimum of 3 years of hands-on experience in Snowflake
· Strong experience in building/designing data warehouses, data lakes, and data marts end to end, with a focus on large enterprise-scale Snowflake implementations on any of the hyperscalers.
· Strong experience with building productionized data ingestion and data pipelines in Snowflake
· Good knowledge of Snowflake's architecture and features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities
· Should have good experience with Snowflake RBAC and data security.
· Strong experience with Snowflake features, including newly released Snowflake features.
· Should have good experience in Python/PySpark.
· Should have experience in AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF)
· Should have experience/knowledge of orchestration and scheduling tools such as Airflow
· Should have a good understanding of ETL/ELT processes and ETL tools.
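As a hedged illustration of two Snowflake features named above, Zero-Copy Cloning and Time Travel, via the snowflake-connector-python package; account details and table names are placeholders:

```python
# Illustrative only: Zero-Copy Cloning and Time Travel via snowflake-connector-python.
# Account, credentials, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Zero-Copy Clone: metadata-only copy of a table, no data duplicated at creation time
cur.execute("CREATE TABLE sales_clone CLONE sales")

# Time Travel: query the table as it looked one hour ago (within the retention period)
cur.execute("SELECT COUNT(*) FROM sales AT(OFFSET => -3600)")
print(cur.fetchone())

cur.close()
conn.close()
```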

· 5+ years of IT development experience, with a minimum of 3 years of hands-on experience in Snowflake
· Strong experience in building/designing data warehouses, data lakes, and data marts end to end, with a focus on large enterprise-scale Snowflake implementations on any of the hyperscalers.
· Strong experience with building productionized data ingestion and data pipelines in Snowflake
· Good knowledge of Snowflake's architecture and features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities
· Should have good experience with Snowflake RBAC and data security.
· Strong experience with Snowflake features, including newly released Snowflake features.
· Should have good experience in Python/PySpark.
· Should have experience in AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF)
· Should have experience/knowledge of orchestration and scheduling tools such as Airflow
· Should have a good understanding of ETL/ELT processes and ETL tools.

About the Role:
We are looking for a skilled and detail-oriented Data Analyst to join our team. The ideal candidate will be responsible for collecting, analyzing, and interpreting large datasets to support data-driven decision-making across the organization. Proficiency in MongoDB and SQL is essential for this role.
Key Responsibilities:
- Collect, process, and clean structured and unstructured data from various sources.
- Analyze data using SQL queries and MongoDB aggregations to extract insights.
- Develop and maintain dashboards, reports, and visualizations to present data in a meaningful way.
- Collaborate with cross-functional teams to identify business needs and provide data-driven solutions.
- Monitor data quality and integrity, ensuring accuracy and consistency.
- Support the development of predictive models and data pipelines.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field.
- Proven experience as a Data Analyst or similar role.
- Strong proficiency in SQL for data querying and manipulation.
- Hands-on experience with MongoDB, including working with collections, documents, and aggregations.
- Knowledge of data visualization tools such as Tableau, Power BI, or similar (optional but preferred).
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Good to Have:
- Experience with Python/R for data analysis.
- Exposure to ETL tools and data warehousing concepts.
- Understanding of statistical methods and A/B testing.
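By way of illustration (the connection string, collection, and field names are hypothetical), a MongoDB aggregation of the kind mentioned above, written with PyMongo:

```python
# Hypothetical MongoDB aggregation with PyMongo; connection string, collection, and fields are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
orders = client["analytics"]["orders"]

pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {
        "_id": {"region": "$region", "month": {"$month": "$created_at"}},
        "revenue": {"$sum": "$amount"},
        "orders": {"$sum": 1},
    }},
    {"$sort": {"revenue": -1}},
]

for row in orders.aggregate(pipeline):
    print(row["_id"], row["revenue"], row["orders"])
```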

About us
Arka Energy is focused on changing the energy paradigm. Arka focuses on creating innovative renewable energy solutions for residential customers. With its custom product design and an innovative approach to marketing the product solution, Arka aims to be a leading provider of energy solutions in the residential solar segment. Arka designs and develops end-to-end renewable energy solutions with teams in Bangalore and the Bay Area.
The product is 3D simulation software used to replicate rooftops and commercial sites, place solar panels, and estimate the solar energy generated.
What are we looking for?
· As a backend developer, you will be responsible for developing solutions that enable Arka's products to be easily adopted by customers.
· Attention to detail and willingness to learn are a big part of this position.
· Commitment to problem solving and innovative design approaches is important.
Role and responsibilities
● Develop cloud-based Python Django software products
● Working closely with UX and Front-end Developers
● Participating in architectural, design and product discussions Designing and creating RESTful APIs for internal and partner consumption
● Working in an agile environment with an excellent team of engineers
● Own and maintain code, from development through to fixing bugs and issues.
● Deliver clean, reusable high-quality code
● Facilitate problem diagnosis and resolution for issues reported by Customers
● Deliver to schedule and timelines based on an Agile/Scrum-based approach
● Develop new features and ideas to make the product better and more user-centric.
● Must be able to independently write code and test major features, as well as work jointly with other team members to deliver complex changes
● Create algorithms from scratch and implement them in the software.
● Code Review, End to End Unit Testing.
● Guiding and monitoring Junior Engineers.
SKILL REQUIREMENTS
● Solid database skills in a relational database (e.g., PostgreSQL, MySQL)
● Knowledge of how to build and consume RESTful APIs
● Strong knowledge of version control (e.g., Git, SVN)
● Experience deploying Python applications into production
● Azure or Google cloud infrastructure knowledge is a plus
● Strong drive to learn new technologies
● Ability to learn new technologies quickly
● Continuous look-out for new and creative solutions to implement new features or improve old ones
● Data Structures, Algorithms, Django and Python
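To ground the Django/REST expectations above (the model, serializer, and route names here are hypothetical, not Arka's actual code), a minimal Django REST Framework sketch:

```python
# Minimal Django REST Framework sketch; model and route names are hypothetical.
# Assumes this lives inside an installed Django app (e.g. sites/models.py + sites/api.py).
from django.db import models
from rest_framework import routers, serializers, viewsets


class RooftopSite(models.Model):
    name = models.CharField(max_length=120)
    area_sq_m = models.FloatField()
    estimated_kwh = models.FloatField(default=0.0)


class RooftopSiteSerializer(serializers.ModelSerializer):
    class Meta:
        model = RooftopSite
        fields = ["id", "name", "area_sq_m", "estimated_kwh"]


class RooftopSiteViewSet(viewsets.ModelViewSet):
    """Exposes list/retrieve/create/update/delete for rooftop sites."""
    queryset = RooftopSite.objects.all()
    serializer_class = RooftopSiteSerializer


router = routers.DefaultRouter()
router.register(r"sites", RooftopSiteViewSet)   # wired into urls.py via path("api/", include(router.urls))
```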
Good To have
· Knowledge on GenAI Applications.
Key Benefits
· Competitive development environment
· Engagement into full scale systems development
· Competitive Salary
· Flexible working environment
· Equity in an early-stage start-up
· Patent Filing Bonuses
· Health Insurance for Employee + Family

Azure Data Engineer – Job Summary
Seeking an experienced Azure Data Engineer (8+ years) with expertise in designing and implementing scalable data solutions within the Azure ecosystem. The role involves managing data pipelines, storage, analytics, and optimization to support business intelligence and reporting.
Key Responsibilities:
- Develop and optimize data pipelines & ETL processes using Azure Data Factory (ADF), Databricks, PySpark.
- Manage Azure Data Lake and integrate with Azure Synapse Analytics for scalable storage and analytics.
- Design data solutions, optimize SQL queries, and implement governance best practices.
- Support BI development and reporting needs.
- Implement CI/CD pipelines for data engineering solutions.
Mandatory Skills:
- 8+ years in Azure Data Engineering
- Expertise in SQL and a programming language (preferably Python)
- Strong proficiency in ADF, Databricks, Azure Data Lake, Azure Synapse Analytics, PySpark
- Solid understanding of data warehousing concepts and ETL processes
- Experience with Apache Spark or similar tools
Preferred Skills:
- Experience with Microsoft Fabric (MS Fabric)
- Familiarity with Power BI for data visualization
- Domain expertise in Finance, Procurement, or Human Capital
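As a hedged illustration of the Databricks/PySpark work described above (the storage paths and column names are placeholders), a small aggregation job might look like this:

```python
# Illustrative PySpark transform; storage paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-aggregation").getOrCreate()

# placeholder ADLS path
sales = spark.read.parquet("abfss://raw@<storage-account>.dfs.core.windows.net/sales/")

daily_totals = (
    sales
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.countDistinct("order_id").alias("order_count"))
)

daily_totals.write.mode("overwrite").parquet(
    "abfss://curated@<storage-account>.dfs.core.windows.net/daily_sales/"   # placeholder output path
)
```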

ROLES AND RESPONSIBILITIES
As a Full Stack Developer at GoQuest Media, you will play a key role in building and maintaining web applications that deliver seamless user experiences for our global clients. From brainstorming features with the team to executing back-end logic, you will be involved in every aspect of our application development process.
You will be working with modern technologies like NodeJS, ReactJS, NextJS, and Tailwind CSS to create performant, scalable applications. Your role will span both front-end and back-end development as you build efficient and dynamic solutions to meet the company's and users' needs.
What will you be accountable for?
● End-to-End Development: Design and develop highly scalable and interactive web applications from scratch; take ownership of both front-end (ReactJS, NextJS, Tailwind CSS) and back-end (NodeJS) development processes.
● Feature Implementation: Work closely with designers and product managers to translate ideas into highly interactive and responsive interfaces.
● Maintenance and Debugging: Ensure applications are optimized for performance, scalability, and reliability; perform regular maintenance, debugging, and testing of existing apps to ensure they remain in top shape.
● Collaboration: Collaborate with cross-functional teams, including designers, product managers, and stakeholders, to deliver seamless and robust applications.
● Innovation: Stay updated with the latest trends and technologies to suggest and implement improvements in the development process.
Tech Stack
● Front-end: ReactJS, NextJS, Tailwind CSS
● Back-end: NodeJS, ExpressJS
● Database: MongoDB (preferred), MySQL
● Version Control: Git
● Tools: Webpack, Docker (optional but a plus)
Preferred Location
This role is based out of our Andheri Office, Mumbai.
Growth Opportunities for You
● Lead exciting web application projects end-to-end and own key product initiatives.
● Develop cutting-edge apps used by leading media clients around the globe.
● Gain experience working in a high-growth company in the media and tech industry.
● Potential to grow into a team lead role.
Who Should Apply?
● Individuals with a passion for coding and web technologies.
● Minimum 3-5 years of experience in full-stack development using NodeJS, ReactJS, NextJS, and Tailwind CSS.
● Strong understanding of both front-end and back-end development and ability to write efficient, reusable, and scalable code.
● Familiarity with databases like MongoDB and MySQL.
● Experience with CI/CD pipelines and cloud infrastructure (AWS, Google Cloud) is a plus.
● Team players with excellent communication skills and the ability to work in a fast-paced environment.
Who Should Not Apply?
● If you're not comfortable with both front-end and back-end development.
● If you don’t enjoy problem-solving or tackling complex development challenges.
● If working in a dynamic, evolving environment doesn’t appeal to you.

About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–6 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
# Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- At least 2 years of strong hands-on experience with Kubernetes in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.


POSITION / TITLE: Data Science Lead
Location: Offshore – Hyderabad/Bangalore/Pune
Who are we looking for?
Individuals with 8+ years of experience implementing and managing data science projects. Excellent working knowledge of traditional machine learning and LLM techniques.
The candidate must demonstrate the ability to navigate and advise on complex ML ecosystems from a model building and evaluation perspective. Experience in NLP and chatbots domains is preferred.
We acknowledge the job market is blurring the line between data roles: while software skills are necessary, the emphasis of this position is on data science skills, not on data engineering, ML engineering, or software engineering.
Responsibilities:
· Lead data science and machine learning projects, contributing to model development, optimization and evaluation.
· Perform data cleaning, feature engineering, and exploratory data analysis.
· Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.
· Collaborate with other DS and engineers to deliver projects.
Technical Skills – Must have:
· Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
· Proficiency with Python for data analysis, supervised & unsupervised learning ML tasks.
· Ability to translate complex machine learning problem statements into specific deliverables and requirements.
· Should have worked with major cloud platforms such as AWS, Azure or GCP.
· Working knowledge of SQL and no-SQL databases.
· Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
· Keep abreast of new tools, algorithms and techniques in machine learning, and work to implement them in the organization.
· Strong understanding of evaluation and monitoring metrics for machine learning projects.
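For instance, a generic supervised NLP baseline with the kind of evaluation metrics mentioned above might look like the sketch below; the public 20 Newsgroups dataset stands in for project data, and this is not the team's actual workflow:

```python
# Generic text-classification baseline with evaluation; public 20 Newsgroups data stands in for project data.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

data = fetch_20newsgroups(subset="all", categories=["sci.space", "rec.autos"])
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),   # bag-of-n-grams features
    ("clf", LogisticRegression(max_iter=1000)),                 # linear baseline classifier
]).fit(X_train, y_train)

# Evaluation/monitoring metrics: precision, recall, F1 per class
print(classification_report(y_test, model.predict(X_test), target_names=data.target_names))
```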
Technical Skills – Good to have:
· Track record of getting ML models into production
· Experience building chatbots.
· Experience with closed and open source LLMs.
· Experience with frameworks and technologies like scikit-learn, BERT, langchain, autogen…
· Certifications or courses in data science.
Education:
· Bachelor's, Master's, or PhD degree in Computer Science, Engineering, Data Science, or a related field.
Process Skills:
· Understanding of Agile and Scrum methodologies.
· Ability to follow SDLC processes and contribute to technical documentation.
Behavioral Skills :
· Self-motivated and capable of working independently with minimal management supervision.
· Well-developed design, analytical & problem-solving skills
· Excellent communication and interpersonal skills.
· Excellent team player, able to work with virtual teams in several time zones.

Teknobuilt is an innovative construction technology company accelerating a Digital and AI platform that supports all aspects of program management and execution by automating workflows, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, UK and S. Korea, and we are at the frontiers of solving key challenges in the built environment and digital health, safety and quality.
Teknobuilt's vision is helping the world build better – safely, smartly and sustainably. We are on a mission to modernize construction by bringing our Digitally Integrated Project Execution System, PACE, and expert services to midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in Real Time Project Execution, Health and Safety, Quality and Field management for greater visibility and cost savings. PACE enables digital workflows, remote working, and AI-based analytics to bring speed, flow and surety to project delivery. Our platform has received recognition globally for innovation, and we are experiencing a period of significant growth for our solutions.
Job Responsibilities
As a Quality Analyst Engineer, you will be expected to:
· Thoroughly analyze project requirements, design specifications, and user stories to understand the scope and objectives.
· Arrange, set up, and configure necessary test environments for effective test case execution.
· Participate in and conduct review meetings to discuss test plans, test cases, and defect statuses.
· Execute manual test cases with precision, analyze results, and identify deviations from expected behavior.
· Accurately track, log, prioritize, and manage defects through their lifecycle, ensuring clear communication with developers until resolution.
· Maintain continuous and clear communication with the Test Manager and development team regarding testing progress, roadblocks, and critical findings.
· Develop, maintain, and manage comprehensive test documentation, including:
o Detailed Test Plans
o Well-structured Test Cases for various testing processes
o Concise Summary Reports on test execution and defect status
o Thorough Test Data preparation for test cases
o "Lessons Learned" documents based on testing inputs from previous projects
o "Suggestion Documents" aimed at improving overall software quality
o Clearly defined Test Scenarios
· Clearly report identified bugs to developers with precise steps to reproduce, expected results, and actual results, facilitating efficient defect resolution

Senior Generative AI Engineer
Job Id: QX016
About Us:
The QX impact was launched with a mission to make AI accessible and affordable and to deliver AI products/solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to face challenges in better understanding their customers, and may even lose them.
Secondly, without insights, businesses won't be able to deliver differentiated products/services; and finally, without insights, businesses can't achieve the new level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding markets, and digitalizing.
Job Summary:
We seek a highly experienced Senior Generative AI Engineer to focus on the development, implementation, and engineering of Gen AI applications using the latest LLMs and frameworks. This role requires hands-on expertise in Python programming, cloud platforms, and advanced AI techniques, along with additional skills in front-end technologies, data modernization, and API integration. The Senior Gen AI Engineer will be responsible for building applications from the ground up, ensuring robust, scalable, and efficient solutions.
Responsibilities:
· Build GenAI solutions such as virtual assistant, data augmentation, automated insights and predictive analytics
· Design, develop, and fine-tune generative AI models (GANs, VAEs, Transformers).
· Handle data preprocessing, augmentation, and synthetic data generation.
· Work with NLP, text generation, and contextual comprehension tasks.
· Develop backend services using Python or .NET for LLM-powered applications.
· Build and deploy AI applications on cloud platforms (Azure, AWS, GCP).
· Optimize AI pipelines and ensure scalability.
· Stay updated with advancements in AI and ML.
Skills & Requirements:
- Strong knowledge of machine learning, deep learning, and NLP.
- Proficiency in Python, TensorFlow, PyTorch, and Keras.
- Experience with cloud services, containerization (Docker, Kubernetes), and AI model deployment.
- Understanding of LLMs, embeddings, and retrieval-augmented generation (RAG).
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or a related field.
- 6+ years of experience in Gen AI, or related roles.
- Experience with AI/ML model integration into data pipelines.
Core Competencies for Generative AI Engineers:
1. Programming & Software Development
a. Python – Proficiency in writing efficient and scalable code, with strong knowledge of NumPy, Pandas, TensorFlow, PyTorch and Scikit-learn.
b. LLM Frameworks – Experience with Hugging Face Transformers, LangChain, OpenAI API, and similar tools for building and deploying large language models.
c. API integration such as FastAPI, Flask, RESTful API, WebSockets or Django.
d. Knowledge of Version Control, containerization, CI/CD Pipelines and Unit Testing.
2. Vector Database & Cloud AI Solutions
a. Pinecone, FAISS, ChromaDB, Neo4j
b. Azure Redis/ Cognitive Search
c. Azure OpenAI Service
d. Azure ML Studio Models
e. AWS (Relevant Services)
3. Data Engineering & Processing
- Handling large-scale structured & unstructured datasets.
- Proficiency in SQL, NoSQL (PostgreSQL, MongoDB), Spark, and Hadoop.
- Feature engineering and data augmentation techniques.
4. NLP & Computer Vision
- NLP: Tokenization, embeddings (Word2Vec, BERT, T5, LLaMA).
- CV: Image generation using GANs, VAEs, Stable Diffusion.
- Document Embedding – Experience with vector databases (FAISS, ChromaDB, Pinecone) and embedding models (BGE, OpenAI, SentenceTransformers).
- Text Summarization – Knowledge of extractive and abstractive summarization techniques using models like T5, BART, and Pegasus.
- Named Entity Recognition (NER) – Experience in fine-tuning NER models and using pre-trained models from SpaCy, NLTK, or Hugging Face.
- Document Parsing & Classification – Hands-on experience with OCR (Tesseract, Azure Form Recognizer), NLP-based document classifiers, and tools like LayoutLM, PDFMiner.
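A hedged sketch of the document-embedding retrieval pattern referenced above, using sentence-transformers with FAISS; the model choice and toy corpus are examples, not project decisions:

```python
# Illustrative embedding + FAISS retrieval; model name and corpus are examples only.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = ["Invoice from ACME for March.", "Purchase order #1234.", "NDA between two parties."]
model = SentenceTransformer("all-MiniLM-L6-v2")

embeddings = model.encode(corpus, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(embeddings.shape[1])   # inner product equals cosine after normalization
index.add(embeddings)

query = model.encode(["find the purchase order"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)             # top-2 nearest documents
print([corpus[i] for i in ids[0]], scores[0])
```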
5. Model Deployment & Optimization
- Model compression (quantization, pruning, distillation).
- Deployment using Azure CI/CD, ONNX, TensorRT, OpenVINO on AWS, GCP.
- Model monitoring (MLflow, Weights & Biases) and automated workflows (Azure Pipeline).
- API integration with front-end applications.
6. AI Ethics & Responsible AI
- Bias detection, interpretability (SHAP, LIME), and security (adversarial attacks).
7. Mathematics & Statistics
- Linear Algebra, Probability, and Optimization (Gradient Descent, Regularization, etc.).
8. Machine Learning & Deep Learning
a. Expertise in supervised, unsupervised, and reinforcement learning.
b. Proficiency in TensorFlow, PyTorch, and JAX.
c. Experience with Transformers, GANs, VAEs, Diffusion Models, and LLMs (GPT, BERT, T5).
Personal Attributes:
- Strong problem-solving skills with a passion for data architecture.
- Excellent communication skills with the ability to explain complex data concepts to non-technical stakeholders.
- Highly collaborative, capable of working with cross-functional teams.
- Ability to thrive in a fast-paced, agile environment while managing multiple priorities effectively.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
Ready to make an impact? Apply today and become part of the QX impact team!

Responsibilities:
- Develop, maintain and manage advanced reporting, analytics, dashboards in Tableau or PowerBI
- Ability to identify the right data visualization based on business requirements.
- Perform and document data analysis, data validation and data mapping design
- Review and improve existing reports, dashboards, and analytics systems.
- Help optimise the performance of Tableau reports/dashboards on the server
- Develop presentations and documents that will have an impact
- Communicate complex topics to the team through both written and oral communications
- Ensure that project deliverables meet business requirements and that projects are completed within the assigned timelines

Key Responsibilities:
· Lead the design and implementation of scalable infrastructure using IaC principles.
· Develop and manage configuration management tools primarily with Chef.
· Write and maintain automation scripts in Python to streamline infrastructure tasks.
· Build, manage, and version infrastructure using Terraform.
· Collaborate with cloud architects and DevOps teams to ensure highly available, secure, and scalable systems.
· Provide guidance and mentorship to junior engineers.
· Monitor infrastructure performance and provide optimization recommendations.
· Ensure compliance with best practices for security, governance, and automation.
· Maintain and improve CI/CD pipelines with infrastructure integration.
· Support incident management, troubleshooting, and root cause analysis for infrastructure issues.
Required Skills & Experience:
· Strong hands-on experience in:
o Chef (Cookbooks, Recipes, Automation)
o Python (Scripting, automation tasks, REST APIs)
o Terraform (Modules, state management, deployments)
· Experience in AWS services (EC2, VPC, IAM, S3, etc.)
· Familiarity with Windows administration and automation.
· Solid understanding of CI/CD processes, infrastructure lifecycle, and Git-based workflow
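As one hedged example of the kind of Python automation referenced above (boto3 against AWS EC2; the region and tag policy are hypothetical):

```python
# Illustrative Python/boto3 automation: report running EC2 instances missing an Environment tag.
# Region and tag policy are examples only.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")   # region is an example

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Environment" not in tags:
                print(f'{instance["InstanceId"]} is running without an Environment tag')
```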

A 5K-headcount IT company in digital transformation services.

Immediate Hiring: L2/L3 Network Protocol Test Engineers (Python Automation)
📍 Locations: Bangalore | Chennai | Hyderabad
🧑💻 Experience: 4+ Years
👨💻 Open Position:
L2/L3 Network Protocol Test Engineer
(Strong Python automation skills required)
✅ Requirements:
In-depth knowledge of L2/L3 protocols: Ethernet, VLAN, xSTP, OSPF, BGP, LACP
Hands-on experience with Python scripting for test automation
Experience with tools like IXIA, Spirent, or similar traffic generators
Strong skills in test planning, execution, and bug tracking
Excellent communication and team collaboration skills
🌟 Nice to Have:
Exposure to MPLS, EVPN, or other advanced networking protocols
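A rough sketch of the kind of Python test automation implied above, using pytest with Netmiko to check OSPF neighbor state; the device details, platform, and assertion are entirely hypothetical:

```python
# Hypothetical pytest + Netmiko check; device credentials, platform, and expectations are placeholders.
import pytest
from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_ios",          # placeholder platform
    "host": "10.0.0.1",
    "username": "lab",
    "password": "lab",                   # real suites would pull secrets from a vault
}


@pytest.fixture
def dut():
    conn = ConnectHandler(**DEVICE)
    yield conn
    conn.disconnect()


def test_ospf_neighbors_full(dut):
    output = dut.send_command("show ip ospf neighbor")
    # Example assertion: at least one neighbor reported in FULL state
    assert "FULL" in output
```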

We are looking for a talented Frontend Developer to create modern, responsive, and interactive web applications using the latest technologies.
Key Responsibilities:
- Develop user interfaces with React.js, Redux, JavaScript (ES6+), HTML5, CSS3, and Tailwind CSS.
- Optimize applications for speed, scalability, and cross-browser compatibility.
- Integrate REST APIs and ensure seamless UI/UX.
- Manage deployments using Vercel and Netlify.
- Collaborate with backend teams and use Git/GitHub for version control.
Required Skills:
- React.js, Redux, JavaScript (ES6+)
- HTML5, CSS3, Tailwind CSS
- REST API integration
- Git, GitHub, Vercel, Netlify
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- Strong portfolio or GitHub profile preferred.

Job Description: AI/ML Specialist
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as an AI/ML Specialist.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
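To ground the Flask-plus-ML-model expectations above in something concrete (a generic sketch, not the company's stack; the model and route are illustrative):

```python
# Illustrative Flask endpoint serving a Hugging Face pipeline; model choice and route are examples.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model on first use


@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json(force=True).get("text", "")
    result = classifier(text)[0]              # e.g. {"label": "POSITIVE", "score": 0.99}
    return jsonify({"label": result["label"], "score": float(result["score"])})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```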

Job Overview:
We are looking for a skilled and motivated Jr. Programmer Analyst with 2 years of hands-on experience in Python development and a strong understanding of software development principles. The ideal candidate should have experience with Odoo ORM, PostgreSQL, and API integration. If you have a passion for writing clean, optimized code and are excited about working in a product-based environment, we would love to meet you.
Key Responsibilities:
Develop, test, and maintain applications using Python (Pandas, NumPy, psycopg2).
Implement multi-threading and multi-processing where required.
Work on Odoo ORM, customizing and optimizing the application architecture.
Integrate third-party APIs and ensure smooth data flow between systems.
Optimize code for performance and scalability.
Collaborate with cross-functional teams using Agile methodologies.
Write efficient SQL queries and manage PostgreSQL databases.
Utilize Git for version control and contribute to CI/CD processes.
Work in a Linux environment for software development and deployment.
Support the team in product development from concept to deployment.
Technical Requirements (Must Have):
Strong proficiency in Python 3, especially:
Pandas, NumPy, Multi-threading, Multi-processing, psycopg2, API Integration
Code optimization techniques.
Experience with Odoo ORM and understanding of its architecture
Experience in FastAPI / Flask.
Proficiency in PostgreSQL and writing complex SQL queries
Familiarity with Git, HTML, CSS, and JavaScript.
Comfortable working on Linux OS.
Experience with Agile software development methodology.
Exposure to product development lifecycle.
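As an illustration of the Python/psycopg2/Pandas/PostgreSQL stack listed above (the DSN, table, and query are placeholders):

```python
# Illustrative psycopg2 + pandas usage; DSN, table, and query are placeholders.
import pandas as pd
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="app", user="app", password="***")  # placeholder DSN

query = """
    SELECT symbol, date_trunc('day', traded_at) AS day, AVG(price) AS avg_price
    FROM trades
    GROUP BY symbol, day
    ORDER BY day
"""

df = pd.read_sql(query, conn)                 # load the aggregate into a DataFrame
# 7-day rolling average per symbol, as a simple example of further analysis in Pandas
df["avg_price_7d"] = df.groupby("symbol")["avg_price"].transform(lambda s: s.rolling(7).mean())
print(df.tail())

conn.close()
```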
Good to Have:
Basic knowledge of Docker.
Advanced proficiency with Linux.
Understanding of stock and crypto markets, especially candlestick patterns.
Perks & Benefits:
Opportunity to work in a fast-growing product environment.
Collaborative and supportive team culture.
Learning and development opportunities.
If you are passionate about technology and want to grow in a dynamic product-based company, we encourage you to apply!

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure, spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


- Strong AI/ML OR Software Developer Profile
- Mandatory (Experience 1) - Must have 3+ YOE in Core Software Development (SDLC)
- Mandatory (Experience 2) - Must have 2+ years of experience in AI/ML, preferably in the conversational AI domain (speech to text, text to speech, speech emotion recognition) or agentic AI systems.
- Mandatory (Experience 3) - Must have hands-on experience in fine-tuning LLMs/SLMs, model optimization (quantization, distillation) and RAG
- Mandatory (Experience 4) - Hands-on programming experience in Python, TensorFlow, PyTorch and model APIs (Hugging Face, LangChain, OpenAI, etc.)

We’re offering an exciting short-term in-person internship in Delhi to help build a professional AI-Robotics Lab as part of our AI-led R&D platform, bodh scientific™, at SarthhakAI.
As an intern, you’ll get hands-on experience working with state-of-the-art robotics hardware, including Yanshee humanoid bots, Hiwonder robotic arms, Arduino-based sensor systems, and more. You will contribute to integrating robotics into a larger AI platform designed for scientific innovation.
We’re looking for:
- Bright engineering students, robotics enthusiasts, or even high-school hobbyists
- Demonstrated experience with DIY robotics or electronics
- Basic familiarity with Python, Raspberry Pi, Arduino
- A strong learning attitude and curiosity for building real-world systems
📍 Location: Delhi (In-person only)


Genspark is hiring professionals for C Development for their Premium Client
Work Location- Chennai
Entry Criteria
Graduate from any Engineering background / BSc / MSc / MCA with specialization in Computer / Electronics / IT
Minimum 1 year experience in Industry
Working Knowledge of C/Embedded/C++/DSA
Programming Aptitude (Any Language)
Basic understanding of programming constructs: variables, loops, conditionals, functions
Logical thinking and algorithmic approach
Computer Science Fundamentals:
Data structures basics: arrays, stacks, queues, linked lists
Operating System basics: what is a process/thread, memory, file system, etc.
Basic understanding of compilation, runtime, networking and sockets etc.
Problem Solving & Logical Reasoning
Ability to trace logic, find errors, and reason through pseudocode
Analytical and debugging capabilities
Learning Attitude & Communication
Demonstrated interest in low-level or systems programming (even if no experience)
Willingness to learn C and work close to the OS level
Clarity of thought and ability to explain what they do know
Soft Skills :
Able to explain and communicate the thoughts clearly in English
Confident in solving new problems independently or with guidance
Willingness to take feedback and iterate
Evaluation Process
Candidates will be assigned an online test, followed by Technical Screening.
Shortlisted Candidates will have to appear for a F2F Interview with the Client, Chennai.

Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python.
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
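For a flavour of the Python-driven deployment automation this role describes, here is a minimal hedged sketch that shells out to ansible-playbook and kubectl; the playbook, inventory, deployment name, and namespace are placeholders, not a prescribed workflow.
```python
"""Minimal deployment-automation sketch (illustrative only).

Assumes the ansible and kubectl CLIs are installed and configured;
playbook path, inventory, deployment name, and namespace are placeholders.
"""
import subprocess
import sys


def run(cmd: list[str]) -> None:
    """Run a shell command and fail fast on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def deploy(playbook: str, deployment: str, namespace: str) -> None:
    # Apply configuration/infrastructure changes via Ansible.
    run(["ansible-playbook", playbook, "-i", "inventory.ini"])
    # Wait for the Kubernetes rollout to finish before declaring success.
    run(["kubectl", "rollout", "status", f"deployment/{deployment}",
         "-n", namespace, "--timeout=300s"])


if __name__ == "__main__":
    try:
        deploy("deploy.yml", "web-api", "staging")
    except subprocess.CalledProcessError as exc:
        sys.exit(f"Deployment step failed: {exc}")
```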

We are looking for a talented and enthusiastic Full Stack Web Developer to join our dynamic development team. The ideal candidate will have a strong grasp of both front-end and back-end technologies. This is a trainer-cum-developer role.


Role Overview
We are seeking a skilled Odoo Consultant with Python development expertise to support the design, development, and implementation of Odoo-based business solutions for our clients. The consultant will work on module customization, backend logic, API integrations, and configuration of business workflows using the Odoo framework.
Key Responsibilities
● Customize and extend Odoo modules based on client requirements
● Develop backend logic using Python and the Odoo ORM
● Configure business workflows, access rights, and approval processes
● Create and update views using XML and QWeb for reports and screens
● Integrate third-party systems using Odoo APIs (REST, XML-RPC)
● Participate in client discussions and translate business needs into technical solutions
● Support testing, deployment, and user training as required
Required Skills
● Strong knowledge of Python and Odoo framework (v12 and above)
● Experience working with Odoo models, workflows, and security rules
● Good understanding of XML, QWeb, and PostgreSQL
● Experience in developing or integrating APIs
● Familiarity with Git and basic Linux server operations
● Good communication and documentation skills
Preferred Qualifications
● Experience in implementing Odoo for industries such as manufacturing, retail, financial services, or real estate
● Ability to work independently and manage project timelines
● Bachelor’s degree in Computer Science, Engineering, or related field
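For illustration, a minimal sketch of the kind of Odoo module customisation described above: a custom model on the Odoo ORM with a computed field and a simple approval action. The model and field names are hypothetical.
```python
# Minimal Odoo ORM sketch (illustrative only; model and field names are hypothetical).
# In a real module this lives in a models/ package referenced by __manifest__.py.
from odoo import api, fields, models


class ServiceRequest(models.Model):
    _name = "x_service.request"          # hypothetical technical name
    _description = "Customer Service Request"

    name = fields.Char(required=True)
    partner_id = fields.Many2one("res.partner", string="Customer")
    line_ids = fields.One2many("x_service.request.line", "request_id")
    amount_total = fields.Float(compute="_compute_amount_total", store=True)
    state = fields.Selection(
        [("draft", "Draft"), ("approved", "Approved")], default="draft"
    )

    @api.depends("line_ids.price")
    def _compute_amount_total(self):
        for request in self:
            request.amount_total = sum(request.line_ids.mapped("price"))

    def action_approve(self):
        # Simple workflow hook; access rights and record rules would restrict who can call it.
        self.write({"state": "approved"})


class ServiceRequestLine(models.Model):
    _name = "x_service.request.line"
    _description = "Service Request Line"

    request_id = fields.Many2one("x_service.request", required=True)
    price = fields.Float()
```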


Role Overview
We are seeking a passionate and skilled Machine Learning Engineer to join our team. The ideal candidate will have a strong background in machine learning, data science, and software engineering. As a Machine Learning Engineer, you will work closely with our clients and internal teams to develop, implement, and maintain machine learning models that solve real-world problems.
Must Have Skills
• 2+ years of experience in computer vision and NLP projects.
• 2+ years of experience in machine learning, Gen AI, data science, or a related field.
• Strong experience in Python programming
• Understanding of data structures, data modeling, and software architecture
• Deep knowledge of math, probability, statistics, and algorithms
• Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
• Excellent communication skills
• Ability to work in a team
• Outstanding analytical and problem-solving skills
• BSc in Computer Science, Mathematics or similar field; Master’s degree is a plus
Role and Responsibilities
• Study and transform data science prototypes
• Design machine learning systems
• Research and implement appropriate ML algorithms and tools
• Develop machine learning applications according to requirements
• Select appropriate datasets and data representation methods
• Run machine learning tests and experiments
• Perform statistical analysis and fine-tuning using test results
• Train and retrain systems when necessary
• Extend existing ML libraries and frameworks
• Keep abreast of developments in the field
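As a rough illustration of the train/test/evaluate loop listed in the responsibilities above, here is a minimal scikit-learn sketch; the dataset and model choice are arbitrary placeholders.
```python
# Minimal train/evaluate sketch (illustrative only; dataset and model are placeholders).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set so the evaluation reflects generalisation, not memorisation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Statistical summary of performance on unseen data.
print(classification_report(y_test, model.predict(X_test)))
```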

Job Description:
• Experience in Python (backend only), data structures, OOP, algorithms, Django, NumPy, etc.
• Notice/joining period of not more than 30 days.
• Candidates from premium institutes only (Tier 1 and Tier 2).
• Hybrid mode of working.
• Good understanding of writing unit tests using PyTest.
• Good understanding of parsing XML and handling files using Python.
• Good understanding of databases/SQL, procedures, and query tuning.
• Service design concepts, OO and functional development concepts.
• Agile development methodologies.
• Strong oral and written communication skills.
• Excellent interpersonal skills and a professional approach.
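Since the description above calls out PyTest and XML parsing, here is a small hedged sketch of a parser plus unit tests; the XML layout and function name are invented for illustration.
```python
# Parser and its tests in one file for brevity (illustrative only;
# the XML layout and function name are hypothetical).
import xml.etree.ElementTree as ET

import pytest


def parse_orders(xml_text: str) -> list[dict]:
    """Extract (id, amount) pairs from a simple <orders> document."""
    root = ET.fromstring(xml_text)
    return [
        {"id": order.get("id"), "amount": float(order.findtext("amount", "0"))}
        for order in root.findall("order")
    ]


def test_parse_orders_returns_ids_and_amounts():
    xml_text = """
    <orders>
        <order id="A1"><amount>10.5</amount></order>
        <order id="A2"><amount>3</amount></order>
    </orders>
    """
    assert parse_orders(xml_text) == [
        {"id": "A1", "amount": 10.5},
        {"id": "A2", "amount": 3.0},
    ]


def test_parse_orders_rejects_malformed_xml():
    # Mismatched tags should surface as a parse error, not silent bad data.
    with pytest.raises(ET.ParseError):
        parse_orders("<orders><order></orders>")
```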

About Potentiam
Potentiam helps SME companies build world-class offshore teams. Our model: our locations, your dedicated staff, under your control. Potentiam has offices in Iasi (Romania), Bangalore, and Cape Town, home to large liquid pools of offshore talent working for international companies. Potentiam's management team has over 15 years' experience in building offshore teams, with specialist functional expertise to support the transition offshore in technology, finance, operations, engineering, digital marketing, and analytics. For decades, corporations' scale has enabled them to benefit from the cost and skills advantages of offshore operations. Now SME companies can enjoy a similar benefit through Potentiam without any upfront investment.
Location : Bangalore ( Hybrid)
Experience - 6+ Years
Professional Experience:
- Experience using a Python backend web framework (like Django, Flask or FastAPI)
- In particular, experience building performant and reliable APIs and integrations
- Competency using SQL and ORMs
- Some experience with frontend web development using a JavaScript framework (such as Vue.js or React) would be a bonus
- Understanding of some of the following: Django Rest Framework, PostgreSQL, Celery, Docker, nginx, AWS
Benefits and Perks
- Health Insurance
- Referral Bonus
- Performance Bonus
- Flexible Working options
Job Types: Full-time, Permanent

● Proven experience in training, evaluating and deploying machine learning models
● Solid understanding of data science and machine learning concepts
● Experience with some machine learning / data engineering tech in Python (such as numpy, pytorch, pandas/polars, airflow, etc.)
● Experience developing data products using large language models, prompt engineering, and model evaluation.
● Experience with web services and programming (such as Python, docker, databases etc.)
● Understanding of some of the following: FastAPI, PostgreSQL, Celery, Docker, AWS, Modal, git, continuous integration.

About Role
We are seeking a skilled Backend Engineer with 2+ years of experience to join our dynamic team, focusing on building scalable web applications using Python frameworks (Django/FastAPI) and cloud technologies. You'll be instrumental in developing and maintaining our cloud-native backend services.
Responsibilities:
- Design and develop scalable backend services using Django and FastAPI
- Create and maintain RESTful APIs
- Implement efficient database schemas and optimize queries
- Implement containerisation using Docker and container orchestration
- Design and implement cloud-native solutions using microservices architecture
- Participate in technical design discussions, code reviews and maintain coding standards
- Document technical specifications and APIs
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
Requirements:
- Experience with Django and/or FastAPI (2+ years)
- Proficiency in SQL and ORM frameworks
- Docker containerisation and orchestration
- Proficiency in shell scripting (Bash/PowerShell)
- Understanding of microservices architecture
- Experience building serverless backends
- Knowledge of deployment and debugging on cloud platforms (AWS/Azure)
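To make the Django/FastAPI and REST expectations above concrete, here is a minimal hedged FastAPI sketch with a Pydantic request model; the resource, fields, and in-memory store are placeholders for a real ORM-backed service.
```python
# Minimal FastAPI sketch (illustrative only; resource and fields are placeholders).
# Run with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders API")

_orders: dict[int, dict] = {}  # stand-in for a real database/ORM layer


class OrderIn(BaseModel):
    customer: str
    amount: float


@app.post("/orders", status_code=201)
def create_order(order: OrderIn) -> dict:
    order_id = len(_orders) + 1
    _orders[order_id] = {"id": order_id, **order.model_dump()}  # Pydantic v2
    return _orders[order_id]


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    if order_id not in _orders:
        raise HTTPException(status_code=404, detail="Order not found")
    return _orders[order_id]
```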


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.
Role Overview
· We are seeking a passionate and experienced Full Stack Developer skilled in MERN stack and Python (Django/Flask) to build and scale high-impact features across our web and mobile platforms. You will collaborate with cross-functional teams to deliver seamless user experiences and robust backend systems.
Key Responsibilities
· Design, develop, and maintain scalable web applications using MySQL/Postgres, MongoDB, Express.js, React.js, and Node.js
· Build and manage RESTful APIs and microservices using Python (Django/Flask/FastAPI)
· Integrate with third-party platforms like OpenAI, WhatsApp APIs (Whapi), Interakt, and Zoho
· Optimize performance across the frontend and backend
· Collaborate with product managers, designers, and other developers to deliver high-quality features
· Ensure security, scalability, and maintainability of code
· Write clean, reusable, and well-documented code
· Contribute to DevOps, CI/CD, and server deployment workflows (AWS/Lightsail)
· Participate in code reviews and mentor junior developers if needed
Required Skills
· Strong experience with MERN Stack: MongoDB, Express.js, React.js, Node.js
· Proficiency in Python and web frameworks like Django, Flask, or FastAPI
· Experience working with REST APIs, JWT/Auth, and WebSockets
· Good understanding of frontend design systems, state management (Redux/Context), and responsive UI
· Familiarity with database design and queries (MongoDB, PostgreSQL/MySQL)
· Experience with Git, Docker, and deployment pipelines
· Comfortable working in Linux-based environments (e.g., Ubuntu on AWS)
Bonus Skills
· Experience with AI integrations (e.g., OpenAI, LangChain)
· Familiarity with WooCommerce, WordPress APIs
· Experience in chatbot development or WhatsApp API integration
Who You Are
· You are a problem-solver with a product-first mindset
· You care about user experience and performance
· You enjoy working in a fast-paced, collaborative environment
· You have a growth mindset and are open to learning new technologies
Why Join Us?
· Work at the intersection of healthcare, community, and technology
· Directly impact the lives of women across India and beyond
· Flexible work environment and collaborative team
· Opportunity to grow with a purpose-driven startup


Role Overview
We’re looking for a Data Analyst who is excited to work at the intersection of data, technology, and women’s wellness. You'll be instrumental in helping us understand user behaviour, community engagement, campaign performance, and product usage across platforms — including app, web, and WhatsApp.
You’ll also have opportunities to collaborate on AI-powered features such as chatbots and personalized recommendations. Experience with GenAI or NLP is a plus but not a requirement.
Key Responsibilities
· Clean, transform, and analyse data from multiple sources (SQL databases, CSVs, APIs).
· Build dashboards and reports to track KPIs, user behaviour, and marketing performance.
· Collaborate with product, marketing, and customer teams to uncover actionable insights.
· Support experiments, A/B testing, and cohort analysis to drive growth and retention.
· Assist in documentation and communication of findings to technical and non-technical teams.
· Work with the data team to enhance personalization and AI features (optional).
Required Qualifications
· Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
· 2 – 4 years of experience in data analysis or business intelligence.
· Strong hands-on experience with SQL and Python (pandas, NumPy, matplotlib).
· Familiarity with data visualization tools (Streamlit, Tableau, Metabase, Power BI, etc.)
· Ability to translate complex data into simple visual stories and clear recommendations.
· Strong attention to detail and a mindset for experimentation.
Preferred (Not Mandatory)
· Exposure to GenAI, LLMs (e.g., OpenAI, HuggingFace), or NLP concepts.
· Experience working with healthcare, wellness, or e-commerce datasets.
· Familiarity with REST APIs, JSON structures, or chatbot systems.
· Interest in building tools that impact women’s health and wellness.
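As a small illustration of the day-to-day analysis described above (data in from SQL/CSV, KPIs and cohorts out), here is a hedged pandas sketch; the file name and column names are assumptions.
```python
# Minimal KPI/cohort sketch (illustrative only; file and column names are assumptions).
import pandas as pd

# One row per user event, exported from SQL or an API.
events = pd.read_csv("events.csv", parse_dates=["event_time", "signup_date"])

# Monthly active users as a simple engagement KPI.
events["month"] = events["event_time"].dt.to_period("M")
mau = events.groupby("month")["user_id"].nunique()

# Week-1 retention by signup cohort: share of users active 7-13 days after signup.
events["cohort"] = events["signup_date"].dt.to_period("M")
events["days_since_signup"] = (events["event_time"] - events["signup_date"]).dt.days
events["is_week1"] = events["days_since_signup"].between(7, 13)

week1_retention = (
    events.groupby(["cohort", "user_id"])["is_week1"].any()  # retained per user
    .groupby(level="cohort").mean()                          # share per cohort
)

print(mau)
print(week1_retention.rename("week1_retention"))
```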
Why Join Us?
· Be part of a high-growth startup tackling a real need in women’s healthcare.
· Work with a passionate, purpose-driven team.
· Opportunity to grow into GenAI/ML-focused roles as we scale.
· Competitive salary and career progression
Best Regards,
Indrani Dutta
MIROR THERAPEUTICS PRIVATE LIMITED

Job Summary:
The Lead IaC Engineer will design, implement, automate, and maintain infrastructure across on-premises and cloud environments. This role requires strong hands-on expertise in Chef, Python, and Terraform, along with some AWS and Windows administration knowledge.
8-12 years of experience
Primary Skills – Chef, Python, and Terraform
Secondary – AWS & Windows admin (Cloud is not mandatory)

About the role
Meltwater’s collaborative Security Team needs a passionate Security Engineer to continue to advance Meltwater’s security. You will work with a group of fun-loving people who are genuinely excited and passionate about security, so there will be more laughs than facepalms! If you believe that improving security is about constantly moving technology forward to be more secure, and shifting security tools and checks earlier in the development lifecycle, then you’ll feel at home on Meltwater’s Security Team!
At Meltwater we want to ensure that we can have autonomous, empowered and highly efficient teams. Our Security Team charges head on into the challenge of ensuring our teams can maintain their autonomy without compromising the security of our systems, services and data. Through enablement and collaboration with teams, Security Engineers ensure that our development and infrastructure practices have security defined, integrated and implemented in a common-sense manner that reduces risk for our business. Security Engineers define best practices, build tools, implement security checks and controls together with the broader Engineering and IT teams to ensure that our employees and our customers' data stays safe.
As part of this, we leverage AWS as a key component of our cloud infrastructure. Security Engineers play a critical role in securing and optimizing AWS environments by implementing best practices, automating security controls, and collaborating with teams to ensure scalability, resilience, and compliance with industry standards.
Responsibilities
- In this role, you will be designing and implementing security functions ranging from checks on IaC (Infrastructure as Code) to SAST/DAST scanners in our CI/CD pipelines.
- You will be collaborating closely with almost every part of the Meltwater organization and help create security impact across all teams with strong support from the business.
- Collaborate closely with teams to help identify and implement frictionless security controls throughout the software development lifecycle
- Propose and implement solutions to enhance the overall cloud infrastructure and toolset.
- Perform ongoing security testing, including static (SAST), dynamic (DAST), and penetration testing, along with code reviews, vulnerability assessments, and regular security audits to identify risks, improve security, and develop mitigation strategies.
- Educate and share knowledge around secure coding practices
- Identify applicable industry best practices and consult with development teams on methods to continuously improve the risk posture.
- Build applications that improve our security posture and monitoring/alerting capabilities
- Implement and manage security technologies including firewalls, intrusion detection/prevention systems (IDS/IPS), endpoint protection, and security information and event management (SIEM) tools.
- Conduct vulnerability assessments, penetration testing, and regular security audits to identify risks and develop mitigation strategies.
- Monitor and respond to security incidents and alerts, performing root cause analysis and incident handling.
- Participate in incident response and disaster recovery planning, testing, and documentation.
- Manage identity and access management (IAM) solutions to enforce least privilege and role-based access controls (RBAC).
- Assist in the development of automated security workflows using scripting (Python, Bash, or similar).
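As a loose illustration of the automated security workflows mentioned in the last responsibility, here is a minimal Python sketch of a secret-scanning check that could run as a CI/CD gate; the regex patterns and exit behaviour are simplified assumptions, not Meltwater's actual tooling.
```python
# Minimal CI secret-scan sketch (illustrative only; patterns are simplified assumptions).
import pathlib
import re
import sys

# Very rough patterns for AWS access key IDs and private key headers.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan(root: str = ".") -> list[str]:
    findings = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {name}")
    return findings


if __name__ == "__main__":
    hits = scan()
    for hit in hits:
        print(hit)
    # A non-zero exit code fails the pipeline step so the issue is fixed before merge.
    sys.exit(1 if hits else 0)
```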
Skills and background
- Strong collaboration skills with experience working cross functionally with a diverse group of stakeholders
- Strong communication skills with the ability to provide technical guidance to both technical and non-technical audiences
- Experience in implementing security controls early in the software development life cycle
- Knowledge of industry-accepted security best practices/standards/policies such as NIST, OWASP, CIS, MITRE ATT&CK
- Software developer experience in one or more of the following languages: JavaScript, Java, Kotlin or Python
- Experience in at least one public cloud provider, preferably AWS, with experience in security, infrastructure, and automation.
- Hands-on experience with SIEM platforms such as Splunk, QRadar, or similar.
- Proficiency in Linux operating system, network security, including firewalls, VPNs, IDS/IPS, and monitoring tools.
- Experience with vulnerability management tools (Snyk, Nessus, Dependabot) and penetration testing tools (Kali Linux, Metasploit).
- Experience in forensics and malware analysis.
- Self-motivated learner who continuously shares knowledge to help others improve
- The ideal candidate is someone from a Software Development background with a passion for security.
If you’re someone who understands the value of introducing security early in the software development lifecycle, and want to do so by enabling and empowering teams by building tools they WANT to use, we want to hear from you!

We are looking for an experienced and detail-oriented Senior Performance Testing Engineer to join our QA team. The ideal candidate will be responsible for designing, developing, and executing scalable and reliable performance testing strategies. You will lead performance engineering initiatives using tools like Locust, Python, Docker, Kubernetes, and cloud-native environments (AWS), ensuring our systems meet performance SLAs under real-world usage patterns.
Key Responsibilities
- Develop, enhance, and maintain Locust performance scripts using Python
- Design realistic performance scenarios simulating real-world traffic and usage patterns
- Parameterize and modularize scripts for robustness and reusability
- Execute performance tests in containerized environments using Docker and Kubernetes
- Manage performance test execution on Kubernetes clusters
- Integrate performance tests into CI/CD pipelines in collaboration with DevOps and Development teams
- Analyze performance test results, including throughput, latency, response time, and error rates
- Identify performance bottlenecks, conduct root cause analysis, and suggest optimizations
- Work with AWS (or other cloud platforms) to deploy, scale, and monitor tests in cloud-native environments
- Write and optimize complex SQL queries, stored procedures, and perform DB performance testing
- Work with SQL Server extensively; familiarity with Postgres is a plus
- Develop and maintain performance testing strategies and test plans
- Define and track KPIs, SLAs, workload models, and success criteria
- Guide the team on best practices and promote a performance engineering mindset
Must-Have Qualifications
- Proven hands-on experience with Locust and Python for performance testing
- Working knowledge of microservices architecture
- Hands-on with Kubernetes and Docker, especially in the context of running Locust at scale
- Experience integrating performance tests in CI/CD pipelines
- Strong experience with AWS or similar cloud platforms for deploying and scaling tests
- Solid understanding of SQL Server, including tuning stored procedures and query optimization
- Strong experience in performance test planning, execution, and analysis
Good-to-Have Skills
- Exposure to Postgres DB
- Familiarity with observability tools like Prometheus, Grafana, CloudWatch, and Datadog
- Basic knowledge of APM (Application Performance Monitoring) tools
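To ground the Locust requirement above, here is a minimal hedged load-test sketch; the host, endpoints, and task weights would come from the agreed workload model.
```python
# locustfile.py - minimal sketch (illustrative only; endpoints are placeholders).
# Run e.g.: locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, between, task


class ApiUser(HttpUser):
    # Think time between tasks, to mimic real users rather than a flood.
    wait_time = between(1, 3)

    @task(3)
    def browse_catalog(self):
        self.client.get("/api/products", name="GET /api/products")

    @task(1)
    def place_order(self):
        self.client.post(
            "/api/orders",
            json={"product_id": 42, "quantity": 1},
            name="POST /api/orders",
        )
```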


We are seeking a visionary and hands-on AI/ML and Chatbot Lead to spearhead the design, development, and deployment of enterprise-wide Conversational and Generative AI solutions. This role will be instrumental in establishing and scaling our AI Lab function, defining chatbot and multimodal AI strategies, and delivering intelligent automation solutions that enhance user engagement and operational efficiency.
Key Responsibilities
- Strategy & Leadership
- Define and lead the enterprise-wide strategy for Conversational AI, Multimodal AI, and Large Language Models (LLMs).
- Establish and scale an AI/Chatbot Lab, with a clear roadmap for innovation across in-app, generative, and conversational AI use cases.
- Lead, mentor, and scale a high-performing team of AI/ML engineers and chatbot developers.
- Architecture & Development
- Architect scalable AI/ML systems encompassing presentation, orchestration, AI, and data layers.
- Build multi-turn, memory-aware conversations using frameworks like LangChain or Semantic Kernel.
- Integrate chatbots with enterprise platforms such as Salesforce, NetSuite, Slack, and custom applications via APIs/webhooks.
- Solution Delivery
- Collaborate with business stakeholders to assess needs, conduct ROI analyses, and deliver high-impact AI solutions.
- Identify and implement agentic AI capabilities and SaaS optimization opportunities.
- Deliver POCs, pilots, and MVPs, owning the full design, development, and deployment lifecycle.
- Monitoring & Governance
- Implement and monitor chatbot KPIs using tools like Kibana, Grafana, and custom dashboards.
- Champion ethical AI practices, ensuring compliance with governance, data privacy, and security standards.
Must-Have Skills
- Experience & Leadership
- 10+ years of experience in AI/ML with demonstrable success in chatbot, conversational AI, and generative AI implementations.
- Proven experience in building and operationalizing AI/Chatbot architecture frameworks across enterprises.
- Technical Expertise
- Programming: Python
- AI/ML Frameworks & Libraries: LangChain, ElasticSearch, spaCy, NLTK, Hugging Face
- LLMs & NLP: GPT, BERT, RAG, prompt engineering, PEFT
- Chatbot Platforms: Azure OpenAI, Microsoft Bot Framework, CLU, CQA
- AI Deployment & Monitoring at Scale
- Conversational AI Integration: APIs, webhooks
- Infrastructure & Platforms
- Cloud: AWS, Azure, GCP
- Containerization: Docker, Kubernetes
- Vector Databases: Pinecone, Weaviate, Qdrant
- Technologies: Semantic search, knowledge graphs, intelligent document processing
- Soft Skills
- Strong leadership and team management
- Excellent communication and documentation
- Deep understanding of AI governance, compliance, and ethical AI practices
Good-to-Have Skills
- Familiarity with tools like Glean, Perplexity.ai, Rasa, XGBoost
- Experience integrating with Salesforce, NetSuite, and understanding of Customer Success domain
- Knowledge of RPA tools like UiPath and its AI Center
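As a toy, framework-agnostic illustration of the RAG pattern referenced above (retrieve relevant context, then condition the model's answer on it), here is a minimal sketch using TF-IDF retrieval; the documents are invented and the generation step is left as a placeholder for whichever LLM or platform the stack standardises on.
```python
# Toy RAG sketch (illustrative only): TF-IDF retrieval + placeholder generation step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise plans include 24/7 support and a dedicated account manager.",
    "API keys can be rotated from the security settings page.",
]

vectorizer = TfidfVectorizer().fit(DOCS)
doc_vectors = vectorizer.transform(DOCS)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [DOCS[i] for i in ranked]


def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # Placeholder: send `prompt` to the chosen LLM (Azure OpenAI, a Hugging Face
    # model, etc.) and return its completion instead of the raw prompt.
    return prompt


print(answer("How long do refunds take?"))
```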



What We’re Looking For:
Meltwater is a global leader in media intelligence and social analytics. Our mission is to help businesses make more informed decisions by providing them with actionable insights drawn from the vast ocean of online data. With a diverse and talented team across the globe, we are dedicated to innovation and committed to pushing the boundaries of what’s possible in our field.
We are seeking a Software Engineer to join our new Automations team in the Hyderabad office. In this role, you will play a vital part in building advanced automation-driven solutions that enhance the value and scalability of our products for a global client base. You will report directly to the Software Team Lead – Automations and work closely with cross-functional teams including Product, DevOps, and Data Engineering to deliver high-impact solutions.
We’re looking for a proactive, fast-learning engineer who thrives in collaborative, agile environments and is eager to contribute across all stages of the development lifecycle.
At Meltwater, we foster a culture of continuous learning, team autonomy, and a DevOps mindset. Our engineering teams take full ownership of their subsystems, including infrastructure management and participation in on-call rotations. As a member of the Automations team, you will work extensively with modern technologies such as Azure services, Elasticsearch, AWS Lambda, and Terraform.
We value experience in search engines, big data analytics, infrastructure, systems engineering, and distributed systems. This role offers the opportunity to work on challenging, large-scale systems whether by extending open-source libraries or driving innovation through existing technologies.
If you're passionate about building distributed systems at scale and driving automation to unlock insights from vast amounts of data, we invite you to be part of this exciting journey.
What You'll Do:
- Analyze use cases, plan, and estimate work efforts.
- Design, develop, test, and maintain high-quality software components.
- Take full ownership of developed services within the team.
- Create robust, scalable, and maintainable software solutions using Python, ReactJS and related technologies.
- Collaborate closely with cross-functional teams to deliver software solutions that meet business requirements.
- Design efficient algorithms, data structures, and multithreaded applications for optimal performance.
- Participate actively in all phases of the software development lifecycle.
- Research and evaluate new technologies to enhance development processes and product capabilities.
- Troubleshoot and debug software issues, providing timely resolutions.
- Contribute to code reviews, architectural discussions, and technical documentation.
What You'll Bring:
- Bachelor's or master’s degree in Computer Science or a related field
- Minimum of 3 years of hands-on experience in software development using React.js and Python (Django/Flask).
- Proficiency in Functional Programming, encompassing data structures, algorithms, and multithreading.
- Proficiency in TypeScript features such as type definitions, decorators, generics, and async/await for building robust and scalable applications.
- Strong understanding of performance optimization, memory management, and CPU utilization.
- Experience with testing frameworks, particularly jest, mocha, Pytest, or behave
- Familiarity with dependency management tools.
- Familiarity with designing and maintaining microservices in a distributed architecture.
- Deployment expertise in cloud environments.
- Design and implementation proficiency in microservices architecture, including REST and Azure Function Apps.
- Knowledgeable about Azure, Kubernetes, and Docker for containerization and orchestration.
- Curiosity and a passion for learning new technologies and concepts
- Enjoyment of collaborative problem-solving endeavors
- Exceptional written and verbal communication skills in English.
- Openness to a hybrid work schedule, requiring one day per week in the office.
- Ability to collaborate with frontend developers, designers, and product managers to align technical solutions with business goals.
What We Offer:
- Enjoy comprehensive paid time off options for enhanced work-life balance.
- Comprehensive health insurance tailored for you.
- Employee assistance programs cover mental health, legal, financial, wellness, and behavior areas to ensure your overall well-being.
- Energetic work environment with a hybrid work style, providing the balance you need.
- Benefit from our family leave program, which grows with your tenure at Meltwater.
- Thrive within our inclusive community and seize ongoing professional development opportunities to elevate your career.


Job Title : Python Developer – API Integration & AWS Deployment
Experience : 5+ Years
Location : Bangalore
Work Mode : Onsite
Job Overview :
We are seeking an experienced Python Developer with strong expertise in API development and AWS cloud deployment.
The ideal candidate will be responsible for building scalable RESTful APIs, automating power system simulations using PSS®E (psspy), and deploying automation workflows securely and efficiently on AWS.
Mandatory Skills : Python, FastAPI/Flask, PSS®E (psspy), RESTful API Development, AWS (EC2, Lambda, S3, EFS, API Gateway), AWS IAM, CloudWatch.
Key Responsibilities :
Python Development & API Integration :
- Design, build, and maintain RESTful APIs using FastAPI or Flask to interface with PSS®E.
- Automate simulations and workflows using the PSS®E Python API (psspy).
- Implement robust bulk case processing, result extraction, and automated reporting systems.
AWS Cloud Deployment :
- Deploy APIs and automation pipelines using AWS services such as EC2, Lambda, S3, EFS, and API Gateway.
- Apply cloud-native best practices to ensure reliability, scalability, and cost efficiency.
- Manage secure access control using AWS IAM, API keys, and implement monitoring using CloudWatch.
Required Skills :
- 5+ Years of professional experience in Python development.
- Hands-on experience with RESTful API development (FastAPI/Flask).
- Solid experience working with PSS®E and its psspy Python API.
- Strong understanding of AWS services, deployment, and best practices.
- Proficiency in automation, scripting, and report generation.
- Knowledge of cloud security and monitoring tools like IAM and CloudWatch.
Good to Have :
- Experience in power system simulation and electrical engineering concepts.
- Familiarity with CI/CD tools for AWS deployments.
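A minimal hedged sketch of the automation-plus-AWS side of this role: run a (placeholder) PSS®E study, write a small report, and push it to S3 with boto3. The psspy step is deliberately a stub because exact function names should be checked against the PSS®E API documentation; bucket and file names are assumptions.
```python
# Minimal "run study, ship report to S3" sketch (illustrative only).
# The psspy step is a placeholder; bucket and paths are assumptions.
import csv
import datetime

import boto3


def run_case(case_path: str) -> dict:
    # Placeholder for the PSS(R)E workflow, e.g. initialise psspy, load the .sav
    # case, run a power-flow solution and pull results. Exact psspy function
    # names should be taken from the PSS(R)E API documentation.
    return {"case": case_path, "converged": True, "max_voltage_pu": 1.043}


def write_report(result: dict, path: str) -> None:
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=result.keys())
        writer.writeheader()
        writer.writerow(result)


def upload_report(path: str, bucket: str) -> str:
    # Key layout is an assumption; IAM policy on the bucket governs access.
    key = f"reports/{datetime.date.today():%Y-%m-%d}/{path}"
    boto3.client("s3").upload_file(path, bucket, key)
    return key


if __name__ == "__main__":
    result = run_case("cases/base_case.sav")
    write_report(result, "summary.csv")
    print("Uploaded to", upload_report("summary.csv", "psse-study-results"))
```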

About Us:
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews - all while improving their SEO.
But that's just the beginning.
We're also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We're a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems - you'll fit right in.
What You'll Do:
• Collaborate with product managers, designers, and other devs to ideate, build, and ship high-impact features
• Own full-stack development using Node.js, Next.js, and React.js
• Build fast, responsive front-ends with pixel-perfect execution
• Design and manage scalable back-end systems with MySQL/PostgreSQL
• Troubleshoot and resolve issues from live deployments with Ops team
• Contribute to documentation, internal tools, and process improvement
• Work on our generative AI tools and help scale Convertlens.
What You Bring:
• 2+ years of experience in a product/startup environment
• Strong foundation in Node.js, Next.js, and React.js
• Solid understanding of relational databases (MySQL, PostgreSQL)
• Fluency in modern JavaScript and the HTTP/REST ecosystem
• Comfortable with HTML, CSS, Git, and version control workflows
• Bonus: experience with Python or interest in working on AI-powered systems
• Great communication skills and a love for collaboration
• A builder mindset - scrappy, curious, and ready to ship
Perks & Culture:
• Flexible work setup: remote-first for most, hybrid if you're in Delhi NCR
• A high-growth, high-impact environment where your code goes live fast
• Opportunities to work with cutting-edge tech, including generative AI
• Small team, big vision: your work truly matters here
Join Us
If you're excited about building meaningful tech in a fast-moving startup, let's talk.

You will:
- Collaborate with the I-Stem Voice AI team and CEO to design, build and ship new agent capabilities
- Develop, test and refine end-to-end voice agent models (ASR, NLU, dialog management, TTS)
- Stress-test agents in noisy, real-world scenarios and iterate for improved robustness and low latency
- Research and prototype cutting-edge techniques (e.g. robust speech recognition, adaptive language understanding)
- Partner with backend and frontend engineers to seamlessly integrate AI components into live voice products
- Monitor agent performance in production, analyze failure cases, and drive continuous improvement
- Occasionally demo our Voice AI solutions at industry events and user forums
You are:
- An AI/Software Engineer with hands-on experience in speech-centric ML (ASR, NLU or TTS)
- Skilled in building and tuning transformer-based speech models and handling real-time audio pipelines
- Obsessed with reliability: you design experiments to push agents to their limits and root-cause every error
- A clear thinker who deconstructs complex voice interactions from first principles
- Passionate about making voice technology inclusive and accessible for diverse users
- Comfortable moving fast in a small team, yet dogged about code quality, testing and reproducibility
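For context, a minimal hedged sketch of one piece of such a voice pipeline: transcribing an audio clip with a Hugging Face ASR pipeline. The model checkpoint is only an example; a production agent would add streaming, voice activity detection, and strict latency budgets.
```python
# Minimal ASR sketch (illustrative only; model choice and file name are examples).
# Requires: pip install transformers torch  (plus ffmpeg for audio decoding).
from transformers import pipeline

# Whisper-style checkpoint used purely as an example; swap it for whatever the
# product's latency/accuracy trade-off requires.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = asr("sample_call.wav")
print(result["text"])
```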

Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
We are looking for an experienced Backend Developer with a strong foundation in Python, Django, and MySQL to join our development team. The ideal candidate should have at least 4 years of hands-on experience building scalable, secure, and high-performing web applications and APIs. You will play a critical role in developing server-side logic, managing database operations, and ensuring optimal application performance.
Key Responsibilities:
● Design, develop, test, and maintain robust backend systems using Python and Django.
● Build RESTful APIs and integrate them with front-end components or third-party systems.
● Design and optimize relational database schemas in MySQL.
● Write clean, maintainable, and efficient code following best practices.
● Optimize application performance and troubleshoot production issues.
● Ensure the security and data protection of applications.
● Collaborate with front-end developers, QA, DevOps, and product teams.
● Participate in code reviews and mentor junior developers (if applicable).
Required Skills:
● Strong programming skills in Python, with in-depth knowledge of the Django framework.
● Experience in designing, maintaining, and querying MySQL databases.
● Understanding of MVC design patterns and RESTful service architecture.
● Familiarity with Git version control.
● Knowledge of software development best practices, including unit testing and CI/CD.
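To illustrate the Django/MySQL work described above, a minimal hedged sketch: a model pair and a query written to avoid the N+1 problem; the app, models, and fields are hypothetical.
```python
# Minimal Django sketch (illustrative only; models and fields are hypothetical).
# Lives inside a Django app's models.py; requires a configured Django project.
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=200)


class Article(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE, related_name="articles")
    title = models.CharField(max_length=200)
    published_at = models.DateTimeField(db_index=True)


def recent_articles(limit: int = 20):
    # select_related joins the author in a single SQL query instead of issuing
    # one extra query per row (the classic N+1 problem on MySQL or any backend).
    return (
        Article.objects.select_related("author")
        .order_by("-published_at")[:limit]
    )
```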
Educational Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Benefits
● Competitive salary and performance bonuses.
● Health insurance
● Opportunities for professional development and career growth.