50+ Remote Python Jobs in India
Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!
Detailed JD (Roles and Responsibilities)
Full-stack (backend-focused) ownership. Programming: Python, React (good to have: C#, Node.js), Agile. Flexible to learn new things.
As an AI Engineer Intern, you’ll contribute to designing and implementing multi-agent workflows using modern LLM frameworks and distributed systems technologies.
You’ll collaborate closely with the core engineering team and gain exposure to scalable AI infrastructure and orchestration design.
This internship is completely remote, but Delhi NCR candidates are preferred for occasional in-person collaboration (3–4 times a month).
🔧 Key Responsibilities
- Develop and test multi-agent workflows using LangGraph / LangChain / CrewAI (see the sketch after this list).
- Build high-throughput API and orchestration pipelines with Redis, MongoDB, and PostgreSQL (pgvector).
- Integrate LLM-driven reasoning, contextual memory, and autonomous decision-making.
- Implement queue orchestration and job management with BullMQ.
- Contribute to observability, metrics, and performance analysis (OpenTelemetry, Grafana).
- Document workflows, experiment outcomes, and design iterations.
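For candidates new to these frameworks, a minimal sketch of the two-node workflow shape in LangGraph (node logic is stubbed and the state keys and node names are illustrative, not from an actual codebase; the API reflects recent langgraph releases and may drift):

```python
# pip install langgraph
from typing import TypedDict

from langgraph.graph import END, StateGraph


class State(TypedDict):
    question: str
    notes: str
    answer: str


def researcher(state: State) -> dict:
    # Stub: a real agent node would call an LLM and/or tools here
    return {"notes": f"findings about {state['question']}"}


def writer(state: State) -> dict:
    return {"answer": f"summary based on: {state['notes']}"}


graph = StateGraph(State)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.set_entry_point("researcher")
graph.add_edge("researcher", "writer")
graph.add_edge("writer", END)

app = graph.compile()
print(app.invoke({"question": "agent orchestration"}))
```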
🎓 Requirements
- Strong foundation in Python or TypeScript (Node.js)
- Basic understanding of Redis, PostgreSQL, and MongoDB
- Exposure to LLM APIs (OpenAI, Gemini, Claude)
- Understanding of distributed systems, event-driven architecture, and message queues
- Curiosity and ownership to learn AI infra, orchestration, and performance engineering
About NEXUS SP Solutions
European tech company (Spain) in telecom/IT/cybersecurity. We’re hiring a Part-time Automation Developer (20h/week) to build and maintain scripts, integrations and CI/CD around our Odoo v18 + eCommerce stack.
What you’ll do
• Build Python automations: REST API integrations (vendors/payments), data ETL, webhooks, cron jobs.
• Maintain CI/CD (GitHub Actions) for modules and scripts; basic Docker.
• Implement backups/alerts and simple monitors (logs, retries).
• Collaborate with Full-Stack dev and UX on delivery and performance.
Requirements
• 2–5 yrs coding in Python for integrations/ETL.
• REST/JSON, OAuth2, webhooks; solid Git.
• Basic Docker + GitHub Actions (or GitLab CI).
• SQL/PostgreSQL basics; English for daily comms (Spanish/French is a plus).
• ≥ 3h overlap with CET; able to start within 15–30 days.
Nice to have
• Odoo RPC/XML-RPC, Selenium/Playwright, Linux server basics, retry/idempotency patterns.
Compensation & terms
• ₹2.5–5 LPA for 20h/week (contract/retainer).
• Long-term collaboration; IP transfer, work in our repos; PR-based workflow; CI/CD.
Process
1) 30–45 min technical call. 2) Paid mini-task (8–10h): a Python micro-service calling a REST API with retries, logging, and a unit test. 3) Offer.
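For orientation only, one plausible shape for the step-2 mini-task, using requests with urllib3's Retry (the URL, names, and test double are placeholders, not the actual task spec):

```python
# pip install requests pytest
import logging

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mini-service")


def make_session() -> requests.Session:
    # Retry transient failures with exponential backoff
    retry = Retry(total=3, backoff_factor=0.5,
                  status_forcelist=(429, 500, 502, 503, 504))
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session


def fetch(url: str, session=None) -> dict:
    session = session or make_session()
    resp = session.get(url, timeout=10)
    resp.raise_for_status()
    log.info("GET %s -> %s", url, resp.status_code)
    return resp.json()


# Unit test (run with pytest): a fake session keeps the test off the network
class FakeResp:
    status_code = 200
    def raise_for_status(self):
        pass
    def json(self):
        return {"ok": True}


class FakeSession:
    def get(self, url, timeout):
        return FakeResp()


def test_fetch_happy_path():
    assert fetch("https://api.example.com/items", session=FakeSession()) == {"ok": True}
```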
About NEXUS SP Solutions
European tech company (Spain) in telecom/IT/cybersecurity. We’re hiring a Full-Stack Developer experienced in Odoo v17–18, Python and JavaScript to continuously improve our ERP & eCommerce.
Responsibilities
• Build/customize Odoo modules (Sales/Inventory/Website/eCommerce); a minimal module sketch follows this list.
• Integrate REST APIs & payments (Stripe/Redsys/Bizum).
• Improve performance, security and reliability.
• Collaborate with UX/UI; deliver clean front code (OWL/QWeb, HTML/CSS/JS).
• Use Git and CI/CD (GitHub Actions); write docs/tests.
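For reference, a minimal sketch of what a custom Odoo model looks like (the model and field names are invented for illustration; a real addon also needs an __manifest__.py, access rules, and view XML):

```python
# models/library_book.py inside a custom addon
from odoo import fields, models


class LibraryBook(models.Model):
    _name = "library.book"            # model/table name (illustrative)
    _description = "Library Book"

    name = fields.Char(required=True)
    isbn = fields.Char()
    available = fields.Boolean(default=True)
    partner_id = fields.Many2one("res.partner", string="Borrower")

    def action_checkout(self):
        # ORM write; ACLs and record rules still apply
        self.write({"available": False})
```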
Requirements
• 2–6 yrs with Python + Odoo (ORM, models, views, ACL/record rules).
• PostgreSQL, XML/QWeb/OWL, REST, Git.
• English for daily communication (Spanish/French is a plus).
• Full-time remote with 3h overlap with CET.
Compensation
• ₹5–9 LPA (≈ ₹41.7k–₹75k / month; FX-dependent ≈ €460–€940).
• Long-term contract, roadmap, IP transfer (code belongs to NEXUS), repos in our org, CI/CD.
Process
1) 30–45 min technical interview. 2) Paid task (8–12h): mini Odoo module + README + 1 test. 3) Offer.
As a Senior Software Engineer, you’ll be responsible for building and maintaining high-performance web applications across the stack. You’ll collaborate with product managers, designers, and business stakeholders to translate complex business needs into reliable digital systems.
Key Responsibilities
- Design, build, and maintain scalable web applications end-to-end.
- Work closely with product and design teams to deliver user-centric, high-performance interfaces.
- Develop and optimize backend APIs, database queries, and integrations.
- Write clean, maintainable, and testable code following best practices.
- Mentor junior developers and contribute to team-wide tech decisions.
Requirements
- Experience: 5+ years of hands-on full-stack development experience.
- Backend: Proficiency in Python
- Frontend: Experience with React, Angular, or Vue.js.
- Database: Strong knowledge of SQL databases (MySQL, PostgreSQL, or Oracle).
- Communication: Comfortable in English or Hindi.
- Location: Bangalore, 5 days a week (Work from Office).
- Availability: Immediate joiners preferred.
Why Join Us
- Be part of a fast-growing global diamond brand backed by two industry leaders.
- Collaborate with a sharp, experienced tech and product team solving real-world business challenges.
- Work at the intersection of luxury, data, and innovation — building systems that directly impact global operations.
Role Details:
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. You will play a key role in designing, implementing, and maintaining our automation testing framework, with a primary focus on Selenium, Python, and BDD.
Position: Automation Engineer
Must-Have Skills & Qualifications:
- Bachelor’s degree in Engineering (Computer Science, IT, or related field)
- Hands-on experience with Selenium using Python and BDD framework
- Strong foundation in Manual Testing
Good-to-Have Skills:
- Allure
- Boto3
- Appium
Benefits:
- Competitive salary & comprehensive benefits package
- Work with cutting-edge technologies & industry-leading experts
- Flexible hybrid work environment
- Professional development & continuous learning opportunities
- Dynamic, collaborative culture with career growth paths
Requirements :
- Strong communication skills are essential.
- The candidate should have 4-5 years of experience in automation and be proficient in Python and pytest (minimum 3-4 years of experience).
- They should have experience with SQL queries and JDBC connections (a pytest + SQL sketch follows this list).
- They must be capable of performing tasks with minimal supervision and taking responsibility.
- Agile working experience is crucial, and they should be available to attend and contribute to Agile ceremonies during UK/US hours.
- The candidate should proactively raise any issues or risks.
- They should have an understanding of CI/CD with Azure pipelines.
- The ability to understand flows and functionalities and identify issues through automation is important.
- A good grasp of code reusability, writing methods with fuzzy logic, and understanding OOP concepts is required.
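As a rough illustration of the pytest-plus-SQL-validation style of work described above (an in-memory SQLite table stands in for the real database; table and values are invented):

```python
# pip install pytest
import sqlite3

import pytest


@pytest.fixture
def db():
    # Seed a throwaway database for the test
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'PAID')")
    conn.commit()
    yield conn
    conn.close()


@pytest.mark.parametrize("order_id,expected", [(1, "PAID")])
def test_order_status(db, order_id, expected):
    row = db.execute(
        "SELECT status FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    assert row is not None and row[0] == expected
```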
Must-Have Skills & Qualifications:
- Bachelor's degree in Engineering (Computer Science, IT, or related field)
- 8+ years of experience in manual testing of web and mobile applications
- Working knowledge of test automation tools: Selenium
- Experience with API testing using tools like Postman or equivalent
- Experience with BDD
- Strong understanding of test planning, test case design, and defect tracking processes
- Experience leading QA for projects and production releases
- Familiarity with Agile/Scrum methodologies
- Effective collaboration skills – able to work with cross-functional teams and contribute to automation efforts as needed
Good-to-Have Skills:
- Familiarity with CI/CD pipelines and version control tools (Git, Jenkins)
- Exposure to performance or security testing
Strong Software Engineering Profile
Mandatory (Experience 1): Must have 7+ years of experience using Python to design software solutions.
Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka.
Mandatory (Skills 3): Must have experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure.
Mandatory (Company): Product-company experience required; a background in fintech or banking is a plus.
Mandatory (Education): From IIT (B.Tech, dual-degree B.Tech+M.Tech, or Integrated M.Sc), or from other premium institutes such as NIT, MNNIT, VITS, BITS (B.E/B.Tech).
Preferred
Preferred (Skills 1): Experience in Task Queues like Celery and RabbitMQ is preferred.
Preferred (Skills 2): Experience with RDBMS/SQL is also preferable.
Preferred (Education): Computer science
About Verix
Verix is a platform for verification, engagement, and trust in the age of AI. Powered by blockchain and agentic AI, Verix enables global organizations—such as Netflix, Amdocs, The Stevie Awards, Room to Read, and UNICEF—to seamlessly design, issue, and manage digital credentials for learning, enterprise skilling, continuing education, compliance, membership, and events.
With dynamic credentials that reflect recipient growth over time, modern design templates, and attached rewards, Verix empowers enterprises to drive engagement while building trust and community.
Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), Verix is backed by Polygon Ventures, Micron Ventures, FalconX, and leading angels including Randi Zuckerberg and Harsh Jain.
What is OptimizeGEO?
OptimizeGEO is Verix’s flagship product that helps brands stay visible and discoverable in AI-powered answers.
Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes AEO/GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).
Role Overview
We are hiring a Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting.
This role partners closely with our SEO/AEO domain experts and data teams to translate frameworks—gap analysis, share-of-voice, entity/knowledge-graph coverage, trust signals—into scalable backend systems and APIs.
You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.
Key Responsibilities
- Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
- Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms.
- Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
- Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling (see the sketch after this list).
- Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
- Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
- Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
- Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
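A minimal sketch of the retries/backoff and idempotency pattern referenced above (an in-memory set stands in for a durable store such as Redis or Postgres; names are illustrative):

```python
import hashlib
import random
import time

_processed: set[str] = set()  # in production, a durable store (Redis/Postgres)


def idempotency_key(payload: str) -> str:
    # Same payload -> same key, so duplicate deliveries are detectable
    return hashlib.sha256(payload.encode()).hexdigest()


def process_once(payload: str) -> None:
    key = idempotency_key(payload)
    if key in _processed:
        return  # duplicate delivery; safe to skip
    # ... do the actual work here ...
    _processed.add(key)


def with_backoff(fn, attempts: int = 5, base: float = 0.5):
    # Exponential backoff with jitter; re-raise after the final attempt
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base * 2**i + random.random() * 0.1)
```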
Minimum Qualifications
- 4–8 years of experience building backend systems in production (startups or high-growth product teams preferred).
- Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
- Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
- Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
- Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.
Preferred Qualifications (Nice to Have)
- Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
- Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
- Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
- Background in digital PR/reputation signals and local/international SEO considerations.
- Comfort working with analysts to co-define KPIs and build executive-level reporting.
What Success Looks Like (First 6 Months)
- Ship a reliable data ingestion and scoring service with clear SLAs and automated validation.
- Stand up share-of-voice and entity-coverage metrics that correlate with customer outcomes.
- Deliver exportable executive reports and dashboard APIs consumed by the product team.
- Establish observability baselines (dashboards & alerts) and a lightweight on-call rotation.
Tooling & Stack (Illustrative)
- Runtime: Python / TypeScript / Go
- Data: Postgres / BigQuery + object storage (S3 / GCS)
- Pipelines: Airflow / Prefect, Kafka / PubSub
- Infra: AWS / GCP, Docker, Kubernetes, Terraform
- Observability: OpenTelemetry, Prometheus / Grafana, ELK / Cloud Logging
- Collab: GitHub, Linear / Jira, Notion, Looker / Metabase
Working Model
- Hybrid-remote within India with periodic in-person collaboration (Bengaluru or mutually agreed hubs).
- Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.
Equal Opportunity
Verix is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Job Title: Full Stack Engineer (Real-Time Audio Systems) – Voice AI
Location: Remote
Experience: 4+ Years
Employment Type: Full-time
About the Role
We’re looking for an experienced engineer to lead the development of a real-time Voice AI platform. This role blends deep expertise in conversational AI, audio infrastructure, and full-stack systems, making you a key contributor to building natural, low-latency voice-driven agents for complex healthcare workflows and beyond.
You’ll work directly with the founding team to build and deploy production-grade voice AI systems.
If you love working with WebRTC, WebSockets, and streaming pipelines, this is the place to build something impactful.
Key Responsibilities
- Build and optimize voice-driven AI systems integrating ASR (speech recognition), TTS (speech synthesis), and LLM inference with WebRTC and WebSocket infrastructure.
- Orchestrate multi-turn conversations using frameworks like Pipecat with memory and context management.
- Develop scalable backends and APIs to support streaming audio pipelines, stateful agents, and secure healthcare workflows.
- Implement real-time communication features with low-latency audio streaming pipelines (see the WebSocket sketch after this list).
- Collaborate closely with research, engineering, and product teams to ship experiments and deploy into production rapidly.
- Monitor, optimize, and maintain deployed voice agents for high reliability, safety, and performance.
- Translate experimental AI audio models into production-ready services.
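For flavor, a minimal FastAPI WebSocket endpoint of the kind this role works with (the route name is illustrative; a real pipeline would feed frames through ASR/LLM/TTS instead of echoing):

```python
# pip install fastapi uvicorn
# run with: uvicorn app:app
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()


@app.websocket("/audio")
async def audio_stream(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            chunk = await ws.receive_bytes()  # PCM/Opus frames from the client
            # Real pipeline: ASR -> LLM -> TTS; here we simply echo the frame
            await ws.send_bytes(chunk)
    except WebSocketDisconnect:
        pass  # client hung up; clean up session state here
```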
Requirements
- 4+ years of software engineering experience with a focus on real-time systems, streaming, or conversational AI.
- Proven experience building and deploying voice AI, audio/video, or low-latency communication systems.
- Strong proficiency in Python (FastAPI, async frameworks, LangChain or similar).
- Working knowledge of modern front-end frameworks like Next.js (preferred).
- Hands-on experience with WebRTC, WebSockets, Redis, Kafka, Docker, and AWS.
- Exposure to healthcare tech, RCM, or regulated environments (highly valued).
Bonus Points
- Contributions to open-source audio/media projects.
- Experience with DSP, live streaming, or media infrastructure.
- Familiarity with observability tools (e.g., Grafana, Prometheus).
- Passion for reading research papers and discussing the future of voice communication.
🚀 We’re Hiring: AI Engineer (2+ Yrs, Generative AI/LLMs)
🌍 Remote | ⚡ Immediate Joiner (within 15 days)
Work on cutting-edge GenAI apps, LLM fine-tuning & integrations (OpenAI, Gemini, Anthropic).
Exciting projects across industries in a service-company environment.
🔹 Skills: Python | LLMs | Fine-tuning | Vector DBs | GenAI Apps
🔸Apply: https://lnkd.in/dVQwSMBD
✨ Let’s build the future of AI together!
#AIJobs #Hiring #GenAI #LLM #Python #RemoteJobs
Position: Senior Data Engineer
Overview:
We are seeking an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and infrastructure to support cross-functional teams and next-generation data initiatives. The ideal candidate is a hands-on data expert with strong technical proficiency in Big Data technologies and a passion for developing efficient, reliable, and future-ready data systems.
Reporting: Reports to the CEO or designated Lead as assigned by management.
Employment Type: Full-time, Permanent
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Key Responsibilities:
- Design and develop scalable data pipeline architectures for data extraction, transformation, and loading (ETL) using modern Big Data frameworks (a PySpark sketch follows this list).
- Identify and implement process improvements such as automation, optimization, and infrastructure re-design for scalability and performance.
- Collaborate closely with Engineering, Product, Data Science, and Design teams to resolve data-related challenges and meet infrastructure needs.
- Partner with machine learning and analytics experts to enhance system accuracy, functionality, and innovation.
- Maintain and extend robust data workflows and ensure consistent delivery across multiple products and systems.
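As a rough sketch of the kind of Spark ETL work described above (paths and column names are placeholders, not from our systems):

```python
# pip install pyspark
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV landing zone (illustrative path)
raw = spark.read.option("header", True).csv("s3://bucket/raw/orders/")

# Transform: type casting, null filtering, de-duplication
clean = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("order_id").isNotNull())
         .dropDuplicates(["order_id"]))

# Load: partitioned columnar output for downstream consumers
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://bucket/curated/orders/")
```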
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 10+ years of hands-on experience in Data Engineering.
- 5+ years of recent experience with Apache Spark, with a strong grasp of distributed systems and Big Data fundamentals.
- Proficiency in Scala, Python, Java, or similar languages, with the ability to work across multiple programming environments.
- Strong SQL expertise and experience working with relational databases such as PostgreSQL or MySQL.
- Proven experience with Databricks and cloud-based data ecosystems.
- Familiarity with diverse data formats such as Delta Tables, Parquet, CSV, and JSON.
- Skilled in Linux environments and shell scripting for automation and system tasks.
- Experience working within Agile teams.
- Knowledge of Machine Learning concepts is an added advantage.
- Demonstrated ability to work independently and deliver efficient, stable, and reliable software solutions.
- Excellent communication and collaboration skills in English.
About the Organization:
We are a leading B2B data and intelligence platform specializing in high-accuracy contact and company data to empower revenue teams. Our technology combines human verification and automation to ensure exceptional data quality and scalability, helping businesses make informed, data-driven decisions.
What We Offer:
Our workplace embraces diversity, inclusion, and continuous learning. With a fast-paced and evolving environment, we provide opportunities for growth through competitive benefits including:
- Paid Holidays and Leaves
- Performance Bonuses and Incentives
- Comprehensive Medical Policy
- Company-Sponsored Training Programs
We are an Equal Opportunity Employer, committed to maintaining a workplace free from discrimination and harassment. All employment decisions are made based on merit, competence, and business needs.
Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-4 years of relevant experience as a Zoho Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest (see the sketch after this list)
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
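For illustration, a minimal FastAPI service with a pytest test in the style described above (the endpoint and names are invented, not from our codebase):

```python
# pip install fastapi httpx pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/health")
def health():
    # Liveness endpoint a load balancer or monitor would poll
    return {"status": "ok"}


def test_health():
    client = TestClient(app)
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}
```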
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
Job Title : Perl Developer
Experience : 6+ Years
Engagement Type : C2C (Contract)
Location : Remote
Shift Timing : General Shift
Job Summary :
We are seeking an experienced Perl Developer with strong scripting and database expertise to support an application modernization initiative.
The role involves code conversion for compatibility between Sybase and MS SQL, ensuring performance, reliability, and maintainability of mission-critical systems.
You will work closely with the engineering team to enhance, migrate, and optimize codebases written primarily in Perl, with partial transitions toward Python for long-term sustainability.
Mandatory Skills :
Perl, Python, T-SQL, SQL Server, ADO, Git, Release Management, Monitoring Tools, Automation Tools, CI/CD, Sybase-to-MSSQL Code Conversion
Key Responsibilities :
- Analyze and convert existing application code from Sybase to MS SQL for compatibility and optimization.
- Maintain and enhance existing Perl scripts and applications.
- Where feasible, refactor or rewrite legacy components into Python for improved scalability (see the sketch after this list).
- Collaborate with development and release teams to ensure seamless integration and deployment.
- Follow established Git/ADO version control and release management practices.
- (Optional) Contribute to monitoring, alerting, and automation improvements.
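For orientation, a sketch of Python talking to MS SQL via pyodbc, the kind of target code a Perl-to-Python refactor might produce (connection values, table, and columns are placeholders):

```python
# pip install pyodbc  (requires the MS ODBC driver on the host)
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()

# T-SQL differs from Sybase in places (e.g., TOP vs SET ROWCOUNT, date functions),
# which is where much of the conversion work lives
cur.execute("SELECT TOP 10 id, name FROM customers WHERE active = ?", 1)
for row in cur.fetchall():
    print(row.id, row.name)

conn.close()
```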
Required Skills :
- Strong Perl development experience (primary requirement).
- Proficiency in Python for code conversion and sustainability initiatives.
- Hands-on experience with T-SQL / SQL Server for database interaction and optimization.
- Familiarity with ADO/Git and standard release management workflows.
Nice to Have :
- Experience with monitoring and alerting tools.
- Familiarity with automation tools and CI/CD pipelines.
Role: Perl Developer
Location: Remote
Experience: 6–8 years
Shift: General
Job Description
Primary Skills (Must Have):
- Strong Perl development skills.
- Good knowledge of Python and T-SQL / SQL Server to create compatible code.
- Hands-on experience with ADO, Git, and release management practices.
Secondary Skills (Good to Have):
- Familiarity with monitoring/alerting tools.
- Exposure to automation tools.
Day-to-Day Responsibilities
- Perform application code conversion for compatibility between Sybase and MS SQL.
- Work on existing Perl-based codebase, ensuring maintainability and compatibility.
- Convert code into Python where feasible (as part of the migration strategy).
- Where Python conversion is not feasible, create compatible code in Perl.
- Collaborate with the team on release management and version control (Git).
Lead Technical Consultant
Experience: 9-15 Years
This is a backend-heavy polyglot developer role: 80% backend, 20% frontend.
Backend
- 1st primary language: Java, Python, Go, Ruby on Rails (RoR), or Rust
- 2nd primary language: one of the above, or Node.js
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).
The candidate should have experience with unit testing, CI/CD, DevOps, etc.
Good communication skills are a must-have.
Senior Technical Consultant (Polyglot)
Experience: 5–9 Years
This is a backend-heavy polyglot developer role: 80% backend, 20% frontend.
Backend
- 1st primary language: Java, Python, Go, Ruby on Rails (RoR), or Rust
- 2nd primary language: one of the above, or Node.js
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).
The candidate should have experience with unit testing, CI/CD, DevOps, etc.
Good communication skills are a must-have.
Strong Software Engineering Profile
Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.
Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka.
Mandatory (Skills 3): Must have experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure.
Mandatory (Company): Product-company experience required; a background in fintech or banking is a plus.
Mandatory (Education): From IIT (B.Tech, dual-degree B.Tech+M.Tech, or Integrated M.Sc), or from other premium institutes such as NIT, MNNIT, VITS, BITS (B.E/B.Tech).
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, Opensearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch (a metrics-export sketch follows this list).
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
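As a small illustration of the metrics side of this work, a Python process exporting Prometheus metrics for Grafana to visualize (metric names, labels, and port are arbitrary choices, not our conventions):

```python
# pip install prometheus-client
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint"])
LATENCY = Histogram("app_request_seconds", "Request latency")

start_http_server(9000)  # exposes /metrics for Prometheus to scrape

while True:  # demo loop standing in for real request handling
    with LATENCY.time():
        time.sleep(random.random() / 10)  # simulated work
    REQUESTS.labels(endpoint="/demo").inc()
```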
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.
Note: The shift hours for this job are from 4 PM to 1 AM IST.
About The Role:
We are seeking a highly skilled and experienced QA Automation Engineer with over 5 years of experience in both automation and manual testing. The ideal candidate will possess strong expertise in Python, Playwright, PyTest, Pywinauto, and Java with Selenium, API testing with Rest Assured, and SQL. Experience in the mortgage domain, Azure DevOps, and desktop & web application testing is essential. The role requires working in evening shift timings (4 PM – 1 AM IST) to collaborate with global teams.
Key Responsibilities:
- Design and develop automation test scripts using Python, Playwright, Pywinauto, and PyTest (see the sketch after this list).
- Design, develop, and maintain automation frameworks for desktop applications using Java with WinAppDriver and Selenium, and Python with Pywinauto.
- Understand business requirements in the mortgage domain and prepare detailed test plans, test cases, and test scenarios.
- Define automation strategy and identify test cases to automate for web, desktop, and API testing.
- Perform manual testing for desktop, web, and API applications to validate functional and non-functional requirements.
- Create and execute API automation scripts using Rest Assured for RESTful services validation.
- Perform SQL queries to validate backend data and ensure data integrity in mortgage domain applications.
- Use Azure DevOps for test case management, defect tracking, CI/CD pipeline execution, and test reporting.
- Collaborate with DevOps and development teams to integrate automated tests within CI/CD pipelines.
- Proficient in version control and collaborative development using Git.
- Experience in managing test automation projects and dependencies using Maven.
- Work closely with developers, BAs, and product owners to clarify requirements and provide early feedback.
- Report and track defects with clear reproduction steps, logs, and screenshots until closure.
- Apply mortgage domain knowledge to test scenarios for loan origination, servicing, payments, compliance, and default modules.
- Ensure adherence to regulatory and compliance standards in mortgage-related applications.
- Perform cross-browser testing and desktop compatibility testing for client-based applications.
- Drive defect prevention by identifying gaps in requirements and suggesting improvements.
- Ensure best practices in test automation - modularization, reusability, and maintainability.
- Provide daily/weekly status reports on testing progress, defect metrics, and automation coverage.
- Maintain documentation for automation frameworks, test cases, and domain-specific scenarios.
- Experienced in working within Agile/Scrum development environments.
- Proven ability to thrive in a fast-paced environment and consistently meet deadlines with minimal supervision.
- Strong team player with excellent multitasking skills, capable of managing multiple priorities in a deadline-driven environment.
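For reference, a minimal pytest-playwright test in the style described above (the URL and selectors are illustrative, not an actual application under test):

```python
# pip install pytest-playwright && playwright install
from playwright.sync_api import Page, expect


def test_login_flow(page: Page):  # `page` fixture comes from pytest-playwright
    page.goto("https://example.com/login")
    page.fill("#username", "qa_user")
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    # Assert navigation succeeded after submit
    expect(page).to_have_url("https://example.com/dashboard")
```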
Key requirements:
- 4-8 years of experience in Quality Assurance (manual and automation).
- Strong proficiency in Python, Pywinauto, PyTest, Playwright
- Hands-on experience with Rest Assured for API automation.
- Expertise in SQL for backend testing and data validation.
- Experience in mortgage domain applications (loan origination, servicing, compliance).
- Knowledge of Azure DevOps for CI/CD, defect tracking, and test case management.
- Proficiency in testing desktop and web applications.
- Excellent collaboration and communication skills to work with cross-functional global teams.
- Willingness to work in evening shift timings (4 PM – 1 AM IST).
Job Title : Senior Technical Consultant (Polyglot)
Experience Required : 5 to 10 Years
Location : Bengaluru / Chennai (Remote Available)
Positions : 2
Notice Period : Immediate to 1 Month
Role Overview :
We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.
You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.
Mandatory Skills :
Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.
Key Skills & Requirements :
Backend (80% Focus) :
- Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
- Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
- Hands-on experience in building scalable, high-performance backend systems.
Frontend (20% Focus) :
- Proficiency in React or Angular
- Solid knowledge of HTML, CSS, JavaScript
Other Must-Haves :
- Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
- Ability to write clean, testable, and maintainable code.
- Excellent communication and client-facing skills.
Roles & Responsibilities :
- Tackle technically challenging and mission-critical problems.
- Collaborate with teams to design and implement pragmatic solutions.
- Build prototypes and showcase products to clients.
- Contribute to system design and architecture discussions.
- Engage with the broader tech community through talks and conferences.
Interview Process :
- Technical Round (Online Assessment)
- Technical Round with Client (Code Pairing)
- System Design & Architecture (Build from Scratch)
✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).
✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.
🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time
Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?
Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?
We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.
What You’ll Do:
🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises
🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI
🔹 Develop advanced RAG pipelines and production-grade LLM solutions
🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships
🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation
🔹 Represent technical capabilities at industry conferences and client reviews
What We’re Looking For:
✔️ 7+ years of experience in AI/ML engineering with production deployment expertise
✔️ Deep expertise in agentic AI frameworks and multi-agent system design
✔️ Advanced Python programming and scalable backend service development
✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure
✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge
✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments
✨ Why Join Us:
- Drive enterprise AI transformation for global clients
- Work with a category-defining AI platform bridging agents and experts
- High-impact, customer-facing role with strategic influence
- Competitive benefits: medical, vision, dental insurance, 401(k)
🌍 We’re Hiring: Customer Facing Data Scientist (CFDS) | Remote | Full-time
Are you passionate about applied data science and enjoy partnering directly with enterprise customers to deliver measurable business impact?
Do you thrive in fast-paced, cross-functional environments and want to be the face of a cutting-edge AI platform?
We’re looking for a Customer Facing Data Scientist to design, develop, and deploy machine learning applications with our clients, helping them unlock the value of their data while building strong, trusted relationships.
What You’ll Do:
🔹 Collaborate directly with customers to understand their business challenges and design ML solutions
🔹 Manage end-to-end data science projects with a customer success mindset
🔹 Build long-term trusted relationships with enterprise stakeholders
🔹 Work across industries: Banking, Finance, Health, Retail, E-commerce, Oil & Gas, Marketing
🔹 Evangelize the platform, teach, enable, and support customers in building AI solutions
🔹 Collaborate internally with Data Science, Engineering, and Product teams to deliver robust solutions
What We’re Looking For:
✔️ 5–10 years of experience solving complex data problems using Machine Learning
✔️ Expert in ML modeling and Python coding
✔️ Excellent customer-facing communication and presentation skills
✔️ Experience in AI services or startup environments preferred
✔️ Domain expertise in Finance is a plus
✔️ Applied experience with Generative AI / LLM-based solutions is a plus
✨ Why Join Us:
- High-impact opportunity to shape a new business vertical
- Work with next-gen AI technology to solve real enterprise problems
- Backed by top-tier investors with experienced leadership
- Recognized as a Top 5 Data Science & ML platform by G2
- Comprehensive benefits: medical, vision, dental insurance, 401(k)
🚀 We’re Hiring: Senior AI Engineer (Customer Facing) | Remote
Are you passionate about building and deploying enterprise-grade AI solutions?
Do you enjoy combining deep technical expertise with customer-facing problem-solving?
We’re looking for a Senior AI Engineer to design, deliver, and integrate cutting-edge AI/LLM applications for global enterprise clients.
What You’ll Do:
🔹 Partner directly with enterprise customers to understand business requirements & deliver AI solutions
🔹 Architect and integrate intelligent agent systems (LangChain, LangGraph, CrewAI)
🔹 Build LLM pipelines with RAG and client-specific knowledge
🔹 Collaborate with internal teams to ensure seamless integration
🔹 Champion engineering best practices with production-grade Python code
What We’re Looking For:
✔️ 5+ years of hands-on experience in AI/ML engineering or backend systems
✔️ Proven track record with LLMs & intelligent agents
✔️ Strong Python and backend expertise
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS)
✔️ Excellent communication & customer-facing skills
Preferred: Cloud (AWS/Azure/GCP), MLOps knowledge, and startup/AI services experience.
🌍 Remote role | High-impact opportunity | Backed by strong leadership & growth
If this sounds like you (or someone in your network), let’s connect!
- Strong Software Engineering Profile
- Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.
- Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
- Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka.
- Mandatory (Skills 3): Must have experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure.
- Mandatory (Company): Product-company experience required; a background in fintech or banking is a plus.
- Mandatory (Education): From IIT (B.Tech, dual-degree B.Tech+M.Tech, or Integrated M.Sc), or from other premium institutes such as NIT, MNNIT, VITS, BITS (B.E/B.Tech).
Full Stack Developer Internship – (Remote)
Pay: ₹20,000 - ₹30,000/month | Duration: 3 months
We’re building Pitchline, a voice-based conversational sales AI agent: an ambitious AI-powered web app aimed at solving meaningful problems in the B2B space. It’s currently at the MVP stage and has strong early demand. I’m looking for a hands-on Full-Stack Developer Intern who can work closely with me to bring this to life.
You’ll be one of the first people to touch the codebase — shaping the foundation and solving problems across AI integration, backend APIs, and a bit of frontend work.
What you'll be doing
- Build and maintain backend APIs (Python)
- Integrate AI models (OpenAI, LangChain, Pinecone/Weaviate etc.) for core workflows
- Design DB schemas and manage basic infra (Postgres)
- Support frontend development (basic UI integration in React or similar)
- Rapidly iterate on product ideas and ship working features
- Collaborate closely with me (Founder) to shape the MVP
What we're looking for
- Curiosity to learn new things. You don’t wait for someone to unblock you; you take full ownership and get things done yourself.
- Strong foundation in backend development
- Experience working with APIs, databases, and deploying backend services
- Curious about or experienced in AI/LLM tools like OpenAI APIs, LangChain, vector databases, etc.
- Comfortable working with version control and basic dev workflows
- Worked on real projects or shipped anything end-to-end (even a personal project)
Why join us?
You’ll be a core member of the team. What we’re building is one of a kind, and being part of the founding team will fast-track your personal and professional growth.
You’ll work on a real product with potential, witnessing in real time the impact your hard work brings.
You’ll get ownership and be part of early decisions.
You'll learn how design, tech, and business come together in early-stage product building
Flexible working hours
Opportunity to convert to full-time upon successful completion.
We’re a fast-paced team, working hard to deploy the MVP as soon as possible. If you're excited about AI, startup building, and getting your hands dirty with real development, then our company is a great place to grow.
About the Role
NeoGenCode Technologies is looking for a Senior Technical Architect with strong expertise in enterprise architecture, cloud, data engineering, and microservices. This is a critical role demanding leadership, client engagement, and architectural ownership in designing scalable, secure enterprise systems.
Key Responsibilities
- Design scalable, secure, and high-performance enterprise software architectures.
- Architect distributed, fault-tolerant systems using microservices and event-driven patterns.
- Provide technical leadership and hands-on guidance to engineering teams.
- Collaborate with clients, understand business needs, and translate them into architectural designs.
- Evaluate, recommend, and implement modern tools, technologies, and processes.
- Drive DevOps, CI/CD best practices, and application security.
- Mentor engineers and participate in architecture reviews.
Must-Have Skills
- Architecture: Enterprise Solutions, EAI, Design Patterns, Microservices (API & Event-driven)
- Tech Stack: Java, Spring Boot, Python, Angular (recent 2+ years experience), MVC
- Cloud Platforms: AWS, Azure, or Google Cloud
- Client Handling: Strong experience with client-facing roles and delivery
- Data: Data Modeling, RDBMS & NoSQL, Data Migration/Retention Strategies
- Security: Familiarity with OWASP, PCI DSS, InfoSec principles
Good to Have
- Experience with Mobile Technologies (native, hybrid, cross-platform)
- Knowledge of tools like Enterprise Architect, TOGAF frameworks
- DevOps tools, containerization (Docker), CI/CD
- Experience in financial services / payments domain
- Familiarity with BI/Analytics, AI/ML, Predictive Analytics
- 3rd-party integrations (e.g., MuleSoft, BizTalk)
We are seeking a Lead AI Engineer to spearhead development of advanced agentic workflows and large language model (LLM) systems. The ideal candidate should bring deep expertise in agent building, LLM evaluation/tracing, and prompt operations, combined with strong deployment experience at scale.
Key Responsibilities:
- Design and build agentic workflows leveraging modern frameworks.
- Develop robust LLM evaluation, tracing, and prompt ops pipelines.
- Lead MCP (Model Context Protocol)-based system integrations.
- Deploy and scale AI/ML solutions with enterprise-grade reliability.
- Collaborate with product and engineering teams to deliver high-impact solutions.
Required Skills & Experience:
- Proficiency with LangChain, LangGraph, Pydantic, Crew.ai, and MCP.
- Strong understanding of LLM architecture, behavior, and evaluation methods.
- Hands-on expertise in MLOps, DevOps, and deploying AI/ML workloads at scale.
- Experience leading AI projects from prototyping to production.
- Strong foundation in prompt engineering, observability, and agent orchestration.
Product Manager – AI Go-to-Market (GTM)
You know that feeling when you see a product not just built, but truly adopted? That’s what this role is about.
We built something that turns the endless scroll of social video into business intelligence. The product is already strong — now it’s time to take it to market, scale adoption, and own how it reaches the world.
This isn’t another PM role. This is where you become the strategist who shapes how AI meets the market.
Who We Are
Our team is small, global, and moves fast. Not startup-fast. Not “we say we’re agile” fast. Actually fast.
We ship meaningful features in days, and now we need someone who can do the same on the market side.
The people here don’t just work with AI — they think in AI. They dream in Python. They know how to build.
What we’re missing is the person who knows how to launch, position, and scale.
What We Need
We need someone who’s lived the GTM life.
Someone who has:
- Shaped go-to-market strategies across multiple channels.
- Crafted positioning, messaging, and pricing that drove adoption.
- Partnered with sales & marketing to accelerate pipeline and conversion.
- Translated market insights into product direction.
You don’t need to be taught what adoption metrics look like. You don’t need to “grow into” GTM strategy. You already know these things so deeply that you can focus on the only thing that matters: getting AI into the hands of people who can’t live without it.
Who You Are
- Strong IT/product foundation with a track record in launching AI/tech products.
- An AI believer who sees how it will reshape industries.
- Obsessed with channels, adoption, differentiation, and growth loops.
- Someone who thrives where market execution meets product credibility.
The Reality
The work is beautifully challenging. The pace is intense in the best way. The problems are complex but worth solving. And the team? They care deeply.
If you get your energy from taking innovation to market and building adoption strategies that matter, you’ll probably fall in love with what we do here. If you prefer more structure or slower rhythms, this might not align — and that’s completely valid.
How to Apply
If you’re reading this thinking “finally, somewhere that gets it” — we’d love to see something you’ve launched. Not a resume. Not a cover letter. Show us proof of how you’ve taken a product to market.
We’re excited to see what you’ve built and have a real conversation about whether this could be magic for both of us.
About the Role
We are looking for enthusiastic LLM Interns to join our team remotely for a 3-month internship. This role is ideal for students or graduates interested in AI, Natural Language Processing (NLP), and Large Language Models (LLMs). You will gain hands-on experience working with cutting-edge AI tools, prompt engineering, and model fine-tuning. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.
Responsibilities
- Research and experiment with LLMs, NLP techniques, and AI frameworks.
- Design, test, and optimize prompts and workflows for different use cases.
- Assist in fine-tuning or integrating LLMs for internal projects.
- Evaluate model outputs and improve accuracy, efficiency, and reliability.
- Collaborate with developers, data scientists, and product managers to implement AI-driven features.
- Document experiments, results, and best practices.
Requirements
- Strong interest in Artificial Intelligence, NLP, and Machine Learning.
- Familiarity with Python and ML libraries (e.g., TensorFlow, PyTorch, Hugging Face Transformers).
- Basic understanding of LLM concepts such as embeddings, fine-tuning, and inference.
- Knowledge of APIs (OpenAI, Anthropic, Hugging Face, etc.) is a plus.
- Good analytical and problem-solving skills.
- Ability to work independently in a remote environment.
What You’ll Gain
- Practical exposure to state-of-the-art AI tools and LLMs.
- Mentorship from AI and software professionals.
- Completion Certificate upon successful completion.
- Letter of Recommendation based on performance.
- Experience to showcase in research projects, academic work, or future AI roles.
Internship Details
- Duration: 3 months
- Location: Remote (Work from Home)
- Stipend: Unpaid
- Perks: Completion Certificate + Letter of Recommendation
About the Role
We are looking for enthusiastic Backend Developer Interns to join our team remotely for a 3-month internship. This is an excellent opportunity to gain hands-on experience in backend development, work on real projects, and expand your technical skills. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.
Responsibilities
- Assist in developing and maintaining backend services and APIs.
- Work with databases (SQL/NoSQL) for data storage and retrieval.
- Collaborate with frontend developers to integrate user-facing elements with server-side logic.
- Write clean, efficient, and reusable code.
- Debug, troubleshoot, and optimize backend performance.
- Participate in code reviews and team discussions.
- Document technical processes and contributions.
Requirements
- Basic knowledge of at least one backend language/framework (Node.js, Python/Django/Flask, Java/Spring Boot, or similar).
- Understanding of RESTful APIs and web services.
- Familiarity with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
- Knowledge of Git/GitHub for version control.
- Strong problem-solving and analytical skills.
- Ability to work independently in a remote environment.
- Good communication skills and eagerness to learn.
What You’ll Gain
- Real-world experience in backend development.
- Mentorship and exposure to industry practices.
- Completion Certificate at the end of the internship.
- Letter of Recommendation based on performance.
- Opportunity to strengthen your portfolio with live projects.
Internship Details
- Duration: 3 months
- Location: Remote (Work from Home)
- Stipend: Unpaid
- Perks: Completion Certificate + Letter of Recommendation
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
About the Role
We are looking for a highly motivated Innovation Engineer to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vertex AI, MCP, Vector Databases, AI Search, Agentic AI, and Automation.
As an Innovation Engineer, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
- Research & Implementation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, Vertex AI, MCP, and Automation.
- Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
- AI/ML Engineering: Design and develop AI/ML models, AI Agents, LLMs, and intelligent search capabilities leveraging vector embeddings.
- Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows (a retrieval sketch follows this list).
- Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
- Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
- Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
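As a small illustration of the vector-search building block behind RAG, a FAISS nearest-neighbour lookup (random vectors stand in for real embeddings; the dimension depends on the embedding model used):

```python
# pip install faiss-cpu numpy
import faiss
import numpy as np

dim = 384  # embedding size (model-dependent)
index = faiss.IndexFlatL2(dim)

# Stand-in for embeddings of 100 documents
doc_vecs = np.random.rand(100, dim).astype("float32")
index.add(doc_vecs)

# Embed the query the same way, then retrieve the top-5 nearest documents
# to stuff into the LLM prompt as context
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```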
Required Qualifications
- 4–10 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini, Vertex AI, and MCP.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, pgvector), MCP, and agentic AI (Vertex AI, AutoGen, ADK)
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About Our Benefits
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, career development, advancement opportunities, annual merit, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
About the Role
We are seeking a skilled AI CX Automation Engineer to design, build, and optimize AI-driven workflows for customer support automation.
This role will be responsible for enabling end-to-end L1 support automation using Freshworks AI (Freddy AI) and other conversational AI platforms. The ideal candidate will have strong technical expertise in conversational AI, workflow automation, and system integrations, working closely with Knowledge Managers and Customer Support teams to maximize case deflection and resolution efficiency.
Key Responsibilities
- AI Workflow Design: Build, configure, and optimize conversational AI workflows for L1 customer query handling.
- Automation Enablement: Implement automation using Freshworks AI (Freddy AI), chatbots, and orchestration tools to reduce manual support load.
- Integration: Connect AI agents with knowledge bases, CRM, and ticketing systems to enable contextual and seamless responses (see the sketch after this list).
- Conversational Design: Craft natural, intuitive conversation flows for chatbots and virtual agents to improve customer experience.
- Performance Optimization: Monitor AI agent performance, resolution rates, and continuously fine-tune workflows.
- Cross-functional Collaboration: Partner with Knowledge Managers, Product Teams, and Support to ensure workflows align with up-to-date content and customer needs.
- Scalability & Innovation: Explore emerging agentic AI capabilities and recommend enhancements to future-proof CX automation.
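As a hedged illustration of the integration work this role covers, the sketch below shows a FastAPI webhook that attempts case deflection with a knowledge-base lookup before escalating to L1. The event shape and both internal URLs are hypothetical placeholders, not a real Freddy AI interface.

# Illustrative webhook handler: receive a ticket-created event, search the
# knowledge base, and post an automated first response when a match exists.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
KB_SEARCH_URL = "https://example.internal/kb/search"        # hypothetical endpoint
REPLY_URL = "https://example.internal/tickets/{id}/reply"   # hypothetical endpoint

class TicketEvent(BaseModel):
    ticket_id: int
    subject: str
    description: str

@app.post("/webhooks/ticket-created")
def on_ticket_created(event: TicketEvent):
    # Look up the most relevant knowledge-base article for this ticket.
    hits = requests.get(KB_SEARCH_URL, params={"q": event.subject}, timeout=10).json()
    if not hits:
        return {"action": "escalate_to_l1"}    # no deflection possible
    reply = f"This article may resolve your issue: {hits[0]['url']}"
    requests.post(REPLY_URL.format(id=event.ticket_id), json={"body": reply}, timeout=10)
    return {"action": "auto_replied"}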
Required Qualifications
- 4–10 years of experience in conversational AI, automation engineering, or customer support technology.
- Hands-on expertise with Freshworks AI (Freddy AI) or similar AI-driven CX platforms (Zendesk AI, Salesforce Einstein, Dialogflow, Rasa, etc.).
- Strong experience in workflow automation, chatbot configuration, and system integrations (APIs, Webhooks, RPA).
- Familiarity with LLMs, intent recognition, and conversational AI frameworks.
- Strong analytical skills to evaluate and optimize AI agent performance.
- Excellent problem-solving, collaboration, and communication skills.
Preferred Qualifications
- Experience with agentic AI frameworks and multi-turn conversational flows.
- Knowledge of scripting or programming languages (Python, Node.js) for custom AI integrations.
- Familiarity with vector search, RAG (Retrieval-Augmented Generation), and AI search to improve context-driven answers.
- Exposure to SaaS, product-based companies, or enterprise-scale customer support operations.
Why Join Us?
- Be at the forefront of AI-driven customer support automation.
- Directly contribute to achieving up to 60% case resolution through AI workflows.
- Collaborate with Knowledge Managers and AI engineers to build next-gen CX solutions.
- Competitive compensation, benefits, and a culture of continuous learning.
Benefits
Unilog offers a competitive total rewards package including:
- Competitive salary
- Multiple medical, dental, and vision plans
- Career development and advancement opportunities
- Annual merit increases
- Generous time-off policy
- Flexible work environment
We are committed to fair hiring practices and advocate for diversity, equity, and inclusion.
About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring (a sketch follows this list)
- Oversight on team project execution and delivery
- Establish peer-review guidelines for high-quality coding to help develop junior team members' skill growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
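For a concrete flavor of that workflow, here is a minimal, assumption-laden sketch of a train/evaluate step with XGBoost; the feature frame X and binary target y are assumed to already exist, and the hyperparameters are illustrative.

# Train/evaluate sketch: holdout split, gradient-boosted classifier, AUC report.
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05, eval_metric="auc")
model.fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))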
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 3.5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Role: GenAI Full Stack Engineer
Fulltime
Work Location: Remote
Job Description:
• Strong Python skills and familiarity with AI/Gen AI frameworks. Experience with data manipulation libraries like Pandas and NumPy is crucial.
• Specific expertise in implementing and managing large language models (LLMs) is a must.
• FastAPI experience for API development (a minimal sketch follows this list).
• A solid grasp of software engineering principles, including version control (Git), continuous integration and continuous deployment (CI/CD) practices, and automated testing, is required. Experience in MLOps, ML engineering, and data science, with a proven track record of developing and maintaining AI solutions, is essential.
• We also need proficiency in DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, along with advanced CI/CD practices.
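As a small illustration of the FastAPI plus Pandas combination above, the sketch below serves summary statistics from a CSV; the file path and column semantics are placeholders.

# Minimal FastAPI service: expose aggregate stats over a Pandas DataFrame.
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()
df = pd.read_csv("data.csv")  # placeholder dataset, loaded once at startup

@app.get("/stats/{column}")
def column_stats(column: str):
    if column not in df.columns:
        raise HTTPException(status_code=404, detail=f"unknown column {column!r}")
    return {"mean": float(df[column].mean()), "max": float(df[column].max())}

Saved as main.py, this runs locally with uvicorn main:app --reload.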
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX); a sketch follows this list.
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
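One common inference-optimization step consistent with the list above is exporting a PyTorch model to ONNX so it can be served via ONNX Runtime or TensorRT. The sketch below uses a stock ResNet-18 purely as a stand-in for a real CV model.

# Export a PyTorch model to ONNX with a dynamic batch dimension.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # stand-in for a trained CV model
dummy = torch.randn(1, 3, 224, 224)            # example input shape
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},      # allow variable batch size at serve time
)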
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
About Us
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business.
You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (a sketch follows this list).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
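As a hedged sketch of the Iceberg work above: a partitioned write using Spark's DataFrameWriterV2, assuming an Iceberg-enabled Spark session (e.g. on Glue or EMR); the S3 path and catalog/table names are placeholders.

# Read raw JSON from S3 and write it to an Iceberg table with a daily
# partition transform. Assumes the `glue_catalog` Iceberg catalog is configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-ingest").getOrCreate()
txns = spark.read.format("json").load("s3://example-bucket/raw/txns/")  # placeholder path
(txns.writeTo("glue_catalog.analytics.transactions")   # placeholder Iceberg table
     .partitionedBy(F.days("created_at"))              # daily partition transform
     .createOrReplace())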
Requirements
- At least 6 years of experience in data engineering.
- Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points
- Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data as a product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.
MatchMove Culture:
- We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.
Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!
Job description
Job Title: React JS Developer - (Core Skill - React JS)
Core Skills -
- Minimum of 6 months of experience in frontend development using React JS (excluding internships and training programs)
The Company
Our mission is to enable and empower engineering teams to build world-class solutions and release them faster than ever. We strongly believe engineers are the building blocks of a great society: we love building, we love solving problems, and we love the unique technical challenges faced by the engineering community. Our DNA stems from Mohit’s passion for building technology products that solve problems with big impact.
We are largely a bootstrapped company and aspire to become the next household name in the engineering community, leaving a signature on all the great technological products being built across the globe.
Who would be your customers? We are going to shoulder the great responsibility of solving the minute problems that you, as an engineer, have faced over the years.
The Opportunity
An exciting opportunity to be part of a story, making an impact on how domain solutions will be built in the years to come.
Do you wish to lead the Engineering vertical, build your own fort, and shine through the journey of building the next-generation platform?
Blaash is looking to hire a problem solver with strong technical expertise in building large applications. You will build the next-generation AI solution for the Engineering Team - including backend and frontend.
Responsibility
Owning front-end and back-end development in all aspects. Proposing high-level design solutions and POCs to arrive at the right solution. Mentoring junior developers and interns.
What makes you an ideal team member we are eagerly waiting to meet:
- Demonstrate strong architecture and design skills in building high-performance APIs using AWS services.
- Design and implement highly scalable, interactive web applications with high usability
- Collaborate with product teams to iterate ideas on data monetization products/services and define feasibility
- Rapidly iterate on product ideas, build prototypes, and participate in proof of concepts
- Collaborate with internal and external teams in troubleshooting functional and performance issues
- Work with DevOps Engineers to integrate any new code into existing CI/CD pipelines
- 6+ months of experience in frontend development using React JS
- 6+ months of hands-on experience developing high-performance APIs & web applications
Salary -
- The first 4 months of the Training and Probation period = 15K-18K INR per month
- On successful completion of the Probation period = 20K - 25K INR per month
- Annual Performance Bonus of INR 40,000
- Equity Benefits for deserving candidates
How we will set you up for success:
You will work closely with the Founding team to understand what we are building. You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well. You will be involved in a monthly one-on-one with the founders to discuss feedback.
If you’ve made it this far, then maybe you’re interested in joining us to build something pivotal, carving a unique story for you - Get in touch with us, or apply now!
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data (see the sketch after this list).
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
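For illustration, a minimal connector sketch under stated assumptions: the CRM endpoint and response shape are hypothetical, and retries with backoff are delegated to urllib3's Retry.

# Third-party connector with automatic retries on transient failures.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(token: str) -> requests.Session:
    session = requests.Session()
    retry = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.headers["Authorization"] = f"Bearer {token}"
    return session

def fetch_contacts(session: requests.Session) -> list[dict]:
    # Placeholder CRM endpoint and response shape.
    resp = session.get("https://crm.example.com/api/contacts", timeout=10)
    resp.raise_for_status()
    return resp.json()["contacts"]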
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJS.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
We are seeking a highly skilled Fabric Data Engineer with strong expertise in the Azure ecosystem to design, build, and maintain scalable data solutions. The ideal candidate will have hands-on experience with Microsoft Fabric, Databricks, Azure Data Factory, PySpark, SQL, and other Azure services to support advanced analytics and data-driven decision-making.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Microsoft Fabric and Azure data services.
- Implement data integration, transformation, and orchestration workflows with Azure Data Factory, Databricks, and PySpark (a sketch follows this list).
- Work with stakeholders to understand business requirements and translate them into robust data solutions.
- Optimize performance and ensure data quality, reliability, and security across all layers.
- Develop and maintain data models, metadata, and documentation to support analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to deliver insights-driven solutions.
- Stay updated with emerging Azure and Fabric technologies to recommend best practices and innovations.
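A hedged sketch of one such pipeline step, written as it might appear in a Databricks or Fabric notebook where spark is predefined; the ADLS path and table name are placeholders.

# Cleanse-and-load step: dedupe, type-cast, filter, then persist as a Delta table.
from pyspark.sql import functions as F

orders = (spark.read.format("csv").option("header", True)
          .load("abfss://raw@example.dfs.core.windows.net/orders/"))   # placeholder path
clean = (orders.dropDuplicates(["order_id"])
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .filter(F.col("amount") > 0))
clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")  # placeholder table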
Required Skills & Experience
- Proven experience as a Data Engineer with strong expertise in the Azure cloud ecosystem.
Hands-on experience with:
- Microsoft Fabric
- Azure Databricks
- Azure Data Factory (ADF)
- PySpark & Python
- SQL (T-SQL/PL-SQL)
- Solid understanding of data warehousing, ETL/ELT processes, and big data architectures.
- Knowledge of data governance, security, and compliance within Azure.
- Strong problem-solving, debugging, and performance tuning skills.
- Excellent communication and collaboration abilities.
Preferred Qualifications
- Microsoft Certified: Fabric Analytics Engineer Associate / Azure Data Engineer Associate.
- Experience with Power BI, Delta Lake, and Lakehouse architecture.
- Exposure to DevOps, CI/CD pipelines, and Git-based version control.
Job Title: Backend Developer (Full Time)
Location: Remote
Interview: Virtual Interview
Experience Required: 3+ Years
Backend / API Development (About the Role)
- Strong proficiency in Python (FastAPI) or Node.js (Express) (Python preferred).
- Proven experience in designing, developing, and integrating APIs for production-grade applications.
- Hands-on experience deploying to serverless platforms such as Cloudflare Workers, Firebase Functions, or Google Cloud Functions.
- Solid understanding of Google Cloud backend services (Cloud Run, Cloud Functions, Secret Manager, IAM roles).
- Expertise in API key and secrets management, ensuring compliance with security best practices.
- Skilled in secure API development, including HTTPS, authentication/authorization, input validation, and rate limiting (a sketch follows this list).
- Track record of delivering scalable, high-quality backend systems through impactful projects in production environments.
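To make those security expectations concrete, here is a minimal sketch assuming FastAPI: API-key auth, Pydantic input validation, and a naive in-memory rate limit. The key, limits, and endpoint are illustrative; a production service would load secrets from a store such as Secret Manager and use a shared limiter (e.g. Redis).

# Secure-API basics: key check, per-key rate limit, validated request body.
import time
from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()
EXPECTED_KEY = "change-me"          # placeholder; load from a secrets manager
_hits: dict[str, list[float]] = {}  # naive per-key request log (single process only)

def require_key(x_api_key: str = Header(...)) -> str:
    if x_api_key != EXPECTED_KEY:
        raise HTTPException(status_code=401, detail="invalid API key")
    window = [t for t in _hits.get(x_api_key, []) if t > time.time() - 60]
    if len(window) >= 30:           # illustrative limit: 30 requests/minute per key
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    _hits[x_api_key] = window + [time.time()]
    return x_api_key

class Order(BaseModel):
    sku: str = Field(min_length=1, max_length=64)
    quantity: int = Field(gt=0, le=1000)

@app.post("/orders")
def create_order(order: Order, _: str = Depends(require_key)):
    return {"status": "accepted", "sku": order.sku}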
About Us
We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.
Role Overview
We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.
Key Responsibilities
- Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
- Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
- Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences (see the sketch after this list).
- Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
- Collaborate with product, design, and AI research teams to bring experimental features into production.
- Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
- Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
- Champion best practices in coding, testing, DevOps, and Responsible AI integration.
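As one hedged example of such an integration, the sketch below generates an eCommerce product description through the OpenAI Python client; the model name and prompt are illustrative, not a prescribed design.

# Generate a product description from structured catalog data via an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_product(name: str, features: list[str]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": f"Write a two-sentence product description for {name}. "
                       f"Features: {', '.join(features)}",
        }],
    )
    return resp.choices[0].message.content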
Required Skills & Experience
- 5+ years of professional full-stack development experience.
- Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
- Strong backend expertise in Python (Django, FastAPI, Flask).
- Experience with cloud services (Azure preferred; AWS/GCP is a plus).
- Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
- Solid understanding of databases (SQL & NoSQL), caching, and API design.
- Familiarity with frontend frameworks such as React, Angular, or Vue.
- Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
- Strong problem-solving skills, adaptability, and a product-first mindset.
Nice to Have
- Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
- Experience in scalable SaaS architectures and subscription platforms.
- Familiarity with C2PA, identity security, or compliance-driven development.
What We Offer
- Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
- A high-impact role with autonomy and visibility.
- Competitive compensation, equity opportunities, and growth into leadership roles.
- Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.
Software engineers are the lifeblood of impress.ai. They build the software that powers our platform, the dashboard that recruiters around the world use, and all the other cool things we build and release. We are looking to expand our team with highly skilled backend engineers. As backend engineers, you don’t just build backend APIs and architect databases, you help bring to production the AI prototypes our Analytics/Data team builds, and you ensure that the cloud infrastructure, DevOps, and CI/CD processes that keep us ticking are optimal.
The Job:
The ideal candidate should have a few years of experience under their belt and the technical skill, competencies, and maturity necessary to execute projects independently with minimal supervision. They should also be able to architect engineering solutions that satisfy the business requirements with minimal input from senior software engineers.
At impress.ai our mission is to make hiring fairer for all applicants. We combine I/O Psychology with AI to create an application screening process that gives every candidate the opportunity to undergo a structured interview. impress.ai consciously uses this to ensure that people are chosen based on their talent, knowledge, and capabilities rather than their gender, race, or name.
Responsibilities:
- Execute full software development life cycle (SDLC)
- Write well-designed, testable code
- Produce specifications and determine operational feasibility
- Build and integrate new software components into the Impress Platform
- Develop software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug, and upgrade existing systems
- Deploy programs and evaluate user feedback
- Comply with project plans and industry standards
- Develop flowcharts, layouts, and documentation to identify requirements and solutions
You Bring to the Table:
- Proven work experience as a Software Engineer or Software Developer
- Proficiency in software engineering tools
- The ability to develop software in the Django framework (Python) is necessary for the role (a minimal sketch follows this list).
- Excellent knowledge of relational databases, SQL, and ORM technologies
- Ability to document requirements and specifications
- A BSc degree in Computer Science, Engineering, or a relevant field is preferred but not necessary.
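For illustration, a minimal Django sketch of the model/ORM work this implies; the Candidate model and its fields are hypothetical, not impress.ai's actual schema.

# Define a model and query it through the ORM rather than hand-written SQL.
from django.db import models

class Candidate(models.Model):
    name = models.CharField(max_length=200)
    applied_at = models.DateTimeField(auto_now_add=True)
    score = models.FloatField(default=0.0)

def shortlist():
    # Typical ORM usage: filter and order declaratively.
    return Candidate.objects.filter(score__gte=0.8).order_by("-applied_at")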
Our Benefits:
- Work with cutting-edge technologies like Machine Learning, AI, and NLP and learn from the experts in their fields in a fast-growing international SaaS startup. As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions.
- A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team.
- Opportunity to contribute to the success of a fast-growing, market-leading product.
- Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two.
Diversity and Inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person’s talents and strengths. We pride ourselves on working with the best and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.
Must have skills:
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting, best practices, and challenges (a sketch follows this list)
3. Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs. NoSQL trade-offs, and types of NoSQL databases (at least 2)
5. Data warehouse concepts - beginner to intermediate level
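As a hedged sketch of a batch ingestion step on GCP, the snippet below loads newline-delimited JSON from GCS into BigQuery with the official client; the bucket, project, dataset, and table names are placeholders.

# Batch load: GCS newline-delimited JSON into a BigQuery table, appending rows.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
job = client.load_table_from_uri(
    "gs://example-bucket/events/2024-01-01/*.json",   # placeholder source
    "example-project.analytics.events",               # placeholder destination
    job_config=job_config,
)
job.result()  # block until the load job completes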
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations (a sketch follows this list).
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
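For a concrete flavor of the automation above, here is a minimal Selenium sketch of a login-workflow check; the URL, locators, and credentials are placeholders rather than a real Certa flow.

# Automated UI check: log in and assert the dashboard renders, with an
# explicit wait instead of fixed sleeps.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")   # placeholder URL
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "[data-test=dashboard]"))
    )
    print("login workflow OK")
finally:
    driver.quit()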
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment
NOTE: This is a contractual role for a period of 3–6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation (a sketch follows this list)
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Experience with infra monitoring tools such as Grafana dashboards
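As a small illustration of the scripting expected here, the sketch below is a health check that posts to an alerting webhook on failure; both URLs are placeholders.

# Health-check script: probe service endpoints, log results, alert on failure.
import logging
import requests

logging.basicConfig(level=logging.INFO)
SERVICES = {"api": "https://api.example.com/healthz"}   # placeholder endpoints
ALERT_WEBHOOK = "https://hooks.example.com/alerts"      # placeholder webhook

def check(name: str, url: str) -> None:
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        logging.info("%s healthy (%s)", name, resp.status_code)
    except requests.RequestException as exc:
        logging.error("%s unhealthy: %s", name, exc)
        requests.post(ALERT_WEBHOOK, json={"service": name, "error": str(exc)}, timeout=5)

for name, url in SERVICES.items():
    check(name, url)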
Job Summary:
We are seeking a skilled and motivated Python Django Developer with experience in building high-performance APIs using Django Ninja.
The ideal candidate will have a strong background in web development, API design, and backend systems.
Experience with IX-API and internet exchange operations is a plus.
You will play a key role in developing scalable, secure, and efficient backend services that support our network infrastructure and service delivery.
Key Responsibilities:
Design, develop, and maintain backend services using Python Django and Django Ninja (a minimal sketch follows this list).
Build and document RESTful APIs for internal and external integrations.
Collaborate with frontend, DevOps, and network engineering teams to deliver end-to-end solutions.
Ensure API implementations follow industry standards and best practices.
Optimize performance and scalability of backend systems.
Troubleshoot and resolve issues related to API functionality and performance.
Participate in code reviews, testing, and deployment processes.
Maintain clear and comprehensive documentation for APIs and workflows.
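For illustration, a minimal Django Ninja sketch of the API style described above; the Port schema and data are hypothetical, not the IX-API specification.

# A typed, auto-documented endpoint: Django Ninja generates the OpenAPI schema.
from typing import List
from ninja import NinjaAPI, Schema

api = NinjaAPI(title="Exchange API")

class PortOut(Schema):
    id: int
    speed_mbps: int
    status: str

@api.get("/ports", response=List[PortOut])
def list_ports(request):
    # Placeholder data; a real implementation would query the Django ORM.
    return [{"id": 1, "speed_mbps": 10000, "status": "active"}]

In a real project the api object is mounted in urls.py via path("api/", api.urls).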
Required Skills & Qualifications:
Proven experience with Python Django and Django Ninja for API development.
Strong understanding of RESTful API design, JSON, and OpenAPI specifications.
Proficiency in Python and familiarity with asynchronous programming.
Experience with CI/CD tools (e.g., Jenkins, GitLab CI).
Knowledge of relational databases (e.g., PostgreSQL, MySQL).
Familiarity with version control systems (e.g., Git).
Excellent problem-solving and communication skills.
Preferred Qualifications:
Experience with IX-API development and integration.
Understanding of internet exchange operations and BGP routing.
Exposure to network automation tools (e.g., Ansible, Terraform).
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Experience with cloud platforms (AWS, Azure, GCP).
Contributions to open-source projects or community involvement.
Job Title : Senior Security Engineer – ServiceNow Security & Threat Modelling
Experience : 6+ Years
Location : Remote
Type : Contract
Job Summary :
We’re looking for a Senior Security Engineer to strengthen our ServiceNow ecosystem with security-by-design.
You will lead threat modelling, perform security design reviews, embed security in SDLC, and ensure risks are mitigated across applications and integrations.
Mandatory Skills :
ServiceNow platform security, threat modelling (STRIDE/PASTA), SAST/DAST (Checkmarx/Veracode/Burp/ZAP), API security, OAuth/SAML/SSO, secure CI/CD, JavaScript/Python.
Key Responsibilities :
- Drive threat modelling, design reviews, and risk assessments.
- Implement & manage SAST/DAST, secure CI/CD pipelines, and automated scans (a sketch follows this list).
- Collaborate with Dev/DevOps teams to instill secure coding practices.
- Document findings, conduct vendor reviews & support ITIL-driven processes.
- Mentor teams on modern security tools and emerging threats.
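As a hedged sketch of scan automation, the snippet below drives OWASP ZAP through the python-owasp-zap-v2.4 client, assuming a ZAP daemon is already running locally; the target URL and API key are placeholders.

# Spider then active-scan a target with ZAP, and print resulting alerts.
import time
from zapv2 import ZAPv2

zap = ZAPv2(apikey="change-me")            # placeholder key; default local proxy
target = "https://staging.example.com"     # placeholder target

scan_id = zap.spider.scan(target)          # crawl the app first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)           # then run the active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"])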
Required Skills :
- Strong expertise in ServiceNow platform security & architecture.
- Hands-on with threat modelling (STRIDE/PASTA, attack trees).
- Experience using SAST/DAST tools (Checkmarx, Veracode, Burp Suite, OWASP ZAP).
- Proficiency in API & Web Security, OAuth, SAML, SSO.
- Knowledge of secure CI/CD pipelines & automation.
- Strong coding skills in JavaScript/Python.
- Excellent troubleshooting & analytical abilities for distributed systems.
Nice-to-Have :
- Certifications: CISSP, CEH, OSCP, CSSLP, ServiceNow Specialist.
- Knowledge of cloud security (AWS/Azure/GCP) & compliance frameworks (ISO, SOC2, GDPR).
- Experience with incident response, forensics, SIEM tools.