50+ Remote Python Jobs in India
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
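To make the SCD handling mentioned above concrete: a Type 2 slowly changing dimension closes out the current version of a row and appends a new one. The sketch below does this over an in-memory list in plain Python; in the warehouse itself this would be a set-based SQL MERGE or a dbt snapshot, and all field names here are hypothetical:

```python
from datetime import date

def scd2_upsert(dim_rows, key, new_attrs, today):
    """Apply a Type 2 slowly-changing-dimension update in memory.

    Each row carries 'valid_from'/'valid_to'; valid_to=None marks the
    current version. Purely illustrative -- production code would
    express this as a set-based MERGE in the warehouse.
    """
    for row in dim_rows:
        if row["key"] == key and row["valid_to"] is None:
            if row["attrs"] == new_attrs:
                return dim_rows              # nothing changed
            row["valid_to"] = today          # close the current version
            break
    dim_rows.append({"key": key, "attrs": new_attrs,
                     "valid_from": today, "valid_to": None})
    return dim_rows

# A patient moves clinics: the old row is closed, a new current row appears.
dim = [{"key": "P1", "attrs": {"clinic": "A"},
        "valid_from": date(2024, 1, 1), "valid_to": None}]
scd2_upsert(dim, "P1", {"clinic": "B"}, date(2024, 6, 1))
```

The same history-preserving pattern is what lets downstream marts answer "which clinic was this patient at on a given date".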
The Stack You’ll Command
- Languages: Expert-level SQL (CTEs, window functions, query tuning) and production-grade Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
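On the performance point above: the habit being described is reading the query plan before trusting a query. The snippet below uses SQLite's EXPLAIN QUERY PLAN as a lightweight stand-in (Postgres's EXPLAIN/ANALYZE plays the same role); the table and index names are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE labs (patient_id INTEGER, taken_at TEXT, value REAL)")
con.execute("CREATE INDEX ix_labs_patient ON labs (patient_id, taken_at)")

# Ask the planner how it would run the query -- without executing it.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT value FROM labs WHERE patient_id = ? ORDER BY taken_at",
    (42,),
).fetchall()

# The last column of each plan row is human-readable detail; seeing the
# index name there (rather than 'SCAN labs') confirms an index search.
uses_index = any("ix_labs_patient" in row[-1] for row in plan)
```

The composite index also satisfies the ORDER BY, so the planner can skip a separate sort step, which is exactly the kind of detail a plan read surfaces.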
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To proceed to the next round, please fill out the Google Form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI/Django (Python), Spring (Java), Express (Node.js))
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
We are a forward-thinking company, Hookux, seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
Benefits:
- Competitive salary and performance bonuses.
- Opportunities for career growth and learning.
- Flexible working hours and remote working options.
Mail your CV and portfolio to hr@hookux.com.
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
AI Lead (Backend Systems & Architecture)
This is not a feature-delivery role. This is an architecture, ownership, and AI systems leadership role.
At Techjays, we build production-grade AI platforms for global clients. We are looking for an AI Lead with strong backend engineering expertise—someone who can design, scale, and take complete ownership of intelligent systems end-to-end.
You will operate at the intersection of backend engineering, distributed systems, and applied AI, driving both technical direction and execution.
What You’ll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement AI workflows such as RAG pipelines, agents, and LLM integrations
- Own systems end-to-end: architecture, development, deployment, and scaling
- Build reliable, high-performance distributed systems
- Integrate and optimize LLMs (Claude, GPT, etc.) for real-world use cases
- Lead backend and AI initiatives with strong technical ownership
- Ensure performance, scalability, observability, and cost efficiency
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to build AI-native solutions
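To make the RAG wiring listed above concrete: the core loop is embed the query, retrieve the top-k most similar documents, and stuff them into the LLM prompt. The toy sketch below uses bag-of-words vectors and cosine similarity in place of a real embedding model and vector database (Pinecone, FAISS, etc.), so every name and document in it is illustrative:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system calls an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Claude is an LLM made by Anthropic",
    "Kafka is a distributed event streaming platform",
    "RAG augments an LLM prompt with retrieved context",
]
context = retrieve("how does an LLM use retrieved context", docs)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: how does RAG work?"
```

In production, the retrieval step is what a vector database replaces, and the prompt assembly is where chunking, citation, and token budgeting live.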
What We’re Looking For
- Proven experience in architecting and scaling backend systems end-to-end
- Strong expertise in Python (Django / Flask / FastAPI)
- Deep understanding of distributed systems and system design
- Hands-on experience with AWS or GCP in production environments
- Solid experience working with LLMs (Claude, GPT, etc.)
- Strong knowledge of:
- Retrieval-Augmented Generation (RAG)
- Vector databases (Pinecone, FAISS, Weaviate, etc.)
- Experience in building and managing microservices architectures
- Ability to lead teams, mentor engineers, and drive technical excellence
- Strong problem-solving skills with an ownership mindset
Nice to Have
- Experience building AI agents or autonomous systems
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- Understanding of MLOps and AI system lifecycle
- Experience optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable in fast-moving, ambiguous environments
- You stay updated with the latest advancements in AI and backend technologies
This role is ideal for someone who wants to lead, build, and scale AI-powered backend systems in production while driving real-world impact.
Description
Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You'll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
Senior Quality Engineer – AI Products
Fulltime
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes
About The Nexora Group Inc.
The Nexora Group Inc. is a technology-driven organization focused on building intelligent digital solutions using modern software engineering and artificial intelligence technologies. Our teams work on projects involving data-driven applications, automation systems, and AI-powered tools designed to solve real-world business challenges.
We are looking for motivated and enthusiastic Python Developer Interns with an interest in Artificial Intelligence who want to gain practical experience working on live development projects.
Internship Responsibilities
- Assist in developing backend applications using Python
- Work on AI-related modules such as machine learning models, data processing pipelines, and automation tools
- Write clean, scalable, and well-documented code
- Support the development of APIs and backend services
- Participate in debugging, testing, and performance optimization
- Collaborate with development teams on project tasks and deliverables
- Contribute to research and implementation of AI/ML solutions
Required Skills
- Basic understanding of Python programming
- Familiarity with data structures and algorithms
- Interest in Artificial Intelligence and Machine Learning
- Basic knowledge of NumPy, Pandas, or similar Python libraries
- Understanding of REST APIs is a plus
- Strong problem-solving skills
- Ability to learn quickly and work in a collaborative environment
Preferred Qualifications
- Students or recent graduates in Computer Science, IT, Data Science, or related fields
- Basic knowledge of Machine Learning concepts
- Experience with Git or version control systems is beneficial
- Familiarity with Flask, Django, or FastAPI is a plus
What Interns Will Gain
- Hands-on experience working on real-world development projects
- Exposure to AI and machine learning development workflows
- Mentorship from experienced developers
- Opportunity to build a strong portfolio with practical project experience
- Internship completion certificate based on performance and participation
About the Internship
The Nexora Group Inc. is looking for enthusiastic and motivated interns who want to build practical experience in Data Science and Artificial Intelligence. This internship is designed to provide hands-on exposure to real-world datasets, machine learning techniques, and AI-driven problem solving.
Interns will work closely with our technical team to analyze data, build predictive models, and explore AI tools that support data-driven decision-making.
Key Responsibilities
- Collect, clean, and preprocess structured and unstructured datasets
- Perform exploratory data analysis (EDA) to identify trends and patterns
- Develop machine learning models using Python-based libraries
- Assist in building AI-powered data analysis workflows
- Create dashboards, reports, and visualizations to communicate insights
- Work with tools such as Python, Pandas, NumPy, and visualization libraries
- Collaborate with team members on real-world data science projects
- Document project findings and maintain clear technical reports
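The EDA step above is, at its core, summary statistics plus outlier hunting. A minimal standard-library sketch (the readings here are made up; real work would use Pandas and Matplotlib on actual datasets):

```python
import statistics

# Toy sensor readings with one suspicious value.
readings = [12.1, 11.8, 12.4, 60.0, 12.0, 11.9]

mean = statistics.mean(readings)
median = statistics.median(readings)
stdev = statistics.stdev(readings)

# A mean far from the median is a quick hint of skew or outliers --
# exactly the kind of pattern EDA surfaces before any modeling.
outliers = [x for x in readings if abs(x - median) > 2 * stdev]
```

Flagging points more than two standard deviations from the median is a crude but common first pass; on real data the threshold and robustness of the statistic are themselves EDA decisions.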
Required Skills
- Basic knowledge of Python programming
- Understanding of data analysis and statistics
- Familiarity with Machine Learning concepts
- Knowledge of libraries such as Pandas, NumPy, Matplotlib, or Scikit-learn
- Strong analytical and problem-solving skills
- Good communication and documentation skills
Preferred Qualifications
- Students or recent graduates in Computer Science, Data Science, Statistics, Mathematics, or related fields
- Basic understanding of Artificial Intelligence concepts
- Familiarity with Jupyter Notebook or Google Colab
- Interest in working with real-world datasets and analytics tools
What You Will Gain
- Hands-on experience with Data Science and AI projects
- Mentorship from experienced professionals
- Internship completion certificate
- Opportunity to build portfolio projects
- Exposure to real-world industry workflows
Highlights:
- Current location: candidate should be based in Bangalore
- Total experience: 6–12 years
- Joining period: within 30 days
- GCP BigQuery expert, GCP certified
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)
Must Have - GCP Certification
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode)
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Must have worked in product companies (B2B SaaS preferred)
Role Overview
We are looking for a QA Automation Engineer who can leverage AI-driven testing approaches to improve automation coverage, test reliability, and data generation.
The ideal candidate should have strong experience in backend-heavy automation testing, modern automation frameworks, and using AI tools to generate test cases, maintain test scripts, and create synthetic data for testing.
Key Responsibilities
- Design and develop automated test frameworks for backend and API-heavy applications.
- Use AI tools to generate test scripts from requirements (e.g., Gherkin/Cucumber-based test generation).
- Implement and maintain self-healing test automation frameworks that adapt to UI changes.
- Develop automated tests using Playwright, Appium, and other modern automation tools.
- Create synthetic test data using AI while ensuring PII compliance.
- Perform backend stress testing and API validation.
- Work closely with engineering teams to ensure product quality and release readiness.
- Continuously improve test coverage, test reliability, and automation efficiency.
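On the synthetic-data responsibility above: whatever generates the data (an AI tool or otherwise), the PII-compliance part comes from fabricating every field and keeping generation deterministic so failures reproduce. A minimal sketch, with made-up field names:

```python
import random
import string

def synthetic_users(n, seed=0):
    """Generate fake user records for tests.

    Every field is fabricated, so no real PII can leak into the test
    environment; a fixed seed makes failing runs reproducible.
    """
    rng = random.Random(seed)
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i,
            "email": f"{name}@example.test",   # reserved TLD, never deliverable
            "card_last4": f"{rng.randrange(10000):04d}",
        })
    return users
```

In practice a library like Faker, or an AI generator validated against the target schema, would replace the random strings with richer, realistic-looking values.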
Must-Have Skills
- 4+ years of experience in QA Automation
- Strong experience in automation testing frameworks
- Hands-on experience with Playwright for web automation
- Experience with Appium for mobile automation
- Proficiency in Python for test scripting and data generation
- Experience writing BDD-style test cases (Gherkin / Cucumber)
- Experience in API testing and backend automation
- Familiarity with AI-assisted test generation tools
- Strong knowledge of CI/CD pipelines and automated testing workflows
Relevant Skills
- Backend automation testing
- Test automation frameworks design
- AI-assisted test generation
- Synthetic test data generation
- Performance and stress testing
- API testing tools (Postman, REST clients)
- Test reporting and debugging
- Version control using Git
AI & Automation Expertise
- Using AI tools to generate test cases from requirements
- Experience with self-healing test automation frameworks such as Mabl or Testim
- Using AI to generate synthetic financial datasets for testing
- Testing AI-powered applications or AI features
Tools & Technologies
- Playwright
- Appium
- Python
- Cucumber / Gherkin
- CI/CD tools
- Git
Strong Plus
- Experience working in the Finance / FinTech sector
- Experience testing AI-powered applications
- Experience working closely with AI engineering teams
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is one part Digital Product Studio specializing in building superior product experiences, and one part Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we've created half a dozen multi-million-valuation companies in the US and a handful of sister ventures for large corporations including Target, US Ventures, and Imprint Engine.
We're a team of 100 strong from around the world who are radically open-minded and believe in excellence, respecting one another, and pushing our boundaries further than ever before.
We are looking for passionate and motivated Developers to join our growing technical team. The ideal candidate should have strong foundational knowledge in Python/Django or React with Django and be eager to work on real-time web development projects.
Open Positions:
Python Django Developer
React + Django Developer
Key Responsibilities:
- Develop, test, and maintain scalable web applications.
- Write clean, efficient, and reusable code using Django and/or React.
- Collaborate with UI/UX designers and backend developers to implement new features.
- Debug, troubleshoot, and optimize application performance.
- Participate in code reviews and contribute to team discussions.
- Stay updated with the latest web development trends and technologies.
Requirements:
- Basic to strong knowledge of Python and Django framework.
- Familiarity with React.js (for React + Django role).
- Understanding of REST APIs and database concepts.
- Knowledge of HTML, CSS, and JavaScript.
- Strong problem-solving and logical thinking skills.
- Good communication and teamwork abilities.
- Freshers and career restart candidates are welcome to apply.
More Info:
Company: Altos Technologies
Website: www.altostechnologies.in
Job Type: Permanent Job
Industry: IT / Web Development
Function: Software Development
Employment Type: Full-time
Location: Kochi & Chennai
We're hiring a Python Developer in Jaipur.
Not looking for someone who can recite design patterns. Looking for someone who can open a Django codebase, figure out what's broken,
and fix it by end of day. 3-4 years. Django / Flask / FastAPI. REST APIs. PostgreSQL. If you've maintained production code (not just built tutorial projects) — this is your role.
Full-time | Jaipur | Industry-standard pay | Small team = real ownership
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI / Django (Python), Spring (Java), Express (Node.js)).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.
Key Responsibilities:
1: Design, build, and maintain CI/CD pipelines for faster, more reliable deployments.
2: Manage and monitor cloud infrastructure and servers.
3: Automate build, testing, and deployment processes.
4: Collaborate with development and QA teams to improve release cycles.
5: Monitor system performance and ensure high availability and reliability.
6: Troubleshoot infrastructure and deployment issues.
7: Implement security best practices in DevOps workflows.
Required Skills:
1: Strong understanding of DevOps principles and CI/CD pipelines.
2: Experience with Docker, Kubernetes, or containerization technologies.
3: Familiarity with cloud platforms such as AWS, Azure, or GCP.
4: Experience with Git, Jenkins, GitHub Actions, or similar tools.
5: Basic scripting knowledge (Bash, Python, or Shell).
6: Good understanding of Linux systems and networking concepts.
Eligibility:
1: Experience: 2 – 7 years
2: Qualification: Bachelor's degree in Computer Science, IT, or related field
3: Strong analytical and problem-solving skills.
Location: Chennai / Remote
Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd
Job Description:
Position Type: Full-Time Contract (with potential to convert to Permanent)
Location: Remote (Australian Time Zone)
Availability: Immediate Joiners Preferred
About the Role
We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.
The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.
Key Responsibilities
- Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
- Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
- Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
- Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
- Perform data profiling, data validation, and ensure data quality across systems.
- Work closely with data engineering teams to improve data structures for better reporting efficiency.
- Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
- Support deployment, version control, and documentation of BI solutions.
- Ensure availability of dashboards during Australian business hours.
Required Skills & Experience
- 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
- 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
- Advanced knowledge of SQL and performance tuning.
- Strong understanding of data modeling, ETL processes, and cloud data platforms.
- Experience working in fast-paced environments with tight delivery timelines.
- Excellent communication and stakeholder management skills.
- Ability to work independently and deliver high‑quality outputs aligned with business objectives.
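Window functions and CTEs are central to the SQL and query-optimization work described above. The sketch below illustrates the pattern with Python's stdlib sqlite3 standing in for Snowflake (SQLite has supported window functions since version 3.25); the sales table and figures are invented for illustration.

```python
import sqlite3

# In-memory stand-in for a warehouse table; schema is illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue INT)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("APAC", "2024-01", 100), ("APAC", "2024-02", 130),
     ("EMEA", "2024-01", 90),  ("EMEA", "2024-02", 80)],
)

# A CTE plus a window function: running revenue per region, ordered by month.
query = """
WITH ordered AS (
    SELECT region, month, revenue
    FROM sales
)
SELECT region, month,
       SUM(revenue) OVER (
           PARTITION BY region ORDER BY month
       ) AS running_revenue
FROM ordered
ORDER BY region, month
"""
rows = con.execute(query).fetchall()
```

The same `PARTITION BY ... ORDER BY` shape carries over to Snowflake unchanged; only the connection and scale differ.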
Nice-to-Have Skills
- Knowledge of Python or any ETL tool.
- Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
- Tableau Server/Prep experience.
Contract Details
- Full-Time Contract for several months.
- High possibility of conversion to permanent, based on performance.
- Must be available to work on the Australian Time Zone.
- Immediate joiners are highly encouraged.
1. Core Technical Skills
- 4–7 years of professional C++ experience in performance-critical systems
- Expert knowledge of modern C++ (C++11/14/17)
- Strong understanding of data structures, algorithms, and memory models
- Deep experience with multithreading, atomics, lock-free programming, and CPU cache behaviour
- Excellent knowledge of Linux internals and system-level programming
- Experience with low-level debugging and profiling (gdb, perf, valgrind, flamegraphs)
- Proficiency with CMake/Make and Git
2. Trading Systems Experience (Highly Preferred)
- Hands-on experience with order management systems (OMS) and execution engines
- Knowledge of exchange protocols: FIX, ITCH, OUCH, FAST
- Experience handling market data feeds (L1/L2, multicast, UDP)
- Understanding of latency measurement, clock synchronization, and time stamping
- Experience with network tuning (kernel bypass, socket tuning, CPU pinning)
- Familiarity with trading lifecycle, risk checks, and throttling mechanisms
3. Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related discipline
4. Soft Skills (Important for Trading Firms)
- Ability to work under extreme time and accuracy pressure
- Strong ownership of production systems
- Clear and direct communication with traders and quants
- Bias toward simple, fast, and reliable designs
5. Key Responsibilities
- Design, develop, and optimize ultra-low-latency C++ trading applications
- Build and maintain exchange connectivity and order execution systems
- Develop real-time market data pipelines with strict latency requirements
- Optimize systems at CPU, memory, and network levels
- Implement lock-free or low-lock concurrent designs
- Analyze latency using profiling tools and improve tail latency
- Ensure high availability, fault tolerance, and rapid recovery
- Work closely with Traders and Quant Researchers to implement strategies
- Participate in architecture and performance design reviews
- Review code, enforce best practices, and mentor junior engineers
- Support production systems and handle time-critical issues when needed
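Tail latency, mentioned in the responsibilities above, is usually summarized with high percentiles such as p99. Although the role is C++-focused, the percentile arithmetic is language-independent; here is a minimal nearest-rank sketch in Python (the sample values are hypothetical).

```python
import math

def percentile(samples, q):
    """Nearest-rank percentile: q in (0, 100]; samples need not be sorted."""
    s = sorted(samples)
    # nearest-rank definition: the ceil(q/100 * n)-th smallest sample
    idx = max(0, math.ceil(q / 100 * len(s)) - 1)
    return s[idx]

# Hypothetical one-way latencies in microseconds
latencies_us = list(range(1, 101))
p50 = percentile(latencies_us, 50)
p99 = percentile(latencies_us, 99)
```

In practice, trading systems feed such percentiles from histograms captured on the hot path rather than sorting raw samples, but the definition is the same.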
We are seeking an experienced Python Lead to design, develop, and scale high-performance backend systems. The ideal candidate will have strong expertise in Python-based backend development, system design, and cloud-native architectures. You will lead the development of scalable APIs, work with modern cloud platforms, and collaborate with cross-functional teams to deliver reliable and efficient applications.
Key Responsibilities
- Design and develop scalable backend services using Python (Django/Flask).
- Build and maintain RESTful APIs and WebSocket-based applications.
- Implement efficient algorithms, data structures, and design patterns for high-performance systems.
- Develop and optimize database schemas and queries using PostgreSQL, MySQL, or MongoDB.
- Integrate caching and queuing systems to improve system performance and reliability.
- Deploy and manage applications on AWS or GCP cloud environments.
- Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
- Work with Docker containers and Linux-based environments for development and deployment.
- Collaborate with engineering teams to design scalable system architectures.
- Explore and integrate AI-driven capabilities such as RAG, LLMs, and vector databases where applicable.
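Caching, one of the responsibilities above, can be sketched in a few lines with Python's `functools.lru_cache`; the `get_user_profile` function and its return shape are hypothetical stand-ins for an expensive database or service call.

```python
from functools import lru_cache

CALLS = {"count": 0}  # counts how often the "backend" is actually hit

@lru_cache(maxsize=1024)
def get_user_profile(user_id: int) -> dict:
    """Hypothetical stand-in for an expensive DB or service lookup."""
    CALLS["count"] += 1
    # NOTE: cached mutable results are shared by all callers; treat as read-only
    return {"id": user_id, "tier": "gold" if user_id % 2 else "basic"}
```

For cross-process caching the same memoization idea moves into Redis or Memcached, with the added concerns of TTLs and invalidation.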
Required Skills
- Strong expertise in Python backend development using Django or Flask
- Experience with REST APIs, WebSockets, and microservices architecture
- Solid knowledge of design patterns, algorithms, and data structures
- Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
- Hands-on experience with AWS or GCP cloud services
- Experience with CI/CD pipelines and containerization (Docker)
- Proficiency in Git and Linux environments
Preferred Skills
- Familiarity with AI/ML concepts
- Experience with RAG architectures and LLM integrations
- Knowledge of vector databases such as Pinecone or ChromaDB
What We’re Looking For
- Strong problem-solving and system design skills
- Ability to lead backend development initiatives
- Experience building scalable and production-grade systems
- Excellent collaboration and communication skills
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Job Title: Data Engineer
Experience: 4–14 Years
Work Mode: Remote
Employment Type: Full-Time
Position Overview:
We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, and developing high-performance scoring engines and analytics solutions, while collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.
Key Responsibilities:
- Design and build scalable data pipelines for financial and customer data
- Build and optimize scoring engines (credit, risk, fraud, customer scoring)
- Design, develop, and optimize complex ETL/ELT pipelines (batch & real-time)
- Ensure data quality, governance, reliability, and compliance standards
- Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies
- Lead cloud data architecture, cost optimization, and performance tuning initiatives
- Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets
- Mentor junior engineers and establish best practices for data engineering
Key Requirements:
- Strong programming skills in Python and advanced SQL
- Experience building scalable scoring or rule-based decision engines
- Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)
- Strong expertise in designing ETL/ELT pipelines and data modeling
- Experience with cloud platforms (AWS/Azure) and modern data architectures
- Solid understanding of data warehousing, data lakes, and performance tuning
- Knowledge of CI/CD, version control (Git), and production support best practices
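A rule-based scoring engine of the kind described above can be reduced to a small core: a list of predicate rules, each contributing points on top of a base score. Everything below — the rule names, thresholds, and weights — is invented for illustration, not real credit policy.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    name: str
    points: int
    applies: Callable[[Dict], bool]

# Hypothetical rules: thresholds and weights are illustrative only.
RULES = [
    Rule("on_time_history", 30, lambda a: a["late_payments"] == 0),
    Rule("low_utilization", 20, lambda a: a["utilization"] < 0.3),
    Rule("tenure", 10, lambda a: a["account_years"] >= 3),
]

def score(applicant: Dict, base: int = 300) -> int:
    """Sum the contributions of every rule that fires, on top of a base score."""
    return base + sum(r.points for r in RULES if r.applies(applicant))
```

Keeping rules as data rather than branching code is what makes such engines auditable and easy to re-tune without redeploying the pipeline.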
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools solve the problem in the most robust and fastest way.
- That said, our bright team of engineers has already assembled a formidable arsenal of tools that fortifies our defense and keeps us on the offensive. Take a look at the tech stack we already use.
Ideal Candidate
- Strong Full stack/Backend engineer profile
- Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
- Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode)
- Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Mandatory (Company) : Product companies (B2B SaaS preferred)
- Mandatory (Stability): Must have at least 2 years of tenure at each previous company (if less, a valid reason is expected)
- Preferred (Location) - Mumbai, Bangalore
- Preferred (Skills) : Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education) : B.Tech from Tier 1,Tier 2 institutes
Job Title : Data / Generative AI Engineer
Experience : 5+ Years (Mid-Level) | 10+ Years (Senior)
Location : Remote
Employment Type : Contract
Open Positions : 5
Job Overview :
We are hiring Data / Generative AI Engineers for remote contract engagements supporting client-facing AI implementations. The role involves building production-grade Generative AI solutions on AWS, including conversational AI systems, RAG-based architectures, intelligent automation platforms, and scalable data engineering pipelines.
Mandatory Skills :
Amazon Bedrock, Generative AI, RAG Architecture, LangChain/LlamaIndex/Bedrock Agents, Python (3.9+), AWS Serverless (Lambda, API Gateway, Step Functions), Vector Databases, Data Engineering & ETL, AWS Glue, Amazon Athena.
Key Responsibilities :
- Design and build production-ready Generative AI applications on AWS.
- Implement Retrieval-Augmented Generation (RAG) architectures for enterprise AI solutions.
- Integrate Amazon Bedrock with foundation models and enterprise systems.
- Develop AI agent orchestration workflows using frameworks such as LangChain, LlamaIndex, or Bedrock Agents.
- Build and manage serverless architectures using AWS services like Lambda, API Gateway, and Step Functions.
- Implement vector databases and semantic search solutions for intelligent knowledge retrieval.
- Design and maintain data engineering pipelines and ETL workflows for large-scale data processing.
- Use AWS Glue for data transformation and orchestration.
- Utilize Amazon Athena for querying large datasets and performing analytics.
- Develop scalable Python-based APIs and backend services.
- Collaborate with cross-functional teams and clients to deliver AI-powered solutions in production environments.
Required Skills :
- Strong experience with Amazon Bedrock and foundation model integrations
- Hands-on experience with LangChain, LlamaIndex, or Bedrock Agents
- Advanced Python (3.9+) development and API building
- Experience with AWS serverless architectures (Lambda, API Gateway, Step Functions)
- Experience implementing vector databases and semantic search systems
- Strong knowledge of data engineering and ETL pipeline development
- Hands-on experience with AWS Glue for data transformation and orchestration
- Experience using Amazon Athena for querying and analytics
- Experience building RAG-based AI applications
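The retrieval step of a RAG pipeline can be shown without any cloud dependency: embed the query and the candidate documents, rank by cosine similarity, and keep the top-k. The toy bag-of-words `embed` below is a stand-in for a real embedding model (e.g., one served through Amazon Bedrock), and the document list stands in for a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would call a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """RAG retrieval step: rank candidate documents by similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

In production, the top-k passages would then be packed into the model prompt (the "augmented generation" half), which this sketch deliberately omits.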
Engagement Details :
- Contract Duration : Minimum 3 to 6 Months
- Work Timing : 8:00 AM – 4:00 PM EST
- Start Timeline : Within 2 Weeks
- Open Positions : 5
Key Responsibilities
- Design end-to-end architecture for scalable full-stack applications.
- Lead backend development using Python and Flask framework.
- Design and optimize MongoDB data models and queries.
- Define frontend architecture (React/Angular/Vue – as applicable).
- Establish coding standards, design patterns, and best practices.
- Build and optimize RESTful APIs and microservices.
- Implement authentication, authorization, and security best practices.
- Ensure high performance, scalability, and reliability of applications.
- Drive CI/CD implementation and DevOps best practices.
- Review code, mentor developers, and guide technical decisions.
- Collaborate with product, DevOps, and data teams.
- Troubleshoot complex production issues and perform root cause analysis.
- Lead cloud deployment strategies (Azure/AWS/GCP preferred).
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science or related field.
- 8+ years of software development experience.
- 4+ years of hands-on Python backend development.
- Strong expertise in Flask framework.
- Deep experience with MongoDB (schema design, indexing, aggregation).
- Experience designing RESTful and microservices architectures.
- Strong understanding of frontend technologies (JavaScript, HTML, CSS).
- Experience with Git and modern CI/CD pipelines.
- Solid knowledge of system design, scalability, and performance tuning.
- Experience with containerization (Docker preferred).
- Strong problem-solving and architectural thinking skills.
Senior Backend Engineer
The ideal candidate will have a strong background in building scalable applications, a deep understanding of back-end technologies, and experience with cloud infrastructure. As a Back End Engineer, you will be responsible for designing, developing, and maintaining a scalable workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in Scala, Python, AI Agents/APIs, and GCP will be crucial in ensuring our system is reliable, performant, and scalable.
Key Responsibilities:
Back-End Development:
- Build and maintain back-end services and APIs using Scala.
- Implement and optimize Orchestration workflow system involving database queries and operations.
- Build API integrations with Third Party APIs and services.
- Ensure robust and scalable server-side logic.
Cloud Integration:
- Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
- Utilize GCP services to enhance application performance and scalability.
- Implement cloud-based solutions for data storage, processing, and analytics.
Collaboration And Communication:
- Work closely with cross-functional teams to define, design, and ship new features.
- Participate in code reviews and contribute to sharing team knowledge.
- Document development processes, coding standards, and project requirements.
Qualifications:
- Educational Background:
- Completed a master’s or bachelor’s degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Proficiency in Scala programming language.
- Strong experience with React (ReactJS).
- Familiarity with Google Cloud Platform (GCP) and its services.
- Knowledge of front-end development tools and best practices.
- Understanding of RESTful API design and implementation.
- Soft Skills:
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Eagerness to learn and adapt to new technologies and challenges.
Preferred Qualifications:
- Experience with version control systems such as Git.
- Familiarity with CI/CD pipelines and DevOps practices.
- Understanding of workflow management systems and their requirements.
- Experience with containerization technologies like Docker.
Must have Skills
- Scala - 4 Years
- React.js - 1 Year
- RESTful API - 4 Years
- Docker - 2 Years
- Python - 3 Years
- Artificial Intelligence - 2 Years
Roles:
- Working on full-stack development (both front-end and back-end)
- Working on any one of the following technologies:
• Java Application Programming
• Web Development with PHP
• Python Application Programming with Django
• Machine Learning
• Data Science
• Artificial Intelligence
• Cyber Security
Eligibility: BCA/MCA 2026/2027 students can apply
Duration: 1-6 months
Perks:
Internship Experience Certificate
Letter of Recommendation
Mode of internship: Online/Offline
At Palcode.ai, we’re on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We’re fixing this with purpose-built AI agents that work. Our platform works “magic” on pre-construction workflows, cutting them from weeks to hours. It’s not just about AI – it’s about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- At least 6 months of live-project experience in Python development
- At least one full-time Python internship of 3+ months
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the option of virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best: we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, with access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Remote
Compensation: 15K-18K during the 4-6 month probation period; 2.5L-3 LPA after probation
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-4 years of relevant experience as a Zoho Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively
About the Role
We are looking for a motivated and detail-oriented Python Developer Intern to join our development team. This internship provides hands-on experience in real-world software development projects, backend systems, APIs, and automation tools.
You will work closely with senior developers and contribute to live projects, gaining practical exposure to modern development practices.
Key Responsibilities
- Assist in developing, testing, and maintaining Python-based applications
- Write clean, efficient, and reusable code
- Work on backend development and API integrations
- Participate in debugging and troubleshooting applications
- Collaborate with cross-functional teams (Design, QA, Data teams)
- Contribute to documentation and project reports
- Learn and apply best practices in software development
Required Skills & Qualifications
- Basic understanding of Python programming
- Familiarity with OOP concepts
- Knowledge of libraries/frameworks such as Django, Flask, or FastAPI (preferred but not mandatory)
- Basic understanding of databases (MySQL, PostgreSQL, or MongoDB)
- Good problem-solving skills
- Ability to work independently and in a team environment
- Strong communication skills
Preferred (Nice to Have)
- Knowledge of REST APIs
- Familiarity with Git/GitHub
- Understanding of HTML, CSS, and JavaScript
- Exposure to cloud platforms (AWS/Azure/GCP)
What You Will Gain
- Real-world project experience
- Mentorship from experienced developers
- Exposure to industry tools and workflows
- Internship Certificate upon successful completion
- Performance-based stipend (if applicable)
- Opportunity for full-time role based on performance
Job Summary
We are looking for a Data Scientist – AI/ML who has hands-on experience in building, training, fine-tuning, and deploying machine learning and deep learning models. The ideal candidate should be comfortable working with real-world datasets, collaborating with cross-functional teams, and communicating insights and solutions to clients.
Experience: Fresher to 5 Years
Location: Ahmedabad
Employment Type: Full-Time
Key Responsibilities
Develop, train, and optimize Machine Learning and Deep Learning models
Perform data cleaning, preprocessing, and feature engineering
Fine-tune ML/DL models to improve accuracy and performance
Deploy models into production using APIs or cloud platforms
Monitor model performance and retrain models as required
Work closely with clients to understand business problems and translate them into AI/ML solutions
Present findings, model outcomes, and recommendations to stakeholders
Collaborate with data engineers, developers, and product teams
Document workflows, models, and deployment processes
Required Skills & Qualifications
Strong understanding of Machine Learning concepts (Supervised, Unsupervised learning)
Hands-on experience with ML algorithms (Linear/Logistic Regression, Decision Trees, Random Forest, XGBoost, etc.)
Experience with Deep Learning frameworks (TensorFlow / PyTorch / Keras)
Proficiency in Python and AI/ML libraries (NumPy, Pandas, Scikit-learn)
Experience in model deployment using Flask/FastAPI, Docker, or cloud platforms (AWS/GCP/Azure)
Understanding of model fine-tuning and performance optimization
Basic knowledge of SQL and data handling
Good client communication and documentation skills
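To gauge the expected depth on fundamentals like logistic regression, here is a minimal, dependency-free sketch; the toy data and names are illustrative only, not part of the role:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=500):
    """Logistic regression trained with plain stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy, linearly separable data: label is 1 when x0 + x1 is large
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9], [0.4, 0.3], [0.7, 0.9]]
y = [0, 1, 0, 1, 0, 1]
w, b = fit(X, y)
```

In practice a candidate would reach for Scikit-learn rather than hand-rolling this, but being able to explain the update rule is the kind of fundamentals the list above asks for.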
Good to Have
Experience with NLP, Computer Vision, or Generative AI
Exposure to MLOps tools (MLflow, Airflow, CI/CD pipelines)
Experience working on live or client-based AI projects
Kaggle, GitHub, or portfolio showcasing AI/ML projects
Education
Bachelor’s / Master’s degree in Computer Science, Data Science, AI/ML, or related field
Relevant certifications or project experience will be an added advantage
What We Offer
Opportunity to work on real-world AI/ML projects
Mentorship from experienced AI/ML professionals
Career growth in Data Science & Artificial Intelligence
Collaborative and learning-driven work culture
Technical Project Manager
Current Location - Bangalore
Remote (with quarterly visit to Noida)
You can share your resume at ayushi.dwivedi@cloudsufi.com
Role Overview
We are seeking a highly technical and execution-oriented Technical Project Manager (TPM) to lead the delivery of core Platform Engineering capabilities and critical Enterprise Application Integrations (EAI) for CLOUDSUFI customers. Unlike a traditional PM, this role is deeply embedded in the "how" of technical delivery—ensuring that complex, cloud-native infrastructure projects are executed on time, within scope, and with high technical integrity.
The ideal candidate acts as the bridge between high-level architectural design and day-to-day engineering execution, possessing deep expertise in GCP (Google Cloud Platform) and modern integration patterns.
Key Responsibilities
- Technical Execution & Delivery: Lead the end-to-end project lifecycle for platform engineering and EAI initiatives. Convert high-level roadmaps into actionable technical workstreams, ensuring milestones are met across multi-disciplinary teams.
- Sprint & Release Management: Facilitate technical grooming, sprint planning, and daily stand-ups. Manage the velocity and throughput of the platform engineering team, ensuring that technical debt is balanced against feature delivery.
- Dependency & Risk Mitigation: Proactively identify and resolve technical blockers, resource constraints, and cross-team dependencies. Maintain a rigorous risk register for complex integration projects involving third-party systems.
- Technical Scoping & Documentation: Collaborate with Architects to translate business requirements into detailed technical specifications, data flow diagrams, and API documentation. Ensure the technical team has a clear, unambiguous path to implementation.
- Stakeholder Coordination: Serve as the primary technical point of contact for external customers and internal business units. Communicate project status, technical risks, and architectural trade-offs to both executive and technical audiences.
- Quality & Operational Excellence: Define and track project-based KPIs such as deployment frequency, mean time to recovery (MTTR), and integration success rates. Ensure all deliveries meet CLOUDSUFI’s high standards for security and scalability.
- GCP Ecosystem Oversight: Direct the implementation of services within the Google Cloud ecosystem, ensuring projects leverage GCP best practices for cost-optimization and performance.
Experience and Qualifications
- Experience: 8+ years of experience in Technical Project Management or Engineering Management, with at least 3 years specifically focused on Cloud Infrastructure, Platform Engineering, or EAI.
- Technical Depth (GCP mandatory): Deep hands-on familiarity with Google Cloud Platform services (GKE, Pub/Sub, Cloud Functions, Apigee).
- Integrations: Proven track record of delivering large-scale EAI projects (API Gateways, Event-Driven Architecture, Service Mesh).
- Cloud-Native: Strong understanding of Kubernetes, Docker, CI/CD pipelines (GitLab/Jenkins), and Infrastructure as Code (Terraform).
- Project Leadership: Exceptional ability to lead "deep-tech" teams. You should be able to challenge technical estimates and understand code-level blockers without necessarily writing the code yourself.
- Agile Mastery: Expert-level proficiency in Scrum and Kanban. Advanced skills in Jira (setting up complex workflows, dashboards, and automation) and Confluence.
- Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related engineering field.
- Certifications (required): PMP, PRINCE2, or CSM (Certified Scrum Master).
- Preferred: Google Cloud Professional Cloud Architect or Professional Data Engineer certifications.
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
About the Job:
We are looking for a passionate and driven AI Intern to join our dynamic team. As an intern, you will have the opportunity to work on real-world projects, develop AI models, and collaborate with experienced professionals in the field. This internship is designed to provide hands-on experience in AI and machine learning, offering you the chance to contribute to impactful projects while enhancing your skills.
Job Description:
We are seeking a talented Artificial Intelligence Specialist to join our dynamic team. As an AI Specialist, you will be responsible for developing, implementing, and optimizing AI models and algorithms. You will collaborate closely with cross-functional teams to integrate AI capabilities into our products and services. The ideal candidate should have a strong background in machine learning, deep learning, and natural language processing, with a passion for applying AI to real-world problems.
Responsibilities:
- Design, develop, and deploy AI models and algorithms.
- Conduct data analysis and pre-processing to prepare data for modeling.
- Implement and optimize machine learning algorithms.
- Collaborate with software engineers to integrate AI models into production systems.
- Evaluate and improve the performance of existing AI models.
- Stay updated with the latest advancements in AI research and apply them to enhance our products.
- Provide technical guidance and mentorship to junior team members.
Requirements:
- Any Graduate / Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; Master's degree preferred.
- Proven experience in developing and implementing machine learning models and algorithms.
- Strong programming skills in languages such as Python, R, or Java.
Benefits:
- Internship Certificate
- Letter of Recommendation
- Performance-Based Stipend
- Part-time work from home (2-3 hours per day)
- 5 days a week, fully flexible shift
Job description
Job Title: React JS Developer - (Core Skill - React JS)
Core Skills -
- Minimum of 6 months of experience in frontend development using React JS (excluding internships and training programs)
The Company
Our mission is to enable and empower engineering teams to build world-class solutions and release them faster than ever. We strongly believe engineers are the building blocks of a great society: we love building, we love solving problems, and we are drawn to the unique technical challenges faced by the engineering community. Our DNA stems from Mohit’s passion for building technology products that solve high-impact problems.
We are largely a bootstrapped company and aspire to become the next household name in the engineering community, leaving a signature on the great technological products being built across the globe.
Who would be your customers? We are going to shoulder the great responsibility of solving the many small problems that you, as an engineer, have faced over the years.
The Opportunity
An exciting opportunity to be part of a story, making an impact on how domain solutions will be built in the years to come.
Do you wish to lead the Engineering vertical, build your own fort, and shine through the journey of building the next-generation platform?
Blaash is looking to hire a problem solver with strong technical expertise in building large applications. You will build the next-generation AI solution for the Engineering Team - including backend and frontend.
Responsibility
Owning front-end and back-end development in all aspects. Proposing high-level design solutions and POCs to arrive at the right approach. Mentoring junior developers and interns.
What makes you an ideal team member:
- Demonstrate strong architecture and design skills in building high-performance APIs using AWS services.
- Design and implement highly scalable, interactive web applications with high usability
- Collaborate with product teams to iterate ideas on data monetization products/services and define feasibility
- Rapidly iterate on product ideas, build prototypes, and participate in proof of concepts
- Collaborate with internal and external teams in troubleshooting functional and performance issues
- Work with DevOps Engineers to integrate any new code into existing CI/CD pipelines
- 6+ months of experience in frontend development using React JS
- 6+ months of hands-on experience developing high-performance APIs and web applications
Salary -
- The first 4 months of the Training and Probation period = 15K - 20K INR per month
- On successful completion of the probation period = 3-3.5 LPA (INR)
- Equity Benefits for deserving candidates
How we will set you up for success
You will work closely with the Founding team to understand what we are building. You will be given comprehensive training on the tech stack, with an opportunity to avail virtual training as well, and you will have a monthly one-on-one with the founders to discuss feedback.
If you’ve made it this far, then maybe you’re interested in joining us to build something pivotal, carving a unique story for you - Get in touch with us, or apply now!
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
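As a rough illustration of the A/B-testing fundamentals listed above, a two-proportion z-test can be sketched in plain Python; the conversion counts below are made up:

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.25% conversion, 2,400 users per arm
z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
```

In day-to-day work this would come from SciPy or statsmodels rather than hand-rolled math, but the reasoning (pooled variance, two-sided test) is what the requirement is really about.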
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)
In this role, you'll be responsible for building machine learning based systems and conducting data analysis that improve the quality of our large geospatial data. You’ll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.
Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time
Must-have Qualifications
- 4+ years' experience
- Senior data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from HuggingFace
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills
Extra Merit Qualifications
- Knowledge in at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and building predictive models
- Programming skills and experience with Scala and/or Java
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
8+ years backend engineering experience in production systems
Proven experience architecting large-scale distributed systems (high throughput, low latency, high availability)
Deep expertise in system design including scalability, fault tolerance, and performance optimization
Experience leading cross-team technical initiatives in complex systems
Strong understanding of security, privacy, compliance, and secure coding practices
About the Role
We are seeking a hands-on Tech Lead to design, build, and integrate AI-driven systems that automate and enhance real-world business workflows. This is a high-impact role for someone who enjoys full-stack ownership — from backend AI architecture to frontend user experiences — and can align engineering decisions with measurable product outcomes.
You will begin as a strong individual contributor, independently architecting and deploying AI-powered solutions. As the product portfolio scales, you will lead a distributed team across India and Australia, acting as a System Integrator to align engineering, data, and AI contributions into cohesive production systems.
Example Project
Design and deploy a multi-agent AI system to automate critical stages of a company’s sales cycle, including:
- Generating client proposals using historical SharePoint data and CRM insights
- Summarizing meeting transcripts
- Drafting follow-up communications
- Feeding structured insights into dashboards and workflow tools
The solution will combine RAG pipelines, LLM reasoning, and React-based interfaces to deliver measurable productivity gains.
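A toy sketch of the retrieval step in such a RAG pipeline, with bag-of-words similarity standing in for real embeddings; the document store and its contents are hypothetical:

```python
import math
from collections import Counter

# Hypothetical document store: in the real system this content would come
# from SharePoint/CRM sources and live in a vector database.
DOCS = {
    "proposal_2023": "client proposal for cloud migration fixed-fee pricing",
    "meeting_notes": "meeting transcript discussing follow-up on the pilot",
    "crm_summary": "crm insight client prefers quarterly billing",
}

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return ranked[:k]

def grounded_prompt(query):
    """Stub for the generation step: a real system would send this
    context-plus-question prompt to an LLM."""
    context = " | ".join(DOCS[d] for d in retrieve(query))
    return f"Context: {context}\nQuestion: {query}"
```

The production version swaps the toy pieces for learned embeddings, a vector store, and an actual LLM call, but the retrieve-then-ground shape is the same.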
Key Responsibilities
- Architect and implement AI workflows using LLMs, vector databases, and automation frameworks
- Act as a System Integrator, coordinating deliverables across distributed engineering and AI teams
- Develop frontend interfaces using React/JavaScript to enable seamless human-AI collaboration
- Design APIs and microservices integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure)
- Drive architecture decisions balancing scalability, performance, and security
- Collaborate with product managers, clients, and data teams to translate business use cases into production-ready systems
- Mentor junior engineers and evolve into a broader leadership role as the team grows
Ideal Candidate Profile
Experience Requirements
- 5+ years in full-stack development (Python backend + React/JavaScript frontend)
- Strong experience in API and microservice integration
- 2+ years leading technical teams and coordinating distributed engineering efforts
- 1+ year of hands-on AI project experience (LLMs, Transformers, LangChain, OpenAI/Azure AI frameworks)
- Prior experience in B2B SaaS environments, particularly in AI, automation, or enterprise productivity solutions
Technical Expertise
- Designing and implementing AI workflows including RAG pipelines, vector databases, and prompt orchestration
- Ensuring backend and AI systems are scalable, reliable, observable, and secure
- Familiarity with enterprise integrations (SharePoint, Teams, Databricks, Azure)
- Experience building production-grade AI systems within enterprise SaaS ecosystems
Strong AI & Full-Stack Tech Lead
Mandatory (Experience 1): Must have 5+ years of experience in full-stack development, including Python for backend development and React/JavaScript for frontend, along with API/microservice integration.
Mandatory (Experience 2): Must have 2+ years of experience in leading technical teams, coordinating engineers, and acting as a system integrator across distributed teams.
Mandatory (Experience 3): Must have 1+ year of hands-on experience in AI projects, including LLMs, Transformers, LangChain, or OpenAI/Azure AI frameworks.
Mandatory (Tech Skills 1): Must have experience in designing and implementing AI workflows, including RAG pipelines, vector databases, and prompt orchestration.
Mandatory (Tech Skills 2): Must ensure backend and AI system scalability, reliability, observability, and security best practices.
Mandatory (Company): Must have experience working in B2B SaaS companies delivering AI, automation, or enterprise productivity solutions
Tech Skills (Familiarity): Should be familiar with integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure) and enterprise SaaS environments.
Mandatory (Note): Both founders are based in Australia; the design (2) and developer (4) teams are in India. Indian shift timings apply.
🚀 Hiring: C++ Content Writer Intern
📍 Remote | ⏳ 3 Months | 💼 Internship
We’re looking for someone who has strong proficiency in C++, DSA and maths (probability, statistics).
You should be comfortable with:
1. Modern C++ (RAII, memory management, move semantics)
2. Concurrency & low-latency concepts (threads, atomics, cache behavior)
3. OS fundamentals (threads vs processes, virtual memory)
4. Strong Maths (probability, stats)
5. Writing, Reading and explaining real code
What you’ll do:
1. Write deep technical content on C++, coding.
2. Break down core computer science, HFT-style, low-latency concepts
3. Create articles, code deep dives, and explainers
What you get:
1. Good Pay as per industry standards
2. Exposure to real C++ applied in quant engineering
3. Mentorship from top engineering minds.
4. A strong public technical portfolio
5. Clear signal for Quant Developer / SDE/ Low-latency C++ roles.
About Us:
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Job Summary:
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities:
- ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
- Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
- Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
- Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
- API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
- Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
- Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
- Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
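The validation and quality-assurance responsibility above can be illustrated with a minimal, stdlib-only sketch; the column names and rules are hypothetical, and a real pipeline would run such checks in Cloud Run or Dataflow before loading BigQuery:

```python
import csv
import io

def validate_rows(csv_text, required=("entity_id", "value"), numeric=("value",)):
    """Return (clean_rows, errors): a schema/completeness check plus a
    basic type check, reporting the source line of each bad row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    clean, errors = [], []
    for i, row in enumerate(reader, start=2):  # header is line 1
        missing = [c for c in required if not row.get(c)]
        if missing:
            errors.append((i, f"missing {missing}"))
            continue
        try:
            for c in numeric:
                row[c] = float(row[c])
        except ValueError:
            errors.append((i, f"non-numeric {c!r}"))
            continue
        clean.append(row)
    return clean, errors

# Hypothetical feed: one good row, one missing value, one bad type
sample = "entity_id,value\nIN001,42\nIN002,\nIN003,oops\n"
clean, errors = validate_rows(sample)
```

Production checks would add statistical and anomaly-detection passes on top, but the quarantine-and-report pattern is the same.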
Qualifications and Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
- Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
- Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
- Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
- Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
- Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
- Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
- Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
- Experience with data validation techniques and tools.
- Familiarity with CI/CD practices and the ability to work in an Agile framework.
- Strong problem-solving skills and keen attention to detail.
Preferred Qualifications:
- Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
- Familiarity with similar large-scale public dataset integration initiatives.
- Experience with multilingual data integration.
About Wokelo:
Wokelo is an LLM agentic platform for investment research and decision making. We automate complex research and analysis tasks traditionally performed by humans. Our platform is leveraged by leading Private Equity firms, Investment Banks, Corporate Strategy teams, Venture Capitalists, and Fortune 500 companies.
With our proprietary agentic technology and state-of-the-art large language models (LLMs), we deliver rich insights and high-fidelity analysis in minutes—transforming how financial decisions are made.
Headquartered in Seattle, we are a global team backed by renowned venture funds and industry leaders. As we rapidly expand across multiple segments, we are looking for passionate individuals to join us on this journey.
Requirements:
- 0-1 years of experience as a Software Developer.
- Bachelor’s or Master’s degree in Computer Science or related field.
- Proficiency in Python with strong experience in Django Rest Framework.
- Hands-on experience with Django ORM.
- Ability to learn quickly and adapt to new technologies.
- Strong problem-solving and analytical skills.
- Knowledge of NLP, ML models, and related engineering practices (preferred).
- Familiarity with LLMs, RLHF, transformers, embeddings (a plus).
- Prior experience in building or scaling a SaaS platform (a plus).
- Strong attention to detail with experience integrating testing into development workflows.
Key Responsibilities:
- Develop, test, and maintain scalable backend services and APIs using Python (Django Rest Framework).
- Work with Django ORM to build efficient database-driven applications.
- Collaborate with cross-functional teams to design and implement features that enhance the Wokelo platform.
- Contribute to NLP engineering and ML model development to power GenAI solutions (preferred but not mandatory).
- Ensure testing and code quality are embedded into the development process.
- Research and adopt emerging technologies, providing innovative solutions to complex problems.
- Support the transition of prototypes into production-ready features on our SaaS platform.
- Perform ad hoc tasks as and when required or assigned by your manager.
Why Join Us?
- Opportunity to work on a first-of-its-kind Generative AI SaaS platform.
- A steep learning curve in a fast-paced, high-growth startup environment.
- Exposure to cutting-edge technologies in NLP, ML models, LLM Ops, and DevOps.
- Collaborative culture with global talent and visionary leadership.
- Full health coverage, flexible time-off, and remote work culture.
*Job description:*
*Company:* Innovative Fintech Start-up
*Location:* On-site in Gurgaon, India
*Job Type:* Full-Time
*Pay:* ₹100,000.00 - ₹150,000.00 per month
*Experience Level:* Senior (7+ years required)
*About Us*
We are a dynamic Fintech company revolutionizing the financial services landscape through cutting-edge technology. We're building innovative solutions to empower users in trading, market analysis, and financial compliance. As we expand, we're seeking a visionary Senior Developer to pioneer and lead our brand-new tech team from the ground up. This is an exciting opportunity to shape the future of our technology stack and drive mission-critical initiatives in a fast-paced environment.
*Role Overview*
As the Senior Developer and founding Tech Team Lead, you will architect, develop, and scale our core systems while assembling and mentoring a high-performing team. You'll work on generative AI-driven applications, integrate with financial APIs, and ensure robust, secure platforms for trading and market data. This role demands hands-on coding expertise combined with strategic leadership to deliver under tight deadlines and high-stakes conditions.
*Key Responsibilities*
Design, develop, and deploy scalable backend systems using Python as the primary language.
Lead the creation of a new tech team: recruit, mentor, and guide junior developers to foster a collaborative, innovative culture.
Integrate generative AI technologies (e.g., Claude from Anthropic, OpenAI models) to enhance features like intelligent coding assistants, predictive analytics, and automated workflows.
Solve complex problems in real-time, optimizing for performance in mission-critical financial systems.
Collaborate with cross-functional teams to align tech strategies with business goals, including relocation planning to Dubai.
Ensure code quality, security, and compliance in all developments.
Thrive in a high-pressure environment, managing priorities independently while driving projects to completion.
*Required Qualifications:*
7+ years of software development experience; 5+ years in Python.
Proven hands-on experience with OpenAI and Anthropic (Claude) APIs in production systems.
Strong problem-solving skills and ability to operate independently in ambiguous situations.
Experience leading projects, mentoring developers, or building teams.
Bachelor’s/Master’s degree in Computer Science, Engineering, or equivalent experience.
Experience with financial markets, trading systems, or market data platforms.
Familiarity with MetaTrader integrations.
Cloud experience, especially Google Cloud Platform (GCP).
Knowledge of fintech compliance and trade reporting standards.
*What We Offer:*
Competitive salary and benefits package.
Opportunity to build and lead a team in a high-growth Fintech space.
A collaborative, innovative work culture with room for professional growth.
*Job Types:* Full-time, Permanent
*Work Location:* In person
At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform does the “magic” of cutting pre-construction workflows from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the option of virtual training as well.
- You will have a monthly one-on-one with the founders to discuss feedback.
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, giving you access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
Job Description: Data Analyst
About the Role
We are seeking a highly skilled Data Analyst with strong expertise in SQL/PostgreSQL, Python (Pandas), Data Visualization, and Business Intelligence tools to join our team. The candidate will be responsible for analyzing large-scale datasets, identifying trends, generating actionable insights, and supporting business decisions across marketing, sales, operations, and customer experience.
Key Responsibilities
- Data Extraction & Management
- Write complex SQL queries in PostgreSQL to extract, clean, and transform large datasets.
- Ensure accuracy, reliability, and consistency of data across different platforms.
- Data Analysis & Insights
- Conduct deep-dive analyses to understand customer behavior, funnel drop-offs, product performance, campaign effectiveness, and sales trends.
- Perform cohort, LTV (lifetime value), retention, and churn analysis to identify opportunities for growth.
- Provide recommendations to improve conversion rates, average order value (AOV), and repeat purchase rates.
- Business Intelligence & Visualization
- Build and maintain interactive dashboards and reports using BI tools (e.g., PowerBI, Metabase or Looker).
- Create visualizations that simplify complex datasets for stakeholders and management.
- Python (Pandas)
- Use Python (Pandas, NumPy) for advanced analytics.
- Collaboration & Stakeholder Management
- Work closely with product, operations, and leadership teams to provide insights that drive decision-making.
- Communicate findings in a clear, concise, and actionable manner to both technical and non-technical stakeholders.
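The cohort, retention, and LTV analyses described above can be sketched in a few lines of Pandas. The order log below is hypothetical toy data with made-up column names, not an actual schema:

```python
import pandas as pd

# Toy order log: one row per order (hypothetical columns).
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "order_month": pd.to_datetime([
        "2024-01-01", "2024-02-01", "2024-01-01",
        "2024-03-01", "2024-02-01", "2024-03-01", "2024-04-01",
    ]),
    "revenue": [50, 30, 80, 20, 60, 40, 10],
})

# Cohort = month of each customer's first order.
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")

# Periods elapsed since the first order, in whole months.
orders["period"] = (
    (orders["order_month"].dt.year - orders["cohort"].dt.year) * 12
    + (orders["order_month"].dt.month - orders["cohort"].dt.month)
)

# Retention matrix: distinct active customers per cohort per period.
retention = orders.pivot_table(
    index="cohort", columns="period",
    values="customer_id", aggfunc="nunique",
)

# Simple LTV proxy: total cohort revenue per acquired customer.
ltv = orders.groupby("cohort")["revenue"].sum() / retention[0]

print(retention)
print(ltv)
```

Churn then falls out of the retention matrix: the drop between consecutive period columns within a cohort row is the share of customers lost in that period.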
Required Skills
- SQL/PostgreSQL
- Complex joins, window functions, CTEs, aggregations, query optimization.
- Python (Pandas & Analytics)
- Data wrangling, cleaning, transformations, exploratory data analysis (EDA).
- Libraries: Pandas, NumPy, Matplotlib, Seaborn
- Data Visualization & BI Tools
- Expertise in creating dashboards and reports using Metabase or Looker.
- Ability to translate raw data into meaningful visual insights.
- Business Intelligence
- Strong analytical reasoning to connect data insights with e-commerce KPIs.
- Experience in funnel analysis, customer journey mapping, and retention analysis.
- Analytics & E-commerce Knowledge
- Understanding of metrics like CAC, ROAS, LTV, churn, contribution margin.
- General Skills
- Strong communication and presentation skills.
- Ability to work cross-functionally in fast-paced environments.
- Problem-solving mindset with attention to detail.
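As a minimal illustration of the SQL skills listed above (CTEs and window functions), the sketch below runs a ranked, running-total query against an in-memory SQLite database; the same pattern applies in PostgreSQL. Table and column names are hypothetical:

```python
import sqlite3

# In-memory database with a toy orders table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2024-01-05', 100), (1, '2024-02-10', 60),
        (2, '2024-01-20', 80),  (2, '2024-03-02', 40);
""")

# The CTE ranks each customer's orders; the SUM() window function
# computes a per-customer running revenue total.
rows = conn.execute("""
    WITH ranked AS (
        SELECT customer_id,
               order_date,
               amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_date) AS order_rank,
               SUM(amount)  OVER (PARTITION BY customer_id
                                  ORDER BY order_date) AS running_revenue
        FROM orders
    )
    SELECT * FROM ranked ORDER BY customer_id, order_rank;
""").fetchall()

for row in rows:
    print(row)
```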
Education: Bachelor’s degree in Data Science, Computer Science, or a related data field.

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
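As a toy illustration of transaction risk scoring, the sketch below flags amounts that deviate strongly from an account's history using a simple z-score rule. This is an assumed baseline for the sake of example, not the platform's actual model; a production system would train a learned model (e.g., on SageMaker) over richer transactional and behavioral features:

```python
from statistics import mean, stdev

# Hypothetical transaction history for one account.
history = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]
mu, sigma = mean(history), stdev(history)

def risk_score(amount: float) -> float:
    """Absolute z-score of a new transaction vs. the account history."""
    return abs(amount - mu) / sigma

def is_suspicious(amount: float, threshold: float = 3.0) -> bool:
    """Flag transactions more than `threshold` std-devs from the mean."""
    return risk_score(amount) >= threshold

print(is_suspicious(52.0))   # a typical amount
print(is_suspicious(900.0))  # a large outlier
```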
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
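The "clean, tested code with pytest" point above can be sketched with a small service-layer function and a pytest-style unit test (plain asserts run unchanged under pytest). The function and its name are hypothetical examples, not actual product code; a real service would expose such logic behind FastAPI routes:

```python
# Hypothetical service-layer helper: canonicalise a dialled number
# for call-routing lookups (assumed behaviour, for illustration only).
def normalize_phone(raw: str, default_cc: str = "+91") -> str:
    digits = "".join(ch for ch in raw if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits
    # No country code given: strip trunk zeros, prepend the default.
    return default_cc + digits.lstrip("0")

# pytest discovers and runs any function named test_*; the same
# asserts also run directly as a plain script.
def test_normalize_phone():
    assert normalize_phone("080-1234-5678") == "+918012345678"
    assert normalize_phone("+14155550123") == "+14155550123"

test_normalize_phone()
print("tests passed")
```

Keeping business logic in plain, pure functions like this is what makes it easy to unit-test independently of the web framework.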
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or Elasticsearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
