
50+ Remote Python Jobs in India

Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

LetsIntern

Ashish Singh
Posted by Ashish Singh
Remote only
0 - 1 yrs
₹1L - ₹2L / yr
Artificial Intelligence (AI)
Python

About the Job :


We are looking for a passionate and driven AI Intern to join our dynamic team. As an intern, you will have the opportunity to work on real-world projects, develop AI models, and collaborate with experienced professionals in the field. This internship is designed to provide hands-on experience in AI and machine learning, offering you the chance to contribute to impactful projects while enhancing your skills.


Job Description:

We are seeking a talented Artificial Intelligence Specialist to join our dynamic team. As an AI Specialist, you will be responsible for developing, implementing, and optimizing AI models and algorithms. You will collaborate closely with cross-functional teams to integrate AI capabilities into our products and services. The ideal candidate should have a strong background in machine learning, deep learning, and natural language processing, with a passion for applying AI to real-world problems.


Responsibilities:


  • Design, develop, and deploy AI models and algorithms.
  • Conduct data analysis and pre-processing to prepare data for modeling.
  • Implement and optimize machine learning algorithms.
  • Collaborate with software engineers to integrate AI models into production systems.
  • Evaluate and improve the performance of existing AI models.
  • Stay updated with the latest advancements in AI research and apply them to enhance our products.
  • Provide technical guidance and mentorship to junior team members.



Requirements:


  • Any Graduate / Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; Master's degree preferred.
  • Proven experience in developing and implementing machine learning models and algorithms.
  • Strong programming skills in languages such as Python, R, or Java.


Benefits :


  • Internship Certificate
  • Letter of Recommendation
  • Performance-Based Stipend
  • Part-time work from home (2-3 hours per day)
  • 5 days a week, fully flexible shift


Palcode.ai

Team Palcode
Posted by Team Palcode
Remote only
0 - 1 yrs
₹2L - ₹4L / yr
React.js
Python

Job description

Job Title: React JS Developer - (Core Skill - React JS)

Core Skills -

  • Minimum of 6 months of experience in frontend development using React JS (excluding internships and training programs)

The Company

Our mission is to enable and empower engineering teams to build world-class solutions and release them faster than ever. We strongly believe engineers are the building blocks of a great society: we love building, we love solving problems, and we love the unique challenges faced by the engineering community. Our DNA stems from Mohit's passion for building technology products that solve high-impact problems.

We are a bootstrapped company largely and aspire to become the next household name in the engineering community and leave a signature on all the great technological products being built across the globe.


Who would be your customers? We are going to shoulder the great responsibility of solving the minute problems that you, as an engineer, have faced over the years.


The Opportunity

An exciting opportunity to be part of a story, making an impact on how domain solutions will be built in the years to come.


Do you wish to lead the Engineering vertical, build your own fort, and shine through the journey of building the next-generation platform?


Blaash is looking to hire a problem solver with strong technical expertise in building large applications. You will build the next-generation AI solution for the Engineering Team - including backend and frontend.


Responsibility


Owning front-end and back-end development in all aspects. Proposing high-level design solutions and POCs to arrive at the right solution. Mentoring junior developers and interns.


What makes you the ideal team member we are eagerly waiting to meet:

  • Demonstrate strong architecture and design skills in building high-performance APIs using AWS services.
  • Design and implement highly scalable, interactive web applications with high usability
  • Collaborate with product teams to iterate ideas on data monetization products/services and define feasibility
  • Rapidly iterate on product ideas, build prototypes, and participate in proof of concepts
  • Collaborate with internal and external teams in troubleshooting functional and performance issues
  • Work with DevOps Engineers to integrate any new code into existing CI/CD pipelines
  • 6+ months of experience in frontend development using React JS
  • 6+ months of hands-on experience developing high-performance APIs and web applications


Salary -

  • First 4 months (training and probation period): ₹15K - ₹20K INR per month
  • On successful completion of the probation period: ₹3 - ₹3.5 LPA
  • Equity Benefits for deserving candidates



How we will set you up for success: You will work closely with the founding team to understand what we are building.

You will be given comprehensive training on the tech stack, with an opportunity to avail virtual training as well. You will have a monthly one-on-one with the founders to discuss feedback.


If you’ve made it this far, then maybe you’re interested in joining us to build something pivotal, carving a unique story for you - Get in touch with us, or apply now!

Proximity Works

Nikita Sinha
Posted by Nikita Sinha
Remote only
5 - 10 yrs
Up to ₹60L / yr (varies)
Python
Machine Learning (ML)
MLOps
Problem solving

We are looking for a Machine Learning Engineer to design, build, and operate production-grade ML systems powering scalable, data-driven applications.


You will be responsible for developing end-to-end machine learning pipelines, ensuring seamless consistency between development and production environments while building reliable and scalable ML infrastructure.


This role focuses on production ML engineering, not experimentation-only data science. You will work closely with backend, data, and product teams to deploy and operate predictive systems at scale.


Requirements

  • Strong coding skills in Python, with the ability to build reliable, production-quality systems.
  • Experience developing end-to-end machine learning pipelines, ensuring consistency between development, training, and production environments.
  • Ability to design and implement scalable ML architectures tailored to site traffic, system scale, and predictive feature complexity.
  • Familiarity with model and data versioning, resource allocation, system scaling, and structured logging practices.
  • Experience building systems that monitor, detect, and respond to failures across infrastructure resources, data pipelines, and model predictions.
  • Hands-on expertise with MLOps tools and workflows for scalable, production-level model deployment and lifecycle management.
  • Strong problem-solving abilities and comfort working in a fast-paced, high-ownership environment.
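The monitoring and failure-detection requirement above can be illustrated with a minimal sketch. All numbers and the 3-standard-error threshold are invented for illustration; production systems would use proper drift statistics and alerting tooling.

```python
import math
import statistics as st

# hypothetical baseline (training) and live values for one model feature
train_values = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.0]
live_values = [11.9, 12.1, 12.3, 11.8, 12.0, 12.2]

def drifted(train, live, k=3.0):
    """Flag drift when the live mean sits more than k standard errors
    from the training mean (a deliberately simple heuristic)."""
    mu, sd = st.mean(train), st.stdev(train)
    se = sd / math.sqrt(len(live))
    return abs(st.mean(live) - mu) > k * se

alert = drifted(train_values, live_values)
```

A check like this would typically run on a schedule against each model input and prediction stream, feeding an alerting pipeline.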
Byteridge

Sweety S
Posted by Sweety S
Remote only
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Generative AI
Python
Amazon Web Services (AWS)
Large Language Models (LLM) tuning

Job Description

We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.

What will you need to be successful in this role?

Core Data Science Skills

• Strong foundation in statistics, probability, and mathematical modeling

• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)

• Strong SQL skills for data extraction, transformation, and complex analytical queries

• Experience with exploratory data analysis (EDA) and statistical hypothesis testing

• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)

• Strong understanding of feature engineering and data preprocessing techniques

• Experience with A/B testing, experimental design, and causal inference
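As an illustration of the A/B-testing skill above, a two-proportion z-test can be computed from first principles. The conversion counts below are made up; real analyses would also consider power and multiple-testing corrections.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# made-up counts: variant B converts 260/5000 vs A's 200/5000
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
```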

Machine Learning & Analytics

• Strong experience building and deploying ML models (regression, classification, clustering)

• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)

• Understanding of time series analysis and forecasting techniques

• Experience with model evaluation metrics and cross-validation strategies

• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)

• Understanding of bias-variance tradeoff and model interpretability

• Experience with hyperparameter tuning and model optimization
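The hyperparameter-tuning bullet above amounts to searching a grid and scoring each candidate on held-out data. A pure-Python toy sketch, with invented data and a closed-form 1-D ridge model standing in for a real estimator:

```python
# toy data: y ~ 2x with fixed noise; the hyperparameter is the ridge penalty
noise = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.3, -0.2]
train = [(x, 2 * x + n) for x, n in zip(range(1, 9), noise)]
val = [(9, 18.1), (10, 19.8)]

def fit_ridge_1d(data, lam):
    # closed-form 1-D ridge estimate: w = sum(x*y) / (sum(x^2) + lambda)
    return sum(x * y for x, y in data) / (sum(x * x for x, _ in data) + lam)

def val_mse(w, data):
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

# grid search: pick the penalty with the lowest held-out error
best_lam, best_w = min(
    ((lam, fit_ridge_1d(train, lam)) for lam in [0.0, 0.1, 1.0, 10.0, 100.0]),
    key=lambda cand: val_mse(cand[1], val),
)
```

In practice the same loop is delegated to tooling such as scikit-learn's grid search with cross-validation rather than a single validation split.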

GenAI & Advanced Analytics

• Working knowledge of LLMs and their application to business problems

• Experience with prompt engineering for analytical tasks

• Understanding of embeddings and semantic similarity for analytics

• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)

• Experience integrating AI/ML models into analytical workflows

Data Platforms & Tools

• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)

• Proficiency in Jupyter notebooks and collaborative development environments

• Familiarity with version control (Git) and collaborative workflows

• Experience working with large datasets and distributed computing (Spark/PySpark)

• Understanding of data warehousing concepts and dimensional modeling

• Experience with cloud platforms (AWS, Azure, or GCP)

Business Acumen & Communication

• Strong ability to translate business problems into analytical frameworks

• Experience presenting complex analytical findings to non-technical stakeholders

• Ability to create compelling data stories and visualizations

• Track record of driving business decisions through data-driven insights

• Experience working with cross-functional teams (Product, Engineering, Business)

• Strong documentation skills for analytical methodologies and findings

Good to have

• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)

• Knowledge of reinforcement learning and optimization techniques

• Familiarity with graph analytics and network analysis

• Experience with MLOps and model deployment pipelines

• Understanding of model monitoring and performance tracking in production

• Knowledge of AutoML tools and automated feature engineering

• Experience with real-time analytics and streaming data

• Familiarity with causal ML and uplift modeling

• Publications or contributions to data science community

• Kaggle competitions or open-source contributions

• Experience in specific domains (finance, healthcare, e-commerce)

Remote, Hyderabad
3 - 5 yrs
₹15L - ₹25L / yr
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
Data Structures
Algorithms
Python

In this role, you'll be responsible for building machine-learning-based systems and conducting data analysis that improves the quality of our large geospatial data. You'll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.


Responsibilities


  • Development of machine learning models
  • Building and maintaining software development solutions
  • Provide insights by applying data science methods
  • Take ownership of delivering features and improvements on time


Must-have Qualifications


  • 4+ years' experience
  • Senior data scientist, preferably with knowledge of NLP
  • Strong programming skills and extensive experience with Python
  • Professional experience working with LLMs, transformers and open-source models from HuggingFace
  • Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
  • Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
  • Experience using deep learning libraries and platforms, such as PyTorch
  • Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
  • Excellent analytical and problem-solving skills
  • Excellent oral and written communication skills
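The classic-algorithms requirement above (Naive Bayes among them) can be shown end to end in a few lines. The two-class corpus below is invented purely for illustration; real work would use scikit-learn and far more data.

```python
import math
from collections import Counter

# tiny invented corpus: classify place descriptions as valid vs spam
docs = [
    ("valid", "park open daily near river"),
    ("valid", "museum open weekdays city center"),
    ("invalid", "asdf qqq zz spam link"),
    ("invalid", "spam spam click link now"),
]

def train_nb(docs):
    class_counts = Counter(label for label, _ in docs)
    word_counts = {c: Counter() for c in class_counts}
    for label, text in docs:
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts for w in word_counts[c]}
    return class_counts, word_counts, vocab

def predict(model, text):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c, n_c in class_counts.items():
        lp = math.log(n_c / total)  # class prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[c][w] + 1) / denom)  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = train_nb(docs)
label = predict(model, "museum open daily")
```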


Extra Merit Qualifications


  • Knowledge in at least one of the following: NLP, information retrieval, data mining
  • Ability to do statistical modeling and build predictive models
  • Programming skills and experience with Scala and/or Java
Deqode

Samiksha Agrawal
Posted by Samiksha Agrawal
Remote only
9 - 15 yrs
₹9L - ₹16L / yr
Machine Learning (ML)
MLOps
CI/CD
Python
Generative AI

Job Description -

Profile: Senior ML Lead

Experience Required: 10+ Years

Work Mode: Remote

Key Responsibilities:

  • Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
  • Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
  • Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
  • Ensure AI/ML solutions align with business goals, performance, and compliance requirements
  • Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap

Required Skills:

  • Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
  • Proficiency in Python with ML libraries and frameworks
  • MLOps: CI/CD/CT pipelines for ML deployment with Azure
  • Experience with OpenAI/Generative AI solutions
  • Cloud-native services: Azure ML, Snowflake
  • 8+ years in data science with at least 2 years in solution architecture role
  • Experience with large-scale model deployment and performance tuning
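The Linear Regression fundamental listed above reduces to two closed-form expressions in the 1-D case; a pure-Python sketch with invented data points:

```python
# invented points lying roughly on y = 2x
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope: covariance(x, y) / variance(x); intercept from the means
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - w * mean_x  # fitted line: y = w*x + b
```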

Good-to-Have:

  • Strong background in Computer Science or Data Science
  • Azure certifications
  • Experience in data governance and compliance


One2n

Krunali Lole
Posted by Krunali Lole
Remote, Pune
9 - 12 yrs
₹30L - ₹45L / yr
SRE
Monitoring
DevOps
Terraform
OpenTelemetry


About the role:

We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.

At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.


Key responsibilities:

  • Own and drive reliability and infrastructure strategy across multiple products or client engagements
  • Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
  • Lead architecture discussions around observability, scalability, availability, and cost efficiency.
  • Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
  • Build and review production-grade CI/CD and IaC systems used across teams
  • Act as an escalation point for complex production issues and incident retrospectives.
  • Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
  • Mentor junior engineers through design reviews, technical guidance, and best practices.
  • Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
  • Help teams mature their on-call processes, reliability culture, and operational ownership.
  • Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
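The SLO/SLI practice mentioned above usually starts with an error-budget calculation; a toy sketch with hypothetical numbers:

```python
# hypothetical numbers for a 99.9% monthly availability SLO
slo = 0.999
minutes_in_month = 30 * 24 * 60            # 43200
budget_min = minutes_in_month * (1 - slo)  # allowed downtime: ~43.2 min
downtime_so_far = 12.5                     # observed downtime this month
budget_remaining = budget_min - downtime_so_far
burn = downtime_so_far / budget_min        # fraction of the budget consumed
```

Alerting on the burn rate (rather than raw downtime) is the usual way to page early without paging on every blip.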


About you:

  • 9+ years of experience in SRE, DevOps, or software engineering roles
  • Strong experience designing and operating Kubernetes-based systems on AWS at scale
  • Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
  • Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
  • Strong understanding of distributed systems, microservices, and containerized workloads.
  • Ability to write and review production-quality code (Golang, Python, Java, or similar)
  • Solid Linux fundamentals and experience debugging complex system-level issues
  • Experience driving cross-team technical initiatives.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written communication and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.


Nice to have:

  • Experience working in consulting or multi-client environments.
  • Exposure to cost optimization, or large-scale AWS account management
  • Experience building internal platforms or shared infrastructure used by multiple teams.
  • Prior experience influencing or defining engineering standards across organizations.


Talent Pro
Remote only
8 - 10 yrs
₹40L - ₹90L / yr
Python

8+ years backend engineering experience in production systems


Proven experience architecting large-scale distributed systems (high throughput, low latency, high availability)


Deep expertise in system design including scalability, fault tolerance, and performance optimization


Experience leading cross-team technical initiatives in complex systems


Strong understanding of security, privacy, compliance, and secure coding practices

Remote only
6 - 12 yrs
₹45L - ₹50L / yr
Python
React.js
JavaScript
API management
RESTful APIs

About the Role

We are seeking a hands-on Tech Lead to design, build, and integrate AI-driven systems that automate and enhance real-world business workflows. This is a high-impact role for someone who enjoys full-stack ownership — from backend AI architecture to frontend user experiences — and can align engineering decisions with measurable product outcomes.

You will begin as a strong individual contributor, independently architecting and deploying AI-powered solutions. As the product portfolio scales, you will lead a distributed team across India and Australia, acting as a System Integrator to align engineering, data, and AI contributions into cohesive production systems.

Example Project

Design and deploy a multi-agent AI system to automate critical stages of a company’s sales cycle, including:

  • Generating client proposals using historical SharePoint data and CRM insights
  • Summarizing meeting transcripts
  • Drafting follow-up communications
  • Feeding structured insights into dashboards and workflow tools

The solution will combine RAG pipelines, LLM reasoning, and React-based interfaces to deliver measurable productivity gains.

Key Responsibilities

  • Architect and implement AI workflows using LLMs, vector databases, and automation frameworks
  • Act as a System Integrator, coordinating deliverables across distributed engineering and AI teams
  • Develop frontend interfaces using React/JavaScript to enable seamless human-AI collaboration
  • Design APIs and microservices integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure)
  • Drive architecture decisions balancing scalability, performance, and security
  • Collaborate with product managers, clients, and data teams to translate business use cases into production-ready systems
  • Mentor junior engineers and evolve into a broader leadership role as the team grows

Ideal Candidate Profile

Experience Requirements

  • 5+ years in full-stack development (Python backend + React/JavaScript frontend)
  • Strong experience in API and microservice integration
  • 2+ years leading technical teams and coordinating distributed engineering efforts
  • 1+ year of hands-on AI project experience (LLMs, Transformers, LangChain, OpenAI/Azure AI frameworks)
  • Prior experience in B2B SaaS environments, particularly in AI, automation, or enterprise productivity solutions

Technical Expertise

  • Designing and implementing AI workflows including RAG pipelines, vector databases, and prompt orchestration
  • Ensuring backend and AI systems are scalable, reliable, observable, and secure
  • Familiarity with enterprise integrations (SharePoint, Teams, Databricks, Azure)
  • Experience building production-grade AI systems within enterprise SaaS ecosystems
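The RAG-pipeline expertise above hinges on a retrieval step; it can be sketched with bag-of-words cosine similarity. Document IDs and text are invented, and a real system would use embeddings in a vector database rather than word counts.

```python
import math
from collections import Counter

# invented document store standing in for SharePoint/CRM content
corpus = {
    "prop-2023": "client proposal pricing cloud migration services",
    "mtg-0412": "meeting transcript follow up action items demo",
}

def bow(text):
    return Counter(text.split())  # bag-of-words stand-in for an embedding

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb)

def retrieve(query, corpus, k=1):
    q = bow(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow(corpus[d])), reverse=True)
    return ranked[:k]  # top-k documents to place in the LLM prompt

top = retrieve("draft a pricing proposal for cloud migration", corpus)
```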




Mango Sciences
Remote only
5 - 7 yrs
₹10L - ₹15L / yr
Python
SQL
SQL queries

Database Programmer / Developer (SQL, Python, Healthcare)

Job Summary

We are seeking a skilled and experienced Database Programmer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining our database systems, with a strong focus on data integrity, performance, and security. The role requires expertise in SQL, strong programming skills in Python, and prior experience working within the healthcare domain to handle sensitive data and complex regulatory requirements.

Key Responsibilities

  • Design, implement, and maintain scalable and efficient database schemas and systems.
  • Develop and optimize complex SQL queries, stored procedures, and triggers for data manipulation and reporting.
  • Write and maintain Python scripts to automate data pipelines, ETL processes, and database tasks.
  • Collaborate with data analysts, software developers, and other stakeholders to understand data requirements and deliver robust solutions.
  • Ensure data quality, integrity, and security, adhering to industry standards and regulations such as HIPAA.
  • Troubleshoot and resolve database performance issues, including query tuning and indexing.
  • Create and maintain technical documentation for database architecture, processes, and applications.
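The responsibilities above can be sketched with Python's built-in sqlite3. The schema and values are invented, and a production system would target SQL Server or PostgreSQL, but the pattern (parameterized queries, an index for the hot lookup) carries over.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (id INTEGER PRIMARY KEY, patient_id TEXT, amount REAL)"
)
# parameterized inserts: never interpolate values into SQL strings
conn.executemany(
    "INSERT INTO claims (patient_id, amount) VALUES (?, ?)",
    [("p1", 120.0), ("p1", 80.0), ("p2", 300.0)],
)
# index to support per-patient lookups (query tuning)
conn.execute("CREATE INDEX idx_claims_patient ON claims (patient_id)")
total = conn.execute(
    "SELECT SUM(amount) FROM claims WHERE patient_id = ?", ("p1",)
).fetchone()[0]
```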

Required Qualifications

  • Experience:
  • Proven experience as a Database Programmer, SQL Developer, or a similar role.
  • Demonstrable experience working with database systems, including data modeling and design.
  • Strong background in developing and maintaining applications and scripts using Python.
  • Direct experience within the healthcare domain is mandatory, including familiarity with medical data (e.g., patient records, claims data) and related regulatory compliance (e.g., HIPAA).
  • Technical Skills:
  • Expert-level proficiency in Structured Query Language (SQL) and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
  • Solid programming skills in Python, including experience with relevant libraries for data handling (e.g., Pandas, SQLAlchemy).
  • Experience with data warehousing concepts and ETL (Extract, Transform, Load) processes.
  • Familiarity with version control systems, such as Git.

Preferred Qualifications

  • Experience with NoSQL databases (e.g., MongoDB, Cassandra).
  • Knowledge of cloud-based data platforms (e.g., AWS, GCP, Azure).
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Familiarity with other programming languages relevant to data science or application development.

Education

  • Bachelor's degree in Computer Science, Information Technology, or a related field.

 

To proceed to the next round, please fill out the Google form with your updated resume.


https://forms.gle/f7zgYAa632ww5Teb6

Remote only
6 - 12 yrs
₹45L - ₹60L / yr
Python
React.js

Strong AI & Full-Stack Tech Lead

Mandatory (Experience 1): Must have 5+ years of experience in full-stack development, including Python for backend development and React/JavaScript for frontend, along with API/microservice integration.

Mandatory (Experience 2): Must have 2+ years of experience in leading technical teams, coordinating engineers, and acting as a system integrator across distributed teams.

Mandatory (Experience 3): Must have 1+ year of hands-on experience in AI projects, including LLMs, Transformers, LangChain, or OpenAI/Azure AI frameworks.

Mandatory (Tech Skills 1): Must have experience in designing and implementing AI workflows, including RAG pipelines, vector databases, and prompt orchestration.

Mandatory (Tech Skills 2): Must ensure backend and AI system scalability, reliability, observability, and security best practices.

Mandatory (Company): Must have experience working in B2B SaaS companies delivering AI, automation, or enterprise productivity solutions

Tech Skills (Familiarity): Should be familiar with integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure) and enterprise SaaS environments.

Mandatory (Note): Both founders are based in Australia; the design team (2) and developer team (4) are in India. Indian shift timings apply.

Amoolya Capital

Shantanu Sharma
Posted by Shantanu Sharma
Remote only
0 - 3 yrs
₹10000 - ₹40000 / mo
C++
Operating systems
Probability
Thread
Data Structures

🚀 Hiring: C++ Content Writer Intern

📍 Remote | ⏳ 3 Months | 💼 Internship

We’re looking for someone who has strong proficiency in C++, DSA and maths (probability, statistics).


You should be comfortable with:

1. Modern C++ (RAII, memory management, move semantics)

2. Concurrency & low-latency concepts (threads, atomics, cache behavior)

3. OS fundamentals (threads vs processes, virtual memory)

4. Strong Maths (probability, stats)

5. Writing, reading, and explaining real code


What you’ll do:

1. Write deep technical content on C++ and coding.

2. Break down core computer science, HFT-style, low-latency concepts

3. Create articles, code deep dives, and explainers


What you get:

1. Good Pay as per industry standards

2. Exposure to real C++ applied in quant engineering

3. Mentorship from top engineering minds.

4. A strong public technical portfolio

5. Clear signal for Quant Developer / SDE/ Low-latency C++ roles.

Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
Python
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
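The ingestion-and-cleaning step above can be sketched with the standard library alone. File contents and field names are invented; a real pipeline would run the same logic on Dataflow and load the cleaned records into BigQuery.

```python
import csv
import io

# stand-in for a downloaded source file (contents invented)
raw = io.StringIO("country,population\nIndia, 1_400_000_000\nFrance,68000000\n,\n")

def clean(rows):
    for row in rows:
        name = (row.get("country") or "").strip()
        pop = (row.get("population") or "").strip().replace("_", "")
        if not name or not pop.isdigit():
            continue  # drop malformed rows before loading
        yield {"country": name, "population": int(pop)}

records = list(clean(csv.DictReader(raw)))
```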

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Wokelo AI
Ishvika Dwivedi
Posted by Ishvika Dwivedi
Remote only
0 - 1 yrs
₹7L - ₹10L / yr
Python
Django
SaaS
Natural Language Processing (NLP)
Large Language Models (LLM)

About Wokelo:


Wokelo is an LLM agentic platform for investment research and decision making. We automate complex research and analysis tasks traditionally performed by humans. Our platform is leveraged by leading Private Equity firms, Investment Banks, Corporate Strategy teams, Venture Capitalists, and Fortune 500 companies.


With our proprietary agentic technology and state-of-the-art large language models (LLMs), we deliver rich insights and high-fidelity analysis in minutes—transforming how financial decisions are made.


Headquartered in Seattle, we are a global team backed by renowned venture funds and industry leaders. As we rapidly expand across multiple segments, we are looking for passionate individuals to join us on this journey.


Requirements:


  • 0-1 years of experience as a Software Developer.
  • Bachelor’s or Master’s degree in Computer Science or related field.
  • Proficiency in Python with strong experience in Django Rest Framework.
  • Hands-on experience with Django ORM.
  • Ability to learn quickly and adapt to new technologies.
  • Strong problem-solving and analytical skills.
  • Knowledge of NLP, ML models, and related engineering practices (preferred).
  • Familiarity with LLMs, RLHF, transformers, embeddings (a plus).
  • Prior experience in building or scaling a SaaS platform (a plus).
  • Strong attention to detail with experience integrating testing into development workflows.


Key Responsibilities:


  • Develop, test, and maintain scalable backend services and APIs using Python (Django Rest Framework).
  • Work with Django ORM to build efficient database-driven applications.
  • Collaborate with cross-functional teams to design and implement features that enhance the Wokelo platform.
  • Contribute to NLP engineering and ML model development to power GenAI solutions (preferred but not mandatory).
  • Ensure testing and code quality are embedded into the development process.
  • Research and adopt emerging technologies, providing innovative solutions to complex problems.
  • Support the transition of prototypes into production-ready features on our SaaS platform.
  • Perform ad hoc tasks as assigned by your manager.


Why Join Us?


  • Opportunity to work on a first-of-its-kind Generative AI SaaS platform.
  • A steep learning curve in a fast-paced, high-growth startup environment.
  • Exposure to cutting-edge technologies in NLP, ML models, LLM Ops, and DevOps.
  • Collaborative culture with global talent and visionary leadership.
  • Full health coverage, flexible time-off, and remote work culture.


Oceano Apex
Neeraj Dutt
Posted by Neeraj Dutt
Remote only
5 - 7 yrs
₹17L - ₹22L / yr
Python
Work in process

*Job description:*


*Company:* Innovative Fintech Start-up


*Location:* On-site in Gurgaon, India


*Job Type:* Full-Time


*Pay:* ₹100,000.00 - ₹150,000.00 per month


*Experience Level:* Senior (7+ years required)


*About Us*


We are a dynamic Fintech company revolutionizing the financial services landscape through cutting-edge technology. We're building innovative solutions to empower users in trading, market analysis, and financial compliance. As we expand, we're seeking a visionary Senior Developer to pioneer and lead our brand-new tech team from the ground up. This is an exciting opportunity to shape the future of our technology stack and drive mission-critical initiatives in a fast-paced environment.


*Role Overview*


As the Senior Developer and founding Tech Team Lead, you will architect, develop, and scale our core systems while assembling and mentoring a high-performing team. You'll work on generative AI-driven applications, integrate with financial APIs, and ensure robust, secure platforms for trading and market data. This role demands hands-on coding expertise combined with strategic leadership to deliver under tight deadlines and high-stakes conditions.


*Key Responsibilities*


Design, develop, and deploy scalable backend systems using Python as the primary language.


Lead the creation of a new tech team: recruit, mentor, and guide junior developers to foster a collaborative, innovative culture.


Integrate generative AI technologies (e.g., Claude from Anthropic, OpenAI models) to enhance features like intelligent coding assistants, predictive analytics, and automated workflows.


Solve complex problems in real-time, optimizing for performance in mission-critical financial systems.


Collaborate with cross-functional teams to align tech strategies with business goals, including relocation planning to Dubai.


Ensure code quality, security, and compliance in all developments.


Thrive in a high-pressure environment, managing priorities independently while driving projects to completion.


*Required Qualifications:*


7+ years of software development experience; 5+ years in Python.


Proven hands-on experience with OpenAI and Anthropic (Claude) APIs in production systems.


Strong problem-solving skills and ability to operate independently in ambiguous situations.


Experience leading projects, mentoring developers, or building teams.


Bachelor’s/Master’s degree in Computer Science, Engineering, or equivalent experience.


Experience with financial markets, trading systems, or market data platforms.


Familiarity with Meta Trader integrations.


Cloud experience, especially Google Cloud Platform (GCP).


Knowledge of fintech compliance and trade reporting standards.


*What We Offer:*


Competitive salary and benefits package.


Opportunity to build and lead a team in a high-growth Fintech space.


A collaborative, innovative work culture with room for professional growth.


*Job Types:* Full-time, Permanent


*Work Location:* In person

Palcode.ai

Team Palcode
Posted by Team Palcode
Remote only
1 - 2 yrs
₹4L - ₹5L / yr
Python
FastAPI
Flask

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works “magic” on pre-construction workflows, cutting them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.



Why Palcode.ai


Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data

High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday

Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions

Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment

Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions

Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software


Your Role:

  • Design and build our core AI services and APIs using Python
  • Create reliable, scalable backend systems that handle complex data
  • Help set up cloud infrastructure and deployment pipelines
  • Collaborate with our AI team to integrate machine learning models
  • Write clean, tested, production-ready code


You'll fit right in if:

  • You have 1 year of hands-on Python development experience
  • You're comfortable with full-stack development and cloud services
  • You write clean, maintainable code and follow good engineering practices
  • You're curious about AI/ML and eager to learn new technologies
  • You enjoy fast-paced startup environments and take ownership of your work


How we will set you up for success

  • You will work closely with the founding team to understand what we are building.
  • You will receive comprehensive training on the tech stack, including optional virtual sessions.
  • You will have a monthly one-on-one with the founders to discuss feedback.
  • A unique opportunity to learn from the best - as Gold partners of the AWS, Razorpay, and Microsoft startup programs, we have access to experienced people to discuss and brainstorm ideas with.
  • You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.


Location: Bangalore, Remote


Compensation: Competitive salary + Meaningful equity


If you get excited about solving hard problems that have real-world impact, we should talk.


All the best!!

suntekai
Kushi A
Posted by Kushi A
Remote only
0 - 1 yrs
₹10000 - ₹12000 / mo
Python
PostgreSQL
Data Visualization
Business Intelligence (BI)
SQL
+2 more

Job Description: Data Analyst


About the Role

We are seeking a highly skilled Data Analyst with strong expertise in SQL/PostgreSQL, Python (Pandas), Data Visualization, and Business Intelligence tools to join our team. The candidate will be responsible for analyzing large-scale datasets, identifying trends, generating actionable insights, and supporting business decisions across marketing, sales, operations, and customer experience.

Key Responsibilities

  • Data Extraction & Management

  • Write complex SQL queries in PostgreSQL to extract, clean, and transform large datasets.

  • Ensure accuracy, reliability, and consistency of data across different platforms.

  • Data Analysis & Insights

  • Conduct deep-dive analyses to understand customer behavior, funnel drop-offs, product performance, campaign effectiveness, and sales trends.

  • Perform cohort, LTV (lifetime value), retention, and churn analysis to identify opportunities for growth.

  • Provide recommendations to improve conversion rates, average order value (AOV), and repeat purchase rates.

  • Business Intelligence & Visualization

  • Build and maintain interactive dashboards and reports using BI tools (e.g., PowerBI, Metabase or Looker).

  • Create visualizations that simplify complex datasets for stakeholders and management.

  • Python (Pandas)

  • Use Python (Pandas, NumPy) for advanced analytics.

  • Collaboration & Stakeholder Management

  • Work closely with product, operations, and leadership teams to provide insights that drive decision-making.

  • Communicate findings in a clear, concise, and actionable manner to both technical and non-technical stakeholders.
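
As a small illustration of the cohort retention analysis mentioned above, here is a toy pure-Python sketch; customer IDs and months are invented, and in practice this would be a SQL or Pandas query:

```python
from collections import defaultdict

# Hypothetical order log: (customer_id, "YYYY-MM" order month)
orders = [
    ("c1", "2024-01"), ("c2", "2024-01"), ("c3", "2024-01"),
    ("c1", "2024-02"), ("c3", "2024-02"),
    ("c1", "2024-03"),
]

# Assign each customer to the cohort of their first order month
first_month = {}
for cust, month in sorted(orders, key=lambda o: o[1]):
    first_month.setdefault(cust, month)

# Track which of each cohort's customers are active in each month
cohort_active = defaultdict(set)
for cust, month in orders:
    cohort_active[(first_month[cust], month)].add(cust)

cohort_size = defaultdict(set)
for cust, month in first_month.items():
    cohort_size[month].add(cust)

def retention(cohort, month):
    """Fraction of the cohort's customers who ordered again in `month`."""
    return len(cohort_active[(cohort, month)]) / len(cohort_size[cohort])
```

Here `retention("2024-01", "2024-02")` is 2/3: two of the three January customers came back in February.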

Required Skills

  • SQL/PostgreSQL

  • Complex joins, window functions, CTEs, aggregations, query optimization.

  • Python (Pandas & Analytics)

  • Data wrangling, cleaning, transformations, exploratory data analysis (EDA).

  • Libraries: Pandas, NumPy, Matplotlib, Seaborn

  • Data Visualization & BI Tools

  • Expertise in creating dashboards and reports using Metabase or Looker.

  • Ability to translate raw data into meaningful visual insights.

  • Business Intelligence

  • Strong analytical reasoning to connect data insights with e-commerce KPIs.

  • Experience in funnel analysis, customer journey mapping, and retention analysis.

  • Analytics & E-commerce Knowledge

  • Understanding of metrics like CAC, ROAS, LTV, churn, contribution margin.

  • General Skills

  • Strong communication and presentation skills.

  • Ability to work cross-functionally in fast-paced environments.

  • Problem-solving mindset with attention to detail.



Education: Bachelor’s degree in Data Science, Computer Science, or a related field




Remote only
0 - 5 yrs
₹1.5L - ₹3L / yr
AWS SageMaker
Machine Learning (ML)
Python
Amazon Web Services (AWS)

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.


We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.


Responsibilities:

- Build ML models for fraud detection and anomaly detection

- Work with transactional and behavioral data

- Deploy models on AWS (S3, SageMaker, EC2/Lambda)

- Build data pipelines and inference workflows

- Integrate ML models with backend APIs
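
As a minimal sketch of the anomaly-detection idea behind fraud scoring, here is a simple z-score rule over invented transaction amounts; a production system would use trained ML models deployed on SageMaker, this only illustrates the concept:

```python
from statistics import mean, stdev

def zscore_outliers(amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# Hypothetical transaction amounts with one obvious outlier
txns = [120.0, 98.5, 110.0, 105.0, 99.0, 101.5, 9800.0]
flagged = zscore_outliers(txns, threshold=2.0)
```

Only the 9800.0 transaction exceeds the threshold and is flagged for review.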


Requirements:

- Strong Python and Machine Learning experience

- Hands-on AWS experience

- Experience deploying ML models in production

- Ability to work independently in a remote setup


Job Type: Contract / Freelance  

Duration: 3–6 months (extendable)  

Location: Remote (India)


MyOperator - VoiceTree Technologies

Vijay Muthu
Posted by Vijay Muthu
Remote only
3.5 - 5 yrs
₹14L - ₹20L / yr
Python
Django
MySQL
PostgreSQL
FastAPI
+22 more

About Us:

MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.

Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture
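
To illustrate the "clean, tested code with pytest" expectation, here is a toy function with a pytest-style test; the function and its rules are invented for the example, and pytest would collect any function named `test_*` automatically:

```python
def normalize_phone(raw: str) -> str:
    """Normalize an Indian number to +91 E.164 form (toy rules for the example)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 10:
        return "+91" + digits
    if len(digits) == 12 and digits.startswith("91"):
        return "+" + digits
    raise ValueError(f"unrecognized phone number: {raw!r}")

# pytest discovers test_* functions; plain assert statements are all it needs
def test_normalize_phone():
    assert normalize_phone("98765 43210") == "+919876543210"
    assert normalize_phone("+91-98765-43210") == "+919876543210"

test_normalize_phone()
```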

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT and Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Remote only
2 - 4 yrs
₹25L - ₹31L / yr
Python
Microservices

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company) : Product companies (B2B SaaS preferred)

Mandatory (Stability): Must have at least 2 years of tenure at each previous company (if less, a valid reason is expected)

Remote only
9 - 12 yrs
₹2L - ₹2.5L / yr
Amazon Web Services (AWS)
Python
Terraform
Data Transformation Tool (DBT)
SQL
+1 more

🚀 Hiring: Associate Tech Architect / Senior Tech Specialist

🌍 Remote | Contract Opportunity

We’re looking for a seasoned tech professional who can lead the design and implementation of cloud-native data and platform solutions. This is a remote, contract-based role for someone with strong ownership and architecture experience.

🔴 Mandatory & Most Important Skill Set

Hands-on expertise in the following technologies is essential:

AWS – Cloud architecture & services

Python – Backend & data engineering

Terraform – Infrastructure as Code

Airflow – Workflow orchestration

SQL – Data processing & querying

DBT – Data transformation & modeling

💼 Key Responsibilities

  • Architect and build scalable AWS-based data platforms
  • Design and manage ETL/ELT pipelines
  • Orchestrate workflows using Airflow
  • Implement cloud infrastructure using Terraform
  • Lead best practices in data architecture, performance, and scalability
  • Collaborate with engineering teams and provide technical leadership

🎯 Ideal Profile

✔ Strong experience in cloud and data platform architecture

✔ Ability to take end-to-end technical ownership

✔ Comfortable working in a remote, distributed team environment

📄 Role Type: Contract

🌍 Work Mode: 100% Remote

If you have deep expertise in these core technologies and are ready to take on a high-impact architecture role, we’d love to hear from you.


Blockify
Dhanur Sehgal
Posted by Dhanur Sehgal
Remote only
3 - 8 yrs
₹6L - ₹12L / yr
Go Programming (Golang)
Python
Scalability
Infrastructure architecture
SQL
+6 more

We’re hiring a remote, contract-based Backend & Infrastructure Engineer who can build and run production systems end-to-end.

You will build and scale high-throughput backend services in Golang and Python, operate ClickHouse-powered analytics at scale, manage Linux servers for maximum uptime, scalability, and reliability, and drive cost efficiency as a core engineering discipline across the entire stack.



What You Will Do:


Backend Development (Golang & Python)

  • Design and maintain high-throughput RESTful/gRPC APIs — primarily Golang, Python for tooling and supporting services
  • Architect for horizontal scalability, fault tolerance, and low-latency at scale
  • Implement caching (Redis/Memcached), rate limiting, efficient serialization, and CI/CD pipelines
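
As a hedged sketch of the rate-limiting pattern named above, here is a toy token bucket with an injected clock so the behaviour is deterministic; production code would use a monotonic clock and track per-client buckets, typically in Redis:

```python
class TokenBucket:
    """Toy token-bucket rate limiter: refills `rate` tokens/sec, bursts up to `capacity`.

    The clock is injected for deterministic testing; real code would pass time.monotonic."""

    def __init__(self, rate: float, capacity: float, clock):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens = capacity
        self.last = clock()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic fake clock for the example
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(3)]  # two allowed, third rejected
t[0] = 1.0                                  # one second later, one token has refilled
```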

Scalable Architecture & System Design

  • Design and evolve distributed, resilient backend architecture that scales without proportional cost increase
  • Make deliberate trade-offs (CAP, cost vs. performance) and design multi-region HA with automated failover

ClickHouse & Analytical Data Infrastructure

  • Deploy, tune, and operate ClickHouse clusters for real-time analytics and high-cardinality OLAP workloads
  • Design optimal table engines, partition strategies, materialized views, and query patterns
  • Manage cluster scaling, replication, schema migrations, and upstream/downstream integrations

Cost Efficiency & Cost Optimization

  • Own cost optimization end-to-end: right-sizing, reserved/spot capacity, storage tiering, query optimization, compression, batching
  • Build cost dashboards, budgets, and alerts; drive a culture of cost-aware engineering

Linux Server Management & Infrastructure

  • Administer and harden Linux servers (Ubuntu, Debian, CentOS/RHEL) — patching, security, SSH, firewalls
  • Manage VPS/bare-metal provisioning, capacity planning, and containerized workloads (Docker, Kubernetes/Nomad)
  • Implement Infrastructure-as-Code (Terraform/Pulumi); optionally manage AWS/GCP as needed

Data, Storage & Scheduling

  • Optimize SQL schemas and queries (PostgreSQL, MySQL); manage data archival, cold storage, and lifecycle policies
  • Build and maintain cron jobs, scheduled tasks, and batch processing systems

Uptime, Reliability & Observability

  • Own system uptime: zero-downtime deployments, health checks, self-healing infra, SLOs/SLIs
  • Build observability stacks (Prometheus, Grafana, Datadog, OpenTelemetry); structured logging, distributed tracing, alerting
  • Drive incident response, root cause analysis, and post-mortems


Required Qualifications:


Must-Have (Critical)

  • Deep proficiency in Golang (primary) and Python
  • Proven ability to design and build scalable, distributed architectures
  • Production experience deploying and operating ClickHouse at scale
  • Track record of driving measurable cost efficiency and cost optimization
  • 5+ years in backend engineering and infrastructure roles

Also Required

  • Strong Linux server administration (Ubuntu, Debian, CentOS/RHEL) — comfortable living in the terminal
  • Proven uptime and reliability track record across production infrastructure
  • Strong SQL (PostgreSQL, MySQL); experience with high-throughput APIs (10K+ RPS)
  • VPS/bare-metal provisioning, Docker, Kubernetes/Nomad, IaC (Terraform/Pulumi)
  • Observability tooling (Prometheus, Grafana, Datadog, OpenTelemetry)
  • Cron jobs, batch processing, data archival, cold storage management
  • Networking fundamentals (DNS, TCP/IP, load balancing, TLS)


Nice to Have

  • AWS, GCP, or other major cloud provider experience
  • Message queues / event streaming (Kafka, RabbitMQ, SQS/SNS)
  • Data pipelines (Airflow, dbt); FinOps practices
  • Open-source contributions; compliance background (SOC 2, HIPAA, GDPR)


What We Offer

  • Remote, contractual role
  • Flexible time zones (overlap for standups + incident coverage)
  • Competitive contract compensation + equity
  • Long-term engagement opportunity based on performance
Euphoric Thought Technologies
Remote, Bengaluru (Bangalore)
3 - 4 yrs
₹11L - ₹13L / yr
Python
SQL

We are seeking a Data Engineer with 3–4 years of relevant experience to join our team. The ideal candidate should have strong expertise in Python and SQL and be available to join immediately.

Location: Bangalore

Experience: 3–4 Years

Joining: Immediate Joiner preferred

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines and data models
  • Extract, transform, and load (ETL) data from multiple sources
  • Write efficient and optimized SQL queries for data analysis and reporting
  • Develop data processing scripts and automation using Python
  • Ensure data quality, integrity, and performance across systems
  • Collaborate with cross-functional teams to support business and analytics needs
  • Troubleshoot data-related issues and optimize existing processes
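
The ETL responsibility above can be sketched end to end in a few lines; here an in-memory SQLite database stands in for the real warehouse and the CSV payload is invented:

```python
import csv
import io
import sqlite3

# Extract: a hypothetical CSV export (in-memory for the example)
raw = io.StringIO("order_id,amount\n1,100.50\n2,\n3,75.25\n")

# Transform: parse rows and drop records with missing amounts
rows = [(int(r["order_id"]), float(r["amount"]))
        for r in csv.DictReader(raw) if r["amount"]]

# Load: write into a relational table and verify with SQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Real pipelines add scheduling, idempotency, and data-quality checks around this core shape.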

Required Skills & Qualifications:

  • 3–4 years of hands-on experience as a Data Engineer or similar role
  • Strong proficiency in Python and SQL
  • Experience working with relational databases and large datasets
  • Good understanding of data warehousing and ETL concepts
  • Strong analytical and problem-solving skills
  • Ability to work independently and in a team-oriented environment

Preferred:

  • Experience with cloud platforms or data tools (added advantage)
  • Exposure to performance tuning and data optimization





Deqode

Apoorva Jain
Posted by Apoorva Jain
Remote only
9 - 18 yrs
₹5L - ₹29L / yr
Python
SQL
NOSQL Databases
DBA

Job Summary


We are looking for an experienced Python DBA with strong expertise in Python scripting and SQL/NoSQL databases. The candidate will be responsible for database administration, automation, performance optimization, and ensuring availability and reliability of database systems.


Key Responsibilities

  • Administer and maintain SQL and NoSQL databases
  • Develop Python scripts for database automation and monitoring
  • Perform database performance tuning and query optimization
  • Manage backups, recovery, replication, and high availability
  • Ensure data security, integrity, and compliance
  • Troubleshoot and resolve database-related issues
  • Collaborate with development and infrastructure teams
  • Monitor database health and performance
  • Maintain documentation and best practices
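
As a small illustration of the "Python scripts for database automation and monitoring" item, here is a sketch that collects row counts and runs an integrity check; SQLite stands in for PostgreSQL/MySQL so the example is self-contained:

```python
import sqlite3

def table_row_counts(conn):
    """Return {table_name: row_count}, the kind of health metric a monitoring script emits."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {t: conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0] for t in tables}

# Build a tiny example database to monitor
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,)])

counts = table_row_counts(conn)
integrity = conn.execute("PRAGMA integrity_check").fetchone()[0]  # "ok" when healthy
```

A real DBA script would push these metrics to an alerting system rather than just computing them.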


Required Skills

  • 10+ years of experience in Database Administration
  • Strong proficiency in Python
  • Experience with SQL databases (PostgreSQL, MySQL, Oracle, SQL Server)
  • Experience with NoSQL databases (MongoDB, Cassandra, etc.)
  • Strong understanding of indexing, schema design, and performance tuning
  • Good analytical and problem-solving skills


Forbes Advisor

Bisman Gill
Posted by Bisman Gill
Remote only
4+ yrs
Up to ₹27L / yr (varies)
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Python
SQL
Amazon Web Services (AWS)
+6 more

Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.

We do this by combining data-driven content, rigorous product comparisons, and user-first design all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.

The Role

We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.

This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.

You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.


What you’ll do

● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python

● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems

● Model data using dbt, with an emphasis on quality, reuse, and documentation

● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics

● Write high-performance SQL and support analytics and reporting teams in self-serve data access

● Monitor and improve data quality, lineage, and governance across critical workflows

● Collaborate with engineers, analysts, and business partners across the US, UK, and India


What You Bring

● 4+ years of data engineering experience, ideally in a global, distributed team

● Strong Python development skills and experience

● Expert in SQL for data transformation, analysis, and debugging

● Deep knowledge of Airflow and orchestration best practices

● Proficient in DBT (data modeling, testing, release workflows)

● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus

● Strong grasp of data governance, observability, and privacy standards

● Excellent written and verbal communication skills


Nice to have

● Experience working with digital marketing and performance data, including:

Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)

● Familiarity with BI tools like Tableau or Looker

● Exposure to attribution models, media mix modeling, or A/B testing infrastructure

● Collaboration experience with data scientists or machine learning workflows


Why Join Us

● Monthly long weekends — every third Friday off

● Wellness reimbursement to support your health and balance

● Paid parental leave

● Remote-first with flexibility and trust

● Work with a world-class data and marketing team inside a globally recognized brand

ByteFoundry AI

Bisman Gill
Posted by Bisman Gill
Remote only
3 - 8 yrs
Up to ₹40L / yr (varies)
React.js
NodeJS (Node.js)
Python
SQL
Amazon Web Services (AWS)
+3 more

About the Role

We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.

Responsibilities

- Develop and maintain full-stack applications.

- Implement clean, maintainable, and efficient code.

- Collaborate with designers, product managers, and backend engineers.

- Participate in code reviews and debugging.

- Work with REST APIs/GraphQL.

- Contribute to CI/CD pipelines.

- Ability to work independently as well as within a collaborative team environment.


Required Technical Skills

- Strong knowledge of JavaScript/TypeScript.

- Experience with React.js, Next.js.

- Backend experience with Node.js, Express, NestJS.

- Understanding of SQL/NoSQL databases.

- Experience with Git, APIs, and debugging tools.

- Cloud familiarity (AWS/GCP/Azure).

AI and System Mindset

Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.

Soft Skills

- Strong problem-solving ability.

- Good communication and teamwork.

- Fast learner and adaptable.

Education

Bachelor's degree in Computer Science / Engineering or equivalent.

Alpheva AI
Ramakant gupta
Posted by Ramakant gupta
Remote only
1 - 3 yrs
₹10L - ₹25L / yr
React Native
React.js
NextJs (Next.js)
Python
PostgreSQL

About the Role

We’re hiring a Full Stack Engineer who can own features end to end, from UI to APIs to data models.

This is not a “ticket executor” role. You’ll work directly with product, AI, and founders to shape how users interact with intelligent financial systems.

If you enjoy shipping real features, fixing real problems, and seeing users actually use what you built, this role is for you.


What You Will Do

  • Build and ship frontend features using React, Next.js, and React Native
  • Develop backend services and APIs using Python and/or Golang
  • Own end-to-end product flows like onboarding, dashboards, insights, and AI conversations
  • Integrate frontend with backend and AI services (LLMs, tools, data pipelines)
  • Design and maintain PostgreSQL schemas, queries, and migrations
  • Ensure performance, reliability, and clean architecture across the stack
  • Collaborate closely with product, AI, and design to ship fast and iterate
  • Debug production issues and continuously improve UX and system quality


What We’re Looking For

  • 2 to 3+ years of professional full stack engineering experience
  • Strong hands-on experience with React, Next.js, and React Native
  • Backend experience with Python and/or Golang in production
  • Solid understanding of PostgreSQL, APIs, and system design
  • Strong fundamentals in HTML, CSS, TypeScript, and modern frontend patterns
  • Ability to work independently and take ownership in a startup environment
  • Product-minded engineer who thinks in terms of user outcomes, not just code
  • B.Tech in Computer Science or related field


Nice to Have

  • Experience with fintech, dashboards, or data-heavy products
  • Exposure to AI-powered interfaces, chat systems, or real-time data
  • Familiarity with cloud platforms like AWS or GCP
  • Experience handling sensitive or regulated data


Why Join Alpheva AI

  • Build real product used by real users from day one
  • Work directly with founders and influence core product decisions
  • Learn how AI-native fintech products are built end to end
  • High ownership, fast execution, zero corporate nonsense
  • Competitive compensation with meaningful growth upside


Unilog

Bisman Gill
Posted by Bisman Gill
Remote, BLR, Mysore
8yrs+
Up to ₹60L / yr (varies)
Machine Learning (ML)
Artificial Intelligence (AI)
Google Vertex AI
Agentic AI
PyTorch
+7 more

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.

Unilog’s Mission Statement

At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.


Designation:- AI Architect

Location: Bangalore/Mysore/Remote  

Job Type: Full-time  

Department: Software R&D  


About the Role  

We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.  

As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.


Key Responsibilities  

Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation. 


Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.


AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.


Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.  


Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.  


Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.


Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.  


Required Qualifications  


  1. 8-14 years of experience in AI/ML, software engineering, or a related field.  
  2. Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
  3. Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.  
  4. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.  
  5. Strong problem-solving skills and a passion for innovation.  
  6. Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.  


Preferred Qualifications  

  • Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.  
  • Knowledge of data pipelines, MLOps, and AI governance.  
  • Contributions to open-source AI/ML projects or published research papers.  


Why Join Us?  

  • Work on cutting-edge AI/ML innovations with the CTO Office.  
  • Influence the company’s future AI strategy and shape emerging technologies.  
  • Competitive compensation, growth opportunities, and a culture of continuous learning.    


About our Benefits:

Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.


Unilog is committed to building the best team and to fair hiring practices: we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity or expression, or any other protected class.

Deqode

at Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Remote only
4 - 7 yrs
₹4.5L - ₹10.5L / yr
Python
FastAPI
API
SQLAlchemy
Pydantic

🚀 Hiring: Python Developer at Deqode

⭐ Experience: 4+ Years

⭐ Work Mode: Remote

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Role Overview:

We are looking for a skilled Software Development Engineer (Python) to design, develop, and maintain scalable backend applications and high-performance RESTful APIs. The ideal candidate will work on modern microservices architecture, ensure clean and efficient code, and collaborate with cross-functional teams to deliver robust solutions.

Key Responsibilities:

  • Develop and maintain RESTful APIs and backend services using Python
  • Build scalable microservices and integrate third-party APIs
  • Design and optimize database schemas and queries
  • Ensure application security, performance, and reliability
  • Write clean, testable, and maintainable code
  • Participate in code reviews and follow best engineering practices

Mandatory Skills (3):

  1. Python – Strong hands-on experience in backend development
  2. FastAPI / REST API Development – Building and maintaining APIs
  3. SQLAlchemy / Relational Databases – Database modeling and optimization
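As a hedged illustration of the pattern these three skills combine into, here is a framework-free sketch of the validate, persist, and respond flow that FastAPI, Pydantic, and SQLAlchemy formalize (all names and validation rules here are illustrative, not Deqode's actual code):

```python
from dataclasses import dataclass, asdict

# Toy in-memory store standing in for a SQLAlchemy-backed table.
DB: dict = {}

@dataclass
class UserIn:
    name: str
    email: str

    def validate(self) -> None:
        # Pydantic would express these rules declaratively; done by hand here.
        if not self.name:
            raise ValueError("name must be non-empty")
        if "@" not in self.email:
            raise ValueError("email must contain '@'")

def create_user(payload: dict) -> dict:
    """POST /users equivalent: validate input, persist, return the record."""
    user = UserIn(**payload)
    user.validate()
    record = {"id": len(DB) + 1, **asdict(user)}
    DB[record["id"]] = record
    return record

print(create_user({"name": "Asha", "email": "asha@example.com"}))
# {'id': 1, 'name': 'Asha', 'email': 'asha@example.com'}
```

In a real FastAPI service the dataclass becomes a Pydantic model, `create_user` becomes a route handler, and the dict becomes a SQLAlchemy session against a relational database.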


Remote only
2 - 4 yrs
₹25L - ₹30L / yr
Python
Amazon Web Services (AWS)

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
2 - 4 yrs
₹23L - ₹30L / yr
Python
Microservices
Vue.js
MySQL
RESTful APIs

Job Details

- Job Title: Software Developer (Python, React/Vue)

- Industry: Technology

- Experience Required: 2-4 years

- Working Days: 5 days/week

- Job Location: Remote working

- CTC Range: Best in Industry


Review Criteria

  • Strong Full stack/Backend engineer profile
  • 2+ years of hands-on experience as a full stack developer (backend-heavy)
  • (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
  • (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
  • (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
  • (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
  • (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
  • (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
  • Product companies (B2B SaaS preferred)


Preferred

  • Preferred (Location) - Mumbai
  • Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
  • Preferred (Education): B.Tech from Tier 1, Tier 2 institutes


Role & Responsibilities

This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.

 

You will:

  • Build and own features end-to-end — from design → deployment → scale.
  • Architect scalable, loosely coupled systems powering AI-native workflows.
  • Create robust integrations with 3rd-party systems.
  • Push boundaries on reliability, performance, and automation.
  • Write clean, tested, secure code → and continuously improve it.
  • Collaborate directly with Founders & senior engineers in a high-trust environment.

 

Our Tech Arsenal:

  • We believe in always using the sharpest tools for the job. To that end, we stay tech-agnostic and keep it open to discussion which tools solve each problem in the most robust and quickest way.
  • That said, our bright team of engineers has already assembled a formidable arsenal of tools that keeps us on the offensive. Take a look at the tech stack we already use.
Remote only
2 - 4 yrs
₹25L - ₹32L / yr
Python
Amazon Web Services (AWS)

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

Talent Pro
Remote only
2 - 4 yrs
₹25L - ₹32L / yr
Python
Microservices

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

Preferred

Preferred (Location): Mumbai

Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong

Preferred (Education): B.Tech from Tier 1, Tier 2 institutes

LuvFitz

at LuvFitz

Recruitment Team
Posted by Recruitment Team
Remote only
2 - 6 yrs
₹10L - ₹12L / yr
Data engineering
Python
React.js
Web Scraping

Founding Full-Stack Engineer

Consumer AI | Delhi (Hybrid) | Building for the US Market


The Opportunity

We are a pre-seed startup redefining fashion discovery. We believe the future of e-commerce isn't about search bars and endless scrolling; it’s about context-aware curation.

We are looking for a Founding Full-Stack Engineer who sits at the intersection of Systems and Design. You will be the technical anchor of the team, turning chaotic web data into a fluid, consumer-grade experience.


The Role

You won't just be writing code; you will define the product's architecture. We need Vertical Ownership—the ability to take a feature from a raw database row all the way to a pixel-perfect interaction on the screen.

  • The Data Engine (Backend & Scraping): You will build robust data pipelines that scrape and frequently refresh data from top fashion retailers. You will then structure and enrich this data, add additional attributes, and prepare it for downstream consumption. Added bonus: prior experience with feature engineering.
  • The Experience (Frontend): You will craft the user interface. We are building a consumer brand, so "functional" isn't enough; it needs to feel alive. If you have a refined design sense and prior exposure to building eCommerce websites with complex landing and category pages, you are the right fit for this role.
  • The Intelligence (AI Integration): You should be able to understand and re-run existing research algorithms on Fill-In-The-Blank (FITB) tasks.


The Toolkit

  • Core Stack: Python for the heavy lifting; JavaScript/TypeScript (React) for the web interface.
  • Data Ops: Experience building complex scrapers, handling proxies, and managing data pipelines is non-negotiable.
  • Design Engineering: A strong grasp of building advanced eCommerce components (multi-tile grid layouts, drag-and-drop elements, etc.)
  • Low-code tools: Adept at rapidly prototyping with Replit, Lovable, etc.
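To make the scraping requirement concrete, here is a minimal, dependency-free sketch of the parse-and-structure step using the standard library (the HTML snippet and field names are invented for illustration; a production pipeline adds fetching, proxy rotation, retries, and refresh scheduling):

```python
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    """Collects text of elements whose class is 'product-name' or 'price'."""
    def __init__(self):
        super().__init__()
        self.products, self._field = [], None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "product-name":
            self._field = "name"
            self.products.append({})   # each name starts a new product record
        elif cls == "price":
            self._field = "price"

    def handle_data(self, data):
        if self._field and data.strip():
            self.products[-1][self._field] = data.strip()
            self._field = None

html = """
<div><span class="product-name">Linen Shirt</span>
     <span class="price">$49</span></div>
<div><span class="product-name">Denim Jacket</span>
     <span class="price">$120</span></div>
"""
parser = ProductParser()
parser.feed(html)
print(parser.products)
# [{'name': 'Linen Shirt', 'price': '$49'}, {'name': 'Denim Jacket', 'price': '$120'}]
```

Real scrapers typically swap the stdlib parser for BeautifulSoup or lxml, but the fetch-parse-structure shape stays the same.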


The DNA We Are Looking For

  • You are a Builder, not just a Coder. You prefer shipping a live prototype over debating a PR for three days.
  • You have Taste. You understand that building for the US consumer market requires a level of polish and minimalism that standard B2B SaaS ignores.
  • You are "T-Shaped." You have deep expertise in one area (either scraping or frontend), but you are dangerous enough across the entire stack to build solo.
  • You embrace the Chaos. You know how to build structure out of messy unstructured data.


Why Join Now?

  • 0 to 1 Ownership: No legacy code, no technical debt. Just you, the founders, and a blank editor.
  • Global Impact: You are building from India, but for the most competitive consumer market in the world (USA). The standards are higher, and so is the reward.
  • Founding Team Status: Competitive pay with a future path to equity
Hudson Data

at Hudson Data

MadanLal Gupta
Posted by MadanLal Gupta
Remote only
6 - 10 yrs
₹9L - ₹12L / yr
Python
SQL
Google Analytics
Linux/Unix
Google Cloud Platform (GCP)

About Hudson Data


At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.


Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.


Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.



About the Role


We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.



Key Responsibilities


1. Data Collection & Management

• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.

• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.

• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.

• Collaborate with engineering teams to define data governance, lineage, and security standards.


2. Data Cleaning & Preprocessing

• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL.

• Handle missing data, duplicates, and outliers while ensuring consistency and quality.

• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
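As a concrete sketch of the cleaning steps listed above (duplicates, missing values, outliers) in pandas; the column names and the 1.5 * IQR outlier rule are illustrative choices, not a prescribed pipeline:

```python
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3, 4, 5],
    "amount":   [100.0, 100.0, None, 95.0, 102.0, 9999.0],
})

df = raw.drop_duplicates(subset="order_id")   # remove repeated orders
df = df.dropna(subset=["amount"])             # drop rows with missing amounts

# Flag outliers with the common 1.5 * IQR rule.
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[df["amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

print(len(raw), "->", len(clean))  # 6 -> 3 (one duplicate, one NaN, one outlier dropped)
```

In practice each of these steps would be parameterized and scheduled (e.g., via Airflow), as the responsibilities above describe.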


3. Data Analysis & Insights

• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.

• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.

• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.

• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.


4. Predictive Modeling & Machine Learning

• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai.

• Perform feature engineering, model tuning, and cross-validation for performance optimization.

• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.

• Continuously monitor, evaluate, and retrain models to ensure business relevance.
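A small, hedged sketch of the model-building and cross-validation loop described above, using scikit-learn on synthetic data (the estimator and its parameters are placeholders, not a recommended configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a cleaned feature matrix and target.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

model = GradientBoostingClassifier(n_estimators=50, random_state=42)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")

print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Feature engineering and hyperparameter tuning (e.g., `GridSearchCV`) slot in before this step; deployment on Vertex AI, SageMaker, or Azure ML comes after.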


5. Reporting & Visualization

• Develop interactive dashboards and automated reports for performance tracking.

• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.

• Communicate insights effectively through clear data storytelling.


6. Collaboration & Communication

• Partner with business, engineering, and product teams to define analytical goals and success metrics.

• Translate complex data and model results into actionable insights for decision-makers.

• Advocate for data-driven culture and support data literacy across teams.


7. Continuous Improvement & Innovation

• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.

• Identify opportunities for process optimization, automation, and innovation.

• Contribute to internal R&D and AI product development initiatives.



Required Skills & Qualifications


Technical Skills

• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).

• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.

• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.

• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).

• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).

• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.


Soft Skills

• Strong analytical, problem-solving, and critical-thinking abilities.

• Excellent communication and presentation skills, including data storytelling.

• Curiosity and creativity in exploring and interpreting data.

• Collaborative mindset, capable of working in cross-functional and fast-paced environments.



Education & Certifications

• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.

• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.

• Relevant certifications are highly valued:

• Google Cloud Professional Data Engineer

• AWS Certified Data Analytics – Specialty

• Microsoft Certified: Azure Data Scientist Associate

• TensorFlow Developer Certificate



Why Join Hudson Data


At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.


CLOUDSUFI

at CLOUDSUFI

Ayushi Dwivedi
Posted by Ayushi Dwivedi
Remote only
3 - 5 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
skill iconPython
SQL

If interested, please share your resume at ayushi.dwivedi at cloudsufi.com


Note: This role is remote but requires quarterly visits to the Noida office (one week per quarter). If you are comfortable with that, please share your resume.


Data Engineer 

Position Type: Full-time


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.

Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have: SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)

Must Have: Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills: SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:

Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).

Familiarity with similar large-scale public dataset integration initiatives.

Experience with multilingual data integration.

Rapid Canvas

at Rapid Canvas

Nikita Sinha
Posted by Nikita Sinha
Remote only
6 - 10 yrs
Upto ₹60L / yr (Varies)
Java
Python
Go Programming (Golang)
NodeJS (Node.js)
PHP

We are looking for back-end engineering experts with the passion to take on new challenges in a high-growth startup environment. If you love finding creative solutions to coding challenges using the latest tech stack, such as Java 18+ and Spring Boot 3+, then we would like to speak with you.

Roles & Responsibilities

  • You will be part of a team that focuses on building a world-class data science platform
  • Work closely with both product owners and architects to fully understand business requirements and the design philosophy
  • Optimize web and data applications for performance and scalability
  • Collaborate with automation engineering team to deliver high-quality deliverables within a challenging time frame
  • Produce quality code, raising the bar for team performance and speed
  • Recommend systems solutions by comparing advantages and disadvantages of custom development and purchased alternatives
  • Follow emerging technologies

Key Skills Required

  • Bachelor’s degree (or equivalent) in computer science
  • At least 6 years of experience in software development using Java/Python, Spring Boot, REST APIs, and scalable microservice frameworks.
  • Strong foundation in computer science, algorithms, and web design
  • Experience in writing highly secure web applications
  • Knowledge of container/orchestration tools (Kubernetes, Docker, etc.) and UI frameworks (NodeJS, React)
  • Good development habits, including unit testing, CI, and automated testing
  • High growth mindset that challenges the status quo and focuses on unique ideas and solutions
  • Experience on working with dynamic startups / high intensity environment would be a Plus
  • Experience working with shell scripting, Github Actions, Unix and prominent cloud providers like GCP, Azure, S3 is a plus

Why Join Us

  • Drive measurable impact for Fortune 500 customers across the globe, helping them turn AI vision into operational value.
  • Be part of a category-defining AI company, pioneering a hybrid model that bridges agents and experts.
  • Own strategic accounts end-to-end and shape what modern AI success looks like.
  • Work with a cross-functional, high-performance team that values execution, clarity, and outcomes.
  • Globally competitive compensation and benefits tailored to your local market.
  • Recognized as a Top 5 Data Science and Machine Learning platform on G2 for customer satisfaction.

Rapid Canvas

at Rapid Canvas

Nikita Sinha
Posted by Nikita Sinha
Remote only
4 - 8 yrs
Upto ₹60L / yr (Varies)
Python
Large Language Models (LLM) tuning
Pipeline management
Systems design
Artificial Intelligence (AI)

We are seeking a highly motivated and skilled AI Engineer with strong fundamentals in applied machine learning and a passion for building and deploying production-grade AI solutions for enterprise clients. As a key technical expert and the face of our company, you will interface directly with customers to design, build, and deliver cutting-edge AI applications. This is a customer-facing role, requiring a balance of deep technical expertise and excellent communication skills.

Roles & Responsibilities

Design & Deliver AI Solutions

  • Interact directly with customers.
  • Understand their business requirements.
  • Translate them into robust, production-ready AI solutions.
  • Manage AI projects with the customer's vision in mind.
  • Build long-term, trusted relationships with clients.

Build & Integrate Agents

  • Architect, build, and integrate intelligent agent systems.
  • Automate IT functions and solve specific client problems.
  • Use expertise in frameworks like LangChain or LangGraph to build multi-step tasks.
  • Integrate these custom agents directly into the RapidCanvas platform.

Implement LLM & RAG Pipelines

  • Develop grounding pipelines with retrieval-augmented generation (RAG).
  • Contextualize LLM behavior with client-specific knowledge.
  • Build and integrate agents with infrastructure signals like logs and APIs.
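Stripped to its essentials, the RAG grounding described above is retrieve-then-prompt. This dependency-free toy uses word-overlap cosine similarity in place of real embeddings, and the documents are invented stand-ins for client-specific knowledge:

```python
from collections import Counter
import math

DOCS = [
    "Restart the payments service after rotating its API key.",
    "Logs older than 30 days are archived to cold storage.",
    "The staging cluster runs on GKE with autoscaling enabled.",
]

def vec(text: str) -> Counter:
    # Bag-of-words "embedding"; a real pipeline calls an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list:
    q = vec(query)
    return sorted(DOCS, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

question = "How long are logs kept before archiving?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
print(context)
```

Frameworks like LangChain or LlamaIndex wrap exactly this retrieve-then-prompt loop, adding chunking, vector stores, and agent tool-calling on top.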

Collaborate & Enable

  • Work with customer data science teams.
  • Collaborate with other internal Solutions Architects, Engineering, and Product teams.
  • Ensure seamless integration of AI solutions.
  • Serve as an expert on the RapidCanvas platform.
  • Enable and support customers in building their own applications.
  • Act as a Product Champion, providing crucial feedback to the product team to drive innovation.

Data & Model Management

  • Oversee the entire AI project lifecycle.
  • Start from data preprocessing and model development.
  • Finish with deployment, monitoring, and optimization.

Champion Best Practices

  • Write clean, maintainable Python code.
  • Champion engineering best practices.
  • Ensure high performance, accuracy, and scalability.

Key Skills Required

Experience

  • At least 5 years of hands-on experience in AI/ML engineering or backend systems.
  • Recent exposure to LLMs or intelligent agents is a must.

Technical Expertise

  • Proficiency in Python.
  • Proven track record of building scalable backend services or APIs.
  • Expertise in machine learning, deep learning, and Generative AI concepts.
  • Hands-on experience with LLM platforms (e.g., GPT, Gemini).
  • Deep understanding of and hands-on experience with agentic frameworks like LangChain, LangGraph, or CrewAI.
  • Experience with vector databases (e.g., Pinecone, Weaviate, FAISS).
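All the vector stores named above ultimately serve one query: top-k nearest neighbours over embedding vectors. A NumPy sketch of that core operation with random stand-in vectors (FAISS, Pinecone, and Weaviate add indexing structures to make it fast at scale):

```python
import numpy as np

rng = np.random.default_rng(0)
index = rng.normal(size=(1000, 64)).astype("float32")   # stored "document" embeddings
index /= np.linalg.norm(index, axis=1, keepdims=True)   # unit-normalize: dot = cosine

def search(query, k: int = 5):
    q = query / np.linalg.norm(query)
    sims = index @ q                    # cosine similarity to every stored vector
    return np.argsort(-sims)[:k]        # indices of the k most similar

# A slightly perturbed copy of row 42 should retrieve row 42 first.
query = index[42] + 0.01 * rng.normal(size=64)
print(search(query))
```

Brute-force search like this is O(n) per query; dedicated vector databases trade exactness for speed with approximate indexes (HNSW, IVF, and similar).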

Customer & Communication Skills

  • Proven ability to partner with enterprise stakeholders.
  • Excellent presentation skills.
  • Comfortable working independently.
  • Manage multiple projects simultaneously.

Preferred Skills

  • Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
  • Knowledge of MLOps practices.
  • Experience in the AI services industry or startup environments.

Why Join us

  • High-impact opportunity: Play a pivotal role in building a new business vertical within a rapidly growing AI company.
  • Strong leadership & funding: Backed by top-tier investors, our leadership team has deep experience scaling AI-driven businesses.
  • Recognized as a top 5 Data Science and Machine Learning platform by independent research firm G2 for customer satisfaction.


Grey Chain Technology

at Grey Chain Technology

Deebaj Mir
Posted by Deebaj Mir
Remote only
7 - 10 yrs
₹18L - ₹24L / yr
skill iconPython
FastAPI
Generative AI
AI Agents
skill iconAmazon Web Services (AWS)
+1 more

Company: Grey Chain AI

Location: Remote

Experience: 7+ Years

Employment Type: Full Time


About the Role

We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.


You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.

Key Responsibilities

  • Lead the design and development of Python-based AI systems, APIs, and microservices.
  • Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
  • Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
  • Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
  • Ensure reliability, scalability, and security of AI solutions in production.
  • Mentor junior engineers and provide technical leadership to the team.
  • Work closely with clients to understand business needs and translate them into robust AI solutions.
  • Drive adoption of latest GenAI trends, tools, and best practices across projects.

Must-Have Technical Skills

  • 7+ years of hands-on experience in Python development, building scalable backend systems.
  • Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
  • Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
  • Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
  • Experience designing multi-agent workflows, tool calling, and prompt pipelines.
  • Strong understanding of REST APIs, microservices, and cloud-native architectures.
  • Experience deploying AI solutions on AWS, Azure, or GCP.
  • Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
  • Proficiency with Git, CI/CD, and production deployment pipelines.
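The multi-agent / tool-calling pattern listed above reduces to a small dispatch loop. This sketch uses a rule-based stub in place of a real LLM, and the tool names are hypothetical; frameworks like LangChain or LangGraph wrap the same loop around real model calls:

```python
import json

# Hypothetical tool registry: each tool is a plain Python function the
# "agent" is allowed to call on the user's behalf.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def stub_llm(prompt):
    """Stand-in for an LLM: returns a JSON 'tool call' decision.
    A real agent would get this JSON back from the model."""
    if any(ch.isdigit() for ch in prompt):
        return json.dumps({"tool": "calculator", "arg": prompt})
    return json.dumps({"tool": "lookup", "arg": prompt.replace(" ", "_")})

def run_agent(user_input):
    # One turn of the loop: model decides, runtime dispatches, result returns.
    decision = json.loads(stub_llm(user_input))
    tool = TOOLS[decision["tool"]]
    return tool(decision["arg"])

print(run_agent("2 + 3 * 4"))          # calculator path → "14"
print(run_agent("capital of france"))  # lookup path → "Paris"
```

Production agents add retries, result feedback into the next model turn, and guardrails around which tools may run, but the decide/dispatch/observe cycle is the same.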


Leadership & Client-Facing Experience

  • Proven experience leading engineering teams or acting as a technical lead.
  • Strong experience working directly with foreign or enterprise clients.
  • Ability to gather requirements, propose solutions, and own delivery outcomes.
  • Comfortable presenting technical concepts to non-technical stakeholders.


What We Look For

  • Excellent communication, comprehension, and presentation skills.
  • High level of ownership, accountability, and reliability.
  • Self-driven professional who can operate independently in a remote setup.
  • Strong problem-solving mindset and attention to detail.
  • Passion for GenAI, agentic systems, and emerging AI trends.


Why Grey Chain AI

Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.

Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.


Read more
Flycatch infotech PVT LTD
Flycatch Recruitment
Posted by Flycatch Recruitment
Remote only
3 - 4 yrs
₹8.4L - ₹9.6L / yr
skill iconJava
skill iconPython

1. Minimum of 3 years of experience in ERPNext development, with a strong understanding of the ERPNext framework and customization.

2. Proficiency in Python, JavaScript, HTML, CSS, and Frappe framework. Experience with ERPNext’s core modules such as Accounting, Sales, Purchase, Inventory, and HR is essential.

3. Experience with MySQL or MariaDB databases.

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS opensearch
+22 more

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts.
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes.
  • Provision and manage infrastructure using Terraform (Infrastructure as Code).
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana.
  • Write and maintain Python scripts for automation, monitoring, and operational tasks.
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources.
  • Implement and follow security best practices across AWS and Kubernetes environments.
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution.
  • Collaborate closely with development and QA teams to streamline deployment and release processes.

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including EC2, RDS, OpenSearch, VPC, S3, Application Load Balancer (ALB), API Gateway, Lambda, SNS, and SQS.
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills
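As a small example of the Python automation scripting this role calls for, here is a sketch of a log-scan alert helper; the log format, field names, and threshold are made up for illustration:

```python
import re
from collections import Counter

# Sample log lines in an assumed "LEVEL ... service=<name>" format.
LOG_LINES = [
    "2024-05-01T10:00:01 INFO  request served path=/health",
    "2024-05-01T10:00:02 ERROR db timeout service=orders",
    "2024-05-01T10:00:03 ERROR db timeout service=orders",
    "2024-05-01T10:00:04 WARN  slow query service=billing",
    "2024-05-01T10:00:05 ERROR oom killed service=search",
]

ERROR_RE = re.compile(r"\bERROR\b.*service=(\S+)")

def errors_by_service(lines):
    # Count ERROR lines per service tag.
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def services_to_alert(lines, threshold=2):
    # Services whose error count meets the alert threshold.
    return sorted(s for s, n in errors_by_service(lines).items() if n >= threshold)

print(services_to_alert(LOG_LINES))  # → ['orders']
```

In practice a script like this would tail CloudWatch or OpenSearch output and push results to an alerting channel rather than print them.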

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
Read more
Discovered Labs
Remote only
4 - 10 yrs
₹25L - ₹44L / yr
skill iconPython
skill iconReact.js
skill iconNextJs (Next.js)
TypeScript
Data architecture
+1 more

Senior Engineer (Full-Stack)


Apply Here

👉 Submit your application HERE: https://airtable.com/app9tS5cclInJg589/shr3gTEXrhh8fi7lg


About Discovered Labs

At Discovered Labs we work with $10M - $50M ARR companies to help them get more leads, users and customers from Google, Bing and AI assistants such as ChatGPT, Claude and Perplexity.

We approach marketing the way engineers approach systems: data in, insights out, feedback loops everywhere. Every decision traces back to measurable outcomes. Every workflow is designed to eliminate manual bottlenecks and compound over time.


High-level overview of our approach:

  • Data-driven automation: We treat marketing programs like products. We instrument everything, automate the repetitive, and focus human effort on high-leverage problems.
  • First principles thinking: We don't copy what others do. We understand the underlying mechanics of how search and AI systems work, then build solutions from that foundation.
  • Full-stack ownership: SEO and AEO rarely work as isolated tasks. We work across the entire funnel and multiple surface areas to ensure we own the outcome and clients win.


The Team

We're a deeply technical team building the SpaceX of the AEO & SEO space. You'll work alongside engineers who have built fraud engines powering Stripe, Plaid, and Coinbase; developed self-driving car systems at Aurora; and conducted AI research at Stanford. We don't have layers of management. You'll work directly with founders who can go deep on architecture, code, and product.


This Role

We're looking for a Senior Engineer to own the development and delivery of some of our core product infrastructure. You'll work directly with the CTO to build client-facing dashboards, AI visibility tooling, and automated content and outreach systems.

This is a high-ownership, hands-on role. You'll take feature specs from idea to production, own the quality of your releases, and help us ship faster without sacrificing reliability. If you thrive on building products that matter, not just writing code, this is for you.


What You'll Do

  • Build client-facing products: Design and ship deep analytics dashboards to uncover insights in AI search performance in a data-driven manner, all the way to mechanism interpretability of these LLMs.
  • Develop AI-powered tooling: Extend our internal systems into public-facing products, including automated reporting and intelligent workflows.
  • Own the full lifecycle: Take features from spec to production, monitor reliability, and iterate based on feedback. You own what you build.


The Ideal Person for This Role

  • A builder who ships. You care about getting working software into users' hands, not endless planning or polish. You've shipped products people actually use.
  • An owner. You take responsibility for outcomes, not just tasks. When something you ship breaks, you fix it.
  • Humble and curious. You acknowledge what you don't know, ask good questions, and genuinely want to learn. You take feedback as a gift, not a threat.
  • A first-principles thinker. You understand why things work, not just how. You can go five levels deep on technical decisions.
  • Always improving. You're not satisfied with "good enough." You actively seek ways to get better at your craft and make systems better over time.


Requirements

  • 4+ years of professional software engineering experience
  • Strong full-stack skills (TypeScript, React, Next.js, Python)
  • Track record of taking briefs and shipping robust, production-ready code without heavy hand-holding
  • You don't just build features. You leave the codebase better than you found it.
  • Comfortable with data modeling, API design, and pragmatic architecture decisions
  • Excellent written communication


Preferred Qualifications

  • Experience with AI/ML or LLM model finetuning, evaluation, or large-scale production deployments
  • Prior experience at a fast-moving startup or agency


What's in It for You

  • Fully remote position
  • Work directly with the CTO on high-impact projects
  • High ownership and autonomy. No micromanagement.
  • First-hand exposure to cutting-edge AI and search technology
  • Your work will directly impact well-known (10M+ ARR) companies’ performance
  • Join a fast-growing company at the intersection of AI and marketing


Our Hiring Process

  1. Application
  2. Take-Home Project
  3. Technical Deep Dive
  4. Leadership Interview
  5. Reference Checks


Apply Here

👉 Submit your application HERE: https://airtable.com/app9tS5cclInJg589/shr3gTEXrhh8fi7lg

Read more
Shortcastle Technologies

at Shortcastle Technologies

3 recruiters
Arun Srinivaas R S
Posted by Arun Srinivaas R S
Remote only
0 - 2 yrs
₹7000 - ₹10000 / mo
skill iconPython
skill iconJavascript
skill iconReact.js

🚀 AI Marketing Automation Developer Intern


AI-First | High Ownership | Long-Term Opportunity


📍 About the Role


We are building an AI-first marketing and communications engine across multiple products and brands.

This role is for someone who wants to use AI to eliminate manual work, not do more of it.

This is not a traditional marketing internship.

It is a builder role focused on automation, experimentation, and systems thinking.


🧠 How We Work


  • AI-first, automation-first mindset
  • We focus on outcomes, not activity
  • You will work independently on clearly defined objectives
  • Minimal meetings, maximum ownership
  • Trial, iterate, break, fix, and improve
  • What you build is expected to be production-ready, not just a demo

We use modern AI tools (including Cursor and LLMs) and expect you to learn fast and apply faster.


✅ Who This Is For


This role is a strong fit if you:

  • Think in terms of systems and leverage
  • Enjoy solving open-ended problems
  • Are comfortable with ambiguity
  • Like experimenting until something works
  • Want to work in a real AI-first environment, not just talk about AI

Background matters less than mindset.

Engineering, tech-savvy marketing, or self-taught AI backgrounds all work.


❌ Who This Is NOT For


  • Manual or repetitive marketing work
  • Copy-paste or template-only roles
  • People who need detailed step-by-step instructions


🌱 Growth & Long-Term Path


This is a long-term internship, not a short project.

Interns who:

  • Consistently deliver
  • Show ownership
  • Fit into our AI-first work culture

👉 Will be converted to full-time roles.


Hiring and conversion decisions are made jointly by the founders and the automation team lead.


🕒 Commitment

  • 20–30 hours per week minimum
  • Fully remote
  • Flexible working hours (output > hours)


💡 How to Apply


Send:

A short note on why this role excites you

Any proof of:

  • AI tools you’ve used
  • Automation you’ve attempted
  • Projects you’ve built (academic, personal, or professional)

No formal resume required if your work speaks for itself.

Read more
Tonomo
Remote only
7 - 12 yrs
$18K - $21.6K / yr
Artificial Intelligence (AI)
skill iconFlutter
skill iconAndroid Development
skill iconiOS App Development
skill iconPython
+16 more

The Mission

Tonomo is revolutionizing e-commerce with an intelligent, autonomous platform powered by IoT and AI. We are in the Beta phase, rapidly iterating based on user feedback. We need an "Unblocker"—a senior engineer who owns the mobile experience but can dive into the Python backend to build the endpoints they need to move fast.

The Engineering Culture

We believe in AI-Augmented Engineering. We expect you to use tools like Cursor, Copilot, Gemini, GPT-4, and the like to handle boilerplate code, allowing you to focus on complex native bridges, system architecture, and "on-the-spot" bug resolution.

Core Responsibilities

  • Flutter Mastery: Lead the development of our cross-platform Beta app (Android, iOS, and Web) using Flutter.
  • Backend Independence: Build and modify REST APIs and microservices in Python (FastAPI) to unblock frontend features.
  • AI Coding: Use tools like Cursor, Copilot, Gemini, GPT-4, and the like.
  • Agile Troubleshooting: Fix critical UI and logical bugs "on the spot" as reported by users, applying UI/UX best practices.
  • Performance & Debugging: Proactively monitor app health; experienced with Sentry, Firebase Crashlytics, and Flutter DevTools.
  • IoT & Integration: Work with IoT telemetry protocols (MQTT) and integrate third-party services for payments (Stripe) and Firebase.
  • Native Depth: Develop custom plugins and MethodChannels to bridge Flutter with native iOS/Android functionalities.
  • Dashboard Ownership: Own dashboards end-to-end. Design and build internal dashboards for business intelligence, system health and operational metrics, and IoT and backend activity insights.
  • Frontend Development: Build modern, responsive web dashboards using React (or similar). Implement advanced data visualizations. Focus on clarity, performance, and usability for non-technical stakeholders.
  • BI & Data Integration: Integrate dashboards with backend APIs (Python / FastAPI), databases (PostgreSQL), and analytics/metrics sources (Grafana, Prometheus, or BI tools). Work with product & ops to define what should be measured.
  • Monitoring & Insights: Build visual views on top of monitoring data (Grafana or embedded views). Help translate raw metrics into actionable insights. Support ad-hoc analysis and investigation workflows.
  • Execution & Iteration: Move fast in a startup environment: iterate dashboards based on real feedback. Improve data quality, consistency, and trust over time.

Technical Requirements

  • Mobile Experience: 7+ years in mobile development with at least 5 highly distributed apps published.
  • Frontend Stack: Expert Flutter/Dart skills.
  • Backend Stack: Proficient Python developer with experience in FastAPI, SQLAlchemy, and PostgreSQL.
  • Data & Backend Awareness: Comfortable consuming REST APIs and working with structured data.
  • Ability to collaborate on schema design and API contracts.
  • BI / Analytics (Nice to Have): Experience with BI tools or platforms (Grafana, Metabase, Superset, Looker, etc.).
  • Understanding of KPIs, funnels, and business metrics.
  • Experience embedding dashboards or analytics into web apps.
  • Architecture: Mastery of design patterns for both mobile (MVVM/MVC) and backend microservices.
  • Infrastructure: Experience with Google Cloud Platform and IoT telemetry (mandatory).
  • Execution: Proactive attitude toward learning and the ability to "own" a feature from DB schema to UI implementation.
  • Experience with Atlassian Jira


Soft skills:

  • Self-Directed Ownership: Flags blockers early and suggests improvements without being asked. As an experienced professional, you don't wait for a Jira ticket to be perfect; you ask the right questions and move the needle forward.

  • Transparency: Extreme honesty about timelines. If a task is more complex than estimated, you communicate it immediately, not at the deadline.

  • Clear communicator with engineers and non-technical stakeholders.


The Deal

  • Part-time Retainer: 100 hours per month.
  • Rate: $15 – $18 USD per hour (Performance-based).
  • Impact: Direct partnership with the founding team in a fast-paced, AI-driven startup.
  • Location: We value the stability and focus of rockstars based in Tier-2 cities such as Kochi, Indore, Jaipur, or Ahmedabad.

How to Apply

If you are a self-starter who codes with AI and can bridge the gap between frontend and backend, send your resume and links to your 3 best live apps.

Read more
Product company

Product company

Agency job
via Trinity consulting by Priyanka G
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Retrieval Augmented Generation (RAG)
skill iconDocker
skill iconKubernetes
+1 more

Experience: 3+ years


Responsibilities:


  • Build, train and fine tune ML models
  • Develop features to improve model accuracy and outcomes.
  • Deploy models into production using Docker, kubernetes and cloud services.
  • Proficiency in Python, MLops, expertise in data processing and large scale data set.
  • Hands on experience in Cloud AI/ML services.
  • Exposure in RAG Architecture
Read more
Procedure

at Procedure

4 candid answers
3 recruiters
Adithya K
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
skill iconAmazon Web Services (AWS)
skill iconPython
TypeScript
skill iconPostgreSQL
+3 more

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover IoT cloud architecture from the ground up (it’s a green field project)
  • Design and implement services to support wearable devices, mobile app, and backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS micro-services (e.g., TypeScript, Python)
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
Remote, Erode
1 - 3 yrs
₹1L - ₹3.5L / yr
skill iconHTML/CSS
Java
skill iconPython
Artificial Intelligence (AI)

Salary: ₹3.5 LPA (based on performance)

Experience: 1–3 Years (ONLY FOR FEMALES)


We are looking for a Technical Trainer skilled in HTML, Java, Python, and AI to conduct technical training sessions. The trainer will create learning materials, deliver sessions, assess student performance, and support learners throughout the training. Strong communication skills and the ability to explain technical concepts clearly are essential.

Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
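The layer-caching and multi-stage-build points above can be illustrated with a minimal Dockerfile sketch; the file names (requirements.txt, the app module) are assumptions, not a prescribed layout:

```dockerfile
# Build stage: dependencies are installed in their own layer, which stays
# cached until requirements.txt changes, keeping rebuilds fast.
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages and source, so the
# final image carries no build tooling.
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "-m", "app"]
```

Copying requirements.txt before the rest of the source is what makes the dependency layer cache-stable across ordinary code changes.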


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
Data engineering
Databases
skill iconPython
SQL
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data


OVERVIEW

We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.


The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.


CORE TECHNICAL REQUIREMENTS

Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.


SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.


Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
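Idempotency, as described above, often comes down to upsert semantics keyed on a natural identifier, so that retries and overlapping backfills converge to the same state. A minimal sketch using Python's built-in sqlite3; table and column names are illustrative, and a production pipeline would target PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prices (
        symbol TEXT NOT NULL,
        ts     TEXT NOT NULL,
        close  REAL NOT NULL,
        PRIMARY KEY (symbol, ts)
    )
""")

def ingest(rows):
    # ON CONFLICT makes the write idempotent: re-running the same batch
    # (a retry, or a backfill overlapping already-loaded data) updates in
    # place instead of duplicating or erroring.
    conn.executemany(
        """INSERT INTO prices (symbol, ts, close) VALUES (?, ?, ?)
           ON CONFLICT(symbol, ts) DO UPDATE SET close = excluded.close""",
        rows,
    )
    conn.commit()

batch = [("BTC", "2024-05-01", 58231.5), ("ETH", "2024-05-01", 2970.1)]
ingest(batch)
ingest(batch)  # replay: no duplicates
ingest([("BTC", "2024-05-01", 58300.0)])  # late correction overwrites

print(conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0])  # → 2
```

The same `INSERT ... ON CONFLICT DO UPDATE` form works in PostgreSQL, which is one reason upsert-keyed writes are a common backbone for replayable ingestion.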


Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
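The validation concerns above (missing fields, range checks, freshness) can be sketched as a small pure-Python routine; the field names and the 15-minute freshness window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def validate_row(row, now=None):
    """Returns a list of human-readable problems; an empty list means the row passes."""
    now = now or datetime.now(timezone.utc)
    problems = []
    # Completeness: required fields must be present.
    for field in ("symbol", "ts", "price"):
        if row.get(field) is None:
            problems.append(f"missing field: {field}")
    # Range check: a price outside plausible bounds is data that should not exist.
    price = row.get("price")
    if price is not None and not (0 < price < 1e9):
        problems.append(f"price out of range: {price}")
    # Freshness: stale data is flagged before it propagates downstream.
    ts = row.get("ts")
    if ts is not None and now - ts > timedelta(minutes=15):
        problems.append("stale: older than freshness window")
    return problems

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
good = {"symbol": "BTC", "ts": now, "price": 58231.5}
bad = {"symbol": "BTC", "ts": now - timedelta(hours=2), "price": -3.0}
print(validate_row(good, now=now))  # → []
print(validate_row(bad, now=now))   # flags both range and freshness
```

A pipeline would typically quarantine failing rows and emit metrics on failure counts rather than just returning the problem list.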


WHAT YOU WILL BUILD

Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.


Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.


Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.


Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.

Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.


DOMAIN EXPERIENCE

Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.


Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
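The temporal-join challenge mentioned above is often handled with an "as-of" join: each target timestamp takes the most recent source observation at or before it, which also avoids look-ahead bias. A stdlib sketch with made-up data values:

```python
import bisect
from datetime import datetime

def asof_join(target_times, source):
    """For each target timestamp, take the most recent source value at or
    before it (forward-fill); None when no prior observation exists."""
    times = [t for t, _ in source]  # source must be sorted by time
    out = []
    for t in target_times:
        i = bisect.bisect_right(times, t) - 1
        out.append((t, source[i][1] if i >= 0 else None))
    return out

# Hourly prices aligned onto two query timestamps.
prices = [
    (datetime(2024, 5, 1, 9), 100.0),
    (datetime(2024, 5, 1, 10), 101.5),
    (datetime(2024, 5, 1, 11), 99.8),
]
queries = [datetime(2024, 5, 1, 10, 30), datetime(2024, 5, 1, 8, 0)]
print(asof_join(queries, prices))
```

At scale the same semantics come from tools like pandas `merge_asof` or window functions in SQL, but the "at or before" discipline is what keeps the join point-in-time correct.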


High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.


ENGINEERING STANDARDS

Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.


Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
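One lightweight way to make that reproducibility auditable (a sketch of the idea, not our actual lineage system): fingerprint each run from its canonicalized inputs plus the code version, so identical inputs and code are provably tied to identical outputs.

```python
import hashlib
import json

def run_fingerprint(rows, code_version):
    """Deterministic ID for a pipeline run: hash of canonicalized inputs
    plus the code version. Same inputs + same code => same ID, so any
    historical output can be traced back to its exact lineage."""
    payload = json.dumps(
        {"code": code_version, "rows": rows},
        sort_keys=True, separators=(",", ":"),  # canonical serialization
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

rows = [{"symbol": "BTC", "price": 42000.0}]
a = run_fingerprint(rows, "v1.2.0")
b = run_fingerprint(list(rows), "v1.2.0")  # same content, new list object
c = run_fingerprint(rows, "v1.3.0")        # same data, new code version
print(a == b, a == c)  # True False
```

Keying stored outputs by such a fingerprint also makes stale-output detection trivial: if the fingerprint changed, the output must be recomputed.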


Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.


Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.


TECHNICAL ENVIRONMENT

PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.


WHAT WE ARE LOOKING FOR

Attention to Detail: You notice when something is slightly off and investigate rather than ignore.


Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.


Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.


Long-Term Orientation: You build systems you will maintain for years.


Communication: You document clearly, explain data issues to non-engineers, and surface problems early.


EDUCATION

University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Hashone Careers
Madhavan I
Posted by Madhavan I
Remote only
5 - 10 yrs
₹20L - ₹40L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
CI/CD
Python
+1 more

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS

• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring

• Deploy and manage Kubernetes clusters and containerized microservices

• Define and implement infrastructure as code using Terraform/CloudFormation

• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana

• Support MongoDB and MySQL database administration and optimization

• Ensure high availability, performance tuning, and cost optimization

• Guide and mentor junior engineers, and enforce DevOps best practices

• Drive system security, compliance, and audit readiness in cloud environments

• Collaborate with engineering, product, and QA teams to streamline release processes


Required Qualifications:

• 5+ years of DevOps/Infrastructure experience in production-grade environments

• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.

• Proven experience with Kubernetes and Docker in production

• Proficient with Terraform, CloudFormation, or similar IaC tools

• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar

• Advanced scripting in Python, Bash, or Go

• Solid understanding of networking, firewalls, DNS, and security protocols

• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)

• Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

• AWS Certified DevOps Engineer or Solutions Architect

• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD

• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments

• Background in high-availability systems and incident response

• Prior experience in a SaaS, ML, or hospitality-tech environment


Tools and Technologies You’ll Use:

• Cloud: AWS

• Containers: Docker, Kubernetes, Helm

• CI/CD: Jenkins, GitHub Actions

• IaC: Terraform, CloudFormation

• Monitoring: Prometheus, Grafana, CloudWatch

• Databases: MongoDB, MySQL

• Scripting: Bash, Python

• Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

• Competitive salary and performance bonuses.

• Remote-friendly work culture.

• Opportunity to work on cutting-edge tech in AI and ML.

• Collaborative, high-growth startup environment.

• For more information, visit http://www.lodgiq.com
