

Capital Squared
https://capitalsquared.ai
DevOps Engineer
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
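For a sense of the routine involved, here is a minimal health-check sketch in Python. The DSN and the specific signals collected are illustrative assumptions, not a description of our actual setup:

```python
# Minimal PostgreSQL health-check sketch (illustrative only; the DSN
# and thresholds are hypothetical, not part of this posting's stack).
import psycopg2

DSN = "host=localhost dbname=prod user=monitor"  # hypothetical

def check_health(dsn: str = DSN) -> dict:
    """Collect a few basic health signals from a running Postgres."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Active connections vs. the configured limit.
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        active = cur.fetchone()[0]
        cur.execute("SHOW max_connections;")
        limit = int(cur.fetchone()[0])
        # Approximate database size, for capacity monitoring.
        cur.execute("SELECT pg_database_size(current_database());")
        size_bytes = cur.fetchone()[0]
    return {"active_connections": active,
            "connection_limit": limit,
            "db_size_bytes": size_bytes}

if __name__ == "__main__":
    print(check_health())
```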
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
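As a rough illustration of the kind of check we mean, a minimal external poller might look like the sketch below. The service URLs and the alert hook are placeholders, not our real endpoints:

```python
# Sketch of a polling health check with a simple alert hook.
# Service URLs and the alert mechanism are placeholders.
import time
import requests

SERVICES = {  # hypothetical Cloud Run endpoints
    "model-fitting": "https://model-fitting.example.run.app/healthz",
    "data-cleansing": "https://data-cleansing.example.run.app/healthz",
}

def alert(service: str, reason: str) -> None:
    """Stub: wire this to PagerDuty, Slack, or Cloud Monitoring."""
    print(f"ALERT {service}: {reason}")

def poll_once(timeout: float = 5.0) -> None:
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code != 200:
                alert(name, f"HTTP {resp.status_code}")
        except requests.RequestException as exc:
            alert(name, f"unreachable: {exc}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)
```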
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with a Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Data Engineer
Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
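To illustrate that judgment call, this sketch computes the same per-symbol daily average two ways; the table and column names are hypothetical:

```python
# Two ways to compute a daily mean per symbol. Pulling every raw row
# into pandas is fine for small slices; at scale, pushing the
# aggregate into Postgres avoids moving raw rows over the wire.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql:///market")  # hypothetical DSN

# In Python: fine when the raw slice is small.
raw = pd.read_sql("SELECT symbol, ts::date AS day, price FROM ticks", engine)
daily_py = raw.groupby(["symbol", "day"])["price"].mean()

# Pushed down: the database scans and aggregates; Python sees only
# one row per (symbol, day).
daily_sql = pd.read_sql(
    """
    SELECT symbol, ts::date AS day, avg(price) AS mean_price
    FROM ticks
    GROUP BY symbol, ts::date
    """,
    engine,
)
```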
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
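A minimal sketch of what idempotency buys, assuming a PostgreSQL target and a hypothetical ohlcv table: keying the upsert on (symbol, ts) makes reruns and overlapping backfills converge to the same state instead of duplicating rows.

```python
# Idempotent ingestion sketch: an upsert keyed on (symbol, ts) means
# re-running a window (e.g., during a backfill) is safe. The table
# name and schema are hypothetical.
import psycopg2
from psycopg2.extras import execute_values

UPSERT = """
INSERT INTO ohlcv (symbol, ts, open, high, low, close, volume)
VALUES %s
ON CONFLICT (symbol, ts) DO UPDATE SET
    open = EXCLUDED.open, high = EXCLUDED.high,
    low = EXCLUDED.low, close = EXCLUDED.close,
    volume = EXCLUDED.volume;
"""

def load(rows, dsn="dbname=market"):
    """rows: list of 7-tuples matching the column order above."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        execute_values(cur, UPSERT, rows)  # safe to re-run on overlap
```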
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
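A minimal example of the kind of pre-write checks we mean; the thresholds and column names are illustrative, and timestamps are assumed tz-aware UTC:

```python
# Validation sketch: null, range, and freshness checks on an incoming
# frame before it is written anywhere. Assumes df["ts"] is tz-aware UTC.
import pandas as pd

def validate(df: pd.DataFrame, max_staleness: pd.Timedelta) -> list[str]:
    errors = []
    if df["price"].isna().any():
        errors.append("null prices present")
    if (df["price"] <= 0).any():
        errors.append("non-positive prices present")
    staleness = pd.Timestamp.now(tz="UTC") - df["ts"].max()
    if staleness > max_staleness:
        errors.append(f"data stale by {staleness}")
    return errors  # empty list == passes; caller decides to halt/alert
```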
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
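One concrete pattern here is the point-in-time (as-of) join. The sketch below uses pandas.merge_asof with direction="backward", so each row only ever sees feature values that existed at or before its timestamp, which is the same property that prevents look-ahead bias. The frames are synthetic:

```python
# Point-in-time-correct feature join: attach to each trade timestamp
# the latest feature row at or before it, never after.
import pandas as pd

trades = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:05"]),
    "symbol": ["BTC", "BTC"],
})
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:58", "2024-01-01 10:03"]),
    "symbol": ["BTC", "BTC"],
    "signal": [0.1, 0.4],
})

joined = pd.merge_asof(
    trades.sort_values("ts"),
    features.sort_values("ts"),
    on="ts", by="symbol",
    direction="backward",  # only past or current feature rows
)
```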
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
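One way to make that concrete, sketched under the assumption that code lives in Git: derive a deterministic run key from the commit hash and the input bytes, so any historical output can be tied back to exactly what produced it.

```python
# Sketch: deterministic run key from code version + input data.
# Paths and the key scheme are illustrative assumptions.
import hashlib
import subprocess
from pathlib import Path

def run_key(input_paths: list[Path]) -> str:
    h = hashlib.sha256()
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
    h.update(commit)
    for path in sorted(input_paths):
        h.update(path.read_bytes())
    return h.hexdigest()[:16]  # same code + same inputs -> same key
```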
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Full-Stack Machine Learning Engineer
Role: Full-Time, Long-Term
Required: Python
Preferred: C++
OVERVIEW
We are seeking a versatile ML engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build sophisticated production systems and grow with a small, focused team. You will work across the entire stack—from data ingestion and feature engineering through model training, validation, and deployment.
The ideal candidate combines strong software engineering fundamentals with deep ML expertise, particularly in time series forecasting and quantitative applications. You should be comfortable operating independently, making architectural decisions, and owning systems end-to-end.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency writing clean, production-grade code—not just notebooks. Deep understanding of NumPy, Pandas, and their performance characteristics. You know when to use vectorized operations, understand memory management for large datasets, and can profile and optimize bottlenecks. Experience with async programming and multiprocessing is valuable.
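As a small illustration of the vectorization point, on synthetic data:

```python
# The loop and the NumPy expression compute the same log returns, but
# the vectorized form stays in compiled code instead of paying
# interpreter overhead per element.
import numpy as np

prices = np.random.default_rng(0).uniform(100, 200, size=100_000)

# Python loop: O(n) interpreter overhead.
returns_loop = np.empty(len(prices) - 1)
for i in range(len(prices) - 1):
    returns_loop[i] = np.log(prices[i + 1] / prices[i])

# Vectorized: one pass in C.
returns_vec = np.log(prices[1:] / prices[:-1])

assert np.allclose(returns_loop, returns_vec)
```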
Machine Learning (Required): Hands-on experience building and deploying ML systems in production. This goes beyond training models—you understand the full lifecycle: data validation, feature engineering, model selection, hyperparameter optimization, validation strategies, monitoring, and maintenance.
Specific experience we value: gradient boosting frameworks (LightGBM, XGBoost, CatBoost), time series forecasting, probabilistic prediction and uncertainty quantification, feature selection and dimensionality reduction, cross-validation strategies for non-IID data, model calibration.
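As a sketch of the validation point for temporally ordered data, assuming scikit-learn and LightGBM are installed and using synthetic data: TimeSeriesSplit keeps every validation fold strictly after its training fold, unlike shuffled k-fold, which leaks future information.

```python
# Walk-forward validation sketch for non-IID (time-ordered) data.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = X[:, 0] * 0.5 + rng.normal(size=1000)

for train_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = lgb.LGBMRegressor(n_estimators=200)
    model.fit(X[train_idx], y[train_idx])       # train only on the past
    preds = model.predict(X[val_idx])           # validate on the future
    mse = np.mean((preds - y[val_idx]) ** 2)
    print(f"fold val MSE: {mse:.4f}")
```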
You should understand overfitting deeply—not just as a concept but as something you actively defend against through proper validation, regularization, and architectural choices.
Data Pipelines (Required): Design and implement robust pipelines handling real-world messiness: missing data, late arrivals, schema changes, upstream failures. You understand idempotency, exactly-once semantics, and backfill strategies. Experience with workflow orchestration (Airflow, Prefect, Dagster) expected. Comfortable with ETL/ELT patterns, incremental vs full recomputation, data quality monitoring, database design and query optimization (PostgreSQL preferred), time series data at scale.
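To make the incremental side of that tradeoff concrete, a watermark-based extract might look like the following sketch; the table names and schema are hypothetical:

```python
# Incremental-load sketch: a watermark records the newest timestamp
# already processed, so each run only pulls rows after it. A full
# recomputation is just a watermark reset.
import psycopg2

def incremental_extract(dsn: str = "dbname=market"):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT last_ts FROM watermarks WHERE pipeline = 'ticks';")
        last_ts = cur.fetchone()[0]
        cur.execute(
            "SELECT * FROM raw_ticks WHERE ts > %s ORDER BY ts;", (last_ts,))
        rows = cur.fetchall()
        if rows:
            new_ts = rows[-1][0]  # assumes ts is the first column
            cur.execute(
                "UPDATE watermarks SET last_ts = %s WHERE pipeline = 'ticks';",
                (new_ts,))
        return rows
```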
C++ (Preferred): Experience valuable for performance-critical components. Writing efficient C++ and interfacing with Python (pybind11, Cython) is a significant advantage.
HIGHLY DESIRABLE: MULTI-AGENT ORCHESTRATION
We are building systems leveraging LLM-based automation. Experience with multi-agent frameworks highly desirable: LangChain, LangGraph, or similar agent frameworks; designing reliable AI pipelines with error handling and fallbacks; prompt engineering and output parsing; managing context and state across agent interactions. You do not need to be an expert, but genuine interest and hands-on experience will set you apart.
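The kind of reliability pattern we mean, sketched framework-agnostically: call_model below is a hypothetical stand-in for whatever client is in use (LangChain, a raw SDK, etc.), and the wrapper adds retries with backoff plus a fallback model.

```python
# Retry-with-fallback sketch for LLM calls. `call_model` is a
# hypothetical stand-in, not a real library API.
import time

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire to a real LLM client")

def robust_completion(prompt: str,
                      models: tuple = ("primary", "fallback"),
                      retries: int = 3) -> str:
    last_exc = None
    for model in models:
        for attempt in range(retries):
            try:
                return call_model(model, prompt)
            except Exception as exc:      # narrow this in real code
                last_exc = exc
                time.sleep(2 ** attempt)  # exponential backoff
    raise RuntimeError("all models/retries exhausted") from last_exc
```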
DOMAIN EXPERIENCE: FINANCIAL DATA AND CRYPTO
Preference for candidates with experience in quantitative finance, algorithmic trading, or fintech; cryptocurrency markets and their unique characteristics; financial time series data and forecasting systems; market microstructure, volatility, and regime dynamics. This helps you understand why reproducibility is non-negotiable, why validation must account for temporal structure, and why production reliability cannot be an afterthought.
ENGINEERING STANDARDS
Code Quality: Readable, maintainable code others can modify. Proper version control (meaningful commits, branches, code review). Testing where appropriate. Documentation: docstrings, READMEs, decision records.
Production Mindset: Think about failure modes before they happen. Build in observability: logging, metrics, alerting. Design for reproducibility—same inputs produce same outputs.
Systems Thinking: Consider component interactions, not just isolated behavior. Understand tradeoffs: speed vs accuracy, flexibility vs simplicity. Zoom between architecture and implementation.
WHAT WE ARE LOOKING FOR
Self-Direction: Given a problem and context, you break it down, identify the path forward, and execute. You ask questions when genuinely blocked, not when you could find the answer yourself.
Long-Term Orientation: You think in years, not months. You make decisions considering future maintainability.
Intellectual Honesty: You acknowledge uncertainty and distinguish between what you know versus guess. When something fails, you dig into why.
Communication: You explain complex concepts clearly and document your reasoning.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Physics, Engineering. Equivalent demonstrated expertise through work also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a production ML system you built, (3) Links to relevant work if available, (4) Availability and timezone.