Building a Trading System From Scratch
Reading Notes · Systematic Trading — Part 3
This is Part Three of a series on Robert Carver's Systematic Trading. Part One covered the psychological case for rules-based systems. Part Two examined Carver's toolbox — why data mining fails and why handcrafting beats Markowitz. Here we get to the engineering.
There's a moment in every technical book where the author stops explaining why and starts explaining how. In Systematic Trading, that moment arrives at Chapter 5, and the shift is almost physical. The earlier chapters — about cognitive biases, about the 37-year proof horizon for trading rules, about the dangers of optimization — are compelling, but they're also somewhat comfortable. Abstract arguments don't require you to do anything. Part Three requires you to do quite a lot.
Chapters 5 through 12 are the mechanical heart of Carver's framework. They describe a complete system: how to select instruments, how to generate and scale forecasts, how to combine them, how to target volatility, how to size positions, how to build a diversified portfolio, and how to think about trading speed and minimum capital. This is not financial advice — it's an engineering blueprint, and the distinction matters. Carver is explicit throughout that he's teaching a way of thinking about systematic trading, not telling anyone what to trade.
What makes this section worth studying carefully is that the pieces are genuinely modular. You can implement a minimal version — one instrument, one rule — and scale up incrementally. Most quantitative frameworks punish you for partial implementation. This one doesn't.
Disclaimer: Nothing in this post constitutes financial advice. This is a reading note, not a trading recommendation.
The Architecture: One System, Three Practitioners
Carver opens Part Three by reminding the reader that the framework serves three different practitioner types: the asset-allocating investor (no forecasting, no leverage, static portfolio of ETFs), the semi-automatic trader (manual forecasting, systematic risk management), and the staunch systems trader (fully algorithmic, forecasting and all).
This isn't a marketing segmentation. It's a genuine architectural choice. The same position-sizing formula, the same volatility-targeting logic, the same diversification multiplier — all three types use it. The difference is where human judgment enters. The asset allocator sets forecasts to a constant +10 (always long). The semi-automatic trader sets forecasts manually, based on their own directional view. The systems trader derives forecasts algorithmically. Everything downstream is identical.
The insight is that risk management and position sizing are separable from forecasting. Most retail traders never make this separation. They fold their market view, their position size, and their risk tolerance into a single intuitive number — "I'll buy 100 shares" — and the three concerns become inseparable, and therefore unexaminable. Carver's framework forces you to decompose the decision.
Forecasts: A Universal Scale
The forecast is Carver's unit of market opinion. Whatever your source — a trend-following rule, a carry calculation, a manual read of macro conditions — it gets translated into a number on a standardized scale. The expected average absolute value of a forecast is 10. So +10 means "average buy." +20 is the ceiling. −20 is the floor. There is no +100 for your highest-conviction trade of the year.
That ceiling is more important than it sounds. The reasoning: in a normal distribution, you'd only see forecast values above 20 about 5% of the time. There's simply not enough historical evidence that signals of that magnitude translate into proportionally larger returns. Markets also tend to reverse sharply at extremes. Capping at 20 is conservative, but Carver argues conservatism at the forecast level is exactly where you want it.
The mechanics of forecast generation are covered in Chapters 6 and 7. Carver uses EWMAC (Exponentially Weighted Moving Average Crossover) as his primary illustration — a trend-following rule that calculates the difference between a fast and slow moving average, then maps the result onto the standardized scale. He's careful to distinguish between the raw forecast (the unscaled moving average difference) and the scaled forecast (mapped to the ±20 range using a scalar derived from historical data).
This scaling step is non-trivial. Without it, you can't combine forecasts from different rules or different instruments — they'd be in incomparable units. Volatility standardization is what makes the whole system coherent.
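The EWMAC rule can be sketched in a few lines. The spans, volatility lookback, and forecast scalar below are illustrative assumptions of mine, not Carver's fitted values — in the book the scalar is estimated from historical data so the scaled forecast averages an absolute value of 10:

```python
import numpy as np
import pandas as pd

def ewmac_forecast(prices: pd.Series, fast_span: int = 16, slow_span: int = 64,
                   forecast_scalar: float = 7.5, cap: float = 20.0) -> pd.Series:
    """EWMAC trend rule sketch: fast EWMA minus slow EWMA, standardised by
    recent price volatility, scaled toward an expected absolute value of 10,
    and capped at +/-20. The forecast_scalar here is a placeholder."""
    fast = prices.ewm(span=fast_span).mean()
    slow = prices.ewm(span=slow_span).mean()
    raw = fast - slow
    # Divide by recent daily price volatility so the raw forecast is
    # comparable across instruments with different price levels.
    daily_vol = prices.diff().ewm(span=36).std()
    scaled = (raw / daily_vol) * forecast_scalar
    return scaled.clip(-cap, cap)
```

On a trending price series this produces a positive forecast that saturates at the +20 ceiling, which is exactly the capping behavior described above.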
Combining Forecasts: Why Simple Beats Complex
Chapter 8 addresses one of the most counterintuitive results in quantitative finance: a portfolio of simple, weakly correlated rules almost always outperforms a single sophisticated rule.
The mechanism is straightforward once you see it. Combining uncorrelated forecasts reduces variance without proportionally reducing expected return. If you have two rules with the same expected Sharpe ratio, and they're uncorrelated, the combined portfolio has a higher Sharpe ratio than either rule alone. Adding more rules continues to help, with diminishing returns.
Carver formalizes this with the diversification multiplier for forecasts — the same mathematical concept he uses for instruments. Given N forecast variations with a correlation matrix H and weights W summing to 1, the multiplier is:
1 ÷ √(W × H × Wᵀ)

This multiplier scales the combined forecast back up to maintain the same expected absolute value of 10. Without it, combining forecasts would reduce your effective signal strength.
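The formula is a one-liner in code. A minimal sketch, with an illustrative two-rule example (the weights and correlation are hypothetical, not from the book):

```python
import numpy as np

def diversification_multiplier(weights, corr):
    """Carver's diversification multiplier: 1 / sqrt(W x H x W^T),
    where W are the weights and H is the correlation matrix."""
    w = np.asarray(weights, dtype=float)
    h = np.asarray(corr, dtype=float)
    return 1.0 / np.sqrt(w @ h @ w)

# Two equally weighted rules with correlation 0.5:
fdm = diversification_multiplier([0.5, 0.5], [[1.0, 0.5], [0.5, 1.0]])
# W x H x W^T = 0.75, so the multiplier is 1/sqrt(0.75), about 1.15
```

Note how the multiplier grows as correlation falls: for two uncorrelated rules it rises to 1/√0.5 ≈ 1.41, restoring the signal strength that averaging would otherwise dilute.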
On weighting: Carver advocates handcrafting here, not optimization. He groups similar rules together (fast trend-followers, slow trend-followers, carry rules) and assigns weights within groups first, then between groups. It's pencil-and-paper work, grounded in common sense rather than numerical optimization that would overfit to historical noise. The same critique that dismantled Markowitz in Part Two applies here.
Volatility Targeting: The Central Idea
This is where theory converts to engineering — and where the framework's real insight becomes visible.
The problem: you want to take a consistent amount of risk across all the instruments in your portfolio. But crude oil futures and Japanese government bond futures are completely different beasts. Sizing both at "100 contracts" is meaningless. Sizing both at "2% of capital" is almost as meaningless, because 2% of capital buys very different amounts of volatility in different instruments.
Carver's solution is cash volatility targeting. You decide in advance how much annual volatility (in cash terms) you want your portfolio to generate. Say your account is £500,000 and you want 25% annualized volatility — that's a £125,000 annual cash volatility target. Divide by 16 (the square root of 256 trading days) to get the daily target: roughly £7,800.
Now, for any given instrument, you can calculate the daily cash volatility of one "block" (one futures contract, one lot, one share). That's the price volatility (standard deviation of daily percentage returns) multiplied by the block value (how much the position changes in cash terms when the price moves 1%). The ratio of your daily cash volatility target to the instrument's volatility — that's the volatility scalar.
Position size = volatility scalar × forecast ÷ 10
That last division by 10 is because the expected average absolute forecast is 10, so dividing normalizes it. A forecast of +10 gives you one "full" volatility-scaled position. A forecast of +5 gives you half. A forecast of −10 gives you a short position of the same size.
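The whole chain can be checked with a few lines of arithmetic. The account figures are the book's example; the instrument numbers (price volatility, block value) are hypothetical, chosen only to make the mechanics visible:

```python
# Volatility targeting, using the book's example account.
capital = 500_000                # GBP
vol_target_pct = 0.25            # 25% annualised volatility target
annual_cash_vol = capital * vol_target_pct       # 125,000 GBP
daily_cash_vol = annual_cash_vol / 16            # sqrt(256 trading days) ~ 7,812 GBP

# Hypothetical instrument: 1.5% daily price vol, 500 GBP block value.
price_vol = 1.5                  # std dev of daily % returns
block_value = 500                # GBP change per block for a 1% price move
instrument_vol = price_vol * block_value         # 750 GBP of daily cash vol per block

vol_scalar = daily_cash_vol / instrument_vol     # ~10.4 blocks at an average forecast
forecast = 5                     # a half-strength long view
position = vol_scalar * forecast / 10            # ~5.2 blocks
```

At a forecast of +10 the position would be the full volatility scalar (about 10.4 blocks here); at +5 it is half that, exactly as the text describes.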
What this achieves is genuinely elegant. A position in crude oil and a position in German Bunds will now both contribute the same expected amount of daily cash volatility to your portfolio, regardless of how different the underlying contracts are. Risk becomes the unit of account, not capital percentage or number of contracts.
The volatility target itself is set using the Half-Kelly criterion. Carver walks through the math: if your expected pre-cost Sharpe ratio is 0.75, the Kelly-optimal risk level is 75% annual volatility, and Half-Kelly gives you roughly 37%. This sounds high, but Carver's argument is that you should be pessimistic about your own Sharpe ratio estimate (it's almost certainly higher than reality due to optimism and look-ahead bias), and that cutting the mathematically optimal level in half provides meaningful protection against estimation error and bad luck.
Position Sizing: The Full Formula
Chapter 10 assembles the complete position-sizing sequence. In full:
- Instrument price volatility: standard deviation of daily percentage price returns
- Block value: cash value of one block when price moves 1%
- Instrument currency volatility: price volatility × block value
- Volatility scalar: daily cash volatility target ÷ instrument currency volatility
- Subsystem position: volatility scalar × forecast ÷ 10
For a portfolio with multiple instruments and multiple rules, each rule generates a subsystem position for each instrument. These are combined using the forecast weights and diversification multiplier. The result is a target position, expressed in number of blocks.
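Combining multiple rules into one subsystem position can be sketched like this. The rule values, weights, and multiplier are illustrative; the cap on the combined forecast follows the same ±20 ceiling applied to individual forecasts:

```python
import numpy as np

def combined_position(forecasts, forecast_weights, fdm, vol_scalar):
    """Combine several rules' scaled forecasts into one target position.

    forecasts        : scaled forecasts from each rule (each on the +/-20 scale)
    forecast_weights : handcrafted weights summing to 1
    fdm              : forecast diversification multiplier, 1/sqrt(W x H x W^T)
    vol_scalar       : blocks held per average-strength (+10) forecast
    """
    combined = fdm * np.dot(forecast_weights, forecasts)
    combined = np.clip(combined, -20, 20)   # the +/-20 ceiling applies post-combination too
    return vol_scalar * combined / 10

# e.g. two trend rules, one strong and one mild long view, equally weighted:
pos = combined_position([12.0, 6.0], [0.5, 0.5], fdm=1.15, vol_scalar=10.0)
```

The multiplier scales the weighted average back up, and the familiar division by 10 converts forecast strength into a multiple of the volatility-scaled position.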
Carver then introduces position inertia: if the current held position is within 10% of the target position, don't trade. This is a cost-reduction rule, not a performance rule. The daily fluctuations in volatility estimates and forecasts generate small position adjustments that, in aggregate, produce significant transaction costs without adding meaningful signal. Ignoring adjustments inside that 10% band eliminates most of the friction.
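The inertia check is a tiny function. This sketch interprets the rule as "skip the trade when the gap is within 10% of the target position", per the description above:

```python
def apply_position_inertia(current: float, target: float, buffer: float = 0.10) -> float:
    """Return the position to hold: keep the current position if it is
    within `buffer` (10%) of the target, otherwise move to the target."""
    if abs(target - current) <= buffer * abs(target):
        return current     # inside the band: don't trade
    return target          # outside the band: rebalance to target
```

A target drifting from 10.0 to 10.5 blocks triggers no trade; a move to 12.0 blocks does.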
Building the Portfolio: Instrument Diversification
Chapter 11 applies the same logic to instruments that Chapter 8 applied to forecasts. More uncorrelated instruments → higher portfolio Sharpe ratio. The instrument diversification multiplier is calculated identically:
1 ÷ √(W × H × Wᵀ)

Here H is now the correlation matrix of instrument returns (not forecast values) and W is the instrument weights. Carver caps this multiplier at 2.5 — a hard limit to prevent extreme leverage during market stress, when correlations tend to spike unexpectedly toward 1.
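The only mechanical difference from the forecast version is the cap. A minimal sketch with illustrative portfolios of uncorrelated instruments:

```python
import numpy as np

def instrument_div_multiplier(weights, corr, cap: float = 2.5) -> float:
    """Instrument diversification multiplier: 1 / sqrt(W x H x W^T),
    hard-capped at 2.5 to limit leverage when correlations spike."""
    w = np.asarray(weights, dtype=float)
    raw = 1.0 / np.sqrt(w @ np.asarray(corr, dtype=float) @ w)
    return min(raw, cap)

# Four equally weighted, uncorrelated instruments: multiplier is exactly 2.0.
# Twenty such instruments would give ~4.47 uncapped, so the 2.5 cap binds.
```

The cap matters precisely because the uncapped formula rewards adding uncorrelated instruments without limit, while real correlations converge toward 1 in a crisis.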
The instrument weight selection follows the same handcrafting logic: group instruments by asset class (equities, bonds, commodities, FX), weight within groups first based on correlation clusters, then between groups based on common-sense diversification. Carver provides look-up tables in the appendices.
One practical note that Carver emphasizes: the minimum capital requirement scales with the number of instruments. Each instrument has a minimum block size (one futures contract, one lot). If your position sizing math says you need 0.3 contracts, you can't hold 0.3 contracts — you round to zero or one. For a diversified portfolio to be implemented at all, you need enough capital that rounding errors don't dominate. Carver works through specific examples with realistic futures contract sizes; the minimums are not trivial.
Speed and Costs: The Speed Limit
Chapter 12 addresses a constraint that many systematic traders ignore until it's too late. Trading costs — commissions, spreads, slippage — are a direct drag on returns. They're also a function of how fast you trade. A system that rebalances daily pays dramatically more than one that rebalances monthly.
Carver formalizes this as a speed limit: never spend more than one-third of your expected annual Sharpe ratio on trading costs. For a staunch systems trader with a pre-cost expected SR of 0.40, the cost limit is 0.13 SR units annually. For asset allocators and semi-automatic traders, it's 0.08 SR units.
To enforce the limit, he calculates standardized costs: the round-trip cost of trading one block (commissions plus half the bid-offer spread, doubled), divided by the annualized instrument cash volatility. This gives a cost expressed in the same units as Sharpe ratio, which can then be compared against the speed limit.
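A simplified sketch of the check, assuming a per-trade cost in Sharpe-ratio units multiplied by annual turnover (Carver's full treatment is per rule and per instrument; the numbers below are hypothetical):

```python
def standardised_cost(commission: float, half_spread: float,
                      annual_cash_vol_per_block: float) -> float:
    """Round-trip cost of one block in Sharpe-ratio units:
    (commission + half the bid-offer spread), doubled for the round trip,
    divided by the instrument's annualised cash volatility."""
    return 2 * (commission + half_spread) / annual_cash_vol_per_block

def passes_speed_limit(cost_per_round_trip: float, round_trips_per_year: float,
                       expected_sr: float = 0.40) -> bool:
    """Annual cost drag must not exceed one third of the expected
    pre-cost Sharpe ratio (the 'speed limit')."""
    return cost_per_round_trip * round_trips_per_year <= expected_sr / 3

# e.g. 2 GBP commission, 6 GBP half-spread, 12,000 GBP annual cash vol per block:
cost = standardised_cost(2.0, 6.0, 12_000)   # ~0.0013 SR units per round trip
```

At 50 round trips a year this instrument spends about 0.07 SR units, comfortably inside a 0.13 limit; trade it several times faster and it fails the check, which is the book's point about fast rules.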
The practical implication: many fast trend-following rules — those with short EWMAC spans, for example — are unprofitable after costs for retail traders. You simply can't trade fast enough to justify the expense. Carver's framework selects trading rules not just by their pre-cost Sharpe ratio but by their post-cost performance, accounting for the minimum instrument block size and realistic execution costs.
What the Framework Actually Is
Reading through Chapters 5 to 12 in sequence, something becomes clear that's not obvious from a chapter-by-chapter description. The framework is a cascade of risk-normalizing transformations.
Forecasts normalize market opinion onto a universal scale. Volatility scaling normalizes instrument risk so positions across different assets carry comparable cash risk. Diversification multipliers normalize for correlation, so adding more rules or instruments doesn't just add complexity — it adds risk-adjusted return. Position inertia normalizes trading frequency to avoid cost drag. Speed limits normalize the relationship between rule speed and transaction costs.
At every stage, the goal is to make things comparable — across time, across assets, across rule types — so that decisions can be made on a consistent basis. This is what Carver means when he says the framework is modular. Each normalization is independent. You can implement them incrementally. And because they're all expressed in the same units (Sharpe ratio, cash volatility, or standardized forecasts), the pieces fit together without friction.
The volatility targeting concept is the key that unlocks this. Once you understand that risk (not capital percentage, not number of contracts) is the right unit for position sizing, the rest of the framework follows naturally. It's the kind of insight that feels obvious in hindsight and genuinely isn't obvious at all beforehand.
Whether this blueprint is implementable for any given reader depends heavily on capital size, instrument access, and broker costs. The appendices contain the exact formulas and look-up tables needed to run the calculations. What Part Three gives you, at minimum, is a rigorous way to think about what a systematic position actually means.
Part Four of this series will cover Carver's real-world application diaries — how the framework behaves during the 2008 crash, the 2014 oil collapse, and other stress events.
This post reflects my reading of Robert Carver's Systematic Trading (Harriman House, 2015). Nothing here is financial advice. I am a reader studying this material, not a professional trader, and none of this should be construed as a recommendation to buy, sell, or hold any financial instrument.