Here’s a clear, factual summary of the working paper “Mimicking Finance,” recently posted to the National Bureau of Economic Research (NBER) Working Paper series (Working Paper No. 34849, issued February 2026):
📄 Mimicking Finance — NBER Working Paper 34849 (Feb 2026)
Authors:
- Lauren H. Cohen (Harvard Business School & NBER)
- Yiwen Lu (The Wharton School)
- Quoc H. Nguyen (DePaul University)
Summary by ChatGPT
One paragraph takeaway
The paper measures predictability by asking whether a machine, trained only on past behavior and public data, can replicate a manager’s future trading decisions—and then studies what it means when it can’t.
Core Idea:
The paper uses state-of-the-art artificial intelligence and machine learning techniques to analyze and predict the behavior of financial agents—especially mutual fund managers—based on past behavior. It focuses on distinguishing predictable behavior (that can be “mimicked” by models) from genuinely novel decision-making.
🔍 Key Findings
📊 Predictability of Fund Manager Behavior
- The AI/ML models in the study can predict about 71% of mutual fund managers’ trade directions without observing the managers themselves.
- For some managers, nearly all trades in a quarter are predictable.
- Predictability is higher for managers with longer trading histories and in less competitive fund categories.
- Managers with larger personal ownership stakes in their funds tend to behave less predictably.
📈 Implications for Performance
- Managers with less predictable trading significantly outperform peers; conversely, the most predictable managers tend to underperform.
- Within individual portfolios, positions that are harder to predict outperform those that are easier to forecast.
- Looking across all managers, stocks whose position changes are least predictable generate higher future returns than those with more predictable changes.
🧠 Conceptual Contribution
The authors propose a framework for classifying financial decision-making into:
- Predictable components, which might be replicated or “mimicked” using machine learning, and
- Novel responses, which are less susceptible to algorithmic replication.
This helps in assessing what parts of manager behavior genuinely add value (innovation) versus what parts can be forecasted by machines.
🧩 Broader Context
The paper sits at the intersection of:
- Behavioral finance, by exploring actual decision patterns of investors;
- AI/ML in finance, where predictive models are applied to large financial datasets; and
- Portfolio management research, contributing new evidence on the limits and capabilities of active management.
📌 Summary
In essence, Mimicking Finance argues that a large share of asset managers’ trading behavior is systematically predictable using advanced AI models, and that the degree of predictability has strong implications for investment performance—challenging traditional views about where value is added in active management.
If you want, I can also help you understand the methodology they use to measure predictability or dive into how this might change the way active management is evaluated in financial economics.
Here’s a clear, method-focused explanation of how the paper measures “predictability”—without assuming results beyond what the authors actually test.
How “predictability” is measured in Mimicking Finance
At a high level, the methodology asks a simple but powerful question:
How much of a professional investor’s trading can be forecast using only information that was available before the trade?
To answer this, the authors build a machine-learning “mimic” of financial decision-making and then evaluate how well it reproduces real trades.
1. The prediction target: trade direction
The core object being predicted is whether a manager increases, decreases, or exits a position in a given stock over a reporting period.
This is a classification problem, not a return-forecasting problem. The model is not told outcomes or performance—only behavior.
This distinction matters: the paper studies decision rules, not skill defined by returns.
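As a toy illustration of this labeling (not the paper’s code; stock names, share counts, and the “hold” fallback are invented), the three-way direction label can be derived from consecutive holdings snapshots:

```python
# Hypothetical illustration: derive the trade-direction label from two
# consecutive quarterly holdings of the same position.
def trade_direction(prev_shares: float, curr_shares: float) -> str:
    """Classify a position change as 'increase', 'decrease', 'exit', or 'hold'."""
    if prev_shares > 0 and curr_shares == 0:
        return "exit"
    if curr_shares > prev_shares:
        return "increase"
    if curr_shares < prev_shares:
        return "decrease"
    return "hold"  # unchanged position: no trade to predict this period

# Toy snapshots for one manager: stock -> (shares in Q1, shares in Q2)
holdings = {"AAA": (100, 150), "BBB": (50, 0), "CCC": (200, 180)}
labels = {s: trade_direction(p, c) for s, (p, c) in holdings.items()}
print(labels)  # → {'AAA': 'increase', 'BBB': 'exit', 'CCC': 'decrease'}
```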
2. Information set: what the model is allowed to see
The model is restricted to ex-ante observable data, such as:
- Past holdings and past trades by the same manager
- Stock characteristics (size, momentum, valuation proxies, liquidity)
- Fund characteristics (style, category, constraints)
- Market-wide signals available at the time
Crucially:
- No future prices
- No private information
- No manager identity signals beyond historical behavior
This keeps the exercise economically clean: could an outsider have predicted this trade at the time?
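A minimal sketch of assembling such an ex-ante feature set (feature names and values are invented for illustration, not the paper’s actual variables): every predictor is lagged by one period, so the model only sees what was observable before the trade.

```python
# Hypothetical feature construction: lag each signal by one quarter so the
# row for quarter t contains only information known at t-1.
quarters  = [1, 2, 3, 4]
momentum  = [0.02, 0.05, -0.01, 0.03]  # stock characteristic each quarter
own_trade = [1, -1, 1, 0]              # manager's own past action each quarter

rows = []
for t in range(1, len(quarters)):      # start at t=1: a lagged value must exist
    rows.append({
        "quarter": quarters[t],
        "momentum_lag1": momentum[t - 1],    # known before quarter t's trade
        "own_trade_lag1": own_trade[t - 1],
    })
print(rows[0])  # → {'quarter': 2, 'momentum_lag1': 0.02, 'own_trade_lag1': 1}
```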
3. The “mimic” model
The authors train modern machine-learning classifiers (details vary, but think along the lines of ensemble or regularized nonlinear models) to:
- Learn patterns in how managers react to information
- Capture both linear rules (“if X, then buy”) and nonlinear habits (“this manager tends to rebalance in these conditions”)
Each manager effectively gets a behavioral fingerprint learned from their own history.
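The paper’s actual classifiers are far more flexible, but a toy stand-in conveys the idea of a per-manager behavioral fingerprint: a simple conditional-frequency rule learned from one manager’s own history (the signal buckets, action labels, and “hold” fallback are illustrative assumptions).

```python
from collections import Counter, defaultdict

# Toy stand-in for the ML "mimic": for one manager, learn the most frequent
# action conditional on a discretized signal bucket.
class FrequencyMimic:
    def __init__(self):
        self.table = defaultdict(Counter)  # signal bucket -> action counts

    def fit(self, signals, actions):
        for s, a in zip(signals, actions):
            self.table[s][a] += 1
        return self

    def predict(self, signals):
        # Predict the modal past action; fall back to "hold" on unseen buckets.
        return [
            self.table[s].most_common(1)[0][0] if self.table[s] else "hold"
            for s in signals
        ]

# One manager's history: signal bucket observed -> action taken.
hist_signals = ["up", "up", "down", "down", "up"]
hist_actions = ["increase", "increase", "decrease", "decrease", "increase"]
mimic = FrequencyMimic().fit(hist_signals, hist_actions)
print(mimic.predict(["up", "down", "flat"]))  # → ['increase', 'decrease', 'hold']
```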
4. Out-of-sample testing (the key step)
Predictability is measured out of sample:
- Train the model on a manager’s past behavior
- Ask it to predict the manager’s future trades
- Score accuracy relative to naïve benchmarks (e.g., random or style-only predictions)
This avoids overfitting and ensures the model is capturing stable decision structure, not noise.
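The walk-forward logic can be sketched with the naïve benchmark alone (the trade history is made up, and the burn-in choice is an assumption; the paper’s exact protocol may differ): train only on quarters before t, predict quarter t, and score.

```python
from collections import Counter

# Hypothetical walk-forward evaluation of a naive benchmark that always
# guesses the manager's most common past action.
history = ["increase", "increase", "decrease", "increase",
           "decrease", "increase", "increase", "decrease"]

def naive_predict(past):
    return Counter(past).most_common(1)[0][0]

hits = 0
for t in range(3, len(history)):       # minimal burn-in of 3 quarters
    pred = naive_predict(history[:t])  # trained strictly on the past
    hits += (pred == history[t])
accuracy = hits / (len(history) - 3)
print(f"out-of-sample accuracy: {accuracy:.2f}")  # → out-of-sample accuracy: 0.60
```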
5. The predictability metric
For each manager (and even each position), the authors compute:
- Prediction accuracy: how often the model gets the trade direction right. This is sometimes framed as:
  - the fraction of trades correctly predicted, or
  - the probability mass assigned to the realized action.
This produces a continuous measure of “how mimickable” a manager is.
High predictability = behavior closely follows repeatable rules
Low predictability = behavior deviates from learned patterns
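Both framings of the metric take only a few lines to compute (all predictions, outcomes, and probabilities below are invented for illustration):

```python
# Illustrative predictability metrics for one manager: the directional hit
# rate, and the average probability the model put on the realized action.
predicted = ["increase", "decrease", "increase", "exit"]
realized  = ["increase", "decrease", "decrease", "exit"]
prob_on_realized = [0.8, 0.7, 0.2, 0.9]  # model's probability on each realized action

hit_rate = sum(p == r for p, r in zip(predicted, realized)) / len(realized)
avg_prob = sum(prob_on_realized) / len(prob_on_realized)
print(hit_rate)            # → 0.75
print(round(avg_prob, 2))  # → 0.65
```

Either number yields the continuous “how mimickable” score described above; higher values mean behavior that more closely follows repeatable rules.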
6. Linking predictability to outcomes
Only after predictability is measured do the authors study:
- Whether less-predictable managers outperform
- Whether hard-to-predict trades do better than easy-to-predict ones
- Whether predictability correlates with incentives, competition, or constraints
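The first comparison can be sketched as a simple sort (predictability scores and returns are made up, and the median split is an assumption; the paper’s grouping may differ):

```python
# Hypothetical outcome-linking step: split managers into low- and
# high-predictability groups, then compare average subsequent returns.
managers = [
    {"name": "A", "predictability": 0.9, "next_return": 0.01},
    {"name": "B", "predictability": 0.4, "next_return": 0.04},
    {"name": "C", "predictability": 0.8, "next_return": 0.00},
    {"name": "D", "predictability": 0.3, "next_return": 0.05},
]
median = sorted(m["predictability"] for m in managers)[len(managers) // 2]
low  = [m["next_return"] for m in managers if m["predictability"] <  median]
high = [m["next_return"] for m in managers if m["predictability"] >= median]
spread = sum(low) / len(low) - sum(high) / len(high)
print(round(spread, 3))  # → 0.04  (less-predictable group earns more here)
```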
This sequencing is important: predictability is measured independently of performance, then related back to it.
Why this methodology is powerful
- It separates decision structure from returns
- It tests whether “skill” shows up as rule-breaking, not rule-following
- It treats AI as a benchmark observer, not an oracle
What it does not claim
- That AI can replace all fund managers
- That predictability alone causes underperformance
- That low predictability always equals skill (it could also be noise)
If you want, next we can:
- Map this directly to active vs. passive debates
- Connect it to AI commoditizing “process alpha”
- Or translate it into implications for asset-management jobs and fees
