The relationship between risk and return is the most fundamental concept in finance. In efficient markets, higher expected returns require taking on higher risk. This is why Treasury bills yield less than corporate bonds, which in turn offer lower expected returns than equities. The risk premium, the additional return earned for bearing risk beyond the risk-free rate, is the compensation investors demand for accepting uncertainty. Understanding how to measure, decompose, and manage this tradeoff is the central challenge of investing.
Risk in financial markets takes many forms. Market risk (systematic risk) affects all securities and cannot be diversified away. It is measured by beta in the CAPM framework. Idiosyncratic risk (unsystematic risk) is specific to individual companies and can be reduced through diversification. A well-diversified portfolio of 30 or more stocks eliminates most idiosyncratic risk, leaving the investor primarily exposed to market risk. Other important risk categories include liquidity risk, credit risk, and inflation risk.
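To see the diversification effect concretely, the following sketch uses a deliberately simplified model (the 30% per-stock volatility and 0.25 pairwise correlation are assumed figures, not claims about any real market) to show how the volatility of an equally weighted portfolio falls toward a systematic floor as the number of holdings grows:

```python
# Simplified illustration: volatility of an equally weighted portfolio of n stocks,
# each with 30% annual volatility and 0.25 pairwise correlation (assumed figures).
sigma = 0.30
rho = 0.25

def portfolio_vol(n):
    # var = sigma^2 / n  +  (1 - 1/n) * rho * sigma^2
    # The first term (idiosyncratic) shrinks with n; the second (systematic) does not.
    var = sigma ** 2 / n + (1 - 1 / n) * rho * sigma ** 2
    return var ** 0.5

for n in (1, 5, 10, 30, 100, 1000):
    print(f"{n:>4} stocks: {portfolio_vol(n):.1%} annual volatility")
# Volatility falls from 30% toward sqrt(rho) * sigma = 15%, the undiversifiable floor;
# by roughly 30 stocks the portfolio is already close to that floor.
```

Under these assumptions, most of the achievable risk reduction is captured well before 100 holdings, which is why the common rule of thumb points to around 30 stocks.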
Standard deviation is the most commonly used measure of total risk. It captures the dispersion of returns around their mean. However, standard deviation treats upside and downside volatility equally, which does not align with most investors' experience of risk. Downside risk measures, such as semi-deviation (which only considers returns below the mean) and maximum drawdown (the largest peak-to-trough decline), often provide a more intuitive picture of the pain an investor might endure.
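A short sketch of how these three measures can be computed from a return series. The monthly returns below are hypothetical, and the semi-deviation follows one common convention (averaging squared shortfalls over the below-mean observations only):

```python
import numpy as np

# Hypothetical monthly returns, used only to illustrate the calculations.
returns = np.array([0.02, -0.01, 0.03, -0.04, 0.01, 0.05, -0.02, 0.00, 0.02, -0.03])

# Total risk: sample standard deviation of returns around their mean.
std_dev = returns.std(ddof=1)

# Semi-deviation (one common convention): dispersion of below-mean returns only.
mean = returns.mean()
shortfalls = returns[returns < mean] - mean
semi_dev = np.sqrt((shortfalls ** 2).mean())

# Maximum drawdown: largest peak-to-trough decline of cumulative wealth.
wealth = np.cumprod(1 + returns)
running_peak = np.maximum.accumulate(wealth)
max_drawdown = ((wealth - running_peak) / running_peak).min()

print(f"std dev {std_dev:.2%} | semi-dev {semi_dev:.2%} | max drawdown {max_drawdown:.2%}")
```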
The Sharpe ratio, developed by William Sharpe, is the standard metric for risk-adjusted returns. It divides the excess return (portfolio return minus risk-free rate) by the standard deviation of returns. A Sharpe ratio of 1.0 means the portfolio earned one unit of excess return per unit of volatility. Values above 1.0 are generally considered good for long-only strategies, while systematic hedge funds often target Sharpe ratios of 2.0 or higher. The Sortino ratio is a modification that replaces standard deviation with downside deviation in the denominator, so only harmful volatility is penalized.
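As a rough sketch of both ratios computed from monthly data (the return series and the 3% annual risk-free rate are assumed, and the square-root-of-12 annualization is one standard convention):

```python
import numpy as np

# Hypothetical monthly strategy returns and an assumed 3% annual risk-free rate.
returns = np.array([0.02, -0.01, 0.03, -0.04, 0.01, 0.05, -0.02, 0.00, 0.02, -0.03])
rf_monthly = 0.03 / 12

excess = returns - rf_monthly

# Sharpe ratio: mean excess return per unit of total volatility, annualized.
sharpe = excess.mean() / excess.std(ddof=1) * np.sqrt(12)

# Sortino ratio: same numerator, but only downside (negative) excess returns
# contribute to the denominator.
downside = np.minimum(excess, 0.0)
downside_dev = np.sqrt((downside ** 2).mean())
sortino = excess.mean() / downside_dev * np.sqrt(12)

print(f"Sharpe {sharpe:.2f} | Sortino {sortino:.2f}")
```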
Value at Risk (VaR) is a widely used risk metric in institutional finance. It estimates the loss a portfolio is not expected to exceed over a given time period at a specified confidence level. For example, a one-day 95% VaR of $1 million means there is a 5% chance the portfolio will lose more than $1 million in a single day. While VaR is intuitive and widely adopted, it has significant limitations: it does not describe the magnitude of losses beyond the VaR threshold, and it can underestimate risk during periods of market stress when correlations increase.
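The calculation itself is straightforward. The sketch below estimates a one-day 95% VaR for a hypothetical $20 million portfolio in two common ways, using simulated daily returns as a stand-in for real data:

```python
import numpy as np

# Simulated daily returns stand in for a real history; all figures are illustrative.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0004, 0.01, 1000)   # ~0.04% mean, 1% daily volatility
portfolio_value = 20_000_000

# Historical VaR: the loss at the 5th percentile of observed returns.
hist_var = -np.percentile(daily_returns, 5) * portfolio_value

# Parametric (normal) VaR: mean return minus 1.645 standard deviations,
# where 1.645 is the one-tailed z-score for 95% confidence.
param_var = -(daily_returns.mean() - 1.645 * daily_returns.std(ddof=1)) * portfolio_value

print(f"Historical 95% VaR: ${hist_var:,.0f}")
print(f"Parametric 95% VaR: ${param_var:,.0f}")
# Neither figure says anything about how large losses beyond the threshold could be.
```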
The concept of risk-adjusted returns is crucial because raw returns mean little without context. A strategy that returns 20% per year with 40% annual volatility is far less attractive on a risk-adjusted basis than one returning 12% with 8% volatility. By normalizing returns for the amount of risk taken, investors can make apples-to-apples comparisons across strategies with very different risk profiles.
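To put numbers on that comparison, assume a 3% risk-free rate (an illustrative figure, not one given above): the first strategy's Sharpe ratio is (20% − 3%) / 40% ≈ 0.43, while the second's is (12% − 3%) / 8% ≈ 1.13. The lower-returning strategy delivers more than two and a half times the excess return per unit of risk.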