Positional Option Trading. Euan Sinclair

When we find a trade that looks like a good trading idea, we need to ask, “Why is this trade available to me?” Sometimes the answer is obvious. Market-makers get a first look in exchange for providing liquidity. Latency arbitrage is available to those who make the necessary investments in technology. ETF arbitrage is available to those with the capital and legal status to become authorized participants. But often a trade with positive edge is available to anyone who is interested. Remember the joke about the economists: “Why is this money sitting on the ground?” Risk premia can often be identified by looking at historical data, but behavioral finance can help to identify real inefficiencies. For example, post-earnings announcement drift can be explained in terms of investor underreaction. Together with historical data, this gives me enough confidence to believe that the edge is real. The data suggest the trade, but the psychological reason gives a theoretical justification.

      Technical Analysis

      Technical analysis is the study of price and volume to predict returns.

      Aronson (2007) categorized technical analysis as either subjective or objective. It is a useful distinction.

      Some things that are intrinsically subjective are Japanese candlesticks, Elliott waves, Gann angles, trend lines, and patterns (flags, pennants, head and shoulders, etc.). These aren't methods. In the most charitable interpretation, they are a framework for (literally) looking at the market. It is possible that using these methods can help the trader implicitly learn to predict the market. But more realistically, subjective technical analysis is almost certainly garbage. I can't prove the ideas don't work. No one can. They are unfalsifiable because they aren't clearly defined. But plenty of circumstantial evidence exists that this analysis is worthless. None of the large trading firms or banks has desks devoted to this stuff. They have operations based on stat arb, risk arb, market-making, spreading, yield curve trading, and volatility. No reputable, large firm has a Japanese candlestick group.

      As an ex-boss of mine once said, “That isn't analysis. That is guessing.”

      Any method can be applied subjectively, but only some can be applied objectively. Aronson (2007) defines objective technical analysis as “well-defined repeatable procedures that issue unambiguous signals.” These signals can then be tested against historical data and have their efficacy measured. This is essentially quantitative analysis.
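
      To make the distinction concrete, the following is a minimal sketch of what an objective rule looks like: a moving-average crossover that issues an unambiguous long-or-flat signal from a price series. The 50/200-day windows, the pandas-based implementation, and the simple backtest are illustrative assumptions made here, not rules from the text.

import numpy as np
import pandas as pd

def ma_crossover_signal(prices: pd.Series, fast: int = 50, slow: int = 200) -> pd.Series:
    """Return 1 (long) when the fast moving average is above the slow one, else 0.

    The rule is 'objective' in Aronson's sense: given the same prices and
    parameters, it always issues the same, unambiguous signal.
    """
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)

def backtest(prices: pd.Series, signal: pd.Series) -> pd.Series:
    """Daily strategy returns: hold yesterday's signal over today's return."""
    returns = prices.pct_change()
    return signal.shift(1) * returns

# Usage with a random-walk price series (a placeholder for real data):
# prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 1000))))
# strat_returns = backtest(prices, ma_crossover_signal(prices))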

      It seems likely that some of these approaches can be used to make money in stocks and futures. But each individual signal will be very weak, and to make any consistent money, various signals will need to be combined. This is the basis of statistical arbitrage, which is not within the scope of this book.

      However, we do need to be aware of a classic mistake when doing quantitative analysis of price or return data: data mining.

      This mistake isn't only made by traders. Academics also fall into the trap. The first published report of this was Ioannidis (2005). Subsequently, Harvey et al. (2016) and Hou et al. (2017) discussed the impact of data mining on the study of financial anomalies.

      There are a few points to bear in mind to avoid this trap:

       The best performer out of a sample of back-tested rules will be positively biased. Even if the underlying premise is correct, the future performance of the rule will be worse than the in-sample results (the simulation after this list illustrates the effect).

       The size of this bias decreases with larger in-sample data sets.

       The larger the number of rules (including parameters), the higher the bias.

       Test the best rule on out-of-sample data. This gives a better idea of its true performance.

       The ideal situation is when there is a large data set and few tested rules.
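
      The selection bias in the first point can be seen with a small simulation. The sketch below assumes 100 rules that are pure noise (zero true edge, 1% daily volatility) over arbitrary 250-day in-sample and out-of-sample windows; the best in-sample rule looks good purely by chance and collapses out of sample.

import numpy as np

rng = np.random.default_rng(0)

n_rules, n_in, n_out = 100, 250, 250  # arbitrary illustration parameters

# Every "rule" is pure noise: zero true edge, 1% daily volatility.
in_sample = rng.normal(0.0, 0.01, size=(n_rules, n_in))
out_sample = rng.normal(0.0, 0.01, size=(n_rules, n_out))

best = in_sample.mean(axis=1).argmax()            # pick the best in-sample rule
print("best rule, in-sample mean return:  %.5f" % in_sample[best].mean())
print("same rule, out-of-sample mean:     %.5f" % out_sample[best].mean())
# The in-sample mean is clearly positive purely from selection;
# out of sample it falls back toward its true value of zero.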

      Even after applying these precautions, it is prudent to apply a bias-correcting method.

      The simplest is the Bonferroni correction. This scales the required significance level by dividing it by the number of rules tested. So, if your test for significance at the 95% confidence level (5% rejection) shows the best rule is significant, but the rule is the best performer of 100 rules, the adjusted rejection level would be 5%/100, or 0.05%. So, in this case, a t-score of 2 for the best rule doesn't indicate a 95% confidence level. We would need a score of roughly 3.3 (one-tailed), corresponding to a 99.95% level for a single rule. This test is simple but not powerful. It will be overly conservative and skeptical of good rules. When used for developing trading strategies this is a strength.
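
      A short sketch of the arithmetic, using the normal approximation for the critical value; the one-tailed convention and the use of scipy are illustrative choices made here:

from scipy.stats import norm

alpha, n_rules = 0.05, 100          # 5% rejection level, 100 rules tested
alpha_adj = alpha / n_rules         # Bonferroni: 0.05 / 100 = 0.0005 (0.05%)

# One-tailed critical values (normal approximation to the t-score)
z_single = norm.ppf(1 - alpha)      # ~1.64: enough for a single rule
z_best = norm.ppf(1 - alpha_adj)    # ~3.29: needed for the best of 100 rules

print(f"adjusted rejection level: {alpha_adj:.4%}")
print(f"critical value, single rule: {z_single:.2f}")
print(f"critical value, best of {n_rules}: {z_best:.2f}")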

      A more advanced test is White's reality check (WRC). This is a bootstrapping method that produces the appropriate sampling distribution for testing the significance of the best strategy. The test has been patented, and commercial software packages that implement it can be bought. However, the basic algorithm can be illustrated with a simple two-strategy example (a code sketch follows the steps below).

       Using sampling with replacement, generate a series of 100 returns from the historical data.

       Apply the strategies (A and B) to this resampled data to get the pseudo-strategies A' and B'.

       Subtract the mean return of A from A' and B from B'.

       Calculate the average return of the return-adjusted strategies, A” and B”.

       The larger of the returns of A” and B” is the first data point of our sample distribution.

       Repeat the process N times to generate a complete distribution. This is the sampling distribution of the test statistic: the maximum average return of the two rules, under the null hypothesis that each has an expected return of zero.

       The p-value (the probability of seeing the best rule's performance by chance alone if neither rule has an edge) is the proportion of the sampling distribution whose values exceed the actual average return of A, the better-performing strategy (in this example, 2%).
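
      The following is a minimal sketch of these steps, not the patented test itself, which uses a more careful stationary bootstrap. It takes the historical per-period returns of each strategy as given, so "applying the strategies" reduces to resampling their return series; the two-strategy noise data in the usage comment is made up.

import numpy as np

def reality_check_pvalue(strategy_returns: np.ndarray, n_boot: int = 5000,
                         seed: int = 0) -> float:
    """Bootstrap p-value for the best of several strategies (simplified sketch).

    strategy_returns: array of shape (n_strategies, n_periods) holding each
    rule's per-period returns. Each bootstrap draw resamples periods with
    replacement, demeans each strategy by its actual mean (imposing the null
    of zero expected return), and records the best demeaned average. The
    p-value is the fraction of bootstrap maxima that beat the best actual
    average return.
    """
    rng = np.random.default_rng(seed)
    n_strat, n_obs = strategy_returns.shape
    actual_means = strategy_returns.mean(axis=1)
    best_actual = actual_means.max()

    maxima = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_obs, size=n_obs)          # sample with replacement
        boot_means = strategy_returns[:, idx].mean(axis=1) - actual_means
        maxima[b] = boot_means.max()                      # best rule under the null

    return float(np.mean(maxima >= best_actual))

# Usage with two noise strategies (a placeholder for real strategy returns):
# rets = np.random.default_rng(1).normal(0.0002, 0.01, size=(2, 100))
# print(reality_check_pvalue(rets))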
