Task / Learning¶
In this exercise, we build the Kaufman Adaptive Moving Average (KAMA) Indicator from scratch using VectorBT Pro's (VBT) Indicator Factory.
Sources:
- https://school.stockcharts.com/doku.php?id=technical_indicators:kaufman_s_adaptive_moving_average
- https://www.youtube.com/watch?v=fYlB8mSirJg&t=315s
- https://vectorbt.pro/documentation/indicators/development/
- https://vectorbt.pro/tutorials/signal-development/index-locked-81f8af74-7fff-42e5-8b64-bb57c63e3f7c
Background:
KAMA is an adaptive moving average developed by Perry J. Kaufman that adapts its window length to "market noise": the effective window length used to calculate each KAMA value depends on the "efficiency" of price moves within a lookback period. Generally, KAMA tracks prices more closely while new trends establish. At the same time, it keeps its distance from the price action as the trend evolves through phases of correction. As a result, KAMA generates fewer volatility-caused exit signals than an SMA or EMA. Such characteristics of a moving average indicator are especially favourable for trend-following strategies.
The KAMA indicator ships with TA-Lib (cf. https://www.ta-lib.org/function.html) and is thus ready to be used with a one-liner of VBT's TA-Lib parser (https://vectorbt.pro/documentation/indicators/parsers/). However, this implementation has a shortcoming: TA-Lib does not allow us to adjust parameters such as the fast and slow smoothing constants. As VBT users, we do not like such restrictions. Instead, we want to run our own parameter combinations to see if we can do better than Perry, or at least produce a perfectly overfitted equity curve. Hence, we will rebuild the KAMA indicator from scratch, which will give us access to all KAMA parameters for backtesting. Let's do it.
Import and Configure VBT / sample data¶
import vectorbtpro as vbt
import numpy as np
import pandas as pd
from itertools import product
from numba import njit

vbt.settings.set_theme('dark')
start = '2013-01-01 UTC'  # crypto is in UTC
end = '2022-06-09 UTC'
timeframe = '1d'
cols = ['Open', 'High', 'Low', 'Close', 'Volume']
data = vbt.BinanceData.fetch('BTCUSDT', start=start, end=end, timeframe=timeframe, limit=100000)
ohlcv = data.get(cols)
For comparison: KAMA from TA_Lib¶
TA_KAMA = vbt.talib('KAMA')
Let's see which parameters the TA-Lib "standard" KAMA takes. We can easily check this using format_func:
print(vbt.format_func(TA_KAMA.run))
KAMA.run(
close,
timeperiod=Default(value=30),
timeframe=Default(value=None),
short_name='kama',
hide_params=None,
hide_default=True,
**kwargs
):
Run `KAMA` indicator.
* Inputs: `close`
* Parameters: `timeperiod`, `timeframe`
* Outputs: `real`
Pass a list of parameter names as `hide_params` to hide their column levels.
Set `hide_default` to False to show the column levels of the parameters with a default value.
Other keyword arguments are passed to `vectorbtpro.indicators.factory.run_pipeline`.
We can see that the only parameters we may adjust are timeperiod (the lookback length for calculating the Efficiency Ratio) and the timeframe. However, there are more parameters that we might be interested in (see below).
Building our own KAMA indicator¶
KAMA Calculation in general¶
Basic information from school.stockcharts.com:
KAMA is calculated as follows:
- KAMA = Prior KAMA + SC x (Price - Prior KAMA)
Before calculating KAMA, we need to calculate the Efficiency Ratio (ER) and the Smoothing Constant (SC).
The settings recommended by Perry Kaufman are KAMA(10,2,30):
- 10 is the number of periods for the Efficiency Ratio (ER).
- 2 is the number of periods for the fastest EMA constant.
- 30 is the number of periods for the slowest EMA constant.
As noted above, we may not adjust the fastest and the slowest EMA constant with TA-Lib's KAMA. However, even Perry Kaufman - while warning against overoptimisation - suggests at least adjusting the fastest EMA constant (to 3) where appropriate. The reason is that the value of 2 results in an effective EMA length of 4 (see below), which is blazingly reactive and may end up being "over-adaptive". To better understand this, let's explore the calculation steps in more detail:
Step 1: Efficiency Ratio (ER)
The ER is basically the price change adjusted for the daily volatility. Formula:
ER = Change/Volatility
Change = ABS(Close - Close (10 periods ago))
Volatility = Sum10(ABS(Close - Prior Close))
ABS stands for Absolute Value. Volatility is the sum of the absolute value of the last ten price changes (Close - Prior Close).
- In statistical terms, the Efficiency Ratio tells us the fractal efficiency of price changes.
- ER fluctuates between 1 and 0, but these extremes are the exception, not the norm. ER would be 1 if prices moved up 10 consecutive periods or down 10 consecutive periods.
- ER would be zero if price is unchanged over the 10 periods.
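The ER steps above can be sketched in plain NumPy. This is a minimal illustration on toy data; `efficiency_ratio` and the sample series are my own, not part of the original notebook:

```python
import numpy as np

def efficiency_ratio(close, lookback=10):
    """ER: net change over the lookback divided by the sum of absolute bar-to-bar changes."""
    close = np.asarray(close, dtype=float)
    change = np.abs(close[lookback:] - close[:-lookback])       # ABS(Close - Close lookback periods ago)
    moves = np.abs(np.diff(close))                              # ABS(Close - Prior Close)
    volatility = np.convolve(moves, np.ones(lookback), mode='valid')  # rolling sum over the lookback window
    return change / volatility

# a perfectly trending series is maximally "efficient"
trend = np.arange(20.0)
print(efficiency_ratio(trend)[:3])   # -> [1. 1. 1.]

# up-down round trips leave no net change over any 10-bar window
flat = np.tile([0.0, 1.0], 10)
print(efficiency_ratio(flat)[:3])    # -> [0. 0. 0.]
```

The two toy series hit exactly the extremes described in the bullets above: ER = 1 for ten consecutive moves in one direction, ER = 0 when price is unchanged over the ten periods.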
Step 2: Smoothing Constant (SC)
The smoothing constant uses the ER and two smoothing parameters based on an exponential moving average. Formula:
SC = [ER x (fastest SC - slowest SC) + slowest SC]^2
SC = [ER x (2/(2+1) - 2/(30+1)) + 2/(30+1)]^2
As you may have noticed, the Smoothing Constant is using the smoothing constants for an exponential moving average in its formula:
- The fastest SC, 2/(2+1), is the smoothing constant of the shorter (2-period) EMA.
- The slowest SC, 2/(30+1), is the smoothing constant of the slowest (30-period) EMA. Note that the "^2" at the end squares the whole expression. This means that KAMA - in its standard application - ranges between an effective EMA length of 4 (2x2) and 900 (30x30).
As a side note: the term "smoothing constant" is somewhat of a misnomer, as the SC is anything but constant. As a programmer, you would rather call it a smoothing variable.
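To make the squaring concrete, here is a small sketch of the SC formula with the standard (2, 30) settings; the variable names are mine:

```python
import numpy as np

# EMA-style constants of the fastest (2-period) and slowest (30-period) EMA
fast_sc = 2 / (2 + 1)    # ~0.6667
slow_sc = 2 / (30 + 1)   # ~0.0645

er = np.array([0.0, 0.5, 1.0])                   # sample Efficiency Ratios
sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2   # SC = [ER x (fast - slow) + slow]^2
print(sc.round(4))                               # -> [0.0042 0.1337 0.4444]
```

At ER = 0 the SC collapses to the slowest constant squared, at ER = 1 to the fastest constant squared; everything in between interpolates before squaring.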
Example implementation in Pinescript¶
Let's also look at a Pine Script (TradingView) implementation example. Below is HPotter's code (cf. https://www.tradingview.com/script/RsdQHpdq-Kaufman-Moving-Average-Adaptive-KAMA/):
study(title="Kaufman Moving Average Adaptive (KAMA)", shorttitle="Kaufman Moving Average Adaptive (KAMA)", overlay = true)
Length = input(21, minval=1)
xPrice = close
xvnoise = abs(xPrice - xPrice[1])
nAMA = 0.0
nfastend = 0.666
nslowend = 0.0645
nsignal = abs(xPrice - xPrice[Length])
nnoise = sum(xvnoise, Length)
nefratio = iff(nnoise != 0, nsignal / nnoise, 0)
nsmooth = pow(nefratio * (nfastend - nslowend) + nslowend, 2)
nAMA := nz(nAMA[1]) + nsmooth * (xPrice - nz(nAMA[1]))
plot(nAMA, color=color.blue, title="KAMA")
Implementation in VBT¶
Having collected all that material, we feel confident to build this indicator under VBT.
Let's first convert our close data to a numpy_array (this step will be done by VBT automatically later on):
close_np = ohlcv['Close'].to_numpy()
1. Efficiency Ratio (ER)¶
We then start with the ER. The ER requires us to perform the following calculations (see above):
- Change = ABS(Close - Close (10 periods ago))
- Volatility = Sum10(ABS(Close - Prior Close))
- ER = Change/Volatility
The standard lookback period for calculating the ER is 10, so let's assign it.
er_lookback = 10
To calculate the ER, we need to perform two shifting operations:
- The close of 10 bars ago (see above: Change = ABS(Close - Close (10 periods ago)))
- The close of 1 bar ago (see above: Volatility = Sum10(ABS(Close - Prior Close)))
Let's build our first blocks:
Code Block:
close_er_lb = vbt.nb.fshift_1d_nb(close_np, n=er_lookback)  # er_lb stands for efficiency_ratio_lookback
close_prior = vbt.nb.fshift_1d_nb(close_np, n=1)  # note that er_lookback is not relevant to this operation, but to the window of the rolling sum (see above)
Let's see if we handled things correctly:
vDF1 = pd.DataFrame(close_np)  # create a DataFrame from our numpy array
vDF1['close_prior'] = close_prior  # add our shifted data as a column (shift: 1 period ago)
vDF1['close_er_lb'] = close_er_lb  # add our shifted data as another column (shift: 10 periods ago = ER lookback parameter)
vDF1[0:11]  # display the DataFrame's rows 0-10
| | 0 | close_prior | close_er_lb |
|---|---|---|---|
| 0 | 4285.08 | NaN | NaN |
| 1 | 4108.37 | 4285.08 | NaN |
| 2 | 4139.98 | 4108.37 | NaN |
| 3 | 4086.29 | 4139.98 | NaN |
| 4 | 4016.00 | 4086.29 | NaN |
| 5 | 4040.00 | 4016.00 | NaN |
| 6 | 4114.01 | 4040.00 | NaN |
| 7 | 4316.01 | 4114.01 | NaN |
| 8 | 4280.68 | 4316.01 | NaN |
| 9 | 4337.44 | 4280.68 | NaN |
| 10 | 4310.01 | 4337.44 | 4285.08 |
We see that our shifts work out properly. Now we can easily calculate "Change" and "Volatility".
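As an aside for readers without VBT at hand, the forward shift can be mimicked in plain NumPy. This is an illustrative stand-in for `vbt.nb.fshift_1d_nb` (a Numba-compiled helper), not its actual implementation; `fshift` is my own name:

```python
import numpy as np

def fshift(arr, n=1):
    """Shift values forward by n positions, padding the first n slots with NaN."""
    out = np.full(arr.shape, np.nan)
    out[n:] = arr[:-n]
    return out

close = np.array([10.0, 11.0, 12.0, 13.0])
print(fshift(close, 1))   # -> [nan 10. 11. 12.]
```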
Code Block:
change = np.abs(close_np - close_er_lb)  # difference between the current close and the close 10 periods (ER lookback) ago - absolute values only
vola_in = np.abs(close_np - close_prior)  # difference between the current close and the prior close - absolute values only
volatility = vbt.nb.rolling_sum_1d_nb(arr=vola_in, window=er_lookback)  # rolling sum of the 1-day differences over a period of 10 (ER lookback)
ER = change / volatility
Let's validate.
vDF2 = pd.DataFrame(close_np)
vDF2['close_prior'] = close_prior
vDF2['change_1_bar'] = vola_in
vDF2['close_er_lb'] = close_er_lb
vDF2['change_10_bars'] = change
vDF2['Volatility'] = volatility
vDF2['Efficiency Ratio (ER)'] = ER
vDF2[10:21]
| | 0 | close_prior | change_1_bar | close_er_lb | change_10_bars | Volatility | Efficiency Ratio (ER) |
|---|---|---|---|---|---|---|---|
| 10 | 4310.01 | 4337.44 | 27.43 | 4285.08 | 24.93 | 751.83 | 0.033159 |
| 11 | 4386.69 | 4310.01 | 76.68 | 4108.37 | 278.32 | 651.80 | 0.427002 |
| 12 | 4587.48 | 4386.69 | 200.79 | 4139.98 | 447.50 | 820.98 | 0.545080 |
| 13 | 4555.14 | 4587.48 | 32.34 | 4086.29 | 468.85 | 799.63 | 0.586334 |
| 14 | 4724.89 | 4555.14 | 169.75 | 4016.00 | 708.89 | 899.09 | 0.788453 |
| 15 | 4834.91 | 4724.89 | 110.02 | 4040.00 | 794.91 | 985.11 | 0.806925 |
| 16 | 4472.14 | 4834.91 | 362.77 | 4114.01 | 358.13 | 1273.87 | 0.281135 |
| 17 | 4509.08 | 4472.14 | 36.94 | 4316.01 | 193.07 | 1108.81 | 0.174124 |
| 18 | 4100.11 | 4509.08 | 408.97 | 4280.68 | 180.57 | 1482.45 | 0.121805 |
| 19 | 4366.47 | 4100.11 | 266.36 | 4337.44 | 29.03 | 1692.05 | 0.017157 |
| 20 | 4619.77 | 4366.47 | 253.30 | 4310.01 | 309.76 | 1917.92 | 0.161508 |
Looks good. But let's validate further: if our code is conceptually correct, ER should have no value below 0 and none above 1 (see the indicator description above).
print(np.nanmin(ER))  # nanmin and nanmax ignore NaN values
print(np.nanmax(ER))
0.0
0.9995041128046777
Rock solid. Let's sum up the code so far and wrap it into our first Numba-compiled function:
Function to calculate Efficiency Ratio (ER) - Final Code Block¶
er_lookback = 10

@njit
def get_ER_nb(close, er_lookback):
    close_np = close
    close_er_lb = vbt.nb.fshift_1d_nb(close_np, n=er_lookback)
    close_prior = vbt.nb.fshift_1d_nb(close_np, n=1)
    change = np.abs(close_np - close_er_lb)
    vola_in = np.abs(close_np - close_prior)
    volatility = vbt.nb.rolling_sum_1d_nb(arr=vola_in, window=er_lookback)
    ER = change / volatility
    return ER
2. Smoothing Constant (SC)¶
Let's move on to the SC. To recap what we need to code here:
The formula for the SC is as follows:
- SC = [ER x (fastest SC - slowest SC) + slowest SC]^2
With standard settings (2, 30):
- SC = [ER x (2/(2+1) - 2/(30+1)) + 2/(30+1)]^2
To make our indicator parametrizable, we will not hardcode these values (2,30), but instead assign them as standard parameters:
fast_sc_period = 2
slow_sc_period = 30
Let's check our values for fastest and slowest:
fast_sc = 2 / (fast_sc_period + 1)
print(fast_sc)
0.6666666666666666
The result matches HPotter's code ("nfastend"). However, the Excel sheet embedded at school.stockcharts.com gives a value of 0.0645 for "fastest". Is something wrong with our calculation?
- Let's calculate slow_sc to check this.
slow_sc = 2 / (slow_sc_period + 1)
print(slow_sc)
0.06451612903225806
Ok. The result corresponds to HPotter's input value ("nslowend") and is (except for rounding) identical to school.stockcharts' "slowest". Confusing: who's right here?
- Downloading the embedded Excel sheet from school.stockcharts.com reveals that stockcharts.com apparently swapped fast and slow in the sheet.
- I was able to verify this further by applying some other KAMA scripts in TradingView. Result: we're good.
Function to calculate the Smoothing Constant (SC) - Final Code Block¶
er_lookback = 10
fast_sc_period = 2
slow_sc_period = 30

@njit
def get_SC_nb(close, er_lookback, fast_sc_period, slow_sc_period):
    close_np = close
    close_er_lb = vbt.nb.fshift_1d_nb(close_np, n=er_lookback)
    close_prior = vbt.nb.fshift_1d_nb(close_np, n=1)
    change = np.abs(close_np - close_er_lb)
    vola_in = np.abs(close_np - close_prior)
    volatility = vbt.nb.rolling_sum_1d_nb(arr=vola_in, window=er_lookback)
    ER = change / volatility
    # New code starts here
    fast_sc = 2 / (fast_sc_period + 1)
    slow_sc = 2 / (slow_sc_period + 1)
    SC = np.square(ER * (fast_sc - slow_sc) + slow_sc)  # SC = [ER x (fastest SC - slowest SC) + slowest SC]^2
    return SC
And again: let's validate.
test_SC = get_SC_nb(close=close_np, er_lookback=er_lookback, fast_sc_period=fast_sc_period, slow_sc_period=slow_sc_period)
vDF3 = vDF2
vDF3['SC'] = test_SC
print(np.nanargmin(test_SC))  # get the row of the lowest SC - nanargmin ignores the NaN values for us
print(np.nanargmax(test_SC))  # and the row of the max SC
384
1443
vDF3[384:385]  # min SC value
| | 0 | close_prior | change_1_bar | close_er_lb | change_10_bars | Volatility | Efficiency Ratio (ER) | SC |
|---|---|---|---|---|---|---|---|---|
| 384 | 6700.0 | 7359.06 | 659.06 | 6700.0 | 0.0 | 1578.64 | 0.0 | 0.004162 |
vDF3[1443:1444]  # max SC value
| | 0 | close_prior | change_1_bar | close_er_lb | change_10_bars | Volatility | Efficiency Ratio (ER) | SC |
|---|---|---|---|---|---|---|---|---|
| 1443 | 42206.37 | 40016.48 | 2189.89 | 29790.35 | 12416.02 | 12422.18 | 0.999504 | 0.444046 |
Well - looks good. Apparently, the SC (unlike the ER) does not span the full range from 0 to 1: with standard settings it moves between slow_sc^2 (~0.0042) and fast_sc^2 (~0.4444).
- This seems to be confirmed by the excel spreadsheet obtained from school.stockcharts.com.
- Yes, we have lost some faith in this source - but we can cross-validate our indicator later, vs. the TA-Lib implementation.
3. KAMA Calculation¶
So let's code the last bit and puzzle the pieces together.
Current KAMA = Prior KAMA + SC x (Price - Prior KAMA)
We have two challenges here:
- Generally, we need to access the prior value of the very same series we are calculating in order to compute the current value. The issue with this type of calculation is that we won't be able to do it using shifting (I tried to implement that in various ways). Instead we will have to use a for loop, which Numba will speed up for us later on.
- Specifically, we need to deal with our first KAMA value separately: we cannot perform any KAMA calculation without having a "prior" KAMA. But what is our "prior" KAMA in the case of our first calculation?
a) Input for our first KAMA calculation¶
Let's start with our initial-value issue. This is where the online documentation goes rather silent. school.stockcharts mentions:
"Since we need an initial value to start the calculation, the first KAMA is just a simple moving average. The following calculations are based on the formula below."
First Approach: SMA for initial value
I couldn't derive any such step from the provided formulas or HPotter's code - but it does not sound too weird to use an SMA for the initial value (though wouldn't an EMA be closer to the nature of KAMA?). So, let's do it:
kama = np.empty(close_np.shape, close_np.dtype)  # create an empty array for KAMA values with the same length as the close array
kama[:] = np.nan  # assign NaN to all values first
first_value_range = close_np[0:er_lookback]  # get the values of our pre-calculation range for the SMA
first_value = np.sum(first_value_range) / er_lookback  # the SMA is the sum of all closes in this range, divided by the size of the range (= lookback period)
kama[er_lookback-1] = first_value
kama[er_lookback-1]
4172.386
Second Approach: 0.0 as initial value
It seems like in HPotter's code, the initial value of the KAMA is 0.0. However, if we recap the formula (Prior KAMA + SC x (Price - Prior KAMA)) this does not seem to make sense:
zero_approach = 0.0 + SC[10] * (close_np[10] - 0.0)
zero_approach
30.76209029303158
This calculation clearly disqualifies the approach. Remember that all further values build upon this initial value - a value of 30 is rather useless.
Third approach: Use last close as initial value
A further approach might be to set the last close price as initial value:
kama[er_lookback-1] = close_np[er_lookback-1]
kama[er_lookback-1]
4337.44
This has the benefit of starting the KAMA calculation at the recent price level; however, as there is no averaging at all, a lot of weight is given to the most recent candle before the KAMA calculation starts.
Conclusion:
Approaches 1 and 3 seem acceptable; approach 2 disqualifies. I tend to favour approach 1, since using an SMA for the initial calculation is also a necessary step in EMA calculation:
"Suppose that you want to use 20 days as the number of observations for the EMA. Then, you must wait until the 20th day to obtain the SMA. On the 21st day, you can then use the SMA from the previous day as the first EMA for yesterday."
Way forward:
But hey, we do not have to make this decision. Let's leave it to the user by providing our indicator with another parameter, useSMA, that accepts True or False.
b) The KAMA Loop¶
The loop itself is rather simple. It starts once the first SC is available, i.e. with SC[er_lookback]. To verify:
SC[er_lookback]
0.007137359378059814
SC[er_lookback-1]
nan
Code Block:
for i in range(er_lookback, close_np.shape[0]):
    kama[i] = kama[i-1] + SC[i] * (close_np[i] - kama[i-1])
kama[0:11]
array([ nan, nan, nan, nan,
nan, nan, nan, nan,
nan, 4337.44 , 4337.24422223])
Looks good. Remember that our first SC is based on a very low ER value, so it makes sense that the first KAMA value remains nearly unchanged from its seed.
c) KAMA Function - Final Code Block¶
So let's put those pieces together.
@njit
def KAMA_nb(close, er_lookback=10, fast_sc_period=2, slow_sc_period=30, useSMA=True):
    close_np = close
    close_er_lb = vbt.nb.fshift_1d_nb(close_np, n=er_lookback)
    close_prior = vbt.nb.fshift_1d_nb(close_np, n=1)
    change = np.abs(close_np - close_er_lb)
    vola_in = np.abs(close_np - close_prior)
    volatility = vbt.nb.rolling_sum_1d_nb(arr=vola_in, window=er_lookback)
    ER = change / volatility
    fast_sc = 2 / (fast_sc_period + 1)
    slow_sc = 2 / (slow_sc_period + 1)
    SC = np.square(ER * (fast_sc - slow_sc) + slow_sc)
    # New code starts here
    KAMA = np.empty(close_np.shape, close_np.dtype)
    KAMA[:] = np.nan
    if useSMA:
        first_value_range = close_np[0:er_lookback]
        KAMA[er_lookback-1] = np.sum(first_value_range) / er_lookback
    else:
        KAMA[er_lookback-1] = close_np[er_lookback-1]
    for i in range(er_lookback, close_np.shape[0]):
        KAMA[i] = KAMA[i-1] + SC[i] * (close_np[i] - KAMA[i-1])
    return KAMA
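As a dependency-free sanity check of the same logic, the function above can be mirrored in plain NumPy. This is a sketch of my own: the VBT/Numba helpers are replaced by slicing and a convolution-based rolling sum, and the SC array is index-shifted accordingly; it is not the function used in the rest of this notebook:

```python
import numpy as np

def kama_numpy(close, er_lookback=10, fast_sc_period=2, slow_sc_period=30, use_sma=True):
    """Plain-NumPy mirror of the KAMA steps (no VBT, no Numba)."""
    close = np.asarray(close, dtype=float)
    n = er_lookback
    change = np.abs(close[n:] - close[:-n])                    # net move over the lookback
    moves = np.abs(np.diff(close))                             # bar-to-bar moves
    volatility = np.convolve(moves, np.ones(n), mode='valid')  # rolling sum of moves
    er = change / volatility                                   # er[j] belongs to bar j + n
    fast_sc = 2 / (fast_sc_period + 1)
    slow_sc = 2 / (slow_sc_period + 1)
    sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
    kama = np.full(close.shape, np.nan)
    kama[n - 1] = close[:n].mean() if use_sma else close[n - 1]  # seed: SMA or last close
    for i in range(n, close.shape[0]):
        kama[i] = kama[i - 1] + sc[i - n] * (close[i] - kama[i - 1])
    return kama

# synthetic uptrending series; every KAMA value must lie between the prior KAMA and the current close
close = np.cumsum(np.abs(np.sin(np.arange(60)))) + 100.0
k = kama_numpy(close, use_sma=False)
```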
4. TA Lib Cross-Validation¶
Let's run both of our options and cross-validate our results against the standard KAMA implementation of TA-Lib.
KAMA_SMA = KAMA_nb(close=close_np, er_lookback=10, fast_sc_period=2, slow_sc_period=30, useSMA=True)
KAMA_CLOSE = KAMA_nb(close=close_np, er_lookback=10, fast_sc_period=2, slow_sc_period=30, useSMA=False)
TALIB_KAMA = vbt.talib('KAMA').run(close=close_np, timeperiod=10).real  # the standard timeperiod is 30, so we adjust it to 10 to get comparable results
vDF4 = vDF2
vDF4['K_SMA'] = KAMA_SMA
vDF4['K_CLOSE'] = KAMA_CLOSE
vDF4['K_TALIB'] = TALIB_KAMA
vDF4[9:21]
| | 0 | close_prior | change_1_bar | close_er_lb | change_10_bars | Volatility | Efficiency Ratio (ER) | SC | K_SMA | K_CLOSE | K_TALIB |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 9 | 4337.44 | 4280.68 | 56.76 | NaN | NaN | NaN | NaN | NaN | 4172.386000 | 4337.440000 | NaN |
| 10 | 4310.01 | 4337.44 | 27.43 | 4285.08 | 24.93 | 751.83 | 0.033159 | 0.007137 | 4173.368272 | 4337.244222 | 4337.244222 |
| 11 | 4386.69 | 4310.01 | 76.68 | 4108.37 | 278.32 | 651.80 | 0.427002 | 0.103450 | 4195.436303 | 4342.359364 | 4342.359364 |
| 12 | 4587.48 | 4386.69 | 200.79 | 4139.98 | 447.50 | 820.98 | 0.545080 | 0.154242 | 4255.905893 | 4380.167253 | 4380.167253 |
| 13 | 4555.14 | 4587.48 | 32.34 | 4086.29 | 468.85 | 799.63 | 0.586334 | 0.174371 | 4308.083576 | 4410.677386 | 4410.677386 |
| 14 | 4724.89 | 4555.14 | 169.75 | 4016.00 | 708.89 | 899.09 | 0.788453 | 0.290827 | 4429.301960 | 4502.058764 | 4502.058764 |
| 15 | 4834.91 | 4724.89 | 110.02 | 4040.00 | 794.91 | 985.11 | 0.806925 | 0.302947 | 4552.179836 | 4602.895160 | 4602.895160 |
| 16 | 4472.14 | 4834.91 | 362.77 | 4114.01 | 358.13 | 1273.87 | 0.281135 | 0.054663 | 4547.804589 | 4595.747642 | 4595.747642 |
| 17 | 4509.08 | 4472.14 | 36.94 | 4316.01 | 193.07 | 1108.81 | 0.174124 | 0.028684 | 4546.693797 | 4593.261631 | 4593.261631 |
| 18 | 4100.11 | 4509.08 | 408.97 | 4280.68 | 180.57 | 1482.45 | 0.121805 | 0.019006 | 4538.206161 | 4583.888941 | 4583.888941 |
| 19 | 4366.47 | 4100.11 | 266.36 | 4337.44 | 29.03 | 1692.05 | 0.017157 | 0.005602 | 4537.244081 | 4582.670943 | 4582.670943 |
| 20 | 4619.77 | 4366.47 | 253.30 | 4310.01 | 309.76 | 1917.92 | 0.161508 | 0.026169 | 4539.403704 | 4583.641789 | 4583.641789 |
Rock solid. We can see that
- our results perfectly match TA-Lib when the last close is used as the initial value (see rows 9 and 10 in K_CLOSE and K_TALIB)
- using an SMA as the initial value has a serious impact on the KAMA calculation going forward - and we wouldn't be able to use this alternative if we stuck to the standard TA-Lib integration.
Maybe you tell me, what Perry would prefer? If you guys leave me hanging, I might need to get myself a copy of his 1995 book.
5. Feeding the Indicator Factory¶
We are almost done. The logic is fully complete. All that is left is to feed VBT's Indicator Factory to perpetuate our work in a VBT class.
KAMA = vbt.IF(
    class_name='KAMA',
    short_name='kma',
    input_names=['close'],
    param_names=['er_lookback', 'fast_sc_period', 'slow_sc_period', 'useSMA'],
    output_names=['KAMA']
).with_apply_func(
    KAMA_nb,
    takes_1d=True,
    er_lookback=10,
    fast_sc_period=2,
    slow_sc_period=30,
    useSMA=False
)
KAMA_results = KAMA.run(close=ohlcv['Close']).KAMA
KAMA_results[9:21]
Open time
2017-08-26 00:00:00+00:00    4337.440000
2017-08-27 00:00:00+00:00    4337.244222
2017-08-28 00:00:00+00:00    4342.359364
2017-08-29 00:00:00+00:00    4380.167253
2017-08-30 00:00:00+00:00    4410.677386
2017-08-31 00:00:00+00:00    4502.058764
2017-09-01 00:00:00+00:00    4602.895160
2017-09-02 00:00:00+00:00    4595.747642
2017-09-03 00:00:00+00:00    4593.261631
2017-09-04 00:00:00+00:00    4583.888941
2017-09-05 00:00:00+00:00    4582.670943
2017-09-06 00:00:00+00:00    4583.641789
Freq: D, Name: Close, dtype: float64
Wow - that was the easiest part, right? Nice and smooth
Speed Test¶
Last but not least, we want to evaluate how fast our indicator is. For this purpose, we'll create a simple crossover strategy and backtest 200 lookback periods against the TA-Lib implementation (that's the only parameter comparable to the TA-Lib implementation, remember?).
Basic Function to test parameters with our KAMA indicator:
def test_our_KAMA(close=ohlcv['Close'], window=2):
    kama = KAMA.run(close=close, er_lookback=window).KAMA
    entries = close.vbt.crossed_above(kama)
    exits = close.vbt.crossed_below(kama)
    pf = vbt.Portfolio.from_signals(
        close=close,
        entries=entries,
        exits=exits,
        size=100,
        size_type='value',
        init_cash='auto')
    return pf.stats([
        'total_return',
        'win_rate',
        'profit_factor',
        'max_dd',
        'total_trades'
    ])
Test Run:
test_run = test_our_KAMA()
test_run
Total Return [%]    360.743458
Win Rate [%]         31.279621
Profit Factor         1.924447
Max Drawdown [%]     23.960212
Total Trades                211
dtype: object
Basic Function to test parameters with TA Lib KAMA indicator:
def test_TALIB(close=ohlcv['Close'], window=2):
    kama = vbt.talib('KAMA').run(close=close, timeperiod=window).real
    entries = close.vbt.crossed_above(kama)
    exits = close.vbt.crossed_below(kama)
    pf = vbt.Portfolio.from_signals(
        close=close,
        entries=entries,
        exits=exits,
        size=100,
        size_type='value',
        init_cash='auto')
    return pf.stats([
        'total_return',
        'win_rate',
        'profit_factor',
        'max_dd',
        'total_trades'
    ])
test_run2 = test_TALIB()
test_run2
Total Return [%]    360.743458
Win Rate [%]         31.279621
Profit Factor         1.924447
Max Drawdown [%]     23.960212
Total Trades                211
dtype: object
Prepare backtest setup
windows = range(2, 203)
th_combs = list(product(windows))
%%timeit
comb_stats = [
    test_our_KAMA(window=window)
    for window in th_combs
]
10.3 s ± 62.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
comb_stats = [
    test_TALIB(window=window)
    for window in th_combs
]
9.81 s ± 713 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Great. We are almost as fast as hardcoded C++! And there surely is some room for optimisation.