Compare commits: feature-ac ... feature/mu (111 commits)
| SHA1 | Author | Date | |
|---|---|---|---|
| 30931f60d4 | |||
| 79a033a633 | |||
| a10d5b8a64 | |||
| 30044dc4ea | |||
| 30048364bb | |||
| badd5b87dd | |||
| 8e5a56a28c | |||
| 591a9643eb | |||
| 41b3c0d839 | |||
| dff6680098 | |||
| 88dd8e84dd | |||
| 45f022c16b | |||
| 9fb6794997 | |||
| 20e38fe223 | |||
| f2ab00559a | |||
| fc10cf3907 | |||
| 2f15b0b2a7 | |||
| 0bda14409d | |||
| d7bde54533 | |||
| ad45b424f7 | |||
| ee5c1ebae1 | |||
| 702328a242 | |||
| 132391c915 | |||
| 15948ea863 | |||
| 63c2f7e748 | |||
| 031b2427b9 | |||
| 6b2a4bb066 | |||
| c3d22e439f | |||
| 8f87764fc9 | |||
| 074b6feaf8 | |||
| 919ddf2238 | |||
| dfbda326ea | |||
| eff4770692 | |||
| e54683c69f | |||
| db22d47f72 | |||
| fb75ed2c35 | |||
| 878092fe93 | |||
| 801ce61c9d | |||
| 0d49327cca | |||
| f92d8c2f5e | |||
| ce6dc58764 | |||
| b4ac17585b | |||
| 0f65ce3dc3 | |||
| d3236d27a6 | |||
| 5136279eb5 | |||
| d63a6b7897 | |||
| a9db7e087f | |||
| a96cf19fd7 | |||
| 17cb63f792 | |||
| ca1172c61c | |||
| f884c16f07 | |||
| d0920daa16 | |||
| 884f377ebc | |||
| a16b3c1571 | |||
| d15581e35c | |||
| ca3565132d | |||
| 73fef65309 | |||
| 3494177ac5 | |||
| 855e4379a3 | |||
| 0d65ae6ea1 | |||
| 67aab2a1be | |||
| 2ba42430a3 | |||
| d3cb2fa760 | |||
| ed6285dcf5 | |||
| 7eadf6c165 | |||
| 04cf2e2ba2 | |||
| 2ba492ead2 | |||
| a3b182fd45 | |||
| 6e30ee92a0 | |||
| 576b2445f8 | |||
| da34775708 | |||
| 90afa29f34 | |||
| 8991733278 | |||
| c213342353 | |||
| a3cab14bdd | |||
| f3d2b403bd | |||
| 32e77a4cb9 | |||
| 8456e6d739 | |||
| 14e6501ac8 | |||
| c03cf054e8 | |||
| c1145fec5b | |||
| 5d47a7ac58 | |||
| cd461c701e | |||
| a7df38c61b | |||
| b21bd9487a | |||
| c3b466c4c0 | |||
| 0909fa947f | |||
| 77faa919c0 | |||
| 17b9859a73 | |||
| 85d4916320 | |||
| a70e2adf45 | |||
| 527c3139f2 | |||
| 5bbb95eeac | |||
| 3158cdb68b | |||
| 5cc3a1c318 | |||
| 232f32467e | |||
| 523905ece6 | |||
| ac11c37e77 | |||
| 90b202cfdd | |||
| 8abebcc910 | |||
| 5a5e94eeb5 | |||
| 01ff23907f | |||
| 6cdc0a45c5 | |||
| d38bf0600f | |||
| 0f0b816c7a | |||
| 7344e49591 | |||
| 116700f3e4 | |||
| d06faa4c9b | |||
| 95cd7ead8a | |||
| 8e1fa604a5 | |||
| db210e6be7 |
CODEOWNERS (new file, 1 line)
@ -0,0 +1 @@
* @drew2323
README.md (new file, 128 lines)
@ -0,0 +1,128 @@
# V2TRADING - Advanced Algorithmic Trading Platform

## Overview

A custom-built algorithmic trading platform for research, backtesting and live trading. Its trading engine processes tick data, provides custom aggregation, manages trades, and supports backtesting in a highly accurate and efficient manner.

## Key Features

- **Trading Engine**: At the core of the platform is a trading engine that processes tick data in real time. The engine aggregates data and manages the execution of trades, ensuring precision and speed in trade placement and execution.

- **High-Fidelity Backtesting Environment**: Strategies can be backtested with 1:1, tick-by-tick precision. This level of accuracy, down to the millisecond, mirrors the live trading environment and is vital for developing and testing high-frequency trading strategies.

- **Custom Data Aggregation**: The platform includes a data aggregator that allows custom aggregation rules. This flexibility supports a variety of data analysis approaches, including non-time-based bars and other custom criteria (a minimal sketch of this idea follows the feature list).

- **Indicators**: Built-in [tulipy](https://tulipindicators.org/list) and [ta-lib](https://ta-lib.github.io/ta-lib-python/) indicators, plus templates for custom-built, multi-output, stateful indicators.

- **Machine Learning Integration**: The platform has recently been expanded with machine learning capabilities, including modules for both training and inference that cover the complete ML lifecycle. These models can be used within trading strategies for classification and for exploiting statistical edges.
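To illustrate the non-time-based bar idea, here is a minimal, self-contained sketch of volume-bar aggregation. This is not the platform's aggregator API; the column names (`price`, `size`) and the volume threshold are illustrative assumptions only.

```
# Minimal sketch of non-time-based (volume) bar aggregation from raw trades.
# Illustrative only: column names and the threshold are assumptions, not the
# platform's actual aggregator interface.
import pandas as pd

def volume_bars(trades: pd.DataFrame, volume_per_bar: float) -> pd.DataFrame:
    """Aggregate a trades DataFrame (columns: price, size) into OHLCV bars,
    closing a bar every time `volume_per_bar` shares have traded."""
    bar_id = trades["size"].cumsum() // volume_per_bar
    grouped = trades.groupby(bar_id)
    return pd.DataFrame({
        "open": grouped["price"].first(),
        "high": grouped["price"].max(),
        "low": grouped["price"].min(),
        "close": grouped["price"].last(),
        "volume": grouped["size"].sum(),
    })

# Example: toy trade stream aggregated into ~200-share bars.
trades = pd.DataFrame({
    "price": [10.0, 10.1, 10.05, 10.2, 10.15, 10.3],
    "size": [100, 120, 90, 110, 100, 80],
})
print(volume_bars(trades, volume_per_bar=200))
```

The same grouping trick generalizes to dollar bars (cumulative price * size) or any other monotone criterion.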
**Technology Stack**

**Backend and API:** The backbone of the platform is built with Python, using libraries such as FastAPI, NumPy, Keras, and JAX for high performance and scalability.

**Frontend:** The client side is developed with vanilla JavaScript and jQuery, using LightweightCharts for charting, plus additional modules that extend the platform's functionality. The frontend is slated for a future refactoring to modern frameworks such as Vue.js and Vuetify for a more robust user interface.

While the platform is fully functional and growing, ongoing development is planned, particularly around frontend enhancements and further integration of advanced machine learning techniques.

**Contributions**

Contributions to this project are welcome. Whether it's improving the frontend, enhancing the backend capabilities, or experimenting with new trading strategies and machine learning models, your input can help take this platform to the next level.

This repository represents a sophisticated and evolving tool for algorithmic traders, offering precision, speed, and a level of customization rarely found in open-source systems. Join us in shaping the future of algorithmic trading.
<p align="center">
Main screen with entry/exit points and stoploss lines<br>
<img width="700" alt="Main screen with entry/exit points and stoploss lines" src="https://github.com/drew2323/v2trading/assets/28433232/751d5b0e-ef64-453f-8e76-89a39db679c5">
</p>

<p align="center">
Main screen with tick based indicators<br>
<img width="700" alt="Main screen with tick based indicators" src="https://github.com/drew2323/v2trading/assets/28433232/4bf6128c-9b36-4e88-9da1-5a33319976a1">
</p>

<p align="center">
Indicator editor<br>
<img width="700" alt="Indicator editor" src="https://github.com/drew2323/v2trading/assets/28433232/cc417393-7b88-4eea-afcb-3a00402d0a8d">
</p>

<p align="center">
Strategy editor<br>
<img width="700" alt="Strategy editor" src="https://github.com/drew2323/v2trading/assets/28433232/74f67e7a-1efc-4f63-b763-7827b2337b6a">
</p>

<p align="center">
Strategy analytical tools<br>
<img width="700" alt="Strategy analytical tools" src="https://github.com/drew2323/v2trading/assets/28433232/4bf8b3c3-e430-4250-831a-e5876bb6b743">
</p>
# Installation Instructions

This section outlines the steps for installing and setting up the environment for the application. The instructions apply to both Windows and Linux; please follow them carefully to ensure a smooth setup.

## Prerequisites

Before beginning the installation, ensure the following prerequisites are met:

- TA-Lib library:
  - Windows: Download and build the TA-Lib library. Install Visual Studio Community with the Visual C++ feature, open a command prompt in `C:\ta-lib\c\make\cdr\win32\msvc`, and build the library using the provided makefile.
  - Linux: Install TA-Lib from your distribution's package manager or compile it from source following the instructions in the TA-Lib GitHub repository.

- Alpaca paper trading account: Create an account at [Alpaca Markets](https://alpaca.markets/) and generate an `API_KEY` and `SECRET_KEY` for your paper trading account.

## Installation Steps

**Clone the Repository:** Clone the remote repository to your local machine.

`git clone git@github.com:drew2323/v2trading.git <name_of_local_folder>`

**Install Python:** Ensure Python 3.10.11 is installed on your system.

**Create a Virtual Environment:** Set up a Python virtual environment.

`python -m venv <path_to_venv_folder>`

**Activate Virtual Environment:**

- Windows (Git Bash or another POSIX-style shell): `source ./<venv_folder>/Scripts/activate`
- Linux: `source ./<venv_folder>/bin/activate`

**Install Dependencies:** Install the program requirements.

`pip install -r requirements.txt`

Note: References to the `keras` and `tensorflow` modules, as well as the `ml-room` repository, may be commented out in `requirements.txt` if you do not need the ML features.

**Environment Variables:** In `run.sh`, adjust the `VIRTUAL_ENV_DIR` and `PYTHON_TO_USE` variables as needed.

**Data Directory:** Navigate to `DATA_DIR` and create the folders `aggcache`, `tradecache`, and `models`.

**Media and Static Folders:** Create `media` and `static` folders one level above the repository directory, and create the `.env` file there as well (a small helper sketch follows).
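A minimal Python sketch of these two steps, assuming `DATA_DIR` is the path exposed by `v2realbot.config` (as imported in the research notebooks); the "one level above the repo" layout is taken from the step above and the script location is an assumption.

```
# Minimal sketch: create the cache/model folders and the media/static/.env
# locations described above. DATA_DIR comes from v2realbot.config; running
# this from a script in the repository root is an assumption.
from pathlib import Path
from v2realbot.config import DATA_DIR

# Cache and model folders inside DATA_DIR
for name in ("aggcache", "tradecache", "models"):
    Path(DATA_DIR, name).mkdir(parents=True, exist_ok=True)

# media/, static/ and .env one level above the repository directory
repo_parent = Path(__file__).resolve().parent.parent
for name in ("media", "static"):
    (repo_parent / name).mkdir(exist_ok=True)
(repo_parent / ".env").touch(exist_ok=True)
```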
**Database Setup:** Create the `v2trading.db` file using the SQL commands from `v2trading_create_db.sql`.

```
import sqlite3

# Run the bundled schema script to create v2trading.db
with open("v2trading_create_db.sql", "r") as f:
    sql_statements = f.read()

conn = sqlite3.connect('v2trading.db')
cursor = conn.cursor()
cursor.executescript(sql_statements)
conn.commit()
conn.close()
```

Ensure the `config_table` is not empty by making an initial entry.

**Start the Application:** Run `main.py` (for example from VSCode) to start the application.

**Accessing the Application:** If the uvicorn server starts successfully at `http://0.0.0.0:8000`, access the application at `http://localhost:8000/static/`.

**Database Configuration:** Add dynamic button and JS configurations to the `config_table` in `v2trading.db` via the "Config" section on the main page.

Please replace the placeholders (e.g., `<name_of_local_folder>`, `<path_to_venv_folder>`) with your actual paths and details. Following these instructions ensures the application is set up correctly and ready for use.

## Environment variables

The trading platform can support N different accounts. Their API keys are stored as environment variables in a `.env` file located in the root directory.

The account used by the trading API is selected when each strategy is run. For real-time websocket data, however, ACCOUNT1 is always used for all strategies. The data feed selection (iex vs. sip) is set by the `LIVE_DATA_FEED` environment variable.

The `.env` file should contain:

```
ACCOUNT1_LIVE_API_KEY=<ACCOUNT1_LIVE_API_KEY>
ACCOUNT1_LIVE_SECRET_KEY=<ACCOUNT1_LIVE_SECRET_KEY>
ACCOUNT1_LIVE_FEED=sip
ACCOUNT1_PAPER_API_KEY=<ACCOUNT1_PAPER_API_KEY>
ACCOUNT1_PAPER_SECRET_KEY=<ACCOUNT1_PAPER_SECRET_KEY>
ACCOUNT1_PAPER_FEED=sip
ACCOUNT2_PAPER_API_KEY=<ACCOUNT2_PAPER_API_KEY>
ACCOUNT2_PAPER_SECRET_KEY=<ACCOUNT2_PAPER_SECRET_KEY>
ACCOUNT2_PAPER_FEED=iex
WEB_API_KEY=<pass-for-webapi>
```
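A minimal sketch of how these variables might be read in Python using `python-dotenv` (which is in the requirements). The platform's own `v2realbot.config` module may load them differently; the variable names are taken from the `.env` example above.

```
# Minimal sketch: load account keys from the .env file with python-dotenv.
# Illustrative only; the platform's config module may differ.
import os
from dotenv import load_dotenv

load_dotenv()  # by default reads .env from the current working directory

ACCOUNT1_PAPER_API_KEY = os.environ["ACCOUNT1_PAPER_API_KEY"]
ACCOUNT1_PAPER_SECRET_KEY = os.environ["ACCOUNT1_PAPER_SECRET_KEY"]
LIVE_DATA_FEED = os.getenv("LIVE_DATA_FEED", "iex")  # iex vs. sip selection
```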
_run_scheduler.sh (new executable file, 51 lines)
@ -0,0 +1,51 @@
#!/bin/bash

# Approach: (https://chat.openai.com/c/43be8685-b27b-4e3b-bd18-0856f8d23d7e)
# cron runs this script every minute New York time in the range 9:20 - 16:20 US time.
# The script also writes a "heartbeat" message to the log file, so the user knows
# that cron is running.

# Installation steps required:
#   chmod +x run_scheduler.sh
#   install the tzdata package: sudo apt-get install tzdata
#   crontab -e
#     CRON_TZ=America/New_York
#     * 9-16 * * 1-5 /home/david/v2trading/run_scheduler.sh
#   (Runs every minute of every hour, Monday to Friday, US Eastern time)

# Path to the Python script
PYTHON_SCRIPT="v2realbot/scheduler/scheduler.py"

# Log file path
LOG_FILE="job.log"

# Timezone for New York
TZ='America/New_York'
NY_DATE_TIME=$(TZ=$TZ date +'%Y-%m-%d %H:%M:%S')
echo "NY_DATE_TIME: $NY_DATE_TIME"

# Check if log file exists, create it if it doesn't
if [ ! -f "$LOG_FILE" ]; then
    touch "$LOG_FILE"
fi

# Check the last line of the log file
LAST_LINE=$(tail -n 1 "$LOG_FILE")

# Cron trigger message
CRON_TRIGGER="Cron trigger: $NY_DATE_TIME"

# Update the log
if [[ "$LAST_LINE" =~ "Cron trigger:".* ]]; then
    # Replace the last line with the new trigger message
    # (note: the empty '' argument is the BSD/macOS in-place form; with GNU sed use: sed -i '$ d' "$LOG_FILE")
    sed -i '' '$ d' "$LOG_FILE"
    echo "$CRON_TRIGGER" >> "$LOG_FILE"
else
    # Append a new cron trigger message
    echo "$CRON_TRIGGER" >> "$LOG_FILE"
fi

# FOR DEBUG - Run the Python script and append output to log file
python3 "$PYTHON_SCRIPT" >> "$LOG_FILE" 2>&1
deployall.sh (new executable file, 7 lines)
@ -0,0 +1,7 @@
#!/bin/bash

# Navigate to your git repository directory

# Execute git commands
git push deploytest master
git push deploy master
@ -1,18 +1,24 @@
|
||||
absl-py==2.0.0
|
||||
alpaca==1.0.0
|
||||
alpaca-py==0.7.1
|
||||
altair==4.2.2
|
||||
anyio==3.6.2
|
||||
appdirs==1.4.4
|
||||
appnope==0.1.3
|
||||
asttokens==2.2.1
|
||||
astunparse==1.6.3
|
||||
attrs==22.2.0
|
||||
better-exceptions==0.3.3
|
||||
bleach==6.0.0
|
||||
blinker==1.5
|
||||
cachetools==5.3.0
|
||||
CD==1.1.0
|
||||
certifi==2022.12.7
|
||||
chardet==5.1.0
|
||||
charset-normalizer==3.0.1
|
||||
click==8.1.3
|
||||
colorama==0.4.6
|
||||
comm==0.1.4
|
||||
contourpy==1.0.7
|
||||
cycler==0.11.0
|
||||
dash==2.9.1
|
||||
@ -20,35 +26,82 @@ dash-bootstrap-components==1.4.1
|
||||
dash-core-components==2.0.0
|
||||
dash-html-components==2.0.0
|
||||
dash-table==5.0.0
|
||||
dateparser==1.1.8
|
||||
decorator==5.1.1
|
||||
defusedxml==0.7.1
|
||||
dill==0.3.7
|
||||
dm-tree==0.1.8
|
||||
entrypoints==0.4
|
||||
exceptiongroup==1.1.3
|
||||
executing==1.2.0
|
||||
fastapi==0.95.0
|
||||
filelock==3.13.1
|
||||
Flask==2.2.3
|
||||
flatbuffers==23.5.26
|
||||
fonttools==4.39.0
|
||||
fpdf2==2.7.6
|
||||
gast==0.4.0
|
||||
gitdb==4.0.10
|
||||
GitPython==3.1.31
|
||||
google-auth==2.23.0
|
||||
google-auth-oauthlib==1.0.0
|
||||
google-pasta==0.2.0
|
||||
grpcio==1.58.0
|
||||
h11==0.14.0
|
||||
h5py==3.10.0
|
||||
icecream==2.1.3
|
||||
idna==3.4
|
||||
imageio==2.31.6
|
||||
importlib-metadata==6.1.0
|
||||
ipython==8.17.2
|
||||
ipywidgets==8.1.1
|
||||
itsdangerous==2.1.2
|
||||
jax==0.4.23
|
||||
jaxlib==0.4.23
|
||||
jedi==0.19.1
|
||||
Jinja2==3.1.2
|
||||
joblib==1.3.2
|
||||
jsonschema==4.17.3
|
||||
jupyterlab-widgets==3.0.9
|
||||
keras==3.0.2
|
||||
keras-core==0.1.7
|
||||
keras-nightly==3.0.3.dev2024010203
|
||||
keras-nlp-nightly==0.7.0.dev2024010203
|
||||
keras-tcn @ git+https://github.com/drew2323/keras-tcn.git@4bddb17a02cb2f31c9fe2e8f616b357b1ddb0e11
|
||||
kiwisolver==1.4.4
|
||||
libclang==16.0.6
|
||||
llvmlite==0.39.1
|
||||
Markdown==3.4.3
|
||||
markdown-it-py==2.2.0
|
||||
MarkupSafe==2.1.2
|
||||
matplotlib==3.8.2
|
||||
matplotlib-inline==0.1.6
|
||||
mdurl==0.1.2
|
||||
ml-dtypes==0.3.1
|
||||
mlroom @ git+https://github.com/drew2323/mlroom.git@692900e274c4e0542d945d231645c270fc508437
|
||||
mplfinance==0.12.10b0
|
||||
msgpack==1.0.4
|
||||
mypy-extensions==1.0.0
|
||||
namex==0.0.7
|
||||
newtulipy==0.4.6
|
||||
numpy==1.24.2
|
||||
numba==0.56.4
|
||||
numpy==1.23.5
|
||||
oauthlib==3.2.2
|
||||
opt-einsum==3.3.0
|
||||
orjson==3.9.10
|
||||
packaging==23.0
|
||||
pandas==1.5.3
|
||||
param==1.13.0
|
||||
parso==0.8.3
|
||||
patsy==0.5.6
|
||||
pexpect==4.8.0
|
||||
Pillow==9.4.0
|
||||
plotly==5.13.1
|
||||
prompt-toolkit==3.0.39
|
||||
proto-plus==1.22.2
|
||||
protobuf==3.20.3
|
||||
ptyprocess==0.7.0
|
||||
pure-eval==0.2.2
|
||||
pyarrow==11.0.0
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
@ -56,41 +109,72 @@ pyct==0.5.0
|
||||
pydantic==1.10.5
|
||||
pydeck==0.8.0
|
||||
Pygments==2.14.0
|
||||
pyinstrument==4.5.3
|
||||
Pympler==1.0.1
|
||||
pyparsing==3.0.9
|
||||
pyrsistent==0.19.3
|
||||
pysos==1.3.0
|
||||
python-dateutil==2.8.2
|
||||
python-dotenv==1.0.0
|
||||
python-multipart==0.0.6
|
||||
pytz==2022.7.1
|
||||
pytz-deprecation-shim==0.1.0.post0
|
||||
pyviz-comms==2.2.1
|
||||
PyWavelets==1.5.0
|
||||
PyYAML==6.0
|
||||
requests==2.28.2
|
||||
regex==2023.10.3
|
||||
requests==2.31.0
|
||||
requests-oauthlib==1.3.1
|
||||
rich==13.3.1
|
||||
rsa==4.9
|
||||
schedule==1.2.1
|
||||
scikit-learn==1.3.2
|
||||
scipy==1.11.2
|
||||
seaborn==0.12.2
|
||||
semver==2.13.0
|
||||
six==1.16.0
|
||||
smmap==5.0.0
|
||||
sniffio==1.3.0
|
||||
sseclient-py==1.7.2
|
||||
stack-data==0.6.3
|
||||
starlette==0.26.1
|
||||
statsmodels==0.14.1
|
||||
streamlit==1.20.0
|
||||
structlog==23.1.0
|
||||
TA-Lib==0.4.28
|
||||
tb-nightly==2.16.0a20240102
|
||||
tenacity==8.2.2
|
||||
tensorboard==2.15.1
|
||||
tensorboard-data-server==0.7.1
|
||||
tensorflow-addons==0.23.0
|
||||
tensorflow-estimator==2.15.0
|
||||
tensorflow-io-gcs-filesystem==0.34.0
|
||||
termcolor==2.3.0
|
||||
tf-estimator-nightly==2.14.0.dev2023080308
|
||||
tf-nightly==2.16.0.dev20240101
|
||||
tf_keras-nightly==2.16.0.dev2023123010
|
||||
threadpoolctl==3.2.0
|
||||
tinydb==4.7.1
|
||||
tinydb-serialization==2.1.0
|
||||
tinyflux==0.4.0
|
||||
toml==0.10.2
|
||||
tomli==2.0.1
|
||||
toolz==0.12.0
|
||||
tornado==6.2
|
||||
tqdm==4.65.0
|
||||
traitlets==5.13.0
|
||||
typeguard==2.13.3
|
||||
typing_extensions==4.5.0
|
||||
tzdata==2023.2
|
||||
tzlocal==4.3
|
||||
urllib3==1.26.14
|
||||
uvicorn==0.21.1
|
||||
-e git+https://github.com/drew2323/v2trading.git@b58639454be921f9f0c9dd1880491cfcfdfdf3b7#egg=v2realbot
|
||||
validators==0.20.0
|
||||
wcwidth==0.2.9
|
||||
webencodings==0.5.1
|
||||
websockets==10.4
|
||||
Werkzeug==2.2.3
|
||||
widgetsnbextension==4.0.9
|
||||
wrapt==1.14.1
|
||||
zipp==3.15.0
|
||||
|
||||
@ -1,21 +1,34 @@
|
||||
absl-py==2.0.0
|
||||
alpaca==1.0.0
|
||||
alpaca-py==0.7.1
|
||||
alpaca-py==0.18.1
|
||||
altair==4.2.2
|
||||
annotated-types==0.6.0
|
||||
anyio==3.6.2
|
||||
appdirs==1.4.4
|
||||
appnope==0.1.3
|
||||
APScheduler==3.10.4
|
||||
argon2-cffi==23.1.0
|
||||
argon2-cffi-bindings==21.2.0
|
||||
arrow==1.3.0
|
||||
asttokens==2.2.1
|
||||
astunparse==1.6.3
|
||||
async-lru==2.0.4
|
||||
attrs==22.2.0
|
||||
Babel==2.15.0
|
||||
beautifulsoup4==4.12.3
|
||||
better-exceptions==0.3.3
|
||||
bleach==6.0.0
|
||||
blinker==1.5
|
||||
bottle==0.12.25
|
||||
cachetools==5.3.0
|
||||
CD==1.1.0
|
||||
certifi==2022.12.7
|
||||
cffi==1.16.0
|
||||
chardet==5.1.0
|
||||
charset-normalizer==3.0.1
|
||||
click==8.1.3
|
||||
colorama==0.4.6
|
||||
comm==0.1.4
|
||||
contourpy==1.0.7
|
||||
cycler==0.11.0
|
||||
dash==2.9.1
|
||||
@ -23,90 +36,189 @@ dash-bootstrap-components==1.4.1
|
||||
dash-core-components==2.0.0
|
||||
dash-html-components==2.0.0
|
||||
dash-table==5.0.0
|
||||
dateparser==1.1.8
|
||||
debugpy==1.8.1
|
||||
decorator==5.1.1
|
||||
defusedxml==0.7.1
|
||||
dill==0.3.7
|
||||
dm-tree==0.1.8
|
||||
entrypoints==0.4
|
||||
exceptiongroup==1.1.3
|
||||
executing==1.2.0
|
||||
fastapi==0.95.0
|
||||
fastapi==0.109.2
|
||||
fastjsonschema==2.19.1
|
||||
filelock==3.13.1
|
||||
Flask==2.2.3
|
||||
flatbuffers==23.5.26
|
||||
fonttools==4.39.0
|
||||
fpdf2==2.7.6
|
||||
fqdn==1.5.1
|
||||
gast==0.4.0
|
||||
gitdb==4.0.10
|
||||
GitPython==3.1.31
|
||||
google-auth==2.23.0
|
||||
google-auth-oauthlib==1.0.0
|
||||
google-pasta==0.2.0
|
||||
greenlet==3.0.3
|
||||
grpcio==1.58.0
|
||||
h11==0.14.0
|
||||
h5py==3.9.0
|
||||
h5py==3.10.0
|
||||
html2text==2024.2.26
|
||||
httpcore==1.0.5
|
||||
httpx==0.27.0
|
||||
humanize==4.9.0
|
||||
icecream==2.1.3
|
||||
idna==3.4
|
||||
imageio==2.31.6
|
||||
importlib-metadata==6.1.0
|
||||
ipykernel==6.29.4
|
||||
ipython==8.17.2
|
||||
ipywidgets==8.1.1
|
||||
isoduration==20.11.0
|
||||
itables==2.0.1
|
||||
itsdangerous==2.1.2
|
||||
jax==0.4.23
|
||||
jaxlib==0.4.23
|
||||
jedi==0.19.1
|
||||
Jinja2==3.1.2
|
||||
joblib==1.3.2
|
||||
jsonschema==4.17.3
|
||||
keras==2.13.1
|
||||
json5==0.9.25
|
||||
jsonpointer==2.4
|
||||
jsonschema==4.22.0
|
||||
jsonschema-specifications==2023.12.1
|
||||
jupyter-events==0.10.0
|
||||
jupyter-lsp==2.2.5
|
||||
jupyter_client==8.6.1
|
||||
jupyter_core==5.7.2
|
||||
jupyter_server==2.14.0
|
||||
jupyter_server_terminals==0.5.3
|
||||
jupyterlab==4.1.8
|
||||
jupyterlab-widgets==3.0.9
|
||||
jupyterlab_pygments==0.3.0
|
||||
jupyterlab_server==2.27.1
|
||||
kaleido==0.2.1
|
||||
keras==3.0.2
|
||||
keras-core==0.1.7
|
||||
keras-nightly==3.0.3.dev2024010203
|
||||
keras-nlp-nightly==0.7.0.dev2024010203
|
||||
keras-tcn @ git+https://github.com/drew2323/keras-tcn.git@4bddb17a02cb2f31c9fe2e8f616b357b1ddb0e11
|
||||
kiwisolver==1.4.4
|
||||
libclang==16.0.6
|
||||
lightweight-charts @ git+https://github.com/drew2323/lightweight-charts-python@10fd42f785182edfbf6b46a19a4ef66e85985a23
|
||||
llvmlite==0.39.1
|
||||
Markdown==3.4.3
|
||||
markdown-it-py==2.2.0
|
||||
MarkupSafe==2.1.2
|
||||
matplotlib==3.7.1
|
||||
matplotlib==3.8.2
|
||||
matplotlib-inline==0.1.6
|
||||
mdurl==0.1.2
|
||||
mistune==3.0.2
|
||||
ml-dtypes==0.3.1
|
||||
mlroom @ git+https://github.com/drew2323/mlroom.git@692900e274c4e0542d945d231645c270fc508437
|
||||
mplfinance==0.12.10b0
|
||||
msgpack==1.0.4
|
||||
mypy-extensions==1.0.0
|
||||
namex==0.0.7
|
||||
nbclient==0.10.0
|
||||
nbconvert==7.16.4
|
||||
nbformat==5.10.4
|
||||
nest-asyncio==1.6.0
|
||||
newtulipy==0.4.6
|
||||
numpy==1.24.2
|
||||
notebook_shim==0.2.4
|
||||
numba==0.56.4
|
||||
numpy==1.23.5
|
||||
oauthlib==3.2.2
|
||||
opt-einsum==3.3.0
|
||||
orjson==3.9.10
|
||||
overrides==7.7.0
|
||||
packaging==23.0
|
||||
pandas==1.5.3
|
||||
pandas==2.2.1
|
||||
pandocfilters==1.5.1
|
||||
param==1.13.0
|
||||
parso==0.8.3
|
||||
patsy==0.5.6
|
||||
pexpect==4.8.0
|
||||
Pillow==9.4.0
|
||||
plotly==5.13.1
|
||||
platformdirs==4.2.0
|
||||
plotly==5.22.0
|
||||
prometheus_client==0.20.0
|
||||
prompt-toolkit==3.0.39
|
||||
proto-plus==1.22.2
|
||||
protobuf==3.20.3
|
||||
proxy-tools==0.1.0
|
||||
psutil==5.9.8
|
||||
ptyprocess==0.7.0
|
||||
pure-eval==0.2.2
|
||||
pyarrow==11.0.0
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.22
|
||||
pyct==0.5.0
|
||||
pydantic==1.10.5
|
||||
pydantic==2.6.4
|
||||
pydantic_core==2.16.3
|
||||
pydeck==0.8.0
|
||||
Pygments==2.14.0
|
||||
pyinstrument==4.5.3
|
||||
Pympler==1.0.1
|
||||
pyobjc-core==10.3
|
||||
pyobjc-framework-Cocoa==10.3
|
||||
pyobjc-framework-Security==10.3
|
||||
pyobjc-framework-WebKit==10.3
|
||||
pyparsing==3.0.9
|
||||
pyrsistent==0.19.3
|
||||
pysos==1.3.0
|
||||
python-dateutil==2.8.2
|
||||
python-dotenv==1.0.0
|
||||
python-json-logger==2.0.7
|
||||
python-multipart==0.0.6
|
||||
pytz==2022.7.1
|
||||
pytz-deprecation-shim==0.1.0.post0
|
||||
pyviz-comms==2.2.1
|
||||
PyWavelets==1.5.0
|
||||
pywebview==5.1
|
||||
PyYAML==6.0
|
||||
pyzmq==25.1.2
|
||||
referencing==0.35.1
|
||||
regex==2023.10.3
|
||||
requests==2.31.0
|
||||
requests-oauthlib==1.3.1
|
||||
rfc3339-validator==0.1.4
|
||||
rfc3986-validator==0.1.1
|
||||
rich==13.3.1
|
||||
rpds-py==0.18.0
|
||||
rsa==4.9
|
||||
scikit-learn==1.3.1
|
||||
schedule==1.2.1
|
||||
scikit-learn==1.3.2
|
||||
scipy==1.11.2
|
||||
seaborn==0.12.2
|
||||
semver==2.13.0
|
||||
Send2Trash==1.8.3
|
||||
six==1.16.0
|
||||
smmap==5.0.0
|
||||
sniffio==1.3.0
|
||||
soupsieve==2.5
|
||||
SQLAlchemy==2.0.27
|
||||
sseclient-py==1.7.2
|
||||
starlette==0.26.1
|
||||
stack-data==0.6.3
|
||||
starlette==0.36.3
|
||||
statsmodels==0.14.1
|
||||
streamlit==1.20.0
|
||||
structlog==23.1.0
|
||||
TA-Lib==0.4.28
|
||||
tb-nightly==2.16.0a20240102
|
||||
tenacity==8.2.2
|
||||
tensorboard==2.13.0
|
||||
tensorboard==2.15.1
|
||||
tensorboard-data-server==0.7.1
|
||||
tensorflow==2.13.0
|
||||
tensorflow-estimator==2.13.0
|
||||
tensorflow-addons==0.23.0
|
||||
tensorflow-estimator==2.15.0
|
||||
tensorflow-io-gcs-filesystem==0.34.0
|
||||
termcolor==2.3.0
|
||||
terminado==0.18.1
|
||||
tf-estimator-nightly==2.14.0.dev2023080308
|
||||
tf-nightly==2.16.0.dev20240101
|
||||
tf_keras-nightly==2.16.0.dev2023123010
|
||||
threadpoolctl==3.2.0
|
||||
tinycss2==1.3.0
|
||||
tinydb==4.7.1
|
||||
tinydb-serialization==2.1.0
|
||||
tinyflux==0.4.0
|
||||
@ -115,15 +227,24 @@ tomli==2.0.1
|
||||
toolz==0.12.0
|
||||
tornado==6.2
|
||||
tqdm==4.65.0
|
||||
typing_extensions==4.5.0
|
||||
traitlets==5.13.0
|
||||
typeguard==2.13.3
|
||||
types-python-dateutil==2.9.0.20240316
|
||||
typing_extensions==4.9.0
|
||||
tzdata==2023.2
|
||||
tzlocal==4.3
|
||||
uri-template==1.3.0
|
||||
urllib3==1.26.14
|
||||
uvicorn==0.21.1
|
||||
#-e git+https://github.com/drew2323/v2trading.git@940348412f67ecd551ef8d0aaedf84452abf1320#egg=v2realbot
|
||||
-e git+https://github.com/drew2323/v2trading.git@1f85b271dba2b9baf2c61b591a08849e9d684374#egg=v2realbot
|
||||
validators==0.20.0
|
||||
vectorbtpro @ file:///Users/davidbrazda/Downloads/vectorbt.pro-2024.2.22
|
||||
wcwidth==0.2.9
|
||||
webcolors==1.13
|
||||
webencodings==0.5.1
|
||||
websockets==10.4
|
||||
websocket-client==1.7.0
|
||||
websockets==11.0.3
|
||||
Werkzeug==2.2.3
|
||||
wrapt==1.15.0
|
||||
widgetsnbextension==4.0.9
|
||||
wrapt==1.14.1
|
||||
zipp==3.15.0
|
||||
|
||||
requirements_newest.txt (new file, 243 lines)
@ -0,0 +1,243 @@
|
||||
absl-py
|
||||
alpaca
|
||||
alpaca-py
|
||||
altair
|
||||
annotated-types
|
||||
anyio
|
||||
appdirs
|
||||
appnope
|
||||
APScheduler
|
||||
argon2-cffi
|
||||
argon2-cffi-bindings
|
||||
arrow
|
||||
asttokens
|
||||
astunparse
|
||||
async-lru
|
||||
attrs
|
||||
Babel
|
||||
beautifulsoup4
|
||||
better-exceptions
|
||||
bleach
|
||||
blinker
|
||||
bottle
|
||||
cachetools
|
||||
CD
|
||||
certifi
|
||||
cffi
|
||||
chardet
|
||||
charset-normalizer
|
||||
click
|
||||
colorama
|
||||
comm
|
||||
contourpy
|
||||
cycler
|
||||
dash
|
||||
dash-bootstrap-components
|
||||
dash-core-components
|
||||
dash-html-components
|
||||
dash-table
|
||||
dateparser
|
||||
debugpy
|
||||
decorator
|
||||
defusedxml
|
||||
dill
|
||||
dm-tree
|
||||
entrypoints
|
||||
exceptiongroup
|
||||
executing
|
||||
fastapi
|
||||
fastjsonschema
|
||||
filelock
|
||||
Flask
|
||||
flatbuffers
|
||||
fonttools
|
||||
fpdf2
|
||||
fqdn
|
||||
gast
|
||||
gitdb
|
||||
GitPython
|
||||
google-auth
|
||||
google-auth-oauthlib
|
||||
google-pasta
|
||||
greenlet
|
||||
grpcio
|
||||
h11
|
||||
h5py
|
||||
html2text
|
||||
httpcore
|
||||
httpx
|
||||
humanize
|
||||
icecream
|
||||
idna
|
||||
imageio
|
||||
importlib-metadata
|
||||
ipykernel
|
||||
ipython
|
||||
ipywidgets
|
||||
isoduration
|
||||
itables
|
||||
itsdangerous
|
||||
jax
|
||||
jaxlib
|
||||
jedi
|
||||
Jinja2
|
||||
joblib
|
||||
json5
|
||||
jsonpointer
|
||||
jsonschema
|
||||
jsonschema-specifications
|
||||
jupyter-events
|
||||
jupyter-lsp
|
||||
jupyter_client
|
||||
jupyter_core
|
||||
jupyter_server
|
||||
jupyter_server_terminals
|
||||
jupyterlab
|
||||
jupyterlab-widgets
|
||||
jupyterlab_pygments
|
||||
jupyterlab_server
|
||||
kaleido
|
||||
keras
|
||||
keras-core
|
||||
keras-nightly
|
||||
keras-nlp-nightly
|
||||
keras-tcn @ git+https://github.com/drew2323/keras-tcn.git
|
||||
kiwisolver
|
||||
libclang
|
||||
lightweight-charts @ git+https://github.com/drew2323/lightweight-charts-python.git
|
||||
llvmlite
|
||||
Markdown
|
||||
markdown-it-py
|
||||
MarkupSafe
|
||||
matplotlib
|
||||
matplotlib-inline
|
||||
mdurl
|
||||
mistune
|
||||
ml-dtypes
|
||||
mlroom @ git+https://github.com/drew2323/mlroom.git
|
||||
mplfinance
|
||||
msgpack
|
||||
mypy-extensions
|
||||
namex
|
||||
nbclient
|
||||
nbconvert
|
||||
nbformat
|
||||
nest-asyncio
|
||||
newtulipy
|
||||
notebook_shim
|
||||
numba
|
||||
numpy
|
||||
oauthlib
|
||||
opt-einsum
|
||||
orjson
|
||||
overrides
|
||||
packaging
|
||||
pandas
|
||||
pandocfilters
|
||||
param
|
||||
parso
|
||||
patsy
|
||||
pexpect
|
||||
Pillow
|
||||
platformdirs
|
||||
plotly
|
||||
prometheus_client
|
||||
prompt-toolkit
|
||||
proto-plus
|
||||
protobuf
|
||||
proxy-tools
|
||||
psutil
|
||||
ptyprocess
|
||||
pure-eval
|
||||
pyarrow
|
||||
pyasn1
|
||||
pyasn1-modules
|
||||
pycparser
|
||||
pyct
|
||||
pydantic
|
||||
pydantic_core
|
||||
pydeck
|
||||
Pygments
|
||||
pyinstrument
|
||||
pyparsing
|
||||
pyrsistent
|
||||
pysos
|
||||
python-dateutil
|
||||
python-dotenv
|
||||
python-json-logger
|
||||
python-multipart
|
||||
pytz
|
||||
pytz-deprecation-shim
|
||||
pyviz-comms
|
||||
PyWavelets
|
||||
pywebview
|
||||
PyYAML
|
||||
pyzmq
|
||||
referencing
|
||||
regex
|
||||
requests
|
||||
requests-oauthlib
|
||||
rfc3339-validator
|
||||
rfc3986-validator
|
||||
rich
|
||||
rpds-py
|
||||
rsa
|
||||
schedule
|
||||
scikit-learn
|
||||
scipy
|
||||
seaborn
|
||||
semver
|
||||
Send2Trash
|
||||
six
|
||||
smmap
|
||||
sniffio
|
||||
soupsieve
|
||||
SQLAlchemy
|
||||
sseclient-py
|
||||
stack-data
|
||||
starlette
|
||||
statsmodels
|
||||
streamlit
|
||||
structlog
|
||||
TA-Lib
|
||||
tb-nightly
|
||||
tenacity
|
||||
tensorboard
|
||||
tensorboard-data-server
|
||||
tensorflow-addons
|
||||
tensorflow-estimator
|
||||
tensorflow-io-gcs-filesystem
|
||||
termcolor
|
||||
terminado
|
||||
tf-estimator-nightly
|
||||
tf-nightly
|
||||
tf_keras-nightly
|
||||
threadpoolctl
|
||||
tinycss2
|
||||
tinydb
|
||||
tinydb-serialization
|
||||
tinyflux
|
||||
toml
|
||||
tomli
|
||||
toolz
|
||||
tornado
|
||||
tqdm
|
||||
traitlets
|
||||
typeguard
|
||||
types-python-dateutil
|
||||
typing_extensions
|
||||
tzdata
|
||||
tzlocal
|
||||
uri-template
|
||||
urllib3
|
||||
uvicorn
|
||||
validators
|
||||
wcwidth
|
||||
webcolors
|
||||
webencodings
|
||||
websocket-client
|
||||
websockets
|
||||
Werkzeug
|
||||
widgetsnbextension
|
||||
wrapt
|
||||
zipp
|
||||
res_pred_act.png (new binary file, 26 KiB) - binary file not shown.
res_target.png (new binary file, 20 KiB) - binary file not shown.
research/basic.ipynb (new file, 104044 lines) - file diff suppressed because it is too large.
research/get_trades_at_once.ipynb (new file, 410 lines)
@ -0,0 +1,410 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Loading trades and vectorized aggregation\n",
|
||||
"Describes how to fetch trades (remote/cached) and use new vectorized aggregation to aggregate bars of given type (time, volume, dollar) and resolution\n",
|
||||
"\n",
|
||||
"`fetch_trades_parallel` enables to fetch trades of given symbol and interval, also can filter conditions and minimum size. return `trades_df`\n",
|
||||
"`aggregate_trades` acceptss `trades_df` and ressolution and type of bars (VOLUME, TIME, DOLLAR) and return aggregated ohlcv dataframe `ohlcv_df`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/html": [
|
||||
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Activating profile profile1\n",
|
||||
"</pre>\n"
|
||||
],
|
||||
"text/plain": [
|
||||
"Activating profile profile1\n"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"trades_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
|
||||
"trades_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n",
|
||||
"ohlcv_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
|
||||
"ohlcv_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import pandas as pd\n",
|
||||
"import numpy as np\n",
|
||||
"from numba import jit\n",
|
||||
"from alpaca.data.historical import StockHistoricalDataClient\n",
|
||||
"from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR\n",
|
||||
"from alpaca.data.requests import StockTradesRequest\n",
|
||||
"from v2realbot.enums.enums import BarType\n",
|
||||
"import time\n",
|
||||
"from datetime import datetime\n",
|
||||
"from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY, send_to_telegram, fetch_calendar_data\n",
|
||||
"import pyarrow\n",
|
||||
"from v2realbot.loader.aggregator_vectorized import fetch_daily_stock_trades, fetch_trades_parallel, generate_time_bars_nb, aggregate_trades\n",
|
||||
"import vectorbtpro as vbt\n",
|
||||
"import v2realbot.utils.config_handler as cfh\n",
|
||||
"\n",
|
||||
"vbt.settings.set_theme(\"dark\")\n",
|
||||
"vbt.settings['plotting']['layout']['width'] = 1280\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"# Set the option to display with pagination\n",
|
||||
"pd.set_option('display.notebook_repr_html', True)\n",
|
||||
"pd.set_option('display.max_rows', 20) # Number of rows per page\n",
|
||||
"# pd.set_option('display.float_format', '{:.9f}'.format)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"#trade filtering\n",
|
||||
"exclude_conditions = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES') #standard ['C','O','4','B','7','V','P','W','U','Z','F']\n",
|
||||
"minsize = 100\n",
|
||||
"\n",
|
||||
"symbol = \"SPY\"\n",
|
||||
"#datetime in zoneNY \n",
|
||||
"day_start = datetime(2024, 1, 1, 9, 30, 0)\n",
|
||||
"day_stop = datetime(2024, 1, 14, 16, 00, 0)\n",
|
||||
"day_start = zoneNY.localize(day_start)\n",
|
||||
"day_stop = zoneNY.localize(day_stop)\n",
|
||||
"#filename of trades_df parquet, date are in isoformat but without time zone part\n",
|
||||
"dir = DATA_DIR + \"/notebooks/\"\n",
|
||||
"#parquet interval cache contains exclude conditions and minsize filtering\n",
|
||||
"file_trades = dir + f\"trades_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}-{exclude_conditions}-{minsize}.parquet\"\n",
|
||||
"#file_trades = dir + f\"trades_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}.parquet\"\n",
|
||||
"file_ohlcv = dir + f\"ohlcv_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}-{exclude_conditions}-{minsize}.parquet\"\n",
|
||||
"\n",
|
||||
"#PRINT all parquet in directory\n",
|
||||
"import os\n",
|
||||
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
|
||||
"for f in files:\n",
|
||||
" print(f)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"NOT FOUND. Fetching from remote\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"ename": "KeyboardInterrupt",
|
||||
"evalue": "",
|
||||
"output_type": "error",
|
||||
"traceback": [
|
||||
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
|
||||
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
|
||||
"Cell \u001b[0;32mIn[2], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m trades_df \u001b[38;5;241m=\u001b[39m \u001b[43mfetch_daily_stock_trades\u001b[49m\u001b[43m(\u001b[49m\u001b[43msymbol\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mday_start\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mday_stop\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mexclude_conditions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mexclude_conditions\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mminsize\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mminsize\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mforce_remote\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_retries\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m5\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbackoff_factor\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2\u001b[0m trades_df\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/v2realbot/loader/aggregator_vectorized.py:200\u001b[0m, in \u001b[0;36mfetch_daily_stock_trades\u001b[0;34m(symbol, start, end, exclude_conditions, minsize, force_remote, max_retries, backoff_factor)\u001b[0m\n\u001b[1;32m 198\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m attempt \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(max_retries):\n\u001b[1;32m 199\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 200\u001b[0m tradesResponse \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_stock_trades\u001b[49m\u001b[43m(\u001b[49m\u001b[43mstockTradeRequest\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 201\u001b[0m is_empty \u001b[38;5;241m=\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m tradesResponse[symbol]\n\u001b[1;32m 202\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRemote fetched: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mis_empty\u001b[38;5;132;01m=}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m, start, end)\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/data/historical/stock.py:144\u001b[0m, in \u001b[0;36mStockHistoricalDataClient.get_stock_trades\u001b[0;34m(self, request_params)\u001b[0m\n\u001b[1;32m 141\u001b[0m params \u001b[38;5;241m=\u001b[39m request_params\u001b[38;5;241m.\u001b[39mto_request_fields()\n\u001b[1;32m 143\u001b[0m \u001b[38;5;66;03m# paginated get request for market data api\u001b[39;00m\n\u001b[0;32m--> 144\u001b[0m raw_trades \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_data_get\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 145\u001b[0m \u001b[43m \u001b[49m\u001b[43mendpoint_data_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtrades\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 146\u001b[0m \u001b[43m \u001b[49m\u001b[43mendpoint_asset_class\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstocks\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 147\u001b[0m \u001b[43m \u001b[49m\u001b[43mapi_version\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mv2\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 148\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 149\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 151\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_use_raw_data:\n\u001b[1;32m 152\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m raw_trades\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/data/historical/stock.py:338\u001b[0m, in \u001b[0;36mStockHistoricalDataClient._data_get\u001b[0;34m(self, endpoint_asset_class, endpoint_data_type, api_version, symbol_or_symbols, limit, page_limit, extension, **kwargs)\u001b[0m\n\u001b[1;32m 335\u001b[0m params[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlimit\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m actual_limit\n\u001b[1;32m 336\u001b[0m params[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpage_token\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m page_token\n\u001b[0;32m--> 338\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mapi_version\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mapi_version\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 340\u001b[0m \u001b[38;5;66;03m# TODO: Merge parsing if possible\u001b[39;00m\n\u001b[1;32m 341\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m extension \u001b[38;5;241m==\u001b[39m DataExtensionType\u001b[38;5;241m.\u001b[39mSNAPSHOT:\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:221\u001b[0m, in \u001b[0;36mRESTClient.get\u001b[0;34m(self, path, data, **kwargs)\u001b[0m\n\u001b[1;32m 210\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mget\u001b[39m(\u001b[38;5;28mself\u001b[39m, path: \u001b[38;5;28mstr\u001b[39m, data: Union[\u001b[38;5;28mdict\u001b[39m, \u001b[38;5;28mstr\u001b[39m] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m HTTPResult:\n\u001b[1;32m 211\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Performs a single GET request\u001b[39;00m\n\u001b[1;32m 212\u001b[0m \n\u001b[1;32m 213\u001b[0m \u001b[38;5;124;03m Args:\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 219\u001b[0m \u001b[38;5;124;03m dict: The response\u001b[39;00m\n\u001b[1;32m 220\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 221\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mGET\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:129\u001b[0m, in \u001b[0;36mRESTClient._request\u001b[0;34m(self, method, path, data, base_url, api_version)\u001b[0m\n\u001b[1;32m 127\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m retry \u001b[38;5;241m>\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m 128\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 129\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_one_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mopts\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mretry\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 130\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m RetryException:\n\u001b[1;32m 131\u001b[0m time\u001b[38;5;241m.\u001b[39msleep(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_retry_wait)\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:193\u001b[0m, in \u001b[0;36mRESTClient._one_request\u001b[0;34m(self, method, url, opts, retry)\u001b[0m\n\u001b[1;32m 174\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_one_request\u001b[39m(\u001b[38;5;28mself\u001b[39m, method: \u001b[38;5;28mstr\u001b[39m, url: \u001b[38;5;28mstr\u001b[39m, opts: \u001b[38;5;28mdict\u001b[39m, retry: \u001b[38;5;28mint\u001b[39m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28mdict\u001b[39m:\n\u001b[1;32m 175\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Perform one request, possibly raising RetryException in the case\u001b[39;00m\n\u001b[1;32m 176\u001b[0m \u001b[38;5;124;03m the response is 429. Otherwise, if error text contain \"code\" string,\u001b[39;00m\n\u001b[1;32m 177\u001b[0m \u001b[38;5;124;03m then it decodes to json object and returns APIError.\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 191\u001b[0m \u001b[38;5;124;03m dict: The response data\u001b[39;00m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 193\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_session\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mopts\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 195\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 196\u001b[0m response\u001b[38;5;241m.\u001b[39mraise_for_status()\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/sessions.py:589\u001b[0m, in \u001b[0;36mSession.request\u001b[0;34m(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)\u001b[0m\n\u001b[1;32m 584\u001b[0m send_kwargs \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 585\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtimeout\u001b[39m\u001b[38;5;124m\"\u001b[39m: timeout,\n\u001b[1;32m 586\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mallow_redirects\u001b[39m\u001b[38;5;124m\"\u001b[39m: allow_redirects,\n\u001b[1;32m 587\u001b[0m }\n\u001b[1;32m 588\u001b[0m send_kwargs\u001b[38;5;241m.\u001b[39mupdate(settings)\n\u001b[0;32m--> 589\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mprep\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43msend_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 591\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m resp\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/sessions.py:703\u001b[0m, in \u001b[0;36mSession.send\u001b[0;34m(self, request, **kwargs)\u001b[0m\n\u001b[1;32m 700\u001b[0m start \u001b[38;5;241m=\u001b[39m preferred_clock()\n\u001b[1;32m 702\u001b[0m \u001b[38;5;66;03m# Send the request\u001b[39;00m\n\u001b[0;32m--> 703\u001b[0m r \u001b[38;5;241m=\u001b[39m \u001b[43madapter\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 705\u001b[0m \u001b[38;5;66;03m# Total elapsed time of the request (approximately)\u001b[39;00m\n\u001b[1;32m 706\u001b[0m elapsed \u001b[38;5;241m=\u001b[39m preferred_clock() \u001b[38;5;241m-\u001b[39m start\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/adapters.py:486\u001b[0m, in \u001b[0;36mHTTPAdapter.send\u001b[0;34m(self, request, stream, timeout, verify, cert, proxies)\u001b[0m\n\u001b[1;32m 483\u001b[0m timeout \u001b[38;5;241m=\u001b[39m TimeoutSauce(connect\u001b[38;5;241m=\u001b[39mtimeout, read\u001b[38;5;241m=\u001b[39mtimeout)\n\u001b[1;32m 485\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 486\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[43mconn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43murlopen\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 487\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 488\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 489\u001b[0m \u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 490\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 491\u001b[0m \u001b[43m \u001b[49m\u001b[43mredirect\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 492\u001b[0m \u001b[43m \u001b[49m\u001b[43massert_same_host\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 493\u001b[0m \u001b[43m \u001b[49m\u001b[43mpreload_content\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 494\u001b[0m \u001b[43m \u001b[49m\u001b[43mdecode_content\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 495\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmax_retries\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 496\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 497\u001b[0m \u001b[43m \u001b[49m\u001b[43mchunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mchunked\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 498\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 500\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (ProtocolError, \u001b[38;5;167;01mOSError\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m err:\n\u001b[1;32m 501\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mConnectionError\u001b[39;00m(err, request\u001b[38;5;241m=\u001b[39mrequest)\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:703\u001b[0m, in \u001b[0;36mHTTPConnectionPool.urlopen\u001b[0;34m(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\u001b[0m\n\u001b[1;32m 700\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_prepare_proxy(conn)\n\u001b[1;32m 702\u001b[0m \u001b[38;5;66;03m# Make the request on the httplib connection object.\u001b[39;00m\n\u001b[0;32m--> 703\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_make_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 704\u001b[0m \u001b[43m \u001b[49m\u001b[43mconn\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 705\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 706\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 707\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout_obj\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 708\u001b[0m \u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 709\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 710\u001b[0m \u001b[43m \u001b[49m\u001b[43mchunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mchunked\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 711\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 713\u001b[0m \u001b[38;5;66;03m# If we're going to release the connection in ``finally:``, then\u001b[39;00m\n\u001b[1;32m 714\u001b[0m \u001b[38;5;66;03m# the response doesn't need to know about the connection. Otherwise\u001b[39;00m\n\u001b[1;32m 715\u001b[0m \u001b[38;5;66;03m# it will also try to release it and we'll have a double-release\u001b[39;00m\n\u001b[1;32m 716\u001b[0m \u001b[38;5;66;03m# mess.\u001b[39;00m\n\u001b[1;32m 717\u001b[0m response_conn \u001b[38;5;241m=\u001b[39m conn \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m release_conn \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:449\u001b[0m, in \u001b[0;36mHTTPConnectionPool._make_request\u001b[0;34m(self, conn, method, url, timeout, chunked, **httplib_request_kw)\u001b[0m\n\u001b[1;32m 444\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m conn\u001b[38;5;241m.\u001b[39mgetresponse()\n\u001b[1;32m 445\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 446\u001b[0m \u001b[38;5;66;03m# Remove the TypeError from the exception chain in\u001b[39;00m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;66;03m# Python 3 (including for exceptions like SystemExit).\u001b[39;00m\n\u001b[1;32m 448\u001b[0m \u001b[38;5;66;03m# Otherwise it looks like a bug in the code.\u001b[39;00m\n\u001b[0;32m--> 449\u001b[0m \u001b[43msix\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mraise_from\u001b[49m\u001b[43m(\u001b[49m\u001b[43me\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\n\u001b[1;32m 450\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (SocketTimeout, BaseSSLError, SocketError) \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 451\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_raise_timeout(err\u001b[38;5;241m=\u001b[39me, url\u001b[38;5;241m=\u001b[39murl, timeout_value\u001b[38;5;241m=\u001b[39mread_timeout)\n",
|
||||
"File \u001b[0;32m<string>:3\u001b[0m, in \u001b[0;36mraise_from\u001b[0;34m(value, from_value)\u001b[0m\n",
|
||||
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:444\u001b[0m, in \u001b[0;36mHTTPConnectionPool._make_request\u001b[0;34m(self, conn, method, url, timeout, chunked, **httplib_request_kw)\u001b[0m\n\u001b[1;32m 441\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m:\n\u001b[1;32m 442\u001b[0m \u001b[38;5;66;03m# Python 3\u001b[39;00m\n\u001b[1;32m 443\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 444\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m \u001b[43mconn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgetresponse\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 445\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 446\u001b[0m \u001b[38;5;66;03m# Remove the TypeError from the exception chain in\u001b[39;00m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;66;03m# Python 3 (including for exceptions like SystemExit).\u001b[39;00m\n\u001b[1;32m 448\u001b[0m \u001b[38;5;66;03m# Otherwise it looks like a bug in the code.\u001b[39;00m\n\u001b[1;32m 449\u001b[0m six\u001b[38;5;241m.\u001b[39mraise_from(e, \u001b[38;5;28;01mNone\u001b[39;00m)\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:1375\u001b[0m, in \u001b[0;36mHTTPConnection.getresponse\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 1373\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 1374\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m-> 1375\u001b[0m \u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbegin\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1376\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mConnectionError\u001b[39;00m:\n\u001b[1;32m 1377\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclose()\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:318\u001b[0m, in \u001b[0;36mHTTPResponse.begin\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 316\u001b[0m \u001b[38;5;66;03m# read until we get a non-100 response\u001b[39;00m\n\u001b[1;32m 317\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[0;32m--> 318\u001b[0m version, status, reason \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_read_status\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 319\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m status \u001b[38;5;241m!=\u001b[39m CONTINUE:\n\u001b[1;32m 320\u001b[0m \u001b[38;5;28;01mbreak\u001b[39;00m\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:279\u001b[0m, in \u001b[0;36mHTTPResponse._read_status\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 278\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_read_status\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[0;32m--> 279\u001b[0m line \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mreadline\u001b[49m\u001b[43m(\u001b[49m\u001b[43m_MAXLINE\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124miso-8859-1\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 280\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(line) \u001b[38;5;241m>\u001b[39m _MAXLINE:\n\u001b[1;32m 281\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m LineTooLong(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstatus line\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py:705\u001b[0m, in \u001b[0;36mSocketIO.readinto\u001b[0;34m(self, b)\u001b[0m\n\u001b[1;32m 703\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[1;32m 704\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 705\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_sock\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrecv_into\u001b[49m\u001b[43m(\u001b[49m\u001b[43mb\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 706\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m timeout:\n\u001b[1;32m 707\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_timeout_occurred \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1274\u001b[0m, in \u001b[0;36mSSLSocket.recv_into\u001b[0;34m(self, buffer, nbytes, flags)\u001b[0m\n\u001b[1;32m 1270\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m flags \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m 1271\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 1272\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnon-zero flags not allowed in calls to recv_into() on \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;241m%\u001b[39m\n\u001b[1;32m 1273\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m)\n\u001b[0;32m-> 1274\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[43mnbytes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbuffer\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1275\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1276\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28msuper\u001b[39m()\u001b[38;5;241m.\u001b[39mrecv_into(buffer, nbytes, flags)\n",
|
||||
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1130\u001b[0m, in \u001b[0;36mSSLSocket.read\u001b[0;34m(self, len, buffer)\u001b[0m\n\u001b[1;32m 1128\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 1129\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m buffer \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_sslobj\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mlen\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbuffer\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1131\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1132\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_sslobj\u001b[38;5;241m.\u001b[39mread(\u001b[38;5;28mlen\u001b[39m)\n",
|
||||
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"trades_df = fetch_daily_stock_trades(symbol, day_start, day_stop, exclude_conditions=exclude_conditions, minsize=minsize, force_remote=False, max_retries=5, backoff_factor=1)\n",
|
||||
"trades_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Either load trades or ohlcv from parquet if exists\n",
|
||||
"\n",
|
||||
"#trades_df = fetch_trades_parallel(symbol, day_start, day_stop, exclude_conditions=exclude_conditions, minsize=50, max_workers=20) #exclude_conditions=['C','O','4','B','7','V','P','W','U','Z','F'])\n",
|
||||
"# trades_df.to_parquet(file_trades, engine='pyarrow', compression='gzip')\n",
|
||||
"\n",
|
||||
"trades_df = pd.read_parquet(file_trades,engine='pyarrow')\n",
|
||||
"ohlcv_df = aggregate_trades(symbol=symbol, trades_df=trades_df, resolution=1, type=BarType.TIME)\n",
|
||||
"ohlcv_df.to_parquet(file_ohlcv, engine='pyarrow', compression='gzip')\n",
|
||||
"\n",
|
||||
"# ohlcv_df = pd.read_parquet(file_ohlcv,engine='pyarrow')\n",
|
||||
"# trades_df = pd.read_parquet(file_trades,engine='pyarrow')\n"
|
||||
]
|
||||
},
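For orientation, the 1-second aggregation done by `aggregate_trades` above can be approximated with plain pandas. This is only a hedged sketch: it assumes `trades_df` is indexed by trade timestamp and has `price` and `size` columns, which is an assumption about the fetch helper's output rather than something shown here.

```python
import pandas as pd

def trades_to_ohlcv_1s(trades: pd.DataFrame) -> pd.DataFrame:
    """Rough 1-second OHLCV from a tick DataFrame with 'price' and 'size' columns (assumed names)."""
    ohlcv = trades['price'].resample('1s').ohlc()           # open/high/low/close per second
    ohlcv['volume'] = trades['size'].resample('1s').sum()   # traded size per second
    return ohlcv.dropna(subset=['open'])                    # drop seconds with no trades

# ohlcv_approx = trades_to_ohlcv_1s(trades_df)
```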
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#list all files is dir directory with parquet extension\n",
|
||||
"dir = DATA_DIR + \"/notebooks/\"\n",
|
||||
"import os\n",
|
||||
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
|
||||
"file_name = \"\"\n",
|
||||
"ohlcv_df = pd.read_parquet(file_ohlcv,engine='pyarrow')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ohlcv_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"import seaborn as sns\n",
|
||||
"# Calculate daily returns\n",
|
||||
"ohlcv_df['returns'] = ohlcv_df['close'].pct_change().dropna()\n",
|
||||
"#same as above but pct_change is from 3 datapoints back, but only if it is the same date, else na\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Plot the probability distribution curve\n",
|
||||
"plt.figure(figsize=(10, 6))\n",
|
||||
"sns.histplot(df['returns'].dropna(), kde=True, stat='probability', bins=30)\n",
|
||||
"plt.title('Probability Distribution of Daily Returns')\n",
|
||||
"plt.xlabel('Daily Returns')\n",
|
||||
"plt.ylabel('Probability')\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
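The commented idea above (an N-bar pct_change that should not reach back into the previous date) is easy to implement; below is a minimal sketch. The helper name `same_day_pct_change` is illustrative and not part of the notebook.

```python
import pandas as pd

def same_day_pct_change(close: pd.Series, n: int = 3) -> pd.Series:
    """N-bar percentage change, set to NaN whenever bar i and bar i-n fall on different dates."""
    ret = close.pct_change(n)
    dates = pd.Series(close.index.normalize(), index=close.index)
    same_day = dates.eq(dates.shift(n))   # True only if the lookback bar is from the same session date
    return ret.where(same_day)

# e.g. ohlcv_df['returns_3_intraday'] = same_day_pct_change(ohlcv_df['close'], n=3)
```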
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import pandas as pd\n",
|
||||
"import numpy as np\n",
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"from sklearn.preprocessing import StandardScaler\n",
|
||||
"from sklearn.linear_model import LogisticRegression\n",
|
||||
"from sklearn.metrics import accuracy_score\n",
|
||||
"\n",
|
||||
"# Define the intervals from 5 to 20 s, returns for each interval\n",
|
||||
"#maybe use rolling window?\n",
|
||||
"intervals = range(5, 21, 5)\n",
|
||||
"\n",
|
||||
"# Create columns for percentage returns\n",
|
||||
"rolling_window = 50\n",
|
||||
"\n",
|
||||
"# Normalize the returns using rolling mean and std\n",
|
||||
"for N in intervals:\n",
|
||||
" column_name = f'returns_{N}'\n",
|
||||
" rolling_mean = ohlcv_df[column_name].rolling(window=rolling_window).mean()\n",
|
||||
" rolling_std = ohlcv_df[column_name].rolling(window=rolling_window).std()\n",
|
||||
" ohlcv_df[f'norm_{column_name}'] = (ohlcv_df[column_name] - rolling_mean) / rolling_std\n",
|
||||
"\n",
|
||||
"# Display the dataframe with normalized return columns\n",
|
||||
"ohlcv_df\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Calculate the sum of the normalized return columns for each row\n",
|
||||
"ohlcv_df['sum_norm_returns'] = ohlcv_df[[f'norm_returns_{N}' for N in intervals]].sum(axis=1)\n",
|
||||
"\n",
|
||||
"# Sort the DataFrame based on the sum of normalized returns in descending order\n",
|
||||
"df_sorted = ohlcv_df.sort_values(by='sum_norm_returns', ascending=False)\n",
|
||||
"\n",
|
||||
"# Display the top rows with the highest sum of normalized returns\n",
|
||||
"df_sorted\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Drop initial rows with NaN values due to pct_change\n",
|
||||
"ohlcv_df.dropna(inplace=True)\n",
|
||||
"\n",
|
||||
"# Plotting the probability distribution curves\n",
|
||||
"plt.figure(figsize=(14, 8))\n",
|
||||
"for N in intervals:\n",
|
||||
" sns.kdeplot(ohlcv_df[f'returns_{N}'].dropna(), label=f'Returns {N}', fill=True)\n",
|
||||
"\n",
|
||||
"plt.title('Probability Distribution of Percentage Returns')\n",
|
||||
"plt.xlabel('Percentage Return')\n",
|
||||
"plt.ylabel('Density')\n",
|
||||
"plt.legend()\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"import seaborn as sns\n",
|
||||
"# Plot the probability distribution curve\n",
|
||||
"plt.figure(figsize=(10, 6))\n",
|
||||
"sns.histplot(ohlcv_df['returns'].dropna(), kde=True, stat='probability', bins=30)\n",
|
||||
"plt.title('Probability Distribution of Daily Returns')\n",
|
||||
"plt.xlabel('Daily Returns')\n",
|
||||
"plt.ylabel('Probability')\n",
|
||||
"plt.show()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#show only rows from ohlcv_df where returns > 0.005\n",
|
||||
"ohlcv_df[ohlcv_df['returns'] > 0.0005]\n",
|
||||
"\n",
|
||||
"#ohlcv_df[ohlcv_df['returns'] < -0.005]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#ohlcv where index = date 2024-03-13 and between hour 12\n",
|
||||
"\n",
|
||||
"a = ohlcv_df.loc['2024-03-13 12:00:00':'2024-03-13 13:00:00']\n",
|
||||
"a"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ohlcv_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"trades_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ohlcv_df.info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"trades_df.to_parquet(\"trades_df-spy-0111-0111.parquett\", engine='pyarrow', compression='gzip')\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"trades_df.to_parquet(\"trades_df-spy-111-0516.parquett\", engine='pyarrow', compression='gzip', allow_truncated_timestamps=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ohlcv_df.to_parquet(\"ohlcv_df-spy-111-0516.parquett\", engine='pyarrow', compression='gzip')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"basic_data = vbt.Data.from_data(vbt.symbol_dict({symbol: ohlcv_df}), tz_convert=zoneNY)\n",
|
||||
"vbt.settings['plotting']['auto_rangebreaks'] = True\n",
|
||||
"basic_data.ohlcv.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#access just BCA\n",
|
||||
"#df_filtered = df.loc[\"BAC\"]"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
1526
research/indcross_parametrized.ipynb
Normal file
File diff suppressed because one or more lines are too long
557
research/ohlc_persistance_test.ipynb
Normal file
File diff suppressed because one or more lines are too long
1602
research/prepare_aggregatied_data.ipynb
Normal file
File diff suppressed because it is too large
Load Diff
26673
research/rsi_alpaca.ipynb
Normal file
File diff suppressed because one or more lines are too long
1553
research/strat1/strat1_v1_MULTI.ipynb
Normal file
File diff suppressed because one or more lines are too long
1570
research/strat1/strat1_v1_SINGLE.ipynb
Normal file
File diff suppressed because one or more lines are too long
1553
research/strat_LINREG_MULTI/v1_MULTI.ipynb
Normal file
File diff suppressed because one or more lines are too long
40483
research/strat_LINREG_MULTI/v1_SINGLE.ipynb
Normal file
File diff suppressed because one or more lines are too long
1536
research/strat_ORDER_IMBALANCE/v1_MULTI.ipynb
Normal file
File diff suppressed because one or more lines are too long
1569572
research/strat_ORDER_IMBALANCE/v1_SINGLE.ipynb
Normal file
File diff suppressed because one or more lines are too long
1699
research/strat_ORDER_IMBALANCE/v2_SINGLE.ipynb
Normal file
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
Load Diff
932
research/strat_SUPERTREND/SUPERTREND_v1_MULTI.ipynb
Normal file
@ -0,0 +1,932 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from v2realbot.tools.loadbatch import load_batch\n",
|
||||
"from v2realbot.utils.utils import zoneNY\n",
|
||||
"import pandas as pd\n",
|
||||
"import numpy as np\n",
|
||||
"import vectorbtpro as vbt\n",
|
||||
"from itables import init_notebook_mode, show\n",
|
||||
"import datetime\n",
|
||||
"from itertools import product\n",
|
||||
"from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR\n",
|
||||
"\n",
|
||||
"init_notebook_mode(all_interactive=True)\n",
|
||||
"\n",
|
||||
"vbt.settings.set_theme(\"dark\")\n",
|
||||
"vbt.settings['plotting']['layout']['width'] = 1280\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"# Set the option to display with pagination\n",
|
||||
"pd.set_option('display.notebook_repr_html', True)\n",
|
||||
"pd.set_option('display.max_rows', 10) # Number of rows per page\n",
|
||||
"\n",
|
||||
"# Define the market open and close times\n",
|
||||
"market_open = datetime.time(9, 30)\n",
|
||||
"market_close = datetime.time(16, 0)\n",
|
||||
"entry_window_opens = 1\n",
|
||||
"entry_window_closes = 370\n",
|
||||
"\n",
|
||||
"forced_exit_start = 380\n",
|
||||
"forced_exit_end = 390\n",
|
||||
"\n",
|
||||
"#LOAD FROM PARQUET\n",
|
||||
"#list all files is dir directory with parquet extension\n",
|
||||
"dir = DATA_DIR + \"/notebooks/\"\n",
|
||||
"import os\n",
|
||||
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
|
||||
"#print('\\n'.join(map(str, files)))\n",
|
||||
"file_name = \"ohlcv_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\"\n",
|
||||
"ohlcv_df = pd.read_parquet(dir+file_name,engine='pyarrow')\n",
|
||||
"basic_data = vbt.Data.from_data(vbt.symbol_dict({\"SPY\": ohlcv_df}), tz_convert=zoneNY)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#parameters (primary y line, secondary y line, close)\n",
|
||||
"def plot_2y_close(priminds, secinds, close):\n",
|
||||
" fig = vbt.make_subplots(rows=1, cols=1, shared_xaxes=True, specs=[[{\"secondary_y\": True}]], vertical_spacing=0.02, subplot_titles=(\"MOM\", \"Price\" ))\n",
|
||||
" close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False), trace_kwargs=dict(line=dict(color=\"blue\")))\n",
|
||||
" for ind in priminds:\n",
|
||||
" ind.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
" for ind in secinds:\n",
|
||||
" ind.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True))\n",
|
||||
" return fig\n",
|
||||
"\n",
|
||||
"# close = basic_data.xloc[\"09:30\":\"10:00\"].close"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PIPELINE - FOR - LOOP\n",
|
||||
"\n",
|
||||
"#indicator parameters\n",
|
||||
"mom_timeperiod = list(range(2, 12))\n",
|
||||
"\n",
|
||||
"#uzavreni okna od 1 do 200\n",
|
||||
"#entry_window_closes = list(range(2, 50, 3))\n",
|
||||
"entry_window_closes = [5, 10, 30, 45]\n",
|
||||
"#entry_window_closes = 30\n",
|
||||
"#threshold entries parameters\n",
|
||||
"#long\n",
|
||||
"mom_th = np.round(np.arange(0.01, 0.5 + 0.02, 0.02),4).tolist()#-0.02\n",
|
||||
"# short\n",
|
||||
"#mom_th = np.round(np.arange(-0.01, -0.3 - 0.02, -0.02),4).tolist()#-0.02\n",
|
||||
"roc_th = np.round(np.arange(-0.2, -0.8 - 0.05, -0.05),4).tolist()#-0.2\n",
|
||||
"#print(mom_th, roc_th)\n",
|
||||
"\n",
|
||||
"#portfolio simulation parameters\n",
|
||||
"sl_stop =np.round(np.arange(0.02/100, 0.7/100, 0.05/100),4).tolist()\n",
|
||||
"tp_stop = np.round(np.arange(0.02/100, 0.7/100, 0.05/100),4).tolist()\n",
|
||||
"\n",
|
||||
"combs = list(product(mom_timeperiod, mom_th, roc_th, sl_stop, tp_stop))\n",
|
||||
"\n",
|
||||
"@vbt.parameterized(merge_func = \"concat\", random_subset = 2000, show_progress=True) \n",
|
||||
"def test_strat(entry_window_closes=60,\n",
|
||||
" mom_timeperiod=2,\n",
|
||||
" mom_th=-0.04,\n",
|
||||
" #roc_th=-0.2,\n",
|
||||
" sl_stop=0.19/100,\n",
|
||||
" tp_stop=0.19/100):\n",
|
||||
" # mom_timeperiod=2\n",
|
||||
" # mom_th=-0.06\n",
|
||||
" # roc_th=-0.2\n",
|
||||
" # sl_stop=0.04/100\n",
|
||||
" # tp_stop=0.04/100\n",
|
||||
"\n",
|
||||
" momshort = vbt.indicator(\"talib:MOM\").run(basic_data.close, timeperiod=mom_timeperiod, short_name = \"slope_short\")\n",
|
||||
" rocp = vbt.indicator(\"talib:ROC\").run(basic_data.close, short_name = \"rocp\")\n",
|
||||
" #rate of change + momentum\n",
|
||||
"\n",
|
||||
" #momshort.plot rocp.real_crossed_below(roc_th) & \n",
|
||||
" #short_signal = momshort.real_crossed_below(mom_th)\n",
|
||||
" long_signal = momshort.real_crossed_above(mom_th)\n",
|
||||
" # print(\"short signal\")\n",
|
||||
" # print(short_signal.value_counts())\n",
|
||||
"\n",
|
||||
" #forced_exit = pd.Series(False, index=close.index)\n",
|
||||
" forced_exit = basic_data.symbol_wrapper.fill(False)\n",
|
||||
" #entry_window_open = pd.Series(False, index=close.index)\n",
|
||||
" entry_window_open= basic_data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
" #print(entry_window_closes, \"entry window closes\")\n",
|
||||
" # Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
" elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
" entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"\n",
|
||||
" #print(entry_window_open.value_counts())\n",
|
||||
"\n",
|
||||
" forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
" #short_entries = (short_signal & entry_window_open)\n",
|
||||
" #short_exits = forced_exit\n",
|
||||
" entries = (long_signal & entry_window_open)\n",
|
||||
" exits = forced_exit\n",
|
||||
" #long_entries.info()\n",
|
||||
" #number of trues and falses in long_entries\n",
|
||||
" #print(short_exits.value_counts())\n",
|
||||
" #print(short_entries.value_counts())\n",
|
||||
"\n",
|
||||
" #fig = plot_2y_close([],[momshort, rocp], close)\n",
|
||||
" #short_signal.vbt.signals.plot_as_entries(close, fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
" #print(sl_stop)\n",
|
||||
" #tsl_th=sl_stop, \n",
|
||||
" #short_entries=short_entries, short_exits=short_exits,\n",
|
||||
" pf = vbt.Portfolio.from_signals(close=basic_data.close, entries=entries, exits=exits, tsl_stop=sl_stop, tp_stop = tp_stop, fees=0.0167/100, freq=\"1s\", price=\"close\") #sl_stop=sl_stop, tp_stop = sl_stop,\n",
|
||||
" \n",
|
||||
" return pf.stats([\n",
|
||||
" 'total_return',\n",
|
||||
" 'max_dd', \n",
|
||||
" 'total_trades', \n",
|
||||
" 'win_rate', \n",
|
||||
" 'expectancy'\n",
|
||||
" ])\n",
|
||||
"\n",
|
||||
"pf_results = test_strat(vbt.Param(entry_window_closes),\n",
|
||||
" vbt.Param(mom_timeperiod),\n",
|
||||
" vbt.Param(mom_th),\n",
|
||||
" #vbt.Param(roc_th)\n",
|
||||
" vbt.Param(sl_stop),\n",
|
||||
" vbt.Param(tp_stop, condition=\"tp_stop > sl_stop\"))\n",
|
||||
"pf_results = pf_results.unstack(level=-1)\n",
|
||||
"pf_results.sort_values(by=[\"Total Return [%]\", \"Max Drawdown [%]\"], ascending=[False, True])\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#pf_results.load(\"10tiscomb.pickle\")\n",
|
||||
"#pf_results.info()\n",
|
||||
"\n",
|
||||
"vbt.save(pf_results, \"8tiscomb_tsl.pickle\")\n",
|
||||
"\n",
|
||||
"# pf_results = vbt.load(\"8tiscomb_tsl.pickle\")\n",
|
||||
"# pf_results\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# parallel_coordinates method¶\n",
|
||||
"\n",
|
||||
"# attach_px_methods.<locals>.plot_func(\n",
|
||||
"# *args,\n",
|
||||
"# layout=None,\n",
|
||||
"# **kwargs\n",
|
||||
"# )\n",
|
||||
"\n",
|
||||
"# pf_results.vbt.px.parallel_coordinates() #ocdf\n",
|
||||
"\n",
|
||||
"res = pf_results.reset_index()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf_results"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import pandas as pd\n",
|
||||
"from sklearn.decomposition import PCA\n",
|
||||
"from sklearn.preprocessing import StandardScaler\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"\n",
|
||||
"# Assuming pf_results is your DataFrame\n",
|
||||
"# Convert columns to numeric, assuming NaNs where conversion fails\n",
|
||||
"metrics = ['Total Return [%]', 'Max Drawdown [%]', 'Total Trades']\n",
|
||||
"for metric in metrics:\n",
|
||||
" pf_results[metric] = pd.to_numeric(pf_results[metric], errors='coerce')\n",
|
||||
"\n",
|
||||
"# Handle missing values, for example filling with the median\n",
|
||||
"pf_results['Max Drawdown [%]'].fillna(pf_results['Max Drawdown [%]'].median(), inplace=True)\n",
|
||||
"\n",
|
||||
"# Extract the metrics into a new DataFrame\n",
|
||||
"data_for_pca = pf_results[metrics]\n",
|
||||
"\n",
|
||||
"# Standardize the data before applying PCA\n",
|
||||
"scaler = StandardScaler()\n",
|
||||
"data_scaled = scaler.fit_transform(data_for_pca)\n",
|
||||
"\n",
|
||||
"# Apply PCA\n",
|
||||
"pca = PCA(n_components=2) # Adjust components as needed\n",
|
||||
"principal_components = pca.fit_transform(data_scaled)\n",
|
||||
"\n",
|
||||
"# Create a DataFrame with the principal components\n",
|
||||
"pca_results = pd.DataFrame(data=principal_components, columns=['PC1', 'PC2'])\n",
|
||||
"\n",
|
||||
"# Visualize the results\n",
|
||||
"plt.figure(figsize=(8,6))\n",
|
||||
"plt.scatter(pca_results['PC1'], pca_results['PC2'], alpha=0.5)\n",
|
||||
"plt.xlabel('Principal Component 1')\n",
|
||||
"plt.ylabel('Principal Component 2')\n",
|
||||
"plt.title('PCA of Strategy Optimization Results')\n",
|
||||
"plt.grid(True)\n",
|
||||
"plt.savefig(\"ddd.png\")\n"
|
||||
]
|
||||
},
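As a quick sanity check on the PCA above, the explained variance and the loadings of the two components can be printed; this assumes the `pca` object and the `metrics` list from the previous cell are still in scope.

```python
import numpy as np
import pandas as pd

# How much of the variance in the three metrics the two components capture
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# Loadings: contribution of each original metric to PC1 and PC2
loadings = pd.DataFrame(pca.components_.T, index=metrics, columns=['PC1', 'PC2'])
print(loadings)
```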
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check if there is any unnamed level and rename it\n",
|
||||
"if None in df.index.names:\n",
|
||||
" # Generate new names list replacing None with 'stat'\n",
|
||||
" new_names = ['stat' if name is None else name for name in df.index.names]\n",
|
||||
" df.index.set_names(new_names, inplace=True)\n",
|
||||
"\n",
|
||||
"rs= df\n",
|
||||
"\n",
|
||||
"rs.info()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# # Now, 'stat' is the name of the previously unnamed level\n",
|
||||
"\n",
|
||||
"# # Filter for 'Total Return' assuming it is a correct identifier in the 'stat' level\n",
|
||||
"# total_return_series = df.xs('Total Return [%]', level='stat')\n",
|
||||
"\n",
|
||||
"# # Sort the Series to get the largest 'Total Return' values\n",
|
||||
"# sorted_series = total_return_series.sort_values(ascending=False)\n",
|
||||
"\n",
|
||||
"# # Print the sorted filtered data\n",
|
||||
"# sorted_series.head(20)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sorted_series.vbt.save()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#df.info()\n",
|
||||
"total_return_series = df.xs('Total Return [%]')\n",
|
||||
"sorted_series = total_return_series.sort_values(ascending=False)\n",
|
||||
"\n",
|
||||
"# Display the top N entries, e.g., top 5\n",
|
||||
"sorted_series.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"comb_stats_df.nlargest(10, 'Total Return [%]')\n",
|
||||
"#stats_df.info()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"8\t-0.06\t-0.2\t0.0028\t0.0048\t4.156254\n",
|
||||
"4 -0.02 -0.25 0.0028 0.0048 0.84433\n",
|
||||
"3 -0.02 -0.25 0.0033 0.0023 Total Return [%] 0.846753\n",
|
||||
"#2\t-0.04\t-0.2\t0.0019\t0.0019\n",
|
||||
"# 2\t-0.04\t-0.2\t0.0019\t0.0019\t0.556919\t91\t60.43956\t0.00612\n",
|
||||
"# 2\t-0.04\t-0.25\t0.0019\t0.0019\t0.556919\t91\t60.43956\t0.00612\n",
|
||||
"# 2\t-0.04\t-0.3\t0.0019\t0.0019\t0.556919\t91\t60.43956\t0.00612\n",
|
||||
"# 2\t-0.04\t-0.35\t0.0019\t0.0019\t0.556919\t91\t60.43956\t0.00612\n",
|
||||
"# 2\t-0.04\t-0.4\t0.0019\t0.0019\t0.556919\t91\t60.43956\t0.00612\n",
|
||||
"# 2\t-0.04\t-0.2\t0.0019\t0.0017\t0.451338\t93\t63.44086\t0.004853\n",
|
||||
"# 2\t-0.04\t-0.25\t0.0019\t0.0017\t0.451338\t93\t63.44086\t0.004853\n",
|
||||
"# 2\t-0.04\t-0.3\t0.0019\t0.0017\t0.451338\t93\t63.44086\t0.004853\n",
|
||||
"# 2\t-0.04\t-0.35\t0.0019\t0.0017\t0.451338\t93\t63.44086\t0.004853\n",
|
||||
"# 2\t-0.04\t-0.4\t0.0019\t0.0017\t0.451338\t93\t63.44086\t0.004853"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"basic_data.symbols"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
">>> def apply_func(ts, entries, exits, fastw, sloww, minp=None):\n",
|
||||
"... fast_ma = vbt.nb.rolling_mean_nb(ts, fastw, minp=minp)\n",
|
||||
"... slow_ma = vbt.nb.rolling_mean_nb(ts, sloww, minp=minp)\n",
|
||||
"... entries[:] = vbt.nb.crossed_above_nb(fast_ma, slow_ma) \n",
|
||||
"... exits[:] = vbt.nb.crossed_above_nb(slow_ma, fast_ma)\n",
|
||||
"... return (fast_ma, slow_ma) \n",
|
||||
"\n",
|
||||
">>> CrossSig = vbt.IF(\n",
|
||||
"... class_name=\"CrossSig\",\n",
|
||||
"... input_names=['ts'],\n",
|
||||
"... in_output_names=['entries', 'exits'],\n",
|
||||
"... param_names=['fastw', 'sloww'],\n",
|
||||
"... output_names=['fast_ma', 'slow_ma']\n",
|
||||
"... ).with_apply_func(\n",
|
||||
"... apply_func,\n",
|
||||
"... in_output_settings=dict(\n",
|
||||
"... entries=dict(dtype=np.bool_), #initialize output with bool\n",
|
||||
"... exits=dict(dtype=np.bool_)\n",
|
||||
"... )\n",
|
||||
"... )\n",
|
||||
">>> cross_sig = CrossSig.run(ts2, 2, 4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PIPELINE - parameters in one go\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"#TOTO prepsat do FOR-LOOPu\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"#indicator parameters\n",
|
||||
"mom_timeperiod = list(range(2, 6))\n",
|
||||
"\n",
|
||||
"#threshold entries parameters\n",
|
||||
"mom_th = np.round(np.arange(-0.02, -0.1 - 0.02, -0.02),4).tolist()#-0.02\n",
|
||||
"roc_th = np.round(np.arange(-0.2, -0.4 - 0.05, -0.05),4).tolist()#-0.2\n",
|
||||
"#print(mom_th, roc_th)\n",
|
||||
"#jejich product\n",
|
||||
"# mom_th_prod, roc_th_prod = zip(*product(mom_th, roc_th))\n",
|
||||
"\n",
|
||||
"# #convert threshold to vbt param\n",
|
||||
"# mom_th_index = vbt.Param(mom_th_prod, name='mom_th_th') \n",
|
||||
"# roc_th_index = vbt.Param(roc_th_prod, name='roc_th_th')\n",
|
||||
"\n",
|
||||
"mom_th = vbt.Param(mom_th, name='mom_th')\n",
|
||||
"roc_th = vbt.Param(roc_th, name='roc_th')\n",
|
||||
"\n",
|
||||
"#portfolio simulation parameters\n",
|
||||
"sl_stop = np.arange(0.03/100, 0.2/100, 0.02/100).tolist()\n",
|
||||
"# Using the round function\n",
|
||||
"sl_stop = [round(val, 4) for val in sl_stop]\n",
|
||||
"tp_stop = np.arange(0.03/100, 0.2/100, 0.02/100).tolist()\n",
|
||||
"# Using the round function\n",
|
||||
"tp_stop = [round(val, 4) for val in tp_stop]\n",
|
||||
"sl_stop = vbt.Param(sl_stop) #np.nan mean s no stoploss\n",
|
||||
"tp_stop = vbt.Param(tp_stop) #np.nan mean s no stoploss\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"#def test_mom(window=14, mom_th=0.2, roc_th=0.2, sl_stop=0.03/100, tp_stop=0.03/100):\n",
|
||||
"#close = basic_data.xloc[\"09:30\":\"10:00\"].close\n",
|
||||
"momshort = vbt.indicator(\"talib:MOM\").run(basic_data.get(\"Close\"), timeperiod=mom_timeperiod, short_name = \"slope_short\")\n",
|
||||
"\n",
|
||||
"#ht_trendline = vbt.indicator(\"talib:HT_TRENDLINE\").run(close, short_name = \"httrendline\")\n",
|
||||
"rocp = vbt.indicator(\"talib:ROC\").run(basic_data.get(\"Close\"), short_name = \"rocp\")\n",
|
||||
"#rate of change + momentum\n",
|
||||
"\n",
|
||||
"rocp_signal = rocp.real_crossed_below(mom_th)\n",
|
||||
"mom_signal = momshort.real_crossed_below(roc_th)\n",
|
||||
"\n",
|
||||
"#mom_signal\n",
|
||||
"print(rocp_signal.info())\n",
|
||||
"print(mom_signal.info())\n",
|
||||
"#print(rocp.real)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"short_signal = (mom_signal.vbt & rocp_signal)\n",
|
||||
"\n",
|
||||
"# #short_signal = (rocp.real_crossed_below(roc_th_index) & momshort.real_crossed_below(mom_th_index))\n",
|
||||
"# forced_exit = m1_data.symbol_wrapper.fill(False)\n",
|
||||
"# entry_window_open= m1_data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# # Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"# elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
"# entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"# forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
"# short_entries = (short_signal & entry_window_open)\n",
|
||||
"# short_exits = forced_exit\n",
|
||||
"# #long_entries.info()\n",
|
||||
"# #number of trues and falses in long_entries\n",
|
||||
"# #short_exits.value_counts()\n",
|
||||
"# #short_entries.value_counts()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# pf = vbt.Portfolio.from_signals(close=close, short_entries=short_entries, short_exits=short_exits, sl_stop=sl_stop, tp_stop = tp_stop, fees=0.0167/100, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop,\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# filter dates"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#filter na dny\n",
|
||||
"dates_of_interest = pd.to_datetime(['2024-04-22']).tz_localize('US/Eastern')\n",
|
||||
"filtered_df = df.loc[df.index.normalize().isin(dates_of_interest)]\n",
|
||||
"\n",
|
||||
"df = filtered_df\n",
|
||||
"df.info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# import plotly.io as pio\n",
|
||||
"# pio.renderers.default = 'notebook'\n",
|
||||
"\n",
|
||||
"#naloadujeme do vbt symbol as column\n",
|
||||
"basic_data = vbt.Data.from_data({\"BAC\": df}, tz_convert=zoneNY)\n",
|
||||
"\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"#basic_data.data[\"BAC\"].vbt.ohlcv.plot()\n",
|
||||
"\n",
|
||||
"#basic_data.data[\"BAC\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"m1_data = basic_data[['Open', 'High', 'Low', 'Close', 'Volume']]\n",
|
||||
"\n",
|
||||
"m1_data.data[\"BAC\"]\n",
|
||||
"#m5_data = m1_data.resample(\"5T\")\n",
|
||||
"\n",
|
||||
"#m5_data.data[\"BAC\"].head(10)\n",
|
||||
"\n",
|
||||
"# m15_data = m1_data.resample(\"15T\")\n",
|
||||
"\n",
|
||||
"# m15 = m15_data.data[\"BAC\"]\n",
|
||||
"\n",
|
||||
"# m15.vbt.ohlcv.plot()\n",
|
||||
"\n",
|
||||
"# m1_data.wrapper.index\n",
|
||||
"\n",
|
||||
"# m1_resampler = m1_data.wrapper.get_resampler(\"1T\")\n",
|
||||
"# m1_resampler.index_difference(reverse=True)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# m5_resampler.prettify()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# MOM indicator"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vbt.phelp(vbt.indicator(\"talib:ROCP\").run)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"vyuzití rychleho klesani na sekundove urovni behem open rush\n",
|
||||
"- MOM + ROC during open rush\n",
|
||||
"- short signal\n",
|
||||
"- pipeline kombinace thresholdu pro vstup mom_th, roc_th + hodnota sl_stop a tp_stop (pripadne trailing) - nalezeni optimalni kombinace atributu"
|
||||
]
|
||||
},
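A compact sketch of the short-entry construction described above, reusing only calls that already appear in this notebook (`talib:MOM`/`talib:ROC` via `vbt.indicator`, the symbol-wrapper fill, and the minutes-from-open window). The thresholds -0.02 and -0.2 are illustrative, not the tuned values, and `m1_data`, `market_open` and the window bounds are assumed to be the ones defined elsewhere in this notebook.

```python
close = m1_data.close

mom = vbt.indicator("talib:MOM").run(close, timeperiod=3, short_name="mom")
roc = vbt.indicator("talib:ROC").run(close, short_name="roc")

# raw short signal: momentum and rate of change both cross below their thresholds
short_signal = mom.real_crossed_below(-0.02) & roc.real_crossed_below(-0.2)

# restrict entries to the opening window, measured in minutes elapsed since 9:30
entry_window_open = m1_data.symbol_wrapper.fill(False)
elapsed = (entry_window_open.index.hour - market_open.hour) * 60 \
          + (entry_window_open.index.minute - market_open.minute)
entry_window_open[(elapsed >= entry_window_opens) & (elapsed < entry_window_closes)] = True

short_entries = short_signal & entry_window_open
```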
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# fig = plot_2y_close([ht_trendline],[momshort, rocp], close)\n",
|
||||
"# short_signal.vbt.signals.plot_as_entries(close, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
|
||||
"\n",
|
||||
"#parameters (primary y line, secondary y line, close)\n",
|
||||
"def plot_2y_close(priminds, secinds, close):\n",
|
||||
" fig = vbt.make_subplots(rows=1, cols=1, shared_xaxes=True, specs=[[{\"secondary_y\": True}]], vertical_spacing=0.02, subplot_titles=(\"MOM\", \"Price\" ))\n",
|
||||
" close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False), trace_kwargs=dict(line=dict(color=\"blue\")))\n",
|
||||
" for ind in priminds:\n",
|
||||
" ind.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
" for ind in secinds:\n",
|
||||
" ind.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True))\n",
|
||||
" return fig\n",
|
||||
"\n",
|
||||
"close = m1_data.xloc[\"09:30\":\"10:00\"].close\n",
|
||||
"momshort = vbt.indicator(\"talib:MOM\").run(close, timeperiod=3, short_name = \"slope_short\")\n",
|
||||
"ht_trendline = vbt.indicator(\"talib:HT_TRENDLINE\").run(close, short_name = \"httrendline\")\n",
|
||||
"rocp = vbt.indicator(\"talib:ROC\").run(close, short_name = \"rocp\")\n",
|
||||
"#rate of change + momentum\n",
|
||||
"short_signal = (rocp.real_crossed_below(-0.2) & momshort.real_crossed_below(-0.02))\n",
|
||||
"#indlong = vbt.indicator(\"talib:MOM\").run(close, timeperiod=10, short_name = \"slope_long\")\n",
|
||||
"fig = plot_2y_close([ht_trendline],[momshort, rocp], close)\n",
|
||||
"short_signal.vbt.signals.plot_as_entries(close, fig=fig, add_trace_kwargs=dict(secondary_y=False)) "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"close = m1_data.close\n",
|
||||
"#vbt.phelp(vbt.OLS.run)\n",
|
||||
"\n",
|
||||
"#oer steepmnes of regression line\n",
|
||||
"#talib.LINEARREG_SLOPE(close, timeperiod=timeperiod)\n",
|
||||
"#a také ON BALANCE VOLUME - http://5.161.179.223:8000/static/js/vbt/api/indicators/custom/obv/index.html\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"mom_ind = vbt.indicator(\"talib:MOM\") \n",
|
||||
"#vbt.phelp(mom_ind.run)\n",
|
||||
"\n",
|
||||
"mom = mom_ind.run(close, timeperiod=10)\n",
|
||||
"\n",
|
||||
"plot_2y_close(mom, close)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# defining ENTRY WINDOW and forced EXIT window"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#m1_data.data[\"BAC\"].info()\n",
|
||||
"import datetime\n",
|
||||
"# Define the market open and close times\n",
|
||||
"market_open = datetime.time(9, 30)\n",
|
||||
"market_close = datetime.time(16, 0)\n",
|
||||
"entry_window_opens = 2\n",
|
||||
"entry_window_closes = 30\n",
|
||||
"\n",
|
||||
"forced_exit_start = 380\n",
|
||||
"forced_exit_end = 390\n",
|
||||
"\n",
|
||||
"forced_exit = m1_data.symbol_wrapper.fill(False)\n",
|
||||
"entry_window_open= m1_data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
"# Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
"\n",
|
||||
"#entry_window_open.info()\n",
|
||||
"# forced_exit.tail(100)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"close = m1_data.close\n",
|
||||
"\n",
|
||||
"#rsi = vbt.RSI.run(close, window=14)\n",
|
||||
"\n",
|
||||
"short_entries = (short_signal & entry_window_open)\n",
|
||||
"short_exits = forced_exit\n",
|
||||
"#long_entries.info()\n",
|
||||
"#number of trues and falses in long_entries\n",
|
||||
"#short_exits.value_counts()\n",
|
||||
"short_entries.value_counts()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def plot_rsi(close, entries, exits):\n",
|
||||
" fig = vbt.make_subplots(rows=1, cols=1, shared_xaxes=True, specs=[[{\"secondary_y\": True}]], vertical_spacing=0.02, subplot_titles=(\"RSI\", \"Price\" ))\n",
|
||||
" close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True))\n",
|
||||
" #rsi.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
" entries.vbt.signals.plot_as_entries(close, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
|
||||
" exits.vbt.signals.plot_as_exits(close, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
|
||||
" return fig\n",
|
||||
"\n",
|
||||
"plot_rsi(close, short_entries, short_exits)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vbt.phelp(vbt.Portfolio.from_signals)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sl_stop = np.arange(0.03/100, 0.2/100, 0.02/100).tolist()\n",
|
||||
"# Using the round function\n",
|
||||
"sl_stop = [round(val, 4) for val in sl_stop]\n",
|
||||
"print(sl_stop)\n",
|
||||
"sl_stop = vbt.Param(sl_stop) #np.nan mean s no stoploss\n",
|
||||
"\n",
|
||||
"pf = vbt.Portfolio.from_signals(close=close, short_entries=short_entries, short_exits=short_exits, sl_stop=0.03/100, tp_stop = 0.03/100, fees=0.0167/100, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop,\n",
|
||||
"\n",
|
||||
"#pf.stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#list of orders\n",
|
||||
"#pf.orders.records_readable\n",
|
||||
"#pf.orders.plots()\n",
|
||||
"#pf.stats()\n",
|
||||
"pf.stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[(0.0015,0.0013)].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[0.03].plot_trade_signals()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# pristup k pf jako multi index"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#pf[0.03].plot()\n",
|
||||
"#pf.order_records\n",
|
||||
"pf[(0.03)].stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#zgrupovane statistiky\n",
|
||||
"stats_df = pf.stats([\n",
|
||||
" 'total_return',\n",
|
||||
" 'total_trades',\n",
|
||||
" 'win_rate',\n",
|
||||
" 'expectancy'\n",
|
||||
"], agg_func=None)\n",
|
||||
"stats_df\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"stats_df.nlargest(10, 'Total Return [%]')\n",
|
||||
"#stats_df.info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[(0.0011,0.0013000000000000002)].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from pandas.tseries.offsets import DateOffset\n",
|
||||
"\n",
|
||||
"temp_data = basic_data['2024-4-22']\n",
|
||||
"temp_data\n",
|
||||
"res1m = temp_data[[\"Open\", \"High\", \"Low\", \"Close\", \"Volume\"]]\n",
|
||||
"\n",
|
||||
"# Define a custom date offset that starts at 9:30 AM and spans 4 hours\n",
|
||||
"custom_offset = DateOffset(hours=4, minutes=30)\n",
|
||||
"\n",
|
||||
"# res1m = res1m.get().resample(\"4H\").agg({ \n",
|
||||
"# \"Open\": \"first\",\n",
|
||||
"# \"High\": \"max\",\n",
|
||||
"# \"Low\": \"min\",\n",
|
||||
"# \"Close\": \"last\",\n",
|
||||
"# \"Volume\": \"sum\"\n",
|
||||
"# })\n",
|
||||
"\n",
|
||||
"res4h = res1m.resample(\"1h\", resample_kwargs=dict(origin=\"start\"))\n",
|
||||
"\n",
|
||||
"res4h.data\n",
|
||||
"\n",
|
||||
"res15m = res1m.resample(\"15T\", resample_kwargs=dict(origin=\"start\"))\n",
|
||||
"\n",
|
||||
"res15m.data[\"BAC\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"@vbt.njit\n",
|
||||
"def long_entry_place_func_nb(c, low, close, time_in_ns, rsi14, window_open, window_close):\n",
|
||||
" market_open_minutes = 570 # 9 hours * 60 minutes + 30 minutes\n",
|
||||
"\n",
|
||||
" for out_i in range(len(c.out)):\n",
|
||||
" i = c.from_i + out_i\n",
|
||||
"\n",
|
||||
" current_minutes = vbt.dt_nb.hour_nb(time_in_ns[i]) * 60 + vbt.dt_nb.minute_nb(time_in_ns[i])\n",
|
||||
" #print(\"current_minutes\", current_minutes)\n",
|
||||
" # Calculate elapsed minutes since market open at 9:30 AM\n",
|
||||
" elapsed_from_open = current_minutes - market_open_minutes\n",
|
||||
" elapsed_from_open = elapsed_from_open if elapsed_from_open >= 0 else 0\n",
|
||||
" #print( \"elapsed_from_open\", elapsed_from_open)\n",
|
||||
"\n",
|
||||
" #elapsed_from_open = elapsed_minutes_from_open_nb(time_in_ns) \n",
|
||||
" in_window = elapsed_from_open > window_open and elapsed_from_open < window_close\n",
|
||||
" #print(\"in_window\", in_window)\n",
|
||||
" # if in_window:\n",
|
||||
" # print(\"in window\")\n",
|
||||
"\n",
|
||||
" if in_window and rsi14[i] > 60: # and low[i, c.col] <= hit_price: # and hour == 9: # (4)!\n",
|
||||
" return out_i\n",
|
||||
" return -1\n",
|
||||
"\n",
|
||||
"@vbt.njit\n",
|
||||
"def long_exit_place_func_nb(c, high, close, time_index, tp, sl): # (5)!\n",
|
||||
" entry_i = c.from_i - c.wait\n",
|
||||
" entry_price = close[entry_i, c.col]\n",
|
||||
" hit_price = entry_price * (1 + tp)\n",
|
||||
" stop_price = entry_price * (1 - sl)\n",
|
||||
" for out_i in range(len(c.out)):\n",
|
||||
" i = c.from_i + out_i\n",
|
||||
" last_bar_of_day = vbt.dt_nb.day_changed_nb(time_index[i], time_index[i + 1])\n",
|
||||
"\n",
|
||||
" #print(next_day)\n",
|
||||
" if last_bar_of_day: #pokud je dalsi next day, tak zavirame posledni\n",
|
||||
" print(\"ted\",out_i)\n",
|
||||
" return out_i\n",
|
||||
" if close[i, c.col] >= hit_price or close[i, c.col] <= stop_price :\n",
|
||||
" return out_i\n",
|
||||
" return -1\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))\n",
|
||||
"\n",
|
||||
"df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"df.sum()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.11"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
782
research/strat_SUPERTREND/SUPERTREND_v1_SINGLE.ipynb
Normal file
782
research/strat_SUPERTREND/SUPERTREND_v1_SINGLE.ipynb
Normal file
@ -0,0 +1,782 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# SUPERTREND\n",
|
||||
"\n",
|
||||
"* kombinace supertrendu na vice urovnich"
|
||||
]
|
||||
},
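One way to read "combining supertrend on multiple timeframes" in this setup is to compute a trend filter on a higher timeframe, realign it onto the base index, and require agreement. The sketch below reuses the resampler/realign pattern from this notebook and uses a simple SMA cross as a stand-in for the supertrend direction; `s1data`/`t1data` are the 1-second and 1-minute datasets built further down, so this is a hedged illustration rather than the notebook's actual pipeline.

```python
# higher-timeframe trend proxy on 1-minute bars (stand-in for supertrend direction)
fast = vbt.indicator("talib:SMA").run(t1data.close, timeperiod=20).real
slow = vbt.indicator("talib:SMA").run(t1data.close, timeperiod=60).real
t1_trend_up = fast > slow

# realign the 1-minute trend onto the 1-second index (same pattern as resampler_s below)
resampler = vbt.Resampler(t1data.index, s1data.index, source_freq="1T", target_freq="1s")
t1_trend_up_1s = t1_trend_up.astype(float).vbt.realign_closing(resampler) > 0.5

# base-timeframe condition must agree with the higher-timeframe trend
long_ok = (s1data.close > s1data.vwap) & t1_trend_up_1s
```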
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from dotenv import load_dotenv\n",
|
||||
"\n",
|
||||
"#as V2realbot is client , load env variables here\n",
|
||||
"env_file = \"/Users/davidbrazda/Documents/Development/python/.env\"\n",
|
||||
"# Load the .env file\n",
|
||||
"load_dotenv(env_file)\n",
|
||||
"\n",
|
||||
"from v2realbot.utils.utils import zoneNY\n",
|
||||
"import pandas as pd\n",
|
||||
"import numpy as np\n",
|
||||
"import vectorbtpro as vbt\n",
|
||||
"# from itables import init_notebook_mode, show\n",
|
||||
"import datetime\n",
|
||||
"from itertools import product\n",
|
||||
"from v2realbot.config import DATA_DIR\n",
|
||||
"from lightweight_charts import JupyterChart, chart, Panel, PlotAccessor\n",
|
||||
"from IPython.display import display\n",
|
||||
"\n",
|
||||
"# init_notebook_mode(all_interactive=True)\n",
|
||||
"\n",
|
||||
"vbt.settings.set_theme(\"dark\")\n",
|
||||
"vbt.settings['plotting']['layout']['width'] = 1280\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"# Set the option to display with pagination\n",
|
||||
"pd.set_option('display.notebook_repr_html', True)\n",
|
||||
"pd.set_option('display.max_rows', 10) # Number of rows per page"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"trades_df-BAC-2024-01-01T09_30_00-2024-05-14T16_00_00-CO4B7VPWUZF-100.parquet\n",
|
||||
"trades_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
|
||||
"trades_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n",
|
||||
"trades_df-BAC-2023-01-01T09_30_00-2024-05-25T16_00_00-47BCFOPUVWZ-100.parquet\n",
|
||||
"ohlcv_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
|
||||
"trades_df-BAC-2024-05-15T09_30_00-2024-05-25T16_00_00-47BCFOPUVWZ-100.parquet\n",
|
||||
"ohlcv_df-BAC-2024-01-01T09_30_00-2024-05-25T16_00_00-47BCFOPUVWZ-100.parquet\n",
|
||||
"ohlcv_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n",
|
||||
"ohlcv_df-BAC-2024-01-01T09_30_00-2024-05-14T16_00_00-CO4B7VPWUZF-100.parquet\n",
|
||||
"ohlcv_df-BAC-2023-01-01T09_30_00-2024-05-25T16_00_00-47BCFOPUVWZ-100.parquet\n",
|
||||
"ohlcv_df-BAC-2023-01-01T09_30_00-2024-05-25T15_30_00-47BCFOPUVWZ-100.parquet\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"351"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Define the market open and close times\n",
|
||||
"market_open = datetime.time(9, 30)\n",
|
||||
"market_close = datetime.time(16, 0)\n",
|
||||
"entry_window_opens = 1\n",
|
||||
"entry_window_closes = 370\n",
|
||||
"forced_exit_start = 380\n",
|
||||
"forced_exit_end = 390\n",
|
||||
"\n",
|
||||
"#LOAD FROM PARQUET\n",
|
||||
"#list all files is dir directory with parquet extension\n",
|
||||
"dir = DATA_DIR + \"/notebooks/\"\n",
|
||||
"import os\n",
|
||||
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
|
||||
"print('\\n'.join(map(str, files)))\n",
|
||||
"file_name = \"ohlcv_df-BAC-2023-01-01T09_30_00-2024-05-25T15_30_00-47BCFOPUVWZ-100.parquet\"\n",
|
||||
"ohlcv_df = pd.read_parquet(dir+file_name,engine='pyarrow')\n",
|
||||
"#filter ohlcv_df to certain date range (assuming datetime index)\n",
|
||||
"#ohlcv_df = ohlcv_df.loc[\"2024-02-12 9:30\":\"2024-02-14 16:00\"]\n",
|
||||
"\n",
|
||||
"#add vwap column to ohlcv_df\n",
|
||||
"#ohlcv_df[\"hlcc4\"] = (ohlcv_df[\"close\"] + ohlcv_df[\"high\"] + ohlcv_df[\"low\"] + ohlcv_df[\"close\"]) / 4\n",
|
||||
"\n",
|
||||
"basic_data = vbt.Data.from_data(vbt.symbol_dict({\"BAC\": ohlcv_df}), tz_convert=zoneNY)\n",
|
||||
"ohlcv_df= None\n",
|
||||
"basic_data.wrapper.index.normalize().nunique()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"<class 'pandas.core.frame.DataFrame'>\n",
|
||||
"DatetimeIndex: 4549772 entries, 2023-01-03 09:30:01-05:00 to 2024-05-24 15:59:59-04:00\n",
|
||||
"Data columns (total 10 columns):\n",
|
||||
" # Column Dtype \n",
|
||||
"--- ------ ----- \n",
|
||||
" 0 open float64 \n",
|
||||
" 1 high float64 \n",
|
||||
" 2 low float64 \n",
|
||||
" 3 close float64 \n",
|
||||
" 4 volume float64 \n",
|
||||
" 5 trades float64 \n",
|
||||
" 6 updated datetime64[ns, US/Eastern]\n",
|
||||
" 7 vwap float64 \n",
|
||||
" 8 buyvolume float64 \n",
|
||||
" 9 sellvolume float64 \n",
|
||||
"dtypes: datetime64[ns, US/Eastern](1), float64(9)\n",
|
||||
"memory usage: 381.8 MB\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"basic_data.data[\"BAC\"].info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Add resample function to custom columns"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from vectorbtpro.utils.config import merge_dicts, Config, HybridConfig\n",
|
||||
"from vectorbtpro import _typing as tp\n",
|
||||
"from vectorbtpro.generic import nb as generic_nb\n",
|
||||
"\n",
|
||||
"_feature_config: tp.ClassVar[Config] = HybridConfig(\n",
|
||||
" {\n",
|
||||
" \"buyvolume\": dict(\n",
|
||||
" resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(\n",
|
||||
" resampler,\n",
|
||||
" generic_nb.sum_reduce_nb,\n",
|
||||
" )\n",
|
||||
" ),\n",
|
||||
" \"sellvolume\": dict(\n",
|
||||
" resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(\n",
|
||||
" resampler,\n",
|
||||
" generic_nb.sum_reduce_nb,\n",
|
||||
" )\n",
|
||||
" ),\n",
|
||||
" \"trades\": dict(\n",
|
||||
" resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(\n",
|
||||
" resampler,\n",
|
||||
" generic_nb.sum_reduce_nb,\n",
|
||||
" )\n",
|
||||
" )\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"basic_data._feature_config = _feature_config"
|
||||
]
|
||||
},
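With the feature config registered, the custom columns should now aggregate with a sum when resampled. A quick, hedged check (column and symbol names as loaded above; the 5-minute frequency is arbitrary):

```python
# resample a few custom columns to 5-minute bars and eyeball the summed values
t5check = basic_data[['close', 'buyvolume', 'sellvolume', 'trades']].resample("5T")
print(t5check.data["BAC"][['buyvolume', 'sellvolume', 'trades']].head())
```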
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"s1data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','trades','sellvolume']]\n",
|
||||
"\n",
|
||||
"s5data = s1data.resample(\"5s\")\n",
|
||||
"s5data = s5data.transform(lambda df: df.between_time('09:30', '16:00').dropna())\n",
|
||||
"\n",
|
||||
"t1data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','trades','sellvolume']].resample(\"1T\")\n",
|
||||
"t1data = t1data.transform(lambda df: df.between_time('09:30', '16:00').dropna())\n",
|
||||
"# t1data.data[\"BAC\"].info()\n",
|
||||
"\n",
|
||||
"t30data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','trades','sellvolume']].resample(\"30T\")\n",
|
||||
"t30data = t30data.transform(lambda df: df.between_time('09:30', '16:00').dropna())\n",
|
||||
"# t30data.data[\"BAC\"].info()\n",
|
||||
"\n",
|
||||
"s1close = s1data.close\n",
|
||||
"t1close = t1data.close\n",
|
||||
"t30close = t30data.close\n",
|
||||
"t30volume = t30data.volume\n",
|
||||
"\n",
|
||||
"#resample on specific index \n",
|
||||
"resampler = vbt.Resampler(t30data.index, s1data.index, source_freq=\"30T\", target_freq=\"1s\")\n",
|
||||
"t30close_realigned = t30close.vbt.realign_closing(resampler)\n",
|
||||
"\n",
|
||||
"#resample 1min to s\n",
|
||||
"resampler_s = vbt.Resampler(t1data.index, s1data.index, source_freq=\"1T\", target_freq=\"1s\")\n",
|
||||
"t1close_realigned = t1close.vbt.realign_closing(resampler_s)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"datetime64[ns, US/Eastern]\n",
|
||||
"datetime64[ns, US/Eastern]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(t30data.index.dtype)\n",
|
||||
"print(s1data.index.dtype)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"<class 'pandas.core.frame.DataFrame'>\n",
|
||||
"DatetimeIndex: 4551 entries, 2023-01-03 09:30:00-05:00 to 2024-05-24 15:30:00-04:00\n",
|
||||
"Data columns (total 9 columns):\n",
|
||||
" # Column Non-Null Count Dtype \n",
|
||||
"--- ------ -------------- ----- \n",
|
||||
" 0 open 4551 non-null float64\n",
|
||||
" 1 high 4551 non-null float64\n",
|
||||
" 2 low 4551 non-null float64\n",
|
||||
" 3 close 4551 non-null float64\n",
|
||||
" 4 volume 4551 non-null float64\n",
|
||||
" 5 vwap 4551 non-null float64\n",
|
||||
" 6 buyvolume 4551 non-null float64\n",
|
||||
" 7 trades 4551 non-null float64\n",
|
||||
" 8 sellvolume 4551 non-null float64\n",
|
||||
"dtypes: float64(9)\n",
|
||||
"memory usage: 355.5 KB\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"t30data.data[\"BAC\"].info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vbt.IF.list_indicators(\"*vwap\")\n",
|
||||
"vbt.phelp(vbt.VWAP.run)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# VWAP"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"\n",
|
||||
"t1vwap_h = vbt.VWAP.run(t1data.high, t1data.low, t1data.close, t1data.volume, anchor=\"H\")\n",
|
||||
"t1vwap_d = vbt.VWAP.run(t1data.high, t1data.low, t1data.close, t1data.volume, anchor=\"D\")\n",
|
||||
"t1vwap_t = vbt.VWAP.run(t1data.high, t1data.low, t1data.close, t1data.volume, anchor=\"T\")\n",
|
||||
"\n",
|
||||
"t1vwap_h_real = t1vwap_h.vwap.vbt.realign_closing(resampler_s)\n",
|
||||
"t1vwap_d_real = t1vwap_d.vwap.vbt.realign_closing(resampler_s)\n",
|
||||
"t1vwap_t_real = t1vwap_t.vwap.vbt.realign_closing(resampler_s)\n",
|
||||
"\n",
|
||||
"#t1vwap_5t.xloc[\"2024-01-3 09:30:00\":\"2024-01-03 16:00:00\"].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#m30data.close.lw.plot()\n",
|
||||
"#quick few liner\n",
|
||||
"pane1 = Panel(\n",
|
||||
" histogram=[\n",
|
||||
" #(s1data.volume, \"volume\",None, 0.8),\n",
|
||||
" #(m30volume, \"m30volume\",None, 1)\n",
|
||||
" ], # [(series, name, \"rgba(53, 94, 59, 0.6)\", opacity)]\n",
|
||||
" right=[\n",
|
||||
" (s1data.close, \"1s close\"),\n",
|
||||
" (t1data.close, \"1min close\"),\n",
|
||||
" (t1vwap_t, \"1mvwap_t\"),\n",
|
||||
" (t1vwap_h, \"1mvwap_h\"),\n",
|
||||
" (t1vwap_d, \"1mvwap_d\"),\n",
|
||||
" (t1vwap_t_real, \"1mvwap_t_real\"),\n",
|
||||
" (t1vwap_h_real, \"1mvwap_h_real\"),\n",
|
||||
" (t1vwap_d_real, \"1mvwap_d_real\")\n",
|
||||
" # (t1close_realigned, \"1min close realigned\"),\n",
|
||||
" # (m30data.close, \"30min-close\"),\n",
|
||||
" # (m30close_realigned, \"30min close realigned\"),\n",
|
||||
" ],\n",
|
||||
")\n",
|
||||
"ch = chart([pane1], size=\"s\", xloc=slice(\"2024-05-1 09:30:00\",\"2024-05-25 16:00:00\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# SUPERTREND"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"supertrend_s1 = vbt.SUPERTREND.run(s1data.high, s1data.low, s1data.close, period=5, multiplier=3)\n",
|
||||
"direction_series_s1 = supertrend_s1.direction\n",
|
||||
"supertrend_t1 = vbt.SUPERTREND.run(t1data.high, t1data.low, t1data.close, period=14, multiplier=3)\n",
|
||||
"direction_series_t1 = supertrend_t1.direction\n",
|
||||
"supertrend_t30 = vbt.SUPERTREND.run(t30data.high, t30data.low, t30data.close, period=14, multiplier=3)\n",
|
||||
"direction_series_t30 = supertrend_t30.direction\n",
|
||||
"\n",
|
||||
"resampler_1t_sec = vbt.Resampler(direction_series_t1.index, direction_series_s1.index, source_freq=\"1T\", target_freq=\"1s\")\n",
|
||||
"resampler_30t_sec = vbt.Resampler(direction_series_t30.index, direction_series_s1.index, source_freq=\"30T\", target_freq=\"1s\")\n",
|
||||
"direction_series_t1_realigned = direction_series_t1.vbt.realign_closing(resampler_1t_sec)\n",
|
||||
"direction_series_t30_realigned = direction_series_t30.vbt.realign_closing(resampler_30t_sec)\n",
|
||||
"\n",
|
||||
"#supertrend_s1.xloc[\"2024-01-3 09:30:00\":\"2024-01-03 16:00:00\"].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# aligned_ups= pd.Series(False, index=direction_real.index)\n",
|
||||
"# aligned_downs= pd.Series(False, index=direction_real.index)\n",
|
||||
"\n",
|
||||
"# aligned_ups = direction_real == 1 & supertrend.direction == 1\n",
|
||||
"# aligned_ups"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"s5close = s5data.data[\"BAC\"].close\n",
|
||||
"s5open = s5data.data[\"BAC\"].open\n",
|
||||
"s5high = s5data.data[\"BAC\"].high\n",
|
||||
"s5close_prev = s5close.shift(1)\n",
|
||||
"s5open_prev = s5open.shift(1)\n",
|
||||
"s5high_prev = s5high.shift(1)\n",
|
||||
"#gap nahoru od byci svicky a nevraci se zpet na jeji uroven\n",
|
||||
"entry_ups = (s5close_prev > s5open_prev) & (s5open > s5high_prev + 0.010) & (s5close > s5close_prev)\n",
|
||||
"\n",
|
||||
"entry_ups.value_counts()\n",
|
||||
"\n",
|
||||
"#entry_ups.info()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Entry window"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"market_open = datetime.time(9, 30)\n",
|
||||
"market_close = datetime.time(16, 0)\n",
|
||||
"entry_window_opens = 10\n",
|
||||
"entry_window_closes = 370"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"entry_window_open= pd.Series(False, index=entry_ups.index)\n",
|
||||
"# Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"elapsed_min_from_open = (entry_ups.index.hour - market_open.hour) * 60 + (entry_ups.index.minute - market_open.minute)\n",
|
||||
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"#entry_window_open\n",
|
||||
"\n",
|
||||
"entry_ups = entry_ups & entry_window_open\n",
|
||||
"# entry_ups\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"s5vwap_h = vbt.VWAP.run(s5data.high, s5data.low, s5data.close, s5data.volume, anchor=\"H\")\n",
|
||||
"s5vwap_d = vbt.VWAP.run(s5data.high, s5data.low, s5data.close, s5data.volume, anchor=\"D\")\n",
|
||||
"\n",
|
||||
"# s5vwap_h_real = s5vwap_h.vwap.vbt.realign_closing(resampler_s)\n",
|
||||
"# s5vwap_d_real = s5vwap_d.vwap.vbt.realign_closing(resampler_s)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pane1 = Panel(\n",
|
||||
" ohlcv=(s5data.data[\"BAC\"],), #(series, entries, exits, other_markers)\n",
|
||||
" histogram=[], # [(series, name, \"rgba(53, 94, 59, 0.6), opacity\")]\n",
|
||||
" right=[#(bbands,), #[(series, name, entries, exits, other_markers)]\n",
|
||||
" (s5data.data[\"BAC\"].close, \"close\", entry_ups),\n",
|
||||
" (s5data.data[\"BAC\"].open, \"open\"),\n",
|
||||
" (s5vwap_h, \"vwap5s_H\",),\n",
|
||||
" (s5vwap_d, \"vwap5s_D\",)\n",
|
||||
" # (t1data.data[\"BAC\"].vwap, \"vwap\"),\n",
|
||||
" # (t1data.close, \"1min close\"),\n",
|
||||
" # (supertrend_s1.trend,\"STtrend\"),\n",
|
||||
" # (supertrend_s1.long,\"STlong\"),\n",
|
||||
" # (supertrend_s1.short,\"STshort\")\n",
|
||||
" ],\n",
|
||||
" left = [\n",
|
||||
" #(direction_series_s1,\"direction_s1\"),\n",
|
||||
" # (direction_series_t1,\"direction_t1\"),\n",
|
||||
" # (direction_series_t30,\"direction_t30\")\n",
|
||||
" \n",
|
||||
" ],\n",
|
||||
" # right=[(bbands.upperband, \"upperband\",),\n",
|
||||
" # (bbands.lowerband, \"lowerband\",),\n",
|
||||
" # (bbands.middleband, \"middleband\",)\n",
|
||||
" # ], #[(series, name, entries, exits, other_markers)]\n",
|
||||
" middle1=[],\n",
|
||||
" middle2=[],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# pane2 = Panel(\n",
|
||||
"# ohlcv=(t1data.data[\"BAC\"],uptrend_m30, downtrend_m30), #(series, entries, exits, other_markers)\n",
|
||||
"# histogram=[], # [(series, name, \"rgba(53, 94, 59, 0.6), opacity\")]\n",
|
||||
"# left=[#(bbands,), #[(series, name, entries, exits, other_markers)]\n",
|
||||
"# (direction_real,\"direction30min_real\"),\n",
|
||||
"# ],\n",
|
||||
"# # left = [(supertrendm30.direction,\"STdirection30\")],\n",
|
||||
"# # # right=[(bbands.upperband, \"upperband\",),\n",
|
||||
"# # # (bbands.lowerband, \"lowerband\",),\n",
|
||||
"# # # (bbands.middleband, \"middleband\",)\n",
|
||||
"# # # ], #[(series, name, entries, exits, other_markers)]\n",
|
||||
"# middle1=[],\n",
|
||||
"# middle2=[],\n",
|
||||
"# title = \"1m\")\n",
|
||||
"\n",
|
||||
"ch = chart([pane1], sync=True, size=\"s\", xloc=slice(\"2024-02-20 09:30:00\",\"2024-02-22 16:00:00\"), precision=6)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# data = s5data.xloc[\"2024-01-03 09:30:00\":\"2024-03-10 16:00:00\"]\n",
|
||||
"# entry = entry_ups.vbt.xloc[\"2024-01-03 09:30:00\":\"2024-03-10 16:00:00\"].obj\n",
|
||||
"\n",
|
||||
"pf = vbt.Portfolio.from_signals(close=s5data, entries=entry_ups, direction=\"longonly\", sl_stop=0.05/100, tp_stop = 0.05/100, fees=0.0167/100, freq=\"5s\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.xloc[\"2024-01-26 09:30:00\":\"2024-02-28 16:00:00\"].positions.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.xloc[\"2024-01-26 09:30:00\":\"2024-01-28 16:00:00\"].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pd.set_option('display.max_rows', None)\n",
|
||||
"pf.stats()\n",
|
||||
"# pf.xloc[\"monday\"].stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"buyvolume = t1data.data[\"BAC\"].buyvolume\n",
|
||||
"sellvolume = t1data.data[\"BAC\"].sellvolume\n",
|
||||
"totalvolume = buyvolume + sellvolume\n",
|
||||
"\n",
|
||||
"#adjust to minimal value to avoid division by zero\n",
|
||||
"sellvolume_adjusted = sellvolume.replace(0, 1e-10)\n",
|
||||
"oibratio = buyvolume / sellvolume\n",
|
||||
"\n",
|
||||
"#cumulative order flow (net difference)\n",
|
||||
"cof = buyvolume - sellvolume\n",
|
||||
"\n",
|
||||
"# Calculate the order imbalance (net differene) normalize the order imbalance by calculating the difference between buy and sell volumes and then scaling it by the total volume.\n",
|
||||
"order_imbalance = cof / totalvolume\n",
|
||||
"order_imbalance = order_imbalance.fillna(0) #nan nahradime 0\n",
|
||||
"\n",
|
||||
"order_imbalance_allvolume = cof / t1data.data[\"BAC\"].volume\n",
|
||||
"\n",
|
||||
"order_imbalance_sma = vbt.indicator(\"talib:EMA\").run(order_imbalance, timeperiod=5)\n",
|
||||
"short_signals = order_imbalance.vbt < -0.5\n",
|
||||
"#short_entries = oibratio.vbt < 0.01\n",
|
||||
"short_signals.value_counts()\n",
|
||||
"short_signals.name = \"short_entries\"\n",
|
||||
"#.fillna(False)\n",
|
||||
"short_exits = short_signals.shift(-2).fillna(False).astype(bool)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pane1 = Panel(\n",
|
||||
" ohlcv=(t1data.data[\"BAC\"],), #(series, entries, exits, other_markers)\n",
|
||||
" histogram=[(order_imbalance_allvolume, \"oib_allvolume\", \"rgba(53, 94, 59, 0.6)\",0.5),\n",
|
||||
" (t1data.data[\"BAC\"].trades, \"trades\",None,0.4),\n",
|
||||
" ], # [(series, name, \"rgba(53, 94, 59, 0.6)\", opacity)]\n",
|
||||
" # right=[\n",
|
||||
" # (supertrend.trend,\"STtrend\"),\n",
|
||||
" # (supertrend.long,\"STlong\"),\n",
|
||||
" # (supertrend.short,\"STshort\")\n",
|
||||
" # ],\n",
|
||||
" # left = [(supertrend.direction,\"STdirection\")],\n",
|
||||
" # right=[(bbands.upperband, \"upperband\",),\n",
|
||||
" # (bbands.lowerband, \"lowerband\",),\n",
|
||||
" # (bbands.middleband, \"middleband\",)\n",
|
||||
" # ], #[(series, name, entries, exits, other_markers)]\n",
|
||||
" middle1=[],\n",
|
||||
" middle2=[],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"pane2 = Panel(\n",
|
||||
" ohlcv=(basic_data.data[\"BAC\"],), #(series, entries, exits, other_markers)\n",
|
||||
" left=[(basic_data.data[\"BAC\"].trades, \"trades\")],\n",
|
||||
" histogram=[(basic_data.data[\"BAC\"].trades, \"trades_hist\", \"white\", 0.5)], #\"rgba(53, 94, 59, 0.6)\"\n",
|
||||
" # ], # [(series, name, \"rgba(53, 94, 59, 0.6)\")]\n",
|
||||
" # right=[\n",
|
||||
" # (supertrend.trend,\"STtrend\"),\n",
|
||||
" # (supertrend.long,\"STlong\"),\n",
|
||||
" # (supertrend.short,\"STshort\")\n",
|
||||
" # ],\n",
|
||||
" # left = [(supertrend.direction,\"STdirection\")],\n",
|
||||
" # right=[(bbands.upperband, \"upperband\",),\n",
|
||||
" # (bbands.lowerband, \"lowerband\",),\n",
|
||||
" # (bbands.middleband, \"middleband\",)\n",
|
||||
" # ], #[(series, name, entries, exits, other_markers)]\n",
|
||||
" middle1=[],\n",
|
||||
" middle2=[],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"ch = chart([pane1, pane2], size=\"m\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#short_signal = t1slope.real_below(t1_th) & t2slope.real_below(t2_th) & t3slope.real_below(t3_th) & t4slope.real_below(t4_th)\n",
|
||||
"#long_signal = t1slope.real_above(t1_th) & t2slope.real_above(t2_th) & t3slope.real_above(t3_th) & t4slope.real_above(t4_th)\n",
|
||||
"\n",
|
||||
"#test na daily s reversem crossed 0\n",
|
||||
"short_signal = t2slope.vbt < -0.01 & t3slope.vbt < -0.01 #min value of threshold\n",
|
||||
"long_signal = t2slope.vbt > 0.01 & t3slope.vbt > 0.01 #min\n",
|
||||
"\n",
|
||||
"# thirty_up_signal = t3slope.vbt.crossed_above(0.01)\n",
|
||||
"# thirty_down_signal = t3slope.vbt.crossed_below(-0.01)\n",
|
||||
"\n",
|
||||
"fig = plot_2y_close(priminds=[], secinds=[t3slope], close=t1data.close)\n",
|
||||
"#short_signal.vbt.signals.plot_as_entries(basic_data.close, fig=fig)\n",
|
||||
"\n",
|
||||
"short_signal.vbt.signals.plot_as_entries(t1data.close, fig=fig, trace_kwargs=dict(name=\"SHORTS\",\n",
|
||||
" line=dict(color=\"#ffe476\"),\n",
|
||||
" marker=dict(color=\"red\", symbol=\"triangle-down\"),\n",
|
||||
" fill=None,\n",
|
||||
" connectgaps=True,\n",
|
||||
" ))\n",
|
||||
"long_signal.vbt.signals.plot_as_entries(t1data.close, fig=fig, trace_kwargs=dict(name=\"LONGS\",\n",
|
||||
" line=dict(color=\"#ffe476\"),\n",
|
||||
" marker=dict(color=\"limegreen\"),\n",
|
||||
" fill=None,\n",
|
||||
" connectgaps=True,\n",
|
||||
" ))\n",
|
||||
"\n",
|
||||
"# thirty_down_signal.vbt.signals.plot_as_entries(t1data.close, fig=fig, trace_kwargs=dict(name=\"DOWN30\",\n",
|
||||
"# line=dict(color=\"#ffe476\"),\n",
|
||||
"# marker=dict(color=\"yellow\", symbol=\"triangle-down\"),\n",
|
||||
"# fill=None,\n",
|
||||
"# connectgaps=True,\n",
|
||||
"# ))\n",
|
||||
"# thirty_up_signal.vbt.signals.plot_as_entries(t1data.close, fig=fig, trace_kwargs=dict(name=\"UP30\",\n",
|
||||
"# line=dict(color=\"#ffe476\"),\n",
|
||||
"# marker=dict(color=\"grey\"),\n",
|
||||
"# fill=None,\n",
|
||||
"# connectgaps=True,\n",
|
||||
"# ))\n",
|
||||
"\n",
|
||||
"# thirtymin_slope_to_compare.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True), trace_kwargs=dict(name=\"30min slope\",\n",
|
||||
"# line=dict(color=\"yellow\"), \n",
|
||||
"# fill=None,\n",
|
||||
"# connectgaps=True,\n",
|
||||
"# ))\n",
|
||||
"\n",
|
||||
"fig.show()\n",
|
||||
"# print(\"short signal\")\n",
|
||||
"# print(short_signal.value_counts())\n",
|
||||
"\n",
|
||||
"#forced_exit = pd.Series(False, index=close.index)\n",
|
||||
"forced_exit = basic_data.symbol_wrapper.fill(False)\n",
|
||||
"#entry_window_open = pd.Series(False, index=close.index)\n",
|
||||
"entry_window_open= basic_data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
"# Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"\n",
|
||||
"#print(entry_window_open.value_counts())\n",
|
||||
"\n",
|
||||
"forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
"short_entries = (short_signal & entry_window_open)\n",
|
||||
"short_exits = forced_exit\n",
|
||||
"\n",
|
||||
"entries = (long_signal & entry_window_open)\n",
|
||||
"exits = forced_exit\n",
|
||||
"#long_entries.info()\n",
|
||||
"#number of trues and falses in long_entries\n",
|
||||
"# print(short_exits.value_counts())\n",
|
||||
"# print(short_entries.value_counts())\n",
|
||||
"\n",
|
||||
"#fig = plot_2y_close([],[momshort, rocp], close)\n",
|
||||
"#short_signal.vbt.signals.plot_as_entries(close, fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
"#print(sl_stop)\n",
|
||||
"#short_entries=short_entries, short_exits=short_exits,\n",
|
||||
"# pf = vbt.Portfolio.from_signals(close=basic_data, entries=short_entries, exits=exits, tsl_stop=0.005, tp_stop = 0.05, fees=0.0167/100, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop,\n",
|
||||
"\n",
|
||||
"# pf.stats()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"forced_exit = t1data.symbol_wrapper.fill(False)\n",
|
||||
"#entry_window_open = pd.Series(False, index=close.index)\n",
|
||||
"entry_window_open= t1data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
"# Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"\n",
|
||||
"#print(entry_window_open.value_counts())\n",
|
||||
"\n",
|
||||
"forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
"short_entries = (short_signals & entry_window_open)\n",
|
||||
"short_exits = forced_exit\n",
|
||||
"\n",
|
||||
"entries = (long_signals & entry_window_open)\n",
|
||||
"exits = forced_exit\n",
|
||||
"\n",
|
||||
"pf = vbt.Portfolio.from_signals(close=t1data, entries=entries, exits=exits, short_entries=short_entries, short_exits=exits,\n",
|
||||
"td_stop=2, time_delta_format=\"rows\",\n",
|
||||
"tsl_stop=0.005, tp_stop = 0.005, fees=0.0167/100)#, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop,\n",
|
||||
"\n",
|
||||
"pf.stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.get_drawdowns().records_readable"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.orders.records_readable"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.11"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
1536
research/strat_TIME_ENTRIES copy/v1_MULTI.ipynb
Normal file
1536
research/strat_TIME_ENTRIES copy/v1_MULTI.ipynb
Normal file
File diff suppressed because one or more lines are too long
44779
research/strat_TIME_ENTRIES copy/v1_SINGLE.ipynb
Normal file
44779
research/strat_TIME_ENTRIES copy/v1_SINGLE.ipynb
Normal file
File diff suppressed because one or more lines are too long
23637
research/test.ipynb
Normal file
23637
research/test.ipynb
Normal file
File diff suppressed because it is too large
Load Diff
105
research/test1.ipynb
Normal file
105
research/test1.ipynb
Normal file
File diff suppressed because one or more lines are too long
421
research/test1sbars.ipynb
Normal file
421
research/test1sbars.ipynb
Normal file
@ -0,0 +1,421 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from v2realbot.tools.loadbatch import load_batch\n",
|
||||
"from v2realbot.utils.utils import zoneNY\n",
|
||||
"import pandas as pd\n",
|
||||
"import numpy as np\n",
|
||||
"import vectorbtpro as vbt\n",
|
||||
"from itables import init_notebook_mode, show\n",
|
||||
"\n",
|
||||
"init_notebook_mode(all_interactive=True)\n",
|
||||
"\n",
|
||||
"vbt.settings.set_theme(\"dark\")\n",
|
||||
"vbt.settings['plotting']['layout']['width'] = 1280\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"# Set the option to display with pagination\n",
|
||||
"pd.set_option('display.notebook_repr_html', True)\n",
|
||||
"pd.set_option('display.max_rows', 10) # Number of rows per page\n",
|
||||
"\n",
|
||||
"res, df = load_batch(batch_id=\"0fb5043a\", #46 days 1.3 - 6.5.\n",
|
||||
" space_resolution_evenly=False,\n",
|
||||
" indicators_columns=[\"Rsi14\"],\n",
|
||||
" main_session_only=True,\n",
|
||||
" verbose = False)\n",
|
||||
"if res < 0:\n",
|
||||
" print(\"Error\" + str(res) + str(df))\n",
|
||||
"df = df[\"bars\"]\n",
|
||||
"\n",
|
||||
"df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# filter dates"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#filter na dny\n",
|
||||
"# dates_of_interest = pd.to_datetime(['2024-04-22', '2024-04-23']).tz_localize('US/Eastern')\n",
|
||||
"# filtered_df = df.loc[df.index.normalize().isin(dates_of_interest)]\n",
|
||||
"\n",
|
||||
"# df = filtered_df\n",
|
||||
"# df.info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import plotly.io as pio\n",
|
||||
"pio.renderers.default = 'notebook'\n",
|
||||
"\n",
|
||||
"#naloadujeme do vbt symbol as column\n",
|
||||
"basic_data = vbt.Data.from_data({\"BAC\": df}, tz_convert=zoneNY)\n",
|
||||
"start_date = pd.Timestamp('2024-03-12 09:30', tz=zoneNY)\n",
|
||||
"end_date = pd.Timestamp('2024-03-13 16:00', tz=zoneNY)\n",
|
||||
"\n",
|
||||
"#basic_data = basic_data.transform(lambda df: df[df.index.date == start_date.date()])\n",
|
||||
"#basic_data = basic_data.transform(lambda df: df[(df.index >= start_date) & (df.index <= end_date)])\n",
|
||||
"#basic_data.data[\"BAC\"].info()\n",
|
||||
"\n",
|
||||
"# fig = basic_data.plot(plot_volume=False)\n",
|
||||
"# pivot_info = basic_data.run(\"pivotinfo\", up_th=0.003, down_th=0.002)\n",
|
||||
"# #pivot_info.plot()\n",
|
||||
"# pivot_info.plot(fig=fig, conf_value_trace_kwargs=dict(visible=True))\n",
|
||||
"# fig.show()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# rsi14 = basic_data.data[\"BAC\"][\"Rsi14\"].rename(\"Rsi14\")\n",
|
||||
"\n",
|
||||
"# rsi14.vbt.plot().show()\n",
|
||||
"#basic_data.xloc[\"09:30\":\"10:00\"].data[\"BAC\"].vbt.ohlcv.plot().show()\n",
|
||||
"\n",
|
||||
"vbt.settings.plotting.auto_rangebreaks = True\n",
|
||||
"#basic_data.data[\"BAC\"].vbt.ohlcv.plot()\n",
|
||||
"\n",
|
||||
"#basic_data.data[\"BAC\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"m1_data = basic_data[['Open', 'High', 'Low', 'Close', 'Volume']]\n",
|
||||
"\n",
|
||||
"m1_data.data[\"BAC\"]\n",
|
||||
"#m5_data = m1_data.resample(\"5T\")\n",
|
||||
"\n",
|
||||
"#m5_data.data[\"BAC\"].head(10)\n",
|
||||
"\n",
|
||||
"# m15_data = m1_data.resample(\"15T\")\n",
|
||||
"\n",
|
||||
"# m15 = m15_data.data[\"BAC\"]\n",
|
||||
"\n",
|
||||
"# m15.vbt.ohlcv.plot()\n",
|
||||
"\n",
|
||||
"# m1_data.wrapper.index\n",
|
||||
"\n",
|
||||
"# m1_resampler = m1_data.wrapper.get_resampler(\"1T\")\n",
|
||||
"# m1_resampler.index_difference(reverse=True)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# m5_resampler.prettify()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# defining ENTRY WINDOW and forced EXIT window"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#m1_data.data[\"BAC\"].info()\n",
|
||||
"import datetime\n",
|
||||
"# Define the market open and close times\n",
|
||||
"market_open = datetime.time(9, 30)\n",
|
||||
"market_close = datetime.time(16, 0)\n",
|
||||
"entry_window_opens = 1\n",
|
||||
"entry_window_closes = 350\n",
|
||||
"\n",
|
||||
"forced_exit_start = 380\n",
|
||||
"forced_exit_end = 390\n",
|
||||
"\n",
|
||||
"forced_exit = m1_data.symbol_wrapper.fill(False)\n",
|
||||
"entry_window_open= m1_data.symbol_wrapper.fill(False)\n",
|
||||
"\n",
|
||||
"# Calculate the time difference in minutes from market open for each timestamp\n",
|
||||
"elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
|
||||
"\n",
|
||||
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
|
||||
"forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
|
||||
"\n",
|
||||
"#entry_window_open.info()\n",
|
||||
"# forced_exit.tail(100)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"close = m1_data.close\n",
|
||||
"\n",
|
||||
"rsi = vbt.RSI.run(close, window=14)\n",
|
||||
"\n",
|
||||
"long_entries = (rsi.rsi.vbt.crossed_below(20) & entry_window_open)\n",
|
||||
"long_exits = (rsi.rsi.vbt.crossed_above(70) | forced_exit)\n",
|
||||
"#long_entries.info()\n",
|
||||
"#number of trues and falses in long_entries\n",
|
||||
"long_entries.value_counts()\n",
|
||||
"#long_exits.value_counts()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def plot_rsi(rsi, close, entries, exits):\n",
|
||||
" fig = vbt.make_subplots(rows=1, cols=1, shared_xaxes=True, specs=[[{\"secondary_y\": True}]], vertical_spacing=0.02, subplot_titles=(\"RSI\", \"Price\" ))\n",
|
||||
" close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True))\n",
|
||||
" rsi.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
|
||||
" entries.vbt.signals.plot_as_entries(rsi.rsi, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
|
||||
" exits.vbt.signals.plot_as_exits(rsi.rsi, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
|
||||
" return fig\n",
|
||||
"\n",
|
||||
"plot_rsi(rsi, close, long_entries, long_exits)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vbt.phelp(vbt.Portfolio.from_signals)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sl_stop = np.arange(0.03/100, 0.2/100, 0.02/100).tolist()\n",
|
||||
"# Using the round function\n",
|
||||
"sl_stop = [round(val, 4) for val in sl_stop]\n",
|
||||
"print(sl_stop)\n",
|
||||
"sl_stop = vbt.Param(sl_stop) #np.nan mean s no stoploss\n",
|
||||
"\n",
|
||||
"pf = vbt.Portfolio.from_signals(close=close, entries=long_entries, sl_stop=sl_stop, tp_stop = sl_stop, exits=long_exits,fees=0.0167/100, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop, \n",
|
||||
"\n",
|
||||
"#pf.stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf.plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[(0.0015,0.0013)].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[0.03].plot_trade_signals()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# pristup k pf jako multi index"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#pf[0.03].plot()\n",
|
||||
"#pf.order_records\n",
|
||||
"pf[(0.03)].stats()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#zgrupovane statistiky\n",
|
||||
"stats_df = pf.stats([\n",
|
||||
" 'total_return',\n",
|
||||
" 'total_trades',\n",
|
||||
" 'win_rate',\n",
|
||||
" 'expectancy'\n",
|
||||
"], agg_func=None)\n",
|
||||
"stats_df\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"stats_df.nlargest(50, 'Total Return [%]')\n",
|
||||
"#stats_df.info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pf[(0.0011,0.0013)].plot()\n",
|
||||
"\n",
|
||||
"#pf[(0.0011,0.0013000000000000002)].plot()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from pandas.tseries.offsets import DateOffset\n",
|
||||
"\n",
|
||||
"temp_data = basic_data['2024-4-22']\n",
|
||||
"temp_data\n",
|
||||
"res1m = temp_data[[\"Open\", \"High\", \"Low\", \"Close\", \"Volume\"]]\n",
|
||||
"\n",
|
||||
"# Define a custom date offset that starts at 9:30 AM and spans 4 hours\n",
|
||||
"custom_offset = DateOffset(hours=4, minutes=30)\n",
|
||||
"\n",
|
||||
"# res1m = res1m.get().resample(\"4H\").agg({ \n",
|
||||
"# \"Open\": \"first\",\n",
|
||||
"# \"High\": \"max\",\n",
|
||||
"# \"Low\": \"min\",\n",
|
||||
"# \"Close\": \"last\",\n",
|
||||
"# \"Volume\": \"sum\"\n",
|
||||
"# })\n",
|
||||
"\n",
|
||||
"res4h = res1m.resample(\"1h\", resample_kwargs=dict(origin=\"start\"))\n",
|
||||
"\n",
|
||||
"res4h.data\n",
|
||||
"\n",
|
||||
"res15m = res1m.resample(\"15T\", resample_kwargs=dict(origin=\"start\"))\n",
|
||||
"\n",
|
||||
"res15m.data[\"BAC\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"@vbt.njit\n",
|
||||
"def long_entry_place_func_nb(c, low, close, time_in_ns, rsi14, window_open, window_close):\n",
|
||||
" market_open_minutes = 570 # 9 hours * 60 minutes + 30 minutes\n",
|
||||
"\n",
|
||||
" for out_i in range(len(c.out)):\n",
|
||||
" i = c.from_i + out_i\n",
|
||||
"\n",
|
||||
" current_minutes = vbt.dt_nb.hour_nb(time_in_ns[i]) * 60 + vbt.dt_nb.minute_nb(time_in_ns[i])\n",
|
||||
" #print(\"current_minutes\", current_minutes)\n",
|
||||
" # Calculate elapsed minutes since market open at 9:30 AM\n",
|
||||
" elapsed_from_open = current_minutes - market_open_minutes\n",
|
||||
" elapsed_from_open = elapsed_from_open if elapsed_from_open >= 0 else 0\n",
|
||||
" #print( \"elapsed_from_open\", elapsed_from_open)\n",
|
||||
"\n",
|
||||
" #elapsed_from_open = elapsed_minutes_from_open_nb(time_in_ns) \n",
|
||||
" in_window = elapsed_from_open > window_open and elapsed_from_open < window_close\n",
|
||||
" #print(\"in_window\", in_window)\n",
|
||||
" # if in_window:\n",
|
||||
" # print(\"in window\")\n",
|
||||
"\n",
|
||||
" if in_window and rsi14[i] > 60: # and low[i, c.col] <= hit_price: # and hour == 9: # (4)!\n",
|
||||
" return out_i\n",
|
||||
" return -1\n",
|
||||
"\n",
|
||||
"@vbt.njit\n",
|
||||
"def long_exit_place_func_nb(c, high, close, time_index, tp, sl): # (5)!\n",
|
||||
" entry_i = c.from_i - c.wait\n",
|
||||
" entry_price = close[entry_i, c.col]\n",
|
||||
" hit_price = entry_price * (1 + tp)\n",
|
||||
" stop_price = entry_price * (1 - sl)\n",
|
||||
" for out_i in range(len(c.out)):\n",
|
||||
" i = c.from_i + out_i\n",
|
||||
" last_bar_of_day = vbt.dt_nb.day_changed_nb(time_index[i], time_index[i + 1])\n",
|
||||
"\n",
|
||||
" #print(next_day)\n",
|
||||
" if last_bar_of_day: #pokud je dalsi next day, tak zavirame posledni\n",
|
||||
" print(\"ted\",out_i)\n",
|
||||
" return out_i\n",
|
||||
" if close[i, c.col] >= hit_price or close[i, c.col] <= stop_price :\n",
|
||||
" return out_i\n",
|
||||
" return -1\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))\n",
|
||||
"\n",
|
||||
"df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"df.sum()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.11"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
1639
research/test1sbars_roc.ipynb
Normal file
1639
research/test1sbars_roc.ipynb
Normal file
File diff suppressed because one or more lines are too long
45
restart.sh
Executable file
45
restart.sh
Executable file
@ -0,0 +1,45 @@
|
||||
#!/bin/bash
|
||||
|
||||
# file: restart.sh
|
||||
|
||||
# Usage: ./restart.sh [test|prod|all]
|
||||
|
||||
# Define server addresses
|
||||
TEST_SERVER="david@142.132.188.109"
|
||||
PROD_SERVER="david@5.161.179.223"
|
||||
|
||||
# Define the remote directory where the script is located
|
||||
REMOTE_DIR="v2trading"
|
||||
|
||||
# Check for argument
|
||||
if [ "$#" -ne 1 ]; then
|
||||
echo "Usage: $0 [test|prod|all]"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Function to restart a server
|
||||
restart_server() {
|
||||
local server=$1
|
||||
echo "Connecting to $server to restart the Python app..."
|
||||
ssh -t $server "cd $REMOTE_DIR && . ~/.bashrc && ./run.sh restart" # Sourcing .bashrc here
|
||||
echo "Operation completed on $server."
|
||||
}
|
||||
|
||||
# Select the server based on the input argument
|
||||
case $1 in
|
||||
test)
|
||||
restart_server $TEST_SERVER
|
||||
;;
|
||||
prod)
|
||||
restart_server $PROD_SERVER
|
||||
;;
|
||||
all)
|
||||
restart_server $TEST_SERVER
|
||||
restart_server $PROD_SERVER
|
||||
;;
|
||||
*)
|
||||
echo "Invalid argument: $1. Use 'test', 'prod', or 'all'."
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
15
run.sh
15
run.sh
@ -26,12 +26,27 @@ PYTHON_TO_USE="python3"
|
||||
|
||||
#----END EDITABLE VARS-------
|
||||
|
||||
# Additions for handling strat.log backup
|
||||
HISTORY_DIR="$HOME/stratlogs"
|
||||
TIMESTAMP=$(date +"%Y%m%d-%H%M%S")
|
||||
LOG_FILE="strat.log"
|
||||
BACKUP_LOG_FILE="$HISTORY_DIR/${TIMESTAMP}_$LOG_FILE"
|
||||
|
||||
# If virtualenv specified & exists, using that version of python instead.
|
||||
if [ -d "$VIRTUAL_ENV_DIR" ]; then
|
||||
PYTHON_TO_USE="$VIRTUAL_ENV_DIR/bin/python"
|
||||
fi
|
||||
|
||||
start() {
|
||||
# Check and create history directory if it doesn't exist
|
||||
[ ! -d "$HISTORY_DIR" ] && mkdir -p "$HISTORY_DIR"
|
||||
|
||||
# Check if strat.log exists and back it up
|
||||
if [ -f "$LOG_FILE" ]; then
|
||||
mv "$LOG_FILE" "$BACKUP_LOG_FILE"
|
||||
echo "Backed up log to $BACKUP_LOG_FILE"
|
||||
fi
|
||||
|
||||
if [ ! -e "$OUTPUT_PID_PATH/$OUTPUT_PID_FILE" ]; then
|
||||
nohup "$PYTHON_TO_USE" ./$SCRIPT_TO_EXECUTE_PLUS_ARGS > strat.log 2>&1 & echo $! > "$OUTPUT_PID_PATH/$OUTPUT_PID_FILE"
|
||||
echo "Started $SCRIPT_TO_EXECUTE_PLUS_ARGS @ Process: $!"
|
||||
|
||||
2
setup.py
2
setup.py
@ -1,7 +1,7 @@
|
||||
from setuptools import find_packages, setup
|
||||
|
||||
setup(name='v2realbot',
|
||||
version='0.9',
|
||||
version='0.91',
|
||||
description='Realbot trader',
|
||||
author='David Brazda',
|
||||
author_email='davidbrazda61@gmail.com',
|
||||
|
||||
107
testdoc.md
Normal file
107
testdoc.md
Normal file
@ -0,0 +1,107 @@
|
||||
# Plotly
|
||||
|
||||
* MAKE_SUBPLOT defines the layout (needed if more than a 1x1 grid or a secondary y axis is required)
|
||||
|
||||
```python
|
||||
fig = vbt.make_subplots(rows=2, cols=1, shared_xaxes=True,
|
||||
specs=[[{"secondary_y": True}], [{"secondary_y": False}]],
|
||||
vertical_spacing=0.02, subplot_titles=("Row 1 title", "Row 2 title"))
|
||||
```
|
||||
|
||||
Then the individual [sr/df generic accessor](http://5.161.179.223:8000/static/js/vbt/api/generic/accessors/index.html#vectorbtpro.generic.accessors.GenericAccessor.areaplot) plots are added, positioned via ADD_TRACE_KWARGS and styled via TRACE_KWARGS. Other plot types are available in the [plotting module](http://5.161.179.223:8000/static/js/vbt/api/generic/plotting/index.html).
|
||||
|
||||
```python
|
||||
#using accessor
|
||||
close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False,row=1, col=1), trace_kwargs=dict(line=dict(color="blue")))
|
||||
indvolume.vbt.barplot(fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
|
||||
#using plotting module
|
||||
vbt.Bar(indvolume, fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
|
||||
```
|
||||
|
||||
* ADD_TRACE_KWARGS - determines positioning within the subplot grid
|
||||
```python
|
||||
add_trace_kwargs=dict(secondary_y=False,row=1, col=1)
|
||||
```
|
||||
* TRACE_KWARGS - additional styling of the trace itself
|
||||
```python
|
||||
trace_kwargs=dict(name="LONGS",
|
||||
line=dict(color="#ffe476"),
|
||||
marker=dict(color="limegreen"),
|
||||
fill=None,
|
||||
connectgaps=True)
|
||||
```
|
||||
|
||||
## Example
|
||||
|
||||
```python
|
||||
fig = vbt.make_subplots(rows=2, cols=1, shared_xaxes=True,
|
||||
specs=[[{"secondary_y": True}], [{"secondary_y": False}]],
|
||||
vertical_spacing=0.02, subplot_titles=("Price and Indicators", "Volume"))
|
||||
|
||||
# Plotting the close price
|
||||
close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False,row=1, col=1), trace_kwargs=dict(line=dict(color="blue")))
|
||||
```
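A minimal sketch completing the example above, assuming `indvolume` is a volume Series aligned with `close` (it is not defined in this document): the volume bars are added to the second row and the figure is rendered.

```python
# volume bars go to the second subplot row (indvolume is an assumed, pre-built Series)
indvolume.vbt.barplot(fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
fig.show()
```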
|
||||
|
||||
# Data
|
||||
## Resampling
|
||||
```python
|
||||
t1data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','sellvolume']].resample("1T")
|
||||
t1data = t1data.transform(lambda df: df.between_time('09:30', '16:00').dropna()) #main session data only, no nans
|
||||
|
||||
t5data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','sellvolume']].resample("5T")
|
||||
t5data = t5data.transform(lambda df: df.between_time('09:30', '16:00').dropna())
|
||||
|
||||
dailydata = basic_data[['open', 'high', 'low', 'close', 'volume', 'vwap']].resample("D").dropna()
|
||||
|
||||
#realign 5min close to 1min so it can be compared with 1min
|
||||
t5data_close_realigned = t5data.close.vbt.realign_closing("1T").between_time('09:30', '16:00').dropna()
|
||||
#same with open
|
||||
t5data.open.vbt.realign_opening("1h")
|
||||
```
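When the target index is irregular (for example 1-second bars restricted to the main session), the realignment can also be driven by an explicit `vbt.Resampler` built from the source and target indexes. A sketch assuming `t30data` (30-minute bars) and `s1data` (1-second bars) were built the same way as `t1data` above:

```python
# realign 30-min closes onto the 1-second index so both series share one index
resampler = vbt.Resampler(t30data.index, s1data.index, source_freq="30T", target_freq="1s")
t30close_realigned = t30data.close.vbt.realign_closing(resampler)
```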
|
||||
### Define resample function for custom column
|
||||
Example of custom feature config [Binance Data](http://5.161.179.223:8000/static/js/vbt/api/data/custom/binance/index.html#vectorbtpro.data.custom.binance.BinanceData.feature_config).
|
||||
Other [reduce functions are available](http://5.161.179.223:8000/static/js/vbt/api/generic/nb/apply_reduce/index.html) (mean, min, max, median, nth, ...).
|
||||
```python
|
||||
from vectorbtpro.utils.config import merge_dicts, Config, HybridConfig
|
||||
from vectorbtpro import _typing as tp
|
||||
from vectorbtpro.generic import nb as generic_nb
|
||||
|
||||
_feature_config: tp.ClassVar[Config] = HybridConfig(
|
||||
{
|
||||
"buyvolume": dict(
|
||||
resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(
|
||||
resampler,
|
||||
generic_nb.sum_reduce_nb,
|
||||
)
|
||||
),
|
||||
"sellvolume": dict(
|
||||
resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(
|
||||
resampler,
|
||||
generic_nb.sum_reduce_nb,
|
||||
)
|
||||
)
|
||||
}
|
||||
)
|
||||
|
||||
basic_data._feature_config = _feature_config
|
||||
```
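With the feature config attached, the custom columns are aggregated with the configured reduce function instead of being dropped when resampling. A short usage sketch with the column names used above:

```python
# buyvolume and sellvolume are now summed per 1-minute bar thanks to _feature_config
t1data = basic_data[['open', 'high', 'low', 'close', 'volume', 'vwap', 'buyvolume', 'sellvolume']].resample("1T")
```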
|
||||
|
||||
### Validate resample
|
||||
```python
|
||||
t2dataclose = t2data.close.rename("15MIN - realigned").vbt.realign_closing("1T")
|
||||
fig = t1data.close.rename("1MIN").vbt.plot()
|
||||
t2data.close.rename("15MIN").vbt.plot(fig=fig)
|
||||
t2dataclose.vbt.plot(fig=fig)
|
||||
```
|
||||
## Persisting
|
||||
```python
|
||||
basic_data.to_parquet(partition_by="day", compression="gzip")
|
||||
day_data = vbt.ParquetData.pull("BAC", filters=[("group", "==", "2024-05-03")])
|
||||
vbt.print_dir_tree("BTC-USD") #verify the directory structure
|
||||
```
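To read the whole dataset back, the same class should also work without filters; a sketch assuming the default parquet location written by `to_parquet` above:

```python
# pull everything that was partitioned by day back into one Data object
full_data = vbt.ParquetData.pull("BAC")
```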
|
||||
# Discover
|
||||
```python
|
||||
vbt.phelp(vbt.talib("atr").run) #parameters it accepts
|
||||
vbt.pdir(pf) #get available properties and methods
|
||||
vbt.pprint(basic_data) #to get correct shape, info about instance
|
||||
```
|
||||
BIN
tested_runner.png
Normal file
BIN
tested_runner.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 21 KiB |
@ -23,12 +23,12 @@ clientTrading = TradingClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY,
|
||||
|
||||
#get previous days bar
|
||||
|
||||
datetime_object_from = datetime.datetime(2023, 10, 11, 4, 0, 00, tzinfo=datetime.timezone.utc)
|
||||
datetime_object_to = datetime.datetime(2023, 10, 16, 16, 1, 00, tzinfo=datetime.timezone.utc)
|
||||
calendar_request = GetCalendarRequest(start=datetime_object_from,end=datetime_object_to)
|
||||
cal_dates = clientTrading.get_calendar(calendar_request)
|
||||
print(cal_dates)
|
||||
bar_request = StockBarsRequest(symbol_or_symbols="BAC",timeframe=TimeFrame.Day, start=datetime_object_from, end=datetime_object_to, feed=DataFeed.SIP)
|
||||
datetime_object_from = datetime.datetime(2024, 3, 9, 13, 29, 00, tzinfo=datetime.timezone.utc)
|
||||
datetime_object_to = datetime.datetime(2024, 3, 11, 20, 1, 00, tzinfo=datetime.timezone.utc)
|
||||
# calendar_request = GetCalendarRequest(start=datetime_object_from,end=datetime_object_to)
|
||||
# cal_dates = clientTrading.get_calendar(calendar_request)
|
||||
# print(cal_dates)
|
||||
bar_request = StockBarsRequest(symbol_or_symbols="BAC",timeframe=TimeFrame.Minute, start=datetime_object_from, end=datetime_object_to, feed=DataFeed.SIP)
|
||||
|
||||
# bars = client.get_stock_bars(bar_request).df
|
||||
|
||||
|
||||
@ -23,7 +23,7 @@ from rich import print
|
||||
from collections import defaultdict
|
||||
from pandas import to_datetime
|
||||
from msgpack.ext import Timestamp
|
||||
from v2realbot.utils.historicals import convert_daily_bars
|
||||
from v2realbot.utils.historicals import convert_historical_bars
|
||||
|
||||
def get_last_close():
|
||||
pass
|
||||
@ -38,7 +38,7 @@ def get_historical_bars(symbol: str, time_from: datetime, time_to: datetime, tim
|
||||
bars: BarSet = stock_client.get_stock_bars(bar_request)
|
||||
print("puvodni bars", bars["BAC"])
|
||||
print(bars)
|
||||
return convert_daily_bars(bars[symbol])
|
||||
return convert_historical_bars(bars[symbol])
|
||||
|
||||
|
||||
#in init we fill the requested historical data into historicals[]
|
||||
|
||||
@ -1,3 +1,3 @@
|
||||
API_KEY = 'PKGGEWIEYZOVQFDRY70L'
|
||||
SECRET_KEY = 'O5Kt8X4RLceIOvM98i5LdbalItsX7hVZlbPYHy8Y'
|
||||
API_KEY = ''
|
||||
SECRET_KEY = ''
|
||||
MAX_BATCH_SIZE = 1
|
||||
|
||||
@ -1,12 +1,14 @@
|
||||
import scipy.interpolate as spi
|
||||
import matplotlib.pyplot as plt
|
||||
import numpy as np
|
||||
|
||||
# x = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
|
||||
# y = [4, 7, 11, 16, 22, 29, 38, 49, 63, 80]
|
||||
|
||||
x = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
|
||||
y = [4, 7, 11, 16, 22, 29, 38, 49, 63, 80]
|
||||
|
||||
|
||||
y_interp = spi.interp1d(x, y)
|
||||
val = 10
|
||||
new = np.interp(val, [0, 50, 100], [0, 1, 2])
|
||||
print(new)
|
||||
# y_interp = spi.interp1d(x, y)
|
||||
|
||||
#find y-value associated with x-value of 13
|
||||
#print(y_interp(13))
|
||||
|
||||
File diff suppressed because one or more lines are too long
18
testy/createbatchimage.py
Normal file
18
testy/createbatchimage.py
Normal file
@ -0,0 +1,18 @@
|
||||
import argparse
|
||||
import v2realbot.reporting.metricstoolsimage as mt
|
||||
|
||||
# Parse the command-line arguments
|
||||
# parser = argparse.ArgumentParser(description="Generate trading report image with batch ID")
|
||||
# parser.add_argument("batch_id", type=str, help="The batch ID for the report")
|
||||
# args = parser.parse_args()
|
||||
|
||||
# batch_id = args.batch_id
|
||||
|
||||
# Generate the report image
|
||||
res, val = mt.generate_trading_report_image(batch_id="4d7dc163")
|
||||
|
||||
# Print the result
|
||||
if res == 0:
|
||||
print("BATCH REPORT CREATED")
|
||||
else:
|
||||
print(f"BATCH REPORT ERROR - {val}")
|
||||
@ -1,9 +1,9 @@
|
||||
import numpy as np
|
||||
from v2realbot.common.PrescribedTradeModel import Trade, TradeDirection, TradeStatus
|
||||
from v2realbot.common.model import Trade, TradeDirection, TradeStatus
|
||||
from typing import Tuple
|
||||
from copy import deepcopy
|
||||
from v2realbot.strategy.base import StrategyState
|
||||
from v2realbot.strategyblocks.activetrade.helpers import get_max_profit_price, get_profit_target_price, get_override_for_active_trade, keyword_conditions_met
|
||||
from v2realbot.strategyblocks.activetrade.helpers import get_max_profit_price, get_profit_target_price, get_signal_section_directive, keyword_conditions_met
|
||||
from v2realbot.utils.utils import safe_get
|
||||
# FIBONACCI PRO PROFIT A SL
|
||||
|
||||
@ -63,10 +63,10 @@ class SLOptimizer:
|
||||
|
||||
def initialize_levels(self, state):
|
||||
directive_name = 'SL_opt_exit_levels_'+str(self.direction)
|
||||
SL_opt_exit_levels = get_override_for_active_trade(state=state, directive_name=directive_name, default_value=safe_get(state.vars, directive_name, None))
|
||||
SL_opt_exit_levels = get_signal_section_directive(state=state, directive_name=directive_name, default_value=safe_get(state.vars, directive_name, None))
|
||||
|
||||
directive_name = 'SL_opt_exit_sizes_'+str(self.direction)
|
||||
SL_opt_exit_sizes = get_override_for_active_trade(state=state, directive_name=directive_name, default_value=safe_get(state.vars, directive_name, None))
|
||||
SL_opt_exit_sizes = get_signal_section_directive(state=state, directive_name=directive_name, default_value=safe_get(state.vars, directive_name, None))
|
||||
|
||||
if SL_opt_exit_levels is None or SL_opt_exit_sizes is None:
|
||||
print("no directives found: SL_opt_exit_levels/SL_opt_exit_sizes")
|
||||
|
||||
@ -1,7 +1,9 @@
|
||||
import os,sys
|
||||
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
print(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
from alpaca.data.historical import CryptoHistoricalDataClient, StockHistoricalDataClient
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from alpaca.data.historical import StockHistoricalDataClient
|
||||
from alpaca.data.requests import CryptoLatestTradeRequest, StockLatestTradeRequest, StockLatestBarRequest, StockTradesRequest
|
||||
from alpaca.data.enums import DataFeed
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY
|
||||
|
||||
89
testy/getrunnerdetail.py
Normal file
89
testy/getrunnerdetail.py
Normal file
@ -0,0 +1,89 @@
|
||||
|
||||
from v2realbot.common.model import RunDay, StrategyInstance, Runner, RunRequest, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, RunArchiveChange, Bar, TradeEvent, TestList, Intervals, ConfigItem, InstantIndicator, DataTablesRequest
|
||||
import v2realbot.controller.services as cs
|
||||
from v2realbot.utils.utils import slice_dict_lists,zoneUTC,safe_get, AttributeDict
|
||||
id = "b11c66d9-a9b6-475a-9ac1-28b11e1b4edf"
|
||||
state = AttributeDict(vars={})
|
||||
|
||||
##basis for init_attached_data in strategy.init
|
||||
|
||||
# def get_previous_runner(state):
|
||||
# runner : Runner
|
||||
# res, runner = cs.get_runner(state.runner_id)
|
||||
# if res < 0:
|
||||
# print(f"Not running {id}")
|
||||
# return 0, None
|
||||
|
||||
# return 0, runner.batch_id
|
||||
|
||||
def attach_previous_data(state):
|
||||
runner : Runner
|
||||
#get batch_id of current runner
|
||||
res, runner = cs.get_runner(state.runner_id)
|
||||
if res < 0 or runner.batch_id is None:
|
||||
print(f"Couldnt get previous runner {val}")
|
||||
return None
|
||||
|
||||
batch_id = runner.batch_id
|
||||
#batch_id = "6a6b0bcf"
|
||||
|
||||
res, runner_ids =cs.get_archived_runnerslist_byBatchID(batch_id, "desc")
|
||||
if res < 0:
|
||||
msg = f"error whne fetching runners of batch {batch_id} {runner_ids}"
|
||||
print(msg)
|
||||
return None
|
||||
|
||||
if runner_ids is None or len(runner_ids) == 0:
|
||||
print(f"no runners found for batch {batch_id} {runner_ids}")
|
||||
return None
|
||||
|
||||
last_runner = runner_ids[0]
|
||||
print("Previous runner identified:", last_runner)
|
||||
|
||||
#get details from the runner
|
||||
res, val = cs.get_archived_runner_details_byID(last_runner)
|
||||
if res < 0:
|
||||
print(f"no archived runner {last_runner}")
|
||||
|
||||
detail = RunArchiveDetail(**val)
|
||||
#print("toto jsme si dotahnuli", detail.bars)
|
||||
|
||||
# from stratvars directives
|
||||
attach_previous_bars_indicators = safe_get(state.vars, "attach_previous_bars_indicators", 50)
|
||||
attach_previous_cbar_indicators = safe_get(state.vars, "attach_previous_cbar_indicators", 50)
|
||||
# [stratvars]
|
||||
# attach_previous_bars_indicators = 50
|
||||
# attach_previous_cbar_indicators = 50
|
||||
|
||||
#indicators datetime utc
|
||||
indicators = slice_dict_lists(d=detail.indicators[0],last_item=attach_previous_bars_indicators, time_to_datetime=True)
|
||||
|
||||
#time -datetime utc, updated - timestamp float
|
||||
bars = slice_dict_lists(d=detail.bars, last_item=attach_previous_bars_indicators, time_to_datetime=True)
|
||||
|
||||
#cbar_indicators #float
|
||||
cbar_inds = slice_dict_lists(d=detail.indicators[1],last_item=attach_previous_cbar_indicators)
|
||||
|
||||
#USE these as INITs - TODO: stop here first and compare
|
||||
print(f"{state.indicators=} NEW:{indicators=}")
|
||||
state.indicators = indicators
|
||||
print(f"{state.bars=} NEW:{bars=}")
|
||||
state.bars = bars
|
||||
print(f"{state.cbar_indicators=} NEW:{cbar_inds=}")
|
||||
state.cbar_indicators = cbar_inds
|
||||
|
||||
print("BARS and INDS INITIALIZED")
|
||||
#bars
|
||||
|
||||
|
||||
#any additional initializations from ext_data will go here
|
||||
print("EXT_DATA", detail.ext_data)
|
||||
#depending on a given setting, e.g. in the configuration, specific variables are used
|
||||
|
||||
#adding dailyBars from extData
|
||||
# if hasattr(detail, "ext_data") and "dailyBars" in detail.ext_data:
|
||||
# state.dailyBars = detail.ext_data["dailyBars"]
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
attach_previous_data(state)
|
||||
@@ -2,7 +2,7 @@ import sqlite3
from v2realbot.config import DATA_DIR
from v2realbot.utils.utils import json_serial
from uuid import UUID, uuid4
import json
import orjson
from datetime import datetime
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account
from v2realbot.common.model import RunArchiveDetail, RunArchive, RunArchiveView
@@ -35,14 +35,14 @@ def row_to_object(row: dict) -> RunArchive:
        end_positions=row.get('end_positions'),
        end_positions_avgp=row.get('end_positions_avgp'),
        metrics=row.get('open_orders'),
        #metrics=json.loads(row.get('metrics')) if row.get('metrics') else None,
        #metrics=orjson.loads(row.get('metrics')) if row.get('metrics') else None,
        stratvars_toml=row.get('stratvars_toml')
    )

def get_all_archived_runners():
    conn = pool.get_connection()
    try:
        conn.row_factory = lambda c, r: json.loads(r[0])
        conn.row_factory = lambda c, r: orjson.loads(r[0])
        c = conn.cursor()
        res = c.execute(f"SELECT data FROM runner_header")
    finally:
@@ -54,7 +54,7 @@ def insert_archive_header(archeader: RunArchive):
    conn = pool.get_connection()
    try:
        c = conn.cursor()
        json_string = json.dumps(archeader, default=json_serial)
        json_string = orjson.dumps(archeader, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
        if archeader.batch_id is not None:
            statement = f"INSERT INTO runner_header (runner_id, batch_id, ra) VALUES ('{str(archeader.id)}','{str(archeader.batch_id)}','{json_string}')"
        else:
@@ -103,7 +103,7 @@ def migrate_to_columns(ra: RunArchive):
        SET strat_id=?, batch_id=?, symbol=?, name=?, note=?, started=?, stopped=?, mode=?, account=?, bt_from=?, bt_to=?, strat_json=?, settings=?, ilog_save=?, profit=?, trade_count=?, end_positions=?, end_positions_avgp=?, metrics=?, stratvars_toml=?
        WHERE runner_id=?
        ''',
        (str(ra.strat_id), ra.batch_id, ra.symbol, ra.name, ra.note, ra.started, ra.stopped, ra.mode, ra.account, ra.bt_from, ra.bt_to, json.dumps(ra.strat_json), json.dumps(ra.settings), ra.ilog_save, ra.profit, ra.trade_count, ra.end_positions, ra.end_positions_avgp, json.dumps(ra.metrics), ra.stratvars_toml, str(ra.id)))
        (str(ra.strat_id), ra.batch_id, ra.symbol, ra.name, ra.note, ra.started, ra.stopped, ra.mode, ra.account, ra.bt_from, ra.bt_to, orjson.dumps(ra.strat_json), orjson.dumps(ra.settings), ra.ilog_save, ra.profit, ra.trade_count, ra.end_positions, ra.end_positions_avgp, orjson.dumps(ra.metrics), ra.stratvars_toml, str(ra.id)))

        conn.commit()
    finally:
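One behavioural difference worth keeping in mind when swapping `json` for `orjson` as above: `orjson.dumps()` returns `bytes`, not `str`, so code that interpolates the result into SQL text (as `insert_archive_header` does) generally needs a `.decode()` as well. An illustrative snippet; `json_serial` here is a simplified stand-in for the project's serializer, not the real one:

```python
import orjson
from datetime import datetime

def json_serial(obj):
    # simplified stand-in for v2realbot.utils.utils.json_serial
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Type {type(obj)} not serializable")

payload = {"started": datetime(2024, 1, 2, 9, 30), "profit": 12.5}
# OPT_PASSTHROUGH_DATETIME routes datetimes through the default callback
raw = orjson.dumps(payload, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
print(type(raw))      # <class 'bytes'>
print(raw.decode())   # '{"started":"2024-01-02T09:30:00","profit":12.5}'
```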
@@ -2,7 +2,7 @@ import sqlite3
from v2realbot.config import DATA_DIR
from v2realbot.utils.utils import json_serial
from uuid import UUID, uuid4
import json
import orjson
from datetime import datetime
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account
from v2realbot.common.model import RunArchiveDetail
@@ -11,7 +11,7 @@ from tinydb import TinyDB, Query, where
sqlite_db_file = DATA_DIR + "/v2trading.db"
conn = sqlite3.connect(sqlite_db_file)
# by default returns a list of tuples, where each tuple member is a column
#conn.row_factory = lambda c, r: json.loads(r[0])
#conn.row_factory = lambda c, r: orjson.loads(r[0])
#conn.row_factory = lambda c, r: r[0]
#conn.row_factory = sqlite3.Row

@@ -28,7 +28,7 @@ insert_list = [dict(time=datetime.now().timestamp(), side="ddd", rectype=RecordT

def insert_log(runner_id: UUID, time: float, logdict: dict):
    c = conn.cursor()
    json_string = json.dumps(logdict, default=json_serial)
    json_string = orjson.dumps(logdict, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
    res = c.execute("INSERT INTO runner_logs VALUES (?,?,?)",[str(runner_id), time, json_string])
    conn.commit()
    return res.rowcount
@@ -37,14 +37,14 @@ def insert_log_multiple(runner_id: UUID, loglist: list):
    c = conn.cursor()
    insert_data = []
    for i in loglist:
        row = (str(runner_id), i["time"], json.dumps(i, default=json_serial))
        row = (str(runner_id), i["time"], orjson.dumps(i, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME))
        insert_data.append(row)
    c.executemany("INSERT INTO runner_logs VALUES (?,?,?)", insert_data)
    conn.commit()
    return c.rowcount

    # c = conn.cursor()
    # json_string = json.dumps(logdict, default=json_serial)
    # json_string = orjson.dumps(logdict, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
    # res = c.execute("INSERT INTO runner_logs VALUES (?,?,?)",[str(runner_id), time, json_string])
    # print(res)
    # conn.commit()
@@ -52,7 +52,7 @@ def insert_log_multiple(runner_id: UUID, loglist: list):

#returns list of ilog jsons
def read_log_window(runner_id: UUID, timestamp_from: float, timestamp_to: float):
    conn.row_factory = lambda c, r: json.loads(r[0])
    conn.row_factory = lambda c, r: orjson.loads(r[0])
    c = conn.cursor()
    res = c.execute(f"SELECT data FROM runner_logs WHERE runner_id='{str(runner_id)}' AND time >={ts_from} AND time <={ts_to}")
    return res.fetchall()
@@ -94,21 +94,21 @@ def delete_logs(runner_id: UUID):

def insert_archive_detail(archdetail: RunArchiveDetail):
    c = conn.cursor()
    json_string = json.dumps(archdetail, default=json_serial)
    json_string = orjson.dumps(archdetail, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
    res = c.execute("INSERT INTO runner_detail VALUES (?,?)",[str(archdetail["id"]), json_string])
    conn.commit()
    return res.rowcount

#returns list of details
def get_all_archive_detail():
    conn.row_factory = lambda c, r: json.loads(r[0])
    conn.row_factory = lambda c, r: orjson.loads(r[0])
    c = conn.cursor()
    res = c.execute(f"SELECT data FROM runner_detail")
    return res.fetchall()

# returns a specific one
def get_archive_detail_byID(runner_id: UUID):
    conn.row_factory = lambda c, r: json.loads(r[0])
    conn.row_factory = lambda c, r: orjson.loads(r[0])
    c = conn.cursor()
    res = c.execute(f"SELECT data FROM runner_detail WHERE runner_id='{str(runner_id)}'")
    return res.fetchone()
@@ -123,7 +123,7 @@ def delete_archive_detail(runner_id: UUID):

def get_all_archived_runners_detail():
    arch_detail_file = DATA_DIR + "/arch_detail.json"
    db_arch_d = TinyDB(arch_detail_file, default=json_serial)
    db_arch_d = TinyDB(arch_detail_file, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
    res = db_arch_d.all()
    return 0, res
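A hedged caution on the last change in this hunk: TinyDB's default JSONStorage forwards extra keyword arguments to `json.dump()`, so `default=json_serial` is honoured, but an orjson-only flag such as `option=orjson.OPT_PASSTHROUGH_DATETIME` has no meaning there and would most likely raise a `TypeError` on the first write. A minimal sketch of the safer split, reusing the names from the diff above:

```python
from tinydb import TinyDB

# keep stdlib json (with the custom serializer) for TinyDB...
db_arch_d = TinyDB(arch_detail_file, default=json_serial)
# ...and reserve orjson, with its options, for the sqlite hot paths shown earlier
```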
@@ -4,7 +4,7 @@ from keras.models import Sequential
from keras.layers import LSTM, Dense
from v2realbot.controller.services import get_archived_runner_details_byID
from v2realbot.common.model import RunArchiveDetail
import json
import orjson

runner_id = "838e918e-9be0-4251-a968-c13c83f3f173"
result = None
39  testy/pickle.py  Normal file
@@ -0,0 +1,39 @@
import pickle
import os
from v2realbot.config import STRATVARS_UNCHANGEABLES, ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, ACCOUNT1_LIVE_API_KEY, ACCOUNT1_LIVE_SECRET_KEY, DATA_DIR,BT_FILL_CONS_TRADES_REQUIRED,BT_FILL_LOG_SURROUNDING_TRADES,BT_FILL_CONDITION_BUY_LIMIT,BT_FILL_CONDITION_SELL_LIMIT, GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN, MEDIA_DIRECTORY, RUNNER_DETAIL_DIRECTORY

# #class to persist
# class Store:
#     stratins : List[StrategyInstance] = []
#     runners: List[Runner] = []
#     def __init__(self) -> None:
#         self.db_file = DATA_DIR + "/strategyinstances.cache"
#         if os.path.exists(self.db_file):
#             with open (self.db_file, 'rb') as fp:
#                 self.stratins = pickle.load(fp)

#     def save(self):
#         with open(self.db_file, 'wb') as fp:
#             pickle.dump(self.stratins, fp)


#db = Store()

def try_reading_after_skipping_bytes(file_path, skip_bytes, chunk_size=1024):
    with open(file_path, 'rb') as file:
        file.seek(skip_bytes)  # Skip initial bytes
        while True:
            try:
                data = pickle.load(file)
                print("Recovered data:", data)
                break  # Exit loop if successful
            except EOFError:
                print("Reached end of file without recovering data.")
                break
            except pickle.UnpicklingError:
                # Move ahead in file by chunk_size bytes and try again
                file.seek(file.tell() + chunk_size, os.SEEK_SET)


file_path = DATA_DIR + "/strategyinstances.cache"
try_reading_after_skipping_bytes(file_path,1)
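A brief caveat on the recovery helper above: pickle streams are not self-synchronizing, so seeking forward by an arbitrary number of bytes and retrying `pickle.load` only succeeds if the next object happens to start exactly at the new offset. When a cache is written as several consecutive `pickle.dump` calls, simply looping `pickle.load` on the same handle until `EOFError` is usually enough; a minimal sketch (illustration only, not part of the repository):

```python
import pickle

def load_all_pickled(file_path):
    """Read every object from a file built by repeated pickle.dump calls."""
    objects = []
    with open(file_path, 'rb') as fp:
        while True:
            try:
                objects.append(pickle.load(fp))
            except EOFError:
                break  # clean end of stream
    return objects
```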
74  testy/tablesizes.py  Normal file
@@ -0,0 +1,74 @@
import queue
import sqlite3
import threading
from appdirs import user_data_dir

DATA_DIR = user_data_dir("v2realbot")
sqlite_db_file = DATA_DIR + "/v2trading.db"

class ConnectionPool:
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.connections = queue.Queue(max_connections)
        self.lock = threading.Lock()

    def get_connection(self):
        with self.lock:
            if self.connections.empty():
                return self.create_connection()
            else:
                return self.connections.get()

    def release_connection(self, connection):
        with self.lock:
            self.connections.put(connection)

    def create_connection(self):
        connection = sqlite3.connect(sqlite_db_file, check_same_thread=False)
        return connection


pool = ConnectionPool(10)

def get_table_sizes_in_mb():
    # Connect to the SQLite database
    conn = pool.get_connection()
    cursor = conn.cursor()

    # Get the list of tables
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()

    # Dictionary to store table sizes
    table_sizes = {}

    for table in tables:
        table_name = table[0]

        # Get total number of rows in the table
        cursor.execute(f"SELECT COUNT(*) FROM {table_name};")
        row_count = cursor.fetchone()[0]

        if row_count > 0:
            # Sample a few rows (e.g., 10 rows) and calculate average row size
            cursor.execute(f"SELECT * FROM {table_name} LIMIT 10;")
            sample_rows = cursor.fetchall()
            total_sample_size = sum(sum(len(str(cell)) for cell in row) for row in sample_rows)
            avg_row_size = total_sample_size / len(sample_rows)

            # Estimate table size in megabytes
            size_in_mb = (avg_row_size * row_count) / (1024 * 1024)
        else:
            size_in_mb = 0

        table_sizes[table_name] = {'size_mb': size_in_mb, 'rows': row_count}

    conn.close()
    return table_sizes

# Usage example
db_path = 'path_to_your_database.db'
table_sizes = get_table_sizes_in_mb()
for table, info in table_sizes.items():
    print(f"Table: {table}, Size: {info['size_mb']} MB, Rows: {info['rows']}")
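The sampling estimate above ignores page overhead and indexes. If the SQLite build in use was compiled with the dbstat virtual table (common in CPython builds, but not guaranteed), per-table on-disk size can be read directly; a hedged alternative with a page-level fallback:

```python
import sqlite3

def table_sizes_mb_dbstat(db_file):
    """Per-table size from the dbstat virtual table, if available."""
    conn = sqlite3.connect(db_file)
    try:
        rows = conn.execute(
            "SELECT name, SUM(pgsize) FROM dbstat GROUP BY name"
        ).fetchall()
        return {name: size / (1024 * 1024) for name, size in rows}
    except sqlite3.OperationalError:
        # dbstat not compiled in: report the whole database size instead
        page_count = conn.execute("PRAGMA page_count").fetchone()[0]
        page_size = conn.execute("PRAGMA page_size").fetchone()[0]
        return {"<whole database>": page_count * page_size / (1024 * 1024)}
    finally:
        conn.close()
```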
@@ -46,7 +46,7 @@ db.save()
# b = 2

# def toJson(self):
#     return json.dumps(self, default=lambda o: o.__dict__)
#     return orjson.dumps(self, default=lambda o: o.__dict__)

# db.append(Neco.a)
@@ -1,12 +1,12 @@
import timeit
setup = '''
import msgpack
import json
import orjson
from copy import deepcopy
data = {'name':'John Doe','ranks':{'sports':13,'edu':34,'arts':45},'grade':5}'''
print(timeit.timeit('deepcopy(data)', setup=setup))
# 12.0860249996
print(timeit.timeit('json.loads(json.dumps(data))', setup=setup))
print(timeit.timeit('orjson.loads(orjson.dumps(data))', setup=setup))
# 9.07182312012
print(timeit.timeit('msgpack.unpackb(msgpack.packb(data))', setup=setup))
# 1.42743492126
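The benchmark motivates using a serialize/deserialize round trip as a cheap stand-in for `copy.deepcopy` on plain JSON-like data. A hedged helper along those lines, falling back to `deepcopy` for anything orjson cannot encode (custom objects, non-string dict keys); note the round trip also turns tuples into lists:

```python
import orjson
from copy import deepcopy

def fast_copy(obj):
    """Deep-copy plain dict/list/str/number structures via an orjson round trip."""
    try:
        return orjson.loads(orjson.dumps(obj))
    except (TypeError, orjson.JSONEncodeError):
        # not JSON-serializable: use the slow but general path
        return deepcopy(obj)
```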
2  testy/testtt.py  Normal file
@@ -0,0 +1,2 @@
import locale
print(locale.getdefaultlocale())
@ -16,7 +16,7 @@ import importlib
|
||||
from queue import Queue
|
||||
from tinydb import TinyDB, Query, where
|
||||
from tinydb.operations import set
|
||||
import json
|
||||
import orjson
|
||||
from rich import print
|
||||
|
||||
|
||||
@ -29,7 +29,7 @@ class RunnerLogger:
|
||||
def __init__(self, runner_id: UUID) -> None:
|
||||
self.runner_id = runner_id
|
||||
runner_log_file = DATA_DIR + "/runner_log.json"
|
||||
db_runner_log = TinyDB(runner_log_file, default=json_serial)
|
||||
db_runner_log = TinyDB(runner_log_file, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
|
||||
|
||||
def insert_log_multiple(runner_id: UUID, logList: list):
|
||||
runner_table = db_runner_log.table(str(runner_id))
|
||||
|
||||
@ -16,7 +16,7 @@ import importlib
|
||||
from queue import Queue
|
||||
#from tinydb import TinyDB, Query, where
|
||||
#from tinydb.operations import set
|
||||
import json
|
||||
import orjson
|
||||
from rich import print
|
||||
from tinyflux import Point, TinyFlux
|
||||
|
||||
@ -26,7 +26,7 @@ runner_log_file = DATA_DIR + "/runner_flux__log.json"
|
||||
db_runner_log = TinyFlux(runner_log_file)
|
||||
|
||||
insert_dict = {'datum': datetime.now(), 'side': "dd", 'name': 'david','id': uuid4(), 'order': "neco"}
|
||||
#json.dumps(insert_dict, default=json_serial)
|
||||
#orjson.dumps(insert_dict, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
|
||||
p1 = Point(time=datetime.now(), tags=insert_dict)
|
||||
|
||||
db_runner_log.insert(p1)
|
||||
|
||||
@ -13,7 +13,7 @@ from v2realbot.common.model import Order, TradeUpdate as btTradeUpdate
|
||||
from alpaca.trading.models import TradeUpdate
|
||||
from alpaca.trading.enums import TradeEvent, OrderType, OrderSide, OrderType, OrderStatus
|
||||
from rich import print
|
||||
import json
|
||||
import orjson
|
||||
|
||||
#storage_with_injected_serialization = JSONStorage()
|
||||
|
||||
@ -110,7 +110,7 @@ a = Order(id=uuid4(),
|
||||
limit_price=22.4)
|
||||
|
||||
db_file = DATA_DIR + "/db.json"
|
||||
db = TinyDB(db_file, default=json_serial)
|
||||
db = TinyDB(db_file, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME)
|
||||
db.truncate()
|
||||
insert = {'datum': datetime.now(), 'side': OrderSide.BUY, 'name': 'david','id': uuid4(), 'order': orderList}
|
||||
|
||||
|
||||
66
testy/vectorbt/testHtml2MD.py
Normal file
66
testy/vectorbt/testHtml2MD.py
Normal file
@ -0,0 +1,66 @@
|
||||
import os
|
||||
from bs4 import BeautifulSoup
|
||||
import html2text
|
||||
|
||||
def convert_html_to_markdown(html_content, link_mapping):
|
||||
h = html2text.HTML2Text()
|
||||
h.ignore_links = False
|
||||
|
||||
# Update internal links to point to the relevant sections in the Markdown
|
||||
soup = BeautifulSoup(html_content, 'html.parser')
|
||||
for a in soup.find_all('a', href=True):
|
||||
href = a['href']
|
||||
if href in link_mapping:
|
||||
a['href'] = f"#{link_mapping[href]}"
|
||||
|
||||
return h.handle(str(soup))
|
||||
|
||||
def create_link_mapping(root_dir):
|
||||
link_mapping = {}
|
||||
for subdir, _, files in os.walk(root_dir):
|
||||
for file in files:
|
||||
if file == "index.html":
|
||||
relative_path = os.path.relpath(os.path.join(subdir, file), root_dir)
|
||||
chapter_id = relative_path.replace(os.sep, '-').replace('index.html', '')
|
||||
link_mapping[relative_path] = chapter_id
|
||||
link_mapping[relative_path.replace(os.sep, '/')] = chapter_id # for URLs with slashes
|
||||
return link_mapping
|
||||
|
||||
def read_html_files(root_dir, link_mapping):
|
||||
markdown_content = []
|
||||
|
||||
for subdir, _, files in os.walk(root_dir):
|
||||
relative_path = os.path.relpath(subdir, root_dir)
|
||||
if files and any(file == "index.html" for file in files):
|
||||
# Add directory as a heading based on its depth
|
||||
heading_level = relative_path.count(os.sep) + 1
|
||||
markdown_content.append(f"{'#' * heading_level} {relative_path}\n")
|
||||
|
||||
for file in files:
|
||||
if file == "index.html":
|
||||
file_path = os.path.join(subdir, file)
|
||||
with open(file_path, 'r', encoding='utf-8') as f:
|
||||
html_content = f.read()
|
||||
soup = BeautifulSoup(html_content, 'html.parser')
|
||||
title = soup.title.string if soup.title else "No Title"
|
||||
chapter_id = os.path.relpath(file_path, root_dir).replace(os.sep, '-').replace('index.html', '')
|
||||
markdown_content.append(f"<a id='{chapter_id}'></a>\n")
|
||||
markdown_content.append(f"{'#' * (heading_level + 1)} {title}\n")
|
||||
markdown_content.append(convert_html_to_markdown(html_content, link_mapping))
|
||||
|
||||
return "\n".join(markdown_content)
|
||||
|
||||
def save_to_markdown_file(content, output_file):
|
||||
with open(output_file, 'w', encoding='utf-8') as f:
|
||||
f.write(content)
|
||||
|
||||
def main():
|
||||
root_dir = "./v2realbot/static/js/vbt/"
|
||||
output_file = "output.md"
|
||||
link_mapping = create_link_mapping(root_dir)
|
||||
markdown_content = read_html_files(root_dir, link_mapping)
|
||||
save_to_markdown_file(markdown_content, output_file)
|
||||
print(f"Markdown document created at {output_file}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@ -6,7 +6,7 @@ import secrets
|
||||
from typing import Annotated
|
||||
import os
|
||||
import uvicorn
|
||||
import json
|
||||
import orjson
|
||||
from datetime import datetime
|
||||
from v2realbot.utils.utils import zoneNY
|
||||
|
||||
@ -103,7 +103,7 @@ async def websocket_endpoint(
|
||||
'vwap': 123,
|
||||
'updated': 123,
|
||||
'index': 123}
|
||||
await websocket.send_text(json.dumps(data))
|
||||
await websocket.send_text(orjson.dumps(data))
|
||||
except WebSocketDisconnect:
|
||||
print("CLIENT DISCONNECTED for", runner_id)
|
||||
|
||||
|
||||
@ -6,7 +6,7 @@ import secrets
|
||||
from typing import Annotated
|
||||
import os
|
||||
import uvicorn
|
||||
import json
|
||||
import orjson
|
||||
from datetime import datetime
|
||||
from v2realbot.utils.utils import zoneNY
|
||||
|
||||
@ -101,7 +101,7 @@ async def websocket_endpoint(websocket: WebSocket, client_id: int):
|
||||
# 'close': 123,
|
||||
# 'open': 123,
|
||||
# 'time': "2019-05-25"}
|
||||
await manager.send_personal_message(json.dumps(data), websocket)
|
||||
await manager.send_personal_message(orjson.dumps(data), websocket)
|
||||
#await manager.broadcast(f"Client #{client_id} says: {data}")
|
||||
except WebSocketDisconnect:
|
||||
manager.disconnect(websocket)
|
||||
|
||||
@ -3,7 +3,7 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
from v2realbot.strategy.base import StrategyState
|
||||
from v2realbot.strategy.StrategyOrderLimitVykladaciNormalizedMYSELL import StrategyOrderLimitVykladaciNormalizedMYSELL
|
||||
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account
|
||||
from v2realbot.utils.utils import zoneNY, print
|
||||
from v2realbot.utils.utils import zoneNY, print, fetch_calendar_data, send_to_telegram
|
||||
from v2realbot.utils.historicals import get_historical_bars
|
||||
from datetime import datetime, timedelta
|
||||
from rich import print as printanyway
|
||||
@ -16,10 +16,13 @@ from v2realbot.strategyblocks.newtrade.signals import signal_search
|
||||
from v2realbot.strategyblocks.activetrade.activetrade_hub import manage_active_trade
|
||||
from v2realbot.strategyblocks.inits.init_indicators import initialize_dynamic_indicators
|
||||
from v2realbot.strategyblocks.inits.init_directives import intialize_directive_conditions
|
||||
from alpaca.trading.requests import GetCalendarRequest
|
||||
from v2realbot.strategyblocks.inits.init_attached_data import attach_previous_data
|
||||
from alpaca.trading.client import TradingClient
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR, OFFLINE_MODE
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR
|
||||
from alpaca.trading.models import Calendar
|
||||
from v2realbot.indicators.oscillators import rsi
|
||||
from v2realbot.indicators.moving_averages import sma
|
||||
import numpy as np
|
||||
|
||||
print(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
""""
|
||||
@ -56,24 +59,11 @@ Hlavní loop:
|
||||
|
||||
"""
|
||||
def next(data, state: StrategyState):
|
||||
##print(10*"*","NEXT START",10*"*")
|
||||
# important vars state.avgp, state.positions, state.vars, data
|
||||
print(10*"*", state.account_variables)
|
||||
|
||||
#indicators moved to call_next in upper class
|
||||
|
||||
#pokud mame prazdne pozice a neceka se na nic
|
||||
if state.positions == 0 and state.vars.pending is None:
|
||||
#vykoname trady ve fronte
|
||||
execute_prescribed_trades(state, data)
|
||||
#pokud se neaktivoval nejaky trade, poustime signal search - ale jen jednou za bar?
|
||||
#if conf_bar == 1:
|
||||
if state.vars.pending is None:
|
||||
signal_search(state, data)
|
||||
#pro jistotu ihned zpracujeme
|
||||
execute_prescribed_trades(state, data)
|
||||
|
||||
#mame aktivni trade a neceka se n anic
|
||||
elif state.vars.activeTrade and state.vars.pending is None:
|
||||
execute_prescribed_trades(state, data) #pro jistotu ihned zpracujeme
|
||||
manage_active_trade(state, data)
|
||||
|
||||
def init(state: StrategyState):
|
||||
@ -85,25 +75,22 @@ def init(state: StrategyState):
|
||||
|
||||
#nove atributy na rizeni tradu
|
||||
#identifikuje provedenou změnu na Tradu (neděláme změny dokud nepřijde potvrzeni z notifikace)
|
||||
state.vars.pending = None
|
||||
#state.vars.pending = None #nahrazeno pebnding pod accountem state.account_variables[account.name].pending
|
||||
#obsahuje aktivni Trade a jeho nastaveni
|
||||
state.vars.activeTrade = None #pending/Trade
|
||||
#state.vars.activeTrade = None #pending/Trade moved to account_variables
|
||||
#obsahuje pripravene Trady ve frontě
|
||||
state.vars.prescribedTrades = []
|
||||
#flag pro reversal
|
||||
state.vars.requested_followup = None
|
||||
#state.vars.requested_followup = None #nahrazeno pod accountem
|
||||
|
||||
#TODO presunout inicializaci work_dict u podminek - sice hodnoty nepujdou zmenit, ale zlepsi se performance
|
||||
#pripadne udelat refresh kazdych x-iterací
|
||||
state.vars['sell_in_progress'] = False
|
||||
state.vars.mode = None
|
||||
state.vars.last_tick_price = 0
|
||||
state.vars.last_50_deltas = []
|
||||
state.vars.last_tick_volume = 0
|
||||
state.vars.next_new = 0
|
||||
state.vars.last_buy_index = None
|
||||
state.vars.last_exit_index = None
|
||||
state.vars.last_in_index = None
|
||||
state.vars.last_entry_index = None #mponechano obecne pro vsechny accounty
|
||||
state.vars.last_exit_index = None #obecna varianta ponechana
|
||||
state.vars.last_update_time = 0
|
||||
state.vars.reverse_position_waiting_amount = 0
|
||||
#INIT promenne, ktere byly zbytecne ve stratvars
|
||||
@ -114,19 +101,33 @@ def init(state: StrategyState):
|
||||
state.vars.blockbuy = 0
|
||||
#models
|
||||
state.vars.loaded_models = {}
|
||||
|
||||
#state attributes for martingale sizing mngmt
|
||||
state.vars["transferables"] = {}
|
||||
state.vars["transferables"]["martingale"] = dict(cont_loss_series_cnt=0)
|
||||
|
||||
#INITIALIZE CBAR INDICATORS - do vlastni funkce
|
||||
#state.cbar_indicators['ivwap'] = []
|
||||
state.vars.last_tick_price = 0
|
||||
state.vars.last_tick_volume = 0
|
||||
state.vars.last_tick_trades = 0
|
||||
state.cbar_indicators['tick_price'] = []
|
||||
state.cbar_indicators['tick_volume'] = []
|
||||
state.cbar_indicators['tick_trades'] = []
|
||||
state.cbar_indicators['CRSI'] = []
|
||||
|
||||
initialize_dynamic_indicators(state)
|
||||
intialize_directive_conditions(state)
|
||||
|
||||
#attach part of yesterdays data, bars, indicators, cbar_indicators
|
||||
attach_previous_data(state)
|
||||
|
||||
#intitialize indicator mapping (for use in operation) - mozna presunout do samostatne funkce prip dat do base kdyz se osvedci
|
||||
local_dict_cbar_inds = {key: state.cbar_indicators[key] for key in state.cbar_indicators.keys() if key != "time"}
|
||||
local_dict_inds = {key: state.indicators[key] for key in state.indicators.keys() if key != "time"}
|
||||
local_dict_bars = {key: state.bars[key] for key in state.bars.keys() if key != "time"}
|
||||
|
||||
state.ind_mapping = {**local_dict_inds, **local_dict_bars}
|
||||
state.ind_mapping = {**local_dict_inds, **local_dict_bars, **local_dict_cbar_inds}
|
||||
print("IND MAPPING DONE:", state.ind_mapping)
|
||||
|
||||
#30 DAYS historicall data fill - pridat do base pokud se osvedci
|
||||
@ -144,7 +145,8 @@ def init(state: StrategyState):
|
||||
time_to = state.bt.bp_from
|
||||
|
||||
|
||||
#TBD pridat i hour data - pro pocitani RSI na hodine
|
||||
#TBD NASLEDUJICI SEKCE BUDE PREDELANA, ABY UMOZNOVALA LIBOVOLNE ROZLISENI
|
||||
#INDIKATORY SE BUDOU TAKE BRAT Z KONFIGURACE
|
||||
#get 30 days (history_datetime_from musí být alespoň -2 aby to bralo i vcerejsek)
|
||||
#history_datetime_from = time_to - timedelta(days=40)
|
||||
#get previous market day
|
||||
@ -156,17 +158,25 @@ def init(state: StrategyState):
|
||||
#time_to = time_to.date()
|
||||
|
||||
today = time_to.date()
|
||||
several_days_ago = today - timedelta(days=40)
|
||||
several_days_ago = today - timedelta(days=60)
|
||||
#printanyway(f"{today=}",f"{several_days_ago=}")
|
||||
clientTrading = TradingClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=False)
|
||||
#clientTrading = TradingClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=False)
|
||||
#get all market days from here to 40days ago
|
||||
calendar_request = GetCalendarRequest(start=several_days_ago,end=today)
|
||||
cal_dates = clientTrading.get_calendar(calendar_request)
|
||||
|
||||
#calendar_request = GetCalendarRequest(start=several_days_ago,end=today)
|
||||
|
||||
cal_dates = fetch_calendar_data(several_days_ago, today)
|
||||
#cal_dates = clientTrading.get_calendar(calendar_request)
|
||||
|
||||
#find the first market day - 40days ago
|
||||
#history_datetime_from = zoneNY.localize(cal_dates[0].open)
|
||||
history_datetime_from = cal_dates[0].open
|
||||
|
||||
#ulozime si dnesni market close
|
||||
#pro automaticke ukonceni
|
||||
#TODO pripadne enablovat na parametr
|
||||
state.today_market_close = zoneNY.localize(cal_dates[-1].close)
|
||||
|
||||
# Find the previous market day
|
||||
history_datetime_to = None
|
||||
for session in reversed(cal_dates):
|
||||
@ -180,6 +190,74 @@ def init(state: StrategyState):
|
||||
#printanyway(history_datetime_from, history_datetime_to)
|
||||
#az do predchziho market dne dne
|
||||
state.dailyBars = get_historical_bars(state.symbol, history_datetime_from, history_datetime_to, TimeFrame.Day)
|
||||
|
||||
#NOTE zatim pridano takto do baru dalsi indikatory
|
||||
#BUDE PREDELANO - v rámci custom rozliseni a static indikátoru
|
||||
if state.dailyBars is None:
    print("Failed to load daily bars")
    err_msg = f"Failed to load daily bars (get_historical_bars) for {state.symbol} from {history_datetime_from} to {history_datetime_to} in strat.init. Probably wrong symbol?"
    send_to_telegram(err_msg)
    raise Exception(err_msg)
|
||||
|
||||
#RSI vraci pouze pro vsechny + prepend with zeros nepocita prvnich N (dle rsi length)
|
||||
rsi_calculated = rsi(state.dailyBars["vwap"], 14).tolist()
|
||||
num_zeros_to_prepend = len(state.dailyBars["vwap"]) - len(rsi_calculated)
|
||||
state.dailyBars["rsi"] = [0]*num_zeros_to_prepend + rsi_calculated
|
||||
|
||||
#VOLUME
|
||||
volume_sma = sma(state.dailyBars["volume"], 10) #vraci celkovy pocet - 10
|
||||
items_to_prepend = len(state.dailyBars["volume"]) - len(volume_sma)
|
||||
|
||||
volume_sma = np.hstack((np.full(items_to_prepend, np.nan), volume_sma))
|
||||
|
||||
#normalized divergence currvol-smavolume/currvol+smavolume
|
||||
volume_data = np.array(state.dailyBars["volume"])
|
||||
normalized_divergence = (volume_data - volume_sma) / (volume_data + volume_sma)
|
||||
# Replace NaN values with 0 or some other placeholder if needed
|
||||
normalized_divergence = np.nan_to_num(normalized_divergence)
|
||||
volume_sma = np.nan_to_num(volume_sma)
|
||||
state.dailyBars["volume_sma_divergence"] = normalized_divergence.tolist()
|
||||
state.dailyBars["volume_sma"] = volume_sma.tolist()
|
||||
|
||||
#vwap_cum and divergence
|
||||
volume_np = np.array(state.dailyBars["volume"])
|
||||
close_np = np.array(state.dailyBars["close"])
|
||||
high_np = np.array(state.dailyBars["high"])
|
||||
low_np = np.array(state.dailyBars["low"])
|
||||
vwap_cum_np = np.cumsum(((high_np + low_np + close_np) / 3) * volume_np) / np.cumsum(volume_np)
|
||||
state.dailyBars["vwap_cum"] = vwap_cum_np.tolist()
|
||||
normalized_divergence = (close_np - vwap_cum_np) / (close_np + vwap_cum_np)
|
||||
#divergence close ceny a cumulativniho vwapu
|
||||
state.dailyBars["div_vwap_cum"] = normalized_divergence.tolist()
|
||||
|
||||
#creates log returns for open, close, high and lows
|
||||
open_np = np.array(state.dailyBars["open"])
|
||||
state.dailyBars["open_log_return"] = np.log(open_np[1:] / open_np[:-1]).tolist()
|
||||
state.dailyBars["close_log_return"] = np.log(close_np[1:] / close_np[:-1]).tolist()
|
||||
state.dailyBars["high_log_return"] = np.log(high_np[1:] / high_np[:-1]).tolist()
|
||||
state.dailyBars["low_log_return"] = np.log(low_np[1:] / low_np[:-1]).tolist()
|
||||
|
||||
|
||||
#Features to emphasize the shape characteristics of each candlestick. For use in ML https://chat.openai.com/c/c1a22550-643b-4037-bace-3e810dbce087
|
||||
# Calculate the ratios of
|
||||
total_range = high_np - low_np
|
||||
upper_shadow = (high_np - np.maximum(open_np, close_np)) / total_range
|
||||
lower_shadow = (np.minimum(open_np, close_np) - low_np) / total_range
|
||||
body_size = np.abs(close_np - open_np) / total_range
|
||||
body_position = np.where(close_np >= open_np,
|
||||
(close_np - low_np) / total_range,
|
||||
(open_np - low_np) / total_range)
|
||||
|
||||
#other possibilities
|
||||
# Open to Close Change: (close[-1] - open[-1]) / open[-1]
|
||||
# High to Low Range: (high[-1] - low[-1]) / low[-1]
|
||||
|
||||
# Store the ratios in the bars dictionary
|
||||
state.dailyBars['upper_shadow_ratio'] = upper_shadow.tolist()
|
||||
state.dailyBars['lower_shadow_ratio'] = lower_shadow.tolist()
|
||||
state.dailyBars['body_size_ratio'] = body_size.tolist()
|
||||
state.dailyBars['body_position_ratio'] = body_position.tolist()
|
||||
|
||||
#printanyway("daily bars FILLED", state.dailyBars)
|
||||
#zatim ukladame do extData - pro instant indicatory a gui
|
||||
state.extData["dailyBars"] = state.dailyBars
|
||||
|
||||
@ -1,102 +0,0 @@
|
||||
import numpy as np
|
||||
from sklearn.preprocessing import StandardScaler
|
||||
from sklearn.metrics import mean_squared_error
|
||||
from sklearn.model_selection import train_test_split
|
||||
import v2realbot.ml.mlutils as mu
|
||||
from keras.layers import LSTM, Dense
|
||||
import matplotlib.pyplot as plt
|
||||
from v2realbot.ml.ml import ModelML
|
||||
from v2realbot.enums.enums import PredOutput, Source, TargetTRFM
|
||||
from v2realbot.controller.services import get_archived_runner_details_byID, update_archive_detail
|
||||
# from collections import defaultdict
|
||||
# from operator import itemgetter
|
||||
from joblib import load
|
||||
|
||||
#TODO - DO API
|
||||
# v ml atomicke api pro evaluaci (runneru, batche)
|
||||
# v services: model.add_vector_prediction_to_archrunner_as_new_indicator (vrátí v podstate obohacený archDetail) - nebo i ukládat do db? uvidime
|
||||
# v rest api prevolani
|
||||
# db support: zatim jen ciselnik modelu + jeho zakladni nastaveni, obrabeci api, load modelu zatim z file
|
||||
|
||||
cfg: ModelML = mu.load_model("model1", "0.1")
|
||||
|
||||
|
||||
#EVALUATE SPECIFIC RUNNER - VECTOR BASED (toto dat do samostatne API pripadne pak udelat nadstavnu na batch a runners)
|
||||
#otestuje model na neznamem runnerovi, seznamu runneru nebo batch_id
|
||||
|
||||
|
||||
|
||||
runner_id = "a38fc269-8df3-4374-9506-f0280d798854"
|
||||
save_new_ind = True
|
||||
source_data, target_data, rows_in_day = cfg.load_data(runners_ids=[runner_id])
|
||||
|
||||
if len(rows_in_day) > 1:
|
||||
#pro vis se cela tato sluzba volat v loopu
|
||||
raise Exception("Vytvareni indikatoru dostupne zatim jen pro jeden runner")
|
||||
|
||||
#scalujeme X
|
||||
source_data = cfg.scalerX.fit_transform(source_data)
|
||||
|
||||
#tady si vyzkousim i skrz vice runneru
|
||||
X_eval, y_eval, y_eval_ref = cfg.create_sequences(combined_data=source_data, target_data=target_data,remove_cross_sequences=True, rows_in_day=rows_in_day)
|
||||
|
||||
#toto nutne?
|
||||
X_eval = np.array(X_eval)
|
||||
y_eval = np.array(y_eval)
|
||||
y_eval_ref = np.array(y_eval_ref)
|
||||
#scaluji target - nemusis
|
||||
#y_eval = cfg.scalerY.fit_transform(y_eval)
|
||||
|
||||
X_eval = cfg.model.predict(X_eval)
|
||||
X_eval = cfg.scalerY.inverse_transform(X_eval)
|
||||
print("po predikci x_eval shape", X_eval.shape)
|
||||
|
||||
#pokud mame dostupnou i target v runneru, pak pridame porovnavaci indikator
|
||||
difference_mse = None
|
||||
if len(y_eval) > 0:
|
||||
#TODO porad to pliva 1 hodnotu
|
||||
difference_mse = mean_squared_error(y_eval, X_eval,multioutput="raw_values")
|
||||
|
||||
print("ted mam tedy dva nove sloupce")
|
||||
print("X_eval", X_eval.shape)
|
||||
if difference_mse is not None:
|
||||
print("difference_mse", difference_mse.shape)
|
||||
print(f"zplostime je, dopredu pridame {cfg.input_sequences-1} a dozadu nic")
|
||||
#print(f"a melo by nam to celkem dat {len(bars['time'])}")
|
||||
#tohle pak nejak doladit, ale vypada to good
|
||||
#plus do druheho indikatoru pridat ten difference_mse
|
||||
|
||||
#TODO jeste je posledni hodnota predikce nejak OFF (2.52... ) - podivat se na to
|
||||
#TODO na produkci srovnat se skutecnym BT predictem (mozna zde bude treba seq-1) -
|
||||
# prvni predikce nejspis uz bude na desítce
|
||||
ind_pred = list(np.concatenate([np.zeros(cfg.input_sequences-1), X_eval.ravel()]))
|
||||
print(ind_pred)
|
||||
print(len(ind_pred))
|
||||
print("tada")
|
||||
#ted k nim pridame
|
||||
|
||||
if save_new_ind:
|
||||
#novy ind ulozime do archrunnera (na produkci nejspis jen show)
|
||||
res, sada = get_archived_runner_details_byID(runner_id)
|
||||
if res == 0:
|
||||
print("ok")
|
||||
else:
|
||||
print("error",res,sada)
|
||||
raise Exception(f"error loading runner {runner_id} : {res} {sada}")
|
||||
|
||||
sada["indicators"][0]["pred_added"] = ind_pred
|
||||
|
||||
req, res = update_archive_detail(runner_id, sada)
|
||||
print(f"indicator pred_added was ADDED to {runner_id}")
|
||||
|
||||
|
||||
# Plot the predicted vs. actual
|
||||
plt.plot(y_eval, label='Target')
|
||||
plt.plot(X_eval, label='Predicted')
|
||||
#TODO zde nejak vymyslet jinou pricelinu - jako lightweight chart
|
||||
if difference_mse is not None:
|
||||
plt.plot(difference_mse, label='diference')
|
||||
plt.plot(y_eval_ref, label='reference column - vwap')
|
||||
plt.plot()
|
||||
plt.legend()
|
||||
plt.show()
|
||||
@ -1,278 +0,0 @@
|
||||
import numpy as np
|
||||
from sklearn.preprocessing import StandardScaler
|
||||
from sklearn.metrics import mean_squared_error
|
||||
from sklearn.model_selection import train_test_split
|
||||
import v2realbot.ml.mlutils as mu
|
||||
from keras.layers import LSTM, Dense
|
||||
import matplotlib.pyplot as plt
|
||||
from v2realbot.ml.ml import ModelML
|
||||
from v2realbot.enums.enums import PredOutput, Source, TargetTRFM
|
||||
# from collections import defaultdict
|
||||
# from operator import itemgetter
|
||||
from joblib import load
|
||||
|
||||
# region Notes
|
||||
|
||||
#ZAKLAD PRO TRAINING SCRIPT na vytvareni model u
|
||||
# TODO
|
||||
# podpora pro BINARY TARGET
|
||||
# podpora hyperpamaetru (activ.funkce sigmoid atp.)
|
||||
# vyuzit distribuovane prostredi - nebo aspon vlastni VM
|
||||
# dopracovat denni identifikatory typu lastday close, todays open atp.
|
||||
# random SEARCH a grid search
|
||||
# udelat nejaka model metadata (napr, trenovano na (runners+obdobi), nastaveni treningovych dat, počet epoch, hyperparametry, config atribu atp.) - mozna persistovat v db
|
||||
# udelat nejake verzovani
|
||||
# predelat do GUI a modulu
|
||||
# vyuzit VectorBT na dohledani optimalizovanych parametru napr. pro buy,sell atp. Vyuzit podobne API na pripravu dat jako model.
|
||||
# EVAL MODEL - umoznit vektorové přidání indikátoru do runneru (např. predikce v modulu, vectorBT, optimalizace atp) - vytvorit si na to API, podobne co mam, nacte runner, transformuje, sekvencuje, provede a pak zpetne transformuje a prida jako dalsi indikator. Lze pak použít i v gui.
|
||||
# nove tlacitko "Display model prediction" na urovni archrunnera, které
|
||||
# - má volbu model + jestli zobrazit jen predictionu jako novy indikator nebo i mse from ytarget (nutny i target)
|
||||
# po spusteni pak:
|
||||
# - zkonztoluje jestli runner ma indikatory,ktere odpovidaji features modelu (bar_ftrs, ind_ftrs, optional i target)
|
||||
# - vektorově doplní predictionu (transformuje data, udela predictionu a Y transformuje zpet)
|
||||
# - vysledek (jako nove indikatory) implantuje do runnerdetailu a zobrazi
|
||||
# podivat se na dalsi parametry kerasu, napr. false positive atp.
|
||||
# podivat se jeste na rozdil mezi vectorovou predikci a skalarni - proc je nekdy rozdil, odtrasovat - pripadne pogooglit
|
||||
# odtrasovat, nekde je sum (zkusit si oboji v jednom skriptu a porovnat)
|
||||
|
||||
#TODO NAPADY Na modely
|
||||
#1.binary identifikace trendu napr. pokud nasledujici 3 bary rostou (0-1) nebo nasledujici bary roste momentum
|
||||
#2.soustredit se na modely s vystupem 0-1 nebo -1 až 1
|
||||
#3.Vyzkouset jeden model, ktery by identifikoval trendy v obou smerech - -1 pro klesani a 1 pro stoupání.
|
||||
#4.vyzkouset zda model vytvoreny z casti dne nebude funkcni na druhe casti (on the fly daily models)
|
||||
#5.zkusit modely s a bez time (prizpusobit tomu kod v ModelML - zejmena jak na crossday sekvence) - mozna ze zecatku dat aspon pryc z indikatoru?
|
||||
# Dat vsechny zbytecne features pryc, nechat tam jen ty podstatne - attention, tak cílím.
|
||||
#6. zkusit vyuzit tickprice v nejaekm modelu, pripadne pak dalsi CBAR indikatory . vymslet tickbased features
|
||||
#7. zkusit jako features nevyuzit standardni ceny, ale pouze indikatory reprezentujici chovani (fastslope,samebarslope,volume,tradencnt)
|
||||
#8. relativni OHLC - model pouzivajici (jen) bary, ale misto hodnot ohlc udelat features reprezentujici vztahy(pomery) mezi temito velicinami. tzn. relativni ohlc
|
||||
#9. jiny pristup by byl ucit model na konkretnich chunkach, ktere chci aby mi identifikoval. Např. určité úseky. Vymyslet. Buď nyni jako test intervaly, ale v budoucnu to treba jen nejak oznacit a poslat k nauceni. Pripadne pak udelat nejaky vycuc.
|
||||
#10. mozna správným výběrem targetu, můžu taky naučit jen určité věci. Specializace. Stačí když se jednou dvakrát denně aktivuje.
|
||||
# 11. udelat si go IN model, ktery pomuze strategii generovat vstup - staci jen aby mel trochu lepsi edge nez conditiony, o zbytek se postara logika strategie
|
||||
# 12. model pro neagregované nebo jen filtroné či velmi lehce agregované trady? - tickprice
|
||||
# 13. jako featury pouzit Fourierovo transformaci, na sekundovem baru nebo tickprice
|
||||
|
||||
#DULEZITE
|
||||
# soustredit se v modelech na predikci nasledujici hodnoty, ideálně nějaký vektor ukazující směr (např. 0 - 1, kde nula nebude růst, 1 - bude růst strmě)
|
||||
# pro predikcí nějakého většího trendu, zkusti více modelů na různých rozlišení, každý ukazuje
|
||||
# hodnotu na svém rozlišení a jeho kombinace mi může určit vstup. Zkusit zda by nešel i jeden model.
|
||||
# Každopádně se soustředit
|
||||
# 1) na další hodnotu (tzn. vstupy musí být bezprostředně ovlivňující tuto (samebasrlope, atp.))
|
||||
# 2) její výše ukazuje směr na tomto rozlišení
|
||||
# 3) ideálně se učit z každého baru, tzn. cílová hodnota musí být známá u každého baru
|
||||
# (binary ne, potřebuju linární vektor) - i když 1 a 0 target v závislosti na stoupání a klesání by mohla být ok,
|
||||
# ale asi příliš restriktivní, spíš bych tam mohl dát jak moc. Tzn. +0.32, -0.04. Učilo by se to míru stoupání.
|
||||
# Tu míru tam potřebuju zachovanou.
|
||||
# pak si muzu rict, když je urcite pravdepodobnost, ze to bude stoupat (tzn. dalsi hodnota) na urovni 1,2,3 - tak jduvstup
|
||||
# zkusit na nejnižší úrovni i předvídat CBARy, směr dalšího ticku. Vyzkoušet.
|
||||
|
||||
##TODO - doma
|
||||
#bar_features a ind_features do dokumentace SL classic, stejne tak conditional indikator a mathop indikator
|
||||
#TODO - co je třeba vyvinout
|
||||
# GENERATOR test intervalu (vstup name, note, od,do,step)
|
||||
# napsat API, doma pak simple GUI
|
||||
# vyuziti ATR (jako hranice historickeho rozsahu) - atr-up, atr-down
|
||||
# nakreslit v grafu atru = close+atr, atrd = close-atr
|
||||
# pripadne si vypocet atr nejak customizovat, prip. ruzne multiplikatory pro high low, pripadne si to vypocist podle sebe
|
||||
# vyuziti:
|
||||
# pro prekroceni nejake lajny, napr. ema nebo yesterdayclose
|
||||
# - k identifikaci ze se pohybuje v jejim rozsahu
|
||||
# - proste je to buffer, ktery musi byt prekonan, aby byla urcita akce
|
||||
# pro learning pro vypocet conditional parametru (1,0,-1) prekroceni napr. dailyopen, yesterdayclose, gapclose
|
||||
# kde 1 prekroceno, 0 v rozsahu (atr), -1 prekroceno dolu - to pomuze uceni
|
||||
# vlastni supertrend strateige
|
||||
# zaroven moznost vyuzit klouzave či parametrizovane atr, které se na základě
|
||||
# určitých parametrů bude samo upravovat a cíleně vybočovat z KONTRA frekvencí, např. randomizovaný multiplier nebo nejak jinak ovlivneny minulým
|
||||
# v indikatorech vsude kde je odkaz ma source jako hodnotu tak defaultne mit moznost uvest lookback, napr. bude treba porovnavat nejak cenu vs predposledni hodnotu ATRka (nechat az vyvstane pozadavek)
|
||||
# zacit doma na ATRku si postavit supertrend, viz pinescript na ploše
|
||||
|
||||
|
||||
#TODO - obecne vylepsovaky
|
||||
# 1. v GUI graf container do n-TABů, mozna i draggable order, zaviratelne na Xko (innerContainer)
|
||||
# 2. mit mozna specialni mod na pripravu dat (agreg+indikator, tzn. vse jen bez vstupů) - můžu pak zapracovat víc vectorové doplňování dat
|
||||
# TOTO:: mozna by postacil vypnout backtester (tzn. no trades) - a projet jen indikatory. mozna by slo i vectorove optimalizovat.
|
||||
# indikatory by se mohli predsunout pred next a next by se vubec nemusel volat (jen nekompatibilita s predch.strategiemi)
|
||||
# 3. kombinace fastslope na fibonacci delkach (1,2,3,5..) jako dobry vstup pro ML
|
||||
# 4. podivat se na attention based LSTM zda je v kerasu implementace
|
||||
# do grafu přidat togglovatelné hranice barů určitých rozlišení - což mi jen udělá čáry Xs od sebe (dobré pro navrhování)
|
||||
# 5. vymyslet optimalizovane vyuziti modelu na produkci (nejak mit zkompilovane, aby to bylo raketově pro skalár) - nyní to backtest zpomalí 4x
|
||||
# 6. CONVNETS for time series forecasting - small 1D convnets can offer a fast alternative to RNNs for simple tasks such as text classification and timeseries forecasting.
|
||||
# zkusit small conv1D pro identifikaci víření před trendem, např. jen 6 barů - identifikovat dobře target, musí jít o tutovku na targetu
|
||||
# pro covnet zkusit cbar price, volume a time. Třeba to zachytí víření (ripples)
|
||||
# Další oblasti k predikci jsou ripples, vlnky - předzvěst nějakého mocnějšího pohybu. A je pravda, že předtím se mohou objevit nějaké indicie. Ty zkus zachytit.
|
||||
# Do runner_headers pridat bt_from, bt_to - pro razeni order_by, aby se runnery vzdy vraceli vzestupne dle data (pro machine l)
|
||||
|
||||
#TODO
|
||||
# vyvoj modelů workflow s LSTMtrain.py
|
||||
# 1) POC - pouze zde ve skriptu, nad 1-2 runnery, okamžité zobrazení v plotu,
|
||||
# optimalizace zakl. features a hyperparams. Zobrazit i u binary nejak cenu.
|
||||
# 2) REALITY CHECK - trening modelu na batchi test intervalu, overeni ve strategii v BT, zobrazeni predikce v RT chartu
|
||||
# 3) FINAL TRAINING
|
||||
# testovani predikce
|
||||
|
||||
|
||||
#TODO tady
|
||||
# train model
|
||||
# - train data- batch nebo runners
|
||||
# - test data - batch or runners (s cim porovnavat/validovat)
|
||||
# - vyber architektury
|
||||
# - soucast skriptu muze byt i porovnavacka pripadne nejaky search optimalnich parametru
|
||||
|
||||
#lstmtrain - podporit jednotlive kroky vyse
|
||||
#modelML - udelat lepsi PODMINKY
|
||||
#frontend? ma cenu? asi ano - GUI na model - new - train/retrain-change
|
||||
# (vymyslet jak v gui chytře vybírat arch modelu a hyperparams, loss, optim - treba nejaka templata?)
|
||||
# mozna ciselnik architektur s editačním polem pro kód -jen pár řádků(.add, .compile) přidat v editoru
|
||||
# vymyslet jak to udělat pythonově
|
||||
#testlist generator api
|
||||
|
||||
# endregion
|
||||
|
||||
#if null,the validation is made on 10% of train data
|
||||
#runnery pro testovani
|
||||
validation_runners = ["a38fc269-8df3-4374-9506-f0280d798854"]
|
||||
|
||||
#u binary bude target bud hotovy indikator a nebo jej vytvorit on the fly
|
||||
cfg = ModelML(name="model1",
|
||||
version = "0.1",
|
||||
note = None,
|
||||
pred_output=PredOutput.LINEAR,
|
||||
input_sequences = 10,
|
||||
use_bars = True,
|
||||
bar_features = ["volume","trades"],
|
||||
ind_features = ["slope20", "ema20","emaFast","samebarslope","fastslope","fastslope4"],
|
||||
target='target', #referencni hodnota pro target - napr pro graf
|
||||
target_reference='vwap',
|
||||
train_target_steps=3,
|
||||
train_target_transformation=TargetTRFM.KEEPVAL,
|
||||
train_runner_ids = ["08b7f96e-79bc-4849-9142-19d5b28775a8"],
|
||||
train_batch_id = None,
|
||||
train_epochs = 10,
|
||||
train_remove_cross_sequences = True,
|
||||
)
|
||||
|
||||
#TODO toto cele dat do TRAIN metody - vcetne pripadneho loopu a podpory API
|
||||
|
||||
test_size = None
|
||||
|
||||
#kdyz neplnime vstup, automaticky se loaduje training data z nastaveni classy
|
||||
source_data, target_data, rows_in_day = cfg.load_data()
|
||||
|
||||
if len(target_data) == 0:
|
||||
raise Exception("target is empty - required for TRAINING - check target column name")
|
||||
|
||||
np.set_printoptions(threshold=10,edgeitems=5)
|
||||
#print("source_data", source_data)
|
||||
#print("target_data", target_data)
|
||||
print("rows_in_day", rows_in_day)
|
||||
source_data = cfg.scalerX.fit_transform(source_data)
|
||||
|
||||
#TODO mozna vyhodit to UNTR
|
||||
#TODO asi vyhodit i target reference a vymyslet jinak
|
||||
|
||||
#vytvořeni sekvenci po vstupních sadách (např. 10 barů) - výstup 3D např. #X_train (6205, 10, 14)
|
||||
#doplneni transformace target data
|
||||
X_train, y_train, y_train_ref = cfg.create_sequences(combined_data=source_data,
|
||||
target_data=target_data,
|
||||
remove_cross_sequences=cfg.train_remove_cross_sequences,
|
||||
rows_in_day=rows_in_day)
|
||||
|
||||
#zobrazime si transformovany target a jeho referncni sloupec
|
||||
#ZHOMOGENIZOVAT OSY
|
||||
plt.plot(y_train, label='Transf target')
|
||||
plt.plot(y_train_ref, label='Ref target')
|
||||
plt.plot()
|
||||
plt.legend()
|
||||
plt.show()
|
||||
|
||||
print("After sequencing")
|
||||
print("source:X_train", np.shape(X_train))
|
||||
print("target:y_train", np.shape(y_train))
|
||||
print("target:", y_train)
|
||||
y_train = y_train.reshape(-1, 1)
|
||||
|
||||
X_complete = np.array(X_train.copy())
|
||||
Y_complete = np.array(y_train.copy())
|
||||
X_train = np.array(X_train)
|
||||
y_train = np.array(y_train)
|
||||
|
||||
#target scaluji az po transformaci v create sequence -narozdil od X je stejny shape
|
||||
y_train = cfg.scalerY.fit_transform(y_train)
|
||||
|
||||
|
||||
if len(validation_runners) == 0:
|
||||
test_size = 0.10
|
||||
# Split the data into training and test sets - kazdy vstupni pole rozdeli na dve
|
||||
#nechame si takhle rozdelit i referencni sloupec
|
||||
X_train, X_test, y_train, y_test, y_train_ref, y_test_ref = train_test_split(X_train, y_train, y_train_ref, test_size=test_size, shuffle=False) #random_state=42)
|
||||
|
||||
print("Splittig the data")
|
||||
|
||||
print("X_train", np.shape(X_train))
|
||||
print("X_test", np.shape(X_test))
|
||||
print("y_train", np.shape(y_train))
|
||||
print("y_test", np.shape(y_test))
|
||||
print("y_test_ref", np.shape(y_test_ref))
|
||||
print("y_train_ref", np.shape(y_train_ref))
|
||||
|
||||
#print(np.shape(X_train))
|
||||
# Define the input shape of the LSTM layer dynamically based on the reshaped X_train value
|
||||
input_shape = (X_train.shape[1], X_train.shape[2])
|
||||
|
||||
# Build the LSTM model
|
||||
#cfg.model = Sequential()
|
||||
cfg.model.add(LSTM(128, input_shape=input_shape))
|
||||
cfg.model.add(Dense(1, activation="relu"))
|
||||
#activation: Gelu, relu, elu, sigmoid...
|
||||
# Compile the model
|
||||
cfg.model.compile(loss='mse', optimizer='adam')
|
||||
#loss: mse, binary_crossentropy
|
||||
|
||||
# Train the model
|
||||
cfg.model.fit(X_train, y_train, epochs=cfg.train_epochs)
|
||||
|
||||
#save the model
|
||||
cfg.save()
|
||||
|
||||
#TBD db layer
|
||||
cfg: ModelML = mu.load_model(cfg.name, cfg.version)
|
||||
|
||||
# region Live predict
|
||||
#EVALUATE SIM LIVE - PREDICT SCALAR - based on last X items
|
||||
barslist, indicatorslist = cfg.load_runners_as_list(runner_id_list=["67b51211-d353-44d7-a58a-5ae298436da7"])
|
||||
#zmergujeme vsechny data dohromady
|
||||
bars = mu.merge_dicts(barslist)
|
||||
indicators = mu.merge_dicts(indicatorslist)
|
||||
cfg.validate_available_features(bars, indicators)
|
||||
#VSTUPEM JE standardni pole v strategii
|
||||
value = cfg.predict(bars, indicators)
|
||||
print("prediction for LIVE SIM:", value)
|
||||
# endregion
|
||||
|
||||
#EVALUATE TEST DATA - VECTOR BASED
|
||||
#pokud mame eval runners pouzijeme ty, jinak bereme cast z testovacich dat
|
||||
if len(validation_runners) > 0:
|
||||
source_data, target_data, rows_in_day = cfg.load_data(runners_ids=validation_runners)
|
||||
source_data = cfg.scalerX.fit_transform(source_data)
|
||||
X_test, y_test, y_test_ref = cfg.create_sequences(combined_data=source_data, target_data=target_data,remove_cross_sequences=True, rows_in_day=rows_in_day)
|
||||
|
||||
#prepnout ZDE pokud testovat cely bundle - jinak testujeme jen neznama
|
||||
#X_test = X_complete
|
||||
#y_test = Y_complete
|
||||
|
||||
X_test = cfg.model.predict(X_test)
|
||||
X_test = cfg.scalerY.inverse_transform(X_test)
|
||||
|
||||
#target testovacim dat proc tu je reshape? y_test.reshape(-1, 1)
|
||||
y_test = cfg.scalerY.inverse_transform(y_test)
|
||||
#celkovy mean? nebo spis vector pro graf?
|
||||
mse = mean_squared_error(y_test, X_test)
|
||||
print('Test MSE:', mse)
|
||||
|
||||
# Plot the predicted vs. actual
|
||||
plt.plot(y_test, label='Actual')
|
||||
plt.plot(X_test, label='Predicted')
|
||||
#TODO zde nejak vymyslet jinou pricelinu - jako lightweight chart
|
||||
plt.plot(y_test_ref, label='reference column - price')
|
||||
plt.plot()
|
||||
plt.legend()
|
||||
plt.show()
|
||||
@ -39,11 +39,11 @@
|
||||
"""
|
||||
from uuid import UUID, uuid4
|
||||
from alpaca.trading.enums import OrderSide, OrderStatus, TradeEvent, OrderType
|
||||
from v2realbot.common.model import TradeUpdate, Order
|
||||
#from rich import print
|
||||
from v2realbot.common.model import TradeUpdate, Order, Account
|
||||
from rich import print as printanyway
|
||||
import threading
|
||||
import asyncio
|
||||
from v2realbot.config import BT_DELAYS, DATA_DIR, BT_FILL_CONDITION_BUY_LIMIT, BT_FILL_CONDITION_SELL_LIMIT, BT_FILL_LOG_SURROUNDING_TRADES, BT_FILL_CONS_TRADES_REQUIRED,BT_FILL_PRICE_MARKET_ORDER_PREMIUM
|
||||
from v2realbot.config import DATA_DIR
|
||||
from v2realbot.utils.utils import AttributeDict, ltp, zoneNY, trunc, count_decimals, print
|
||||
from v2realbot.utils.tlog import tlog
|
||||
from v2realbot.enums.enums import FillCondition
|
||||
@ -60,6 +60,8 @@ from v2realbot.utils.dash_save_html import make_static
|
||||
import dash_bootstrap_components as dbc
|
||||
from dash.dependencies import Input, Output
|
||||
from dash import dcc, html, dash_table, Dash
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
from typing import Set
|
||||
""""
|
||||
LATENCY DELAYS
|
||||
.000 trigger - last_trade_time (.4246266)
|
||||
@ -73,7 +75,20 @@ lock = threading.Lock
|
||||
#todo nejspis dat do classes, aby se mohlo backtestovat paralelne
|
||||
#ted je globalni promena last_time_now a self.account a cash
|
||||
class Backtester:
|
||||
def __init__(self, symbol: str, order_fill_callback: callable, btdata: list, bp_from: datetime, bp_to: datetime, cash: float = 100000):
|
||||
"""
|
||||
Initializes a new instance of the Backtester class.
|
||||
Args:
|
||||
symbol (str): The symbol of the security being backtested.
|
||||
accounts (set): A set of accounts to use for backtesting.
|
||||
order_fill_callback (callable): A callback function to handle order fills.
|
||||
btdata (list): A list of backtesting data.
|
||||
bp_from (datetime): The start date of the backtesting period.
|
||||
bp_to (datetime): The end date of the backtesting period.
|
||||
cash (float, optional): The initial cash balance. Defaults to 100000.
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
def __init__(self, symbol: str, accounts: Set, order_fill_callback: callable, btdata: list, bp_from: datetime, bp_to: datetime, cash: float = 100000):
|
||||
#this TIME value determines true time for submit, replace, cancel order should happen (allowing past)
|
||||
#it is set by every iteration of BT or before fill callback - allowing past events to happen
|
||||
self.time = None
|
||||
@ -82,6 +97,7 @@ class Backtester:
self.btdata = btdata
self.backtest_start = None
self.backtest_end = None
self.accounts = accounts
self.cash_init = cash
#backtesting period
self.bp_from = bp_from
@ -89,9 +105,10 @@ class Backtester:
self.cash = cash
self.cash_reserved_for_shorting = 0
self.trades = []
self.account = { "BAC": [0, 0] }
# { "BAC": [size, avgp] }
self.open_orders =[]
self.internal_account = { account.name:{self.symbol: [0, 0]} for account in accounts }
# { "ACCOUNT1": {"BAC": [size, avgp]}, .... }
self.open_orders = [] #open orders shared for all accounts, account being an attribute

# self.open_orders = [Order(id=uuid4(),
|
||||
# submitted_at = datetime(2023, 3, 17, 9, 30, 0, 0, tzinfo=zoneNY),
|
||||
# symbol = "BAC",
|
||||
@ -109,6 +126,8 @@ class Backtester:
|
||||
# side = OrderSide.BUY)]
|
||||
|
||||
#
|
||||
|
||||
|
||||
def execute_orders_and_callbacks(self, intime: float):
|
||||
"""""
|
||||
Voláno ze strategie před každou iterací s časem T.
|
||||
@ -165,13 +184,13 @@ class Backtester:
|
||||
|
||||
for order in self.open_orders:
|
||||
#pokud je vyplneny symbol, tak jedeme jen tyto, jinak vsechny
|
||||
print(order.id, datetime.timestamp(order.submitted_at), order.symbol, order.side, order.order_type, order.qty, order.limit_price, order.submitted_at)
|
||||
print(order.account.name, order.id, datetime.timestamp(order.submitted_at), order.symbol, order.side, order.order_type, order.qty, order.limit_price, order.submitted_at)
|
||||
if order.canceled_at:
|
||||
#ic("deleting canceled order",order.id)
|
||||
todel.append(order)
|
||||
elif not self.symbol or order.symbol == self.symbol:
|
||||
#pricteme mininimalni latency od submittu k fillu
|
||||
if order.submitted_at.timestamp() + BT_DELAYS.sub_to_fill > float(intime):
|
||||
if order.submitted_at.timestamp() + cfh.config_handler.get_val('BT_DELAYS','sub_to_fill') > float(intime):
|
||||
print(f"too soon for {order.id}")
|
||||
#try to execute
|
||||
else:
|
||||
@ -196,7 +215,10 @@ class Backtester:
|
||||
#TEST zkusime to nemazat, jak ovlivni performance
|
||||
#Mazeme, jinak je to hruza
|
||||
#nechavame na konci trady, které muzeme potrebovat pro consekutivni pravidlo
|
||||
del self.btdata[0:index_end-2-BT_FILL_CONS_TRADES_REQUIRED]
|
||||
#osetrujeme, kdy je malo tradu a oriznuti by slo do zaporu
|
||||
del_to_index = index_end-2-cfh.config_handler.get_val('BT_FILL_CONS_TRADES_REQUIRED')
|
||||
del_to_index = del_to_index if del_to_index > 0 else 0
|
||||
del self.btdata[0:del_to_index]
|
||||
##ic("after delete",len(self.btdata[0:index_end]))
|
||||
|
||||
if changes: return 1
|
||||
@ -215,7 +237,7 @@ class Backtester:
|
||||
|
||||
fill_time = None
|
||||
fill_price = None
|
||||
order_min_fill_time = o.submitted_at.timestamp() + BT_DELAYS.sub_to_fill
|
||||
order_min_fill_time = o.submitted_at.timestamp() + cfh.config_handler.get_val('BT_DELAYS','sub_to_fill')
|
||||
#ic(order_min_fill_time)
|
||||
#ic(len(work_range))
|
||||
|
||||
@ -237,17 +259,18 @@ class Backtester:
#SETTING OF FILL CONDITIONS
fast_fill_condition = i[1] <= o.limit_price
slow_fill_condition = i[1] < o.limit_price
if BT_FILL_CONDITION_BUY_LIMIT == FillCondition.FAST:
fill_cond_buy_limit = cfh.config_handler.get_val('BT_FILL_CONDITION_BUY_LIMIT')
if fill_cond_buy_limit == FillCondition.FAST:
fill_condition = fast_fill_condition
elif BT_FILL_CONDITION_BUY_LIMIT == FillCondition.SLOW:
elif fill_cond_buy_limit == FillCondition.SLOW:
fill_condition = slow_fill_condition
else:
print("unknown fill condition")
return -1

if float(i[0]) > float(order_min_fill_time+BT_DELAYS.limit_order_offset) and fill_condition:
if float(i[0]) > float(order_min_fill_time+cfh.config_handler.get_val('BT_DELAYS','limit_order_offset')) and fill_condition:
consec_cnt += 1
if consec_cnt == BT_FILL_CONS_TRADES_REQUIRED:
if consec_cnt == cfh.config_handler.get_val('BT_FILL_CONS_TRADES_REQUIRED'):

#(1679081919.381649, 27.88)
#ic(i)
@ -258,10 +281,10 @@ class Backtester:
|
||||
#fill_price = i[1]
|
||||
|
||||
print("FILL LIMIT BUY at", fill_time, datetime.fromtimestamp(fill_time).astimezone(zoneNY), "at",i[1])
|
||||
if BT_FILL_LOG_SURROUNDING_TRADES != 0:
|
||||
if cfh.config_handler.get_val('BT_FILL_LOG_SURROUNDING_TRADES') != 0:
|
||||
#TODO loguru
|
||||
print("FILL SURR TRADES: before",work_range[index-BT_FILL_LOG_SURROUNDING_TRADES:index])
|
||||
print("FILL SURR TRADES: fill and after",work_range[index:index+BT_FILL_LOG_SURROUNDING_TRADES])
|
||||
print("FILL SURR TRADES: before",work_range[index-cfh.config_handler.get_val('BT_FILL_LOG_SURROUNDING_TRADES'):index])
|
||||
print("FILL SURR TRADES: fill and after",work_range[index:index+cfh.config_handler.get_val('BT_FILL_LOG_SURROUNDING_TRADES')])
|
||||
break
|
||||
else:
|
||||
consec_cnt = 0
|
||||
@ -272,17 +295,18 @@ class Backtester:
|
||||
#NASTVENI PODMINEK PLNENI
|
||||
fast_fill_condition = i[1] >= o.limit_price
|
||||
slow_fill_condition = i[1] > o.limit_price
|
||||
if BT_FILL_CONDITION_SELL_LIMIT == FillCondition.FAST:
|
||||
fill_conf_sell_cfg = cfh.config_handler.get_val('BT_FILL_CONDITION_SELL_LIMIT')
|
||||
if fill_conf_sell_cfg == FillCondition.FAST:
|
||||
fill_condition = fast_fill_condition
|
||||
elif BT_FILL_CONDITION_SELL_LIMIT == FillCondition.SLOW:
|
||||
elif fill_conf_sell_cfg == FillCondition.SLOW:
|
||||
fill_condition = slow_fill_condition
|
||||
else:
|
||||
print("unknown fill condition")
|
||||
return -1
|
||||
|
||||
if float(i[0]) > float(order_min_fill_time+BT_DELAYS.limit_order_offset) and fill_condition:
|
||||
if float(i[0]) > float(order_min_fill_time+cfh.config_handler.get_val('BT_DELAYS','limit_order_offset')) and fill_condition:
|
||||
consec_cnt += 1
|
||||
if consec_cnt == BT_FILL_CONS_TRADES_REQUIRED:
|
||||
if consec_cnt == cfh.config_handler.get_val('BT_FILL_CONS_TRADES_REQUIRED'):
|
||||
#(1679081919.381649, 27.88)
|
||||
#ic(i)
|
||||
fill_time = i[0]
|
||||
@ -294,10 +318,11 @@ class Backtester:
|
||||
|
||||
#fill_price = i[1]
|
||||
print("FILL LIMIT SELL at", fill_time, datetime.fromtimestamp(fill_time).astimezone(zoneNY), "at",i[1])
|
||||
if BT_FILL_LOG_SURROUNDING_TRADES != 0:
|
||||
surr_trades_cfg = cfh.config_handler.get_val('BT_FILL_LOG_SURROUNDING_TRADES')
|
||||
if surr_trades_cfg != 0:
|
||||
#TODO loguru
|
||||
print("FILL SELL SURR TRADES: before",work_range[index-BT_FILL_LOG_SURROUNDING_TRADES:index])
|
||||
print("FILL SELL SURR TRADES: fill and after",work_range[index:index+BT_FILL_LOG_SURROUNDING_TRADES])
|
||||
print("FILL SELL SURR TRADES: before",work_range[index-surr_trades_cfg:index])
|
||||
print("FILL SELL SURR TRADES: fill and after",work_range[index:index+surr_trades_cfg])
|
||||
break
|
||||
else:
|
||||
consec_cnt = 0
|
||||
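Both limit branches above share one pattern: read the fill condition from the config handler, choose between a FAST rule (a trade touching the limit price counts) and a SLOW rule (the price must trade through the limit), and only fill after the configured number of consecutive qualifying trades. A self-contained sketch of that decision, using hypothetical helper names:

```python
# Illustrative only - mirrors the limit-fill decision above with made-up names
from enum import Enum

class FillCond(str, Enum):
    FAST = "fast"
    SLOW = "slow"

def limit_buy_fill(trades, limit_price, condition, cons_required):
    """trades: iterable of (timestamp, price); returns the filling (timestamp, price) or None."""
    consec = 0
    for ts, price in trades:
        # FAST: touching the limit counts (<=); SLOW: price must trade below the limit (<)
        hit = price <= limit_price if condition == FillCond.FAST else price < limit_price
        if hit:
            consec += 1
            if consec == cons_required:
                return ts, price
        else:
            consec = 0
    return None

# e.g. require 2 consecutive trades at or below 27.88
print(limit_buy_fill([(1.0, 27.90), (1.1, 27.88), (1.2, 27.87)], 27.88, FillCond.FAST, 2))
```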
@ -311,11 +336,16 @@ class Backtester:
#ic(i)
fill_time = i[0]
fill_price = i[1]
#add the MARKET PREMIUM from configuration (in the future possibly different for BUY/SELL or configured per symbol)
#add the MARKET PREMIUM from configuration (it is in pct or abs) (in the future possibly different for BUY/SELL or configured per symbol)
cfg_premium = cfh.config_handler.get_val('BT_FILL_PRICE_MARKET_ORDER_PREMIUM')
if cfg_premium < 0: #configured as percentage
premium = abs(cfg_premium) * fill_price / 100.0
else: #configured as absolute value
premium = cfg_premium
if o.side == OrderSide.BUY:
fill_price = fill_price + BT_FILL_PRICE_MARKET_ORDER_PREMIUM
fill_price = fill_price + premium
elif o.side == OrderSide.SELL:
fill_price = fill_price - BT_FILL_PRICE_MARKET_ORDER_PREMIUM
fill_price = fill_price - premium

print("FILL ",o.side,"MARKET at", fill_time, datetime.fromtimestamp(fill_time).astimezone(zoneNY), "price", i[1])
break
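The premium handling above treats a negative configured value as a percentage of the fill price and a non-negative value as an absolute amount, then moves the market fill against the order (up for buys, down for sells). A small self-contained sketch of that convention:

```python
# Sketch of the market-order premium convention (negative = percent of fill price, otherwise absolute)
def apply_market_premium(fill_price: float, cfg_premium: float, is_buy: bool) -> float:
    premium = abs(cfg_premium) * fill_price / 100.0 if cfg_premium < 0 else cfg_premium
    return fill_price + premium if is_buy else fill_price - premium

print(apply_market_premium(30.00, 0.005, True))   # absolute premium -> 30.005
print(apply_market_premium(30.00, -0.1, False))   # -0.1 means 0.1 % -> 29.97
```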
@ -336,21 +366,22 @@ class Backtester:
|
||||
|
||||
#ic(o.filled_at, o.filled_avg_price)
|
||||
|
||||
a = self.update_account(o = o)
|
||||
a = self.update_internal_account(o = o)
|
||||
if a < 0:
|
||||
tlog("update_account ERROR")
|
||||
return -1
|
||||
|
||||
trade = TradeUpdate(order = o,
|
||||
trade = TradeUpdate(account=o.account,
|
||||
order = o,
|
||||
event = TradeEvent.FILL,
|
||||
execution_id = str(uuid4()),
|
||||
timestamp = datetime.fromtimestamp(fill_time),
|
||||
position_qty= self.account[o.symbol][0],
|
||||
position_qty= self.internal_account[o.account.name][o.symbol][0],
|
||||
price=float(fill_price),
|
||||
qty = o.qty,
|
||||
value = float(o.qty*fill_price),
|
||||
cash = self.cash,
|
||||
pos_avg_price = self.account[o.symbol][1])
|
||||
pos_avg_price = self.internal_account[o.account.name][o.symbol][1])
|
||||
|
||||
self.trades.append(trade)
|
||||
|
||||
@ -364,52 +395,52 @@ class Backtester:
|
||||
def _do_notification_with_callbacks(self, tradeupdate: TradeUpdate, time: float):
|
||||
|
||||
#do callbacku je třeba zpropagovat filltime čas (včetně latency pro notifikaci), aby se pripadne akce v callbacku udály s tímto časem
|
||||
self.time = time + float(BT_DELAYS.fill_to_not)
|
||||
self.time = time + float(cfh.config_handler.get_val('BT_DELAYS','fill_to_not'))
|
||||
print("current bt.time",self.time)
|
||||
#print("FILL NOTIFICATION: ", tradeupdate)
|
||||
res = asyncio.run(self.order_fill_callback(tradeupdate))
|
||||
res = asyncio.run(self.order_fill_callback(tradeupdate, tradeupdate.account))
|
||||
return 0
|
||||
|
||||
def update_account(self, o: Order):
|
||||
def update_internal_account(self, o: Order):
|
||||
#updatujeme self.account
|
||||
#pokud neexistuje klic v accountu vytvorime si ho
|
||||
if o.symbol not in self.account:
|
||||
if o.symbol not in self.internal_account[o.account.name]:
|
||||
# { "BAC": [size, avgp] }
|
||||
self.account[o.symbol] = [0,0]
|
||||
self.internal_account[o.account.name][o.symbol] = [0,0]
|
||||
|
||||
if o.side == OrderSide.BUY:
|
||||
#[size, avgp]
|
||||
newsize = (self.account[o.symbol][0] + o.qty)
|
||||
newsize = (self.internal_account[o.account.name][o.symbol][0] + o.qty)
|
||||
#JPLNE UZAVRENI SHORT (avgp 0)
|
||||
if newsize == 0: newavgp = 0
|
||||
#CASTECNE UZAVRENI SHORT (avgp puvodni)
|
||||
elif newsize < 0: newavgp = self.account[o.symbol][1]
|
||||
elif newsize < 0: newavgp = self.internal_account[o.account.name][o.symbol][1]
|
||||
#JDE O LONG (avgp nove)
|
||||
else:
|
||||
newavgp = ((self.account[o.symbol][0] * self.account[o.symbol][1]) + (o.qty * o.filled_avg_price)) / (self.account[o.symbol][0] + o.qty)
|
||||
newavgp = ((self.internal_account[o.account.name][o.symbol][0] * self.internal_account[o.account.name][o.symbol][1]) + (o.qty * o.filled_avg_price)) / (self.internal_account[o.account.name][o.symbol][0] + o.qty)
|
||||
|
||||
self.account[o.symbol] = [newsize, newavgp]
|
||||
self.internal_account[o.account.name][o.symbol] = [newsize, newavgp]
|
||||
self.cash = self.cash - (o.qty * o.filled_avg_price)
|
||||
return 1
|
||||
#sell
|
||||
elif o.side == OrderSide.SELL:
|
||||
newsize = self.account[o.symbol][0]-o.qty
|
||||
newsize = self.internal_account[o.account.name][o.symbol][0]-o.qty
|
||||
#UPLNE UZAVRENI LONGU (avgp 0)
|
||||
if newsize == 0: newavgp = 0
|
||||
#CASTECNE UZAVRENI LONGU (avgp puvodni)
|
||||
elif newsize > 0: newavgp = self.account[o.symbol][1]
|
||||
elif newsize > 0: newavgp = self.internal_account[o.account.name][o.symbol][1]
|
||||
#jde o SHORT (avgp nove)
|
||||
else:
|
||||
#pokud je predchozi 0 - tzn. jde o prvni short
|
||||
if self.account[o.symbol][1] == 0:
|
||||
if self.internal_account[o.account.name][o.symbol][1] == 0:
|
||||
newavgp = o.filled_avg_price
|
||||
else:
|
||||
newavgp = ((abs(self.account[o.symbol][0]) * self.account[o.symbol][1]) + (o.qty * o.filled_avg_price)) / (abs(self.account[o.symbol][0]) + o.qty)
|
||||
newavgp = ((abs(self.internal_account[o.account.name][o.symbol][0]) * self.internal_account[o.account.name][o.symbol][1]) + (o.qty * o.filled_avg_price)) / (abs(self.internal_account[o.account.name][o.symbol][0]) + o.qty)
|
||||
|
||||
self.account[o.symbol] = [newsize, newavgp]
|
||||
self.internal_account[o.account.name][o.symbol] = [newsize, newavgp]
|
||||
|
||||
#pokud jde o prodej longu(nova pozice je>=0) upravujeme cash
|
||||
if self.account[o.symbol][0] >= 0:
|
||||
if self.internal_account[o.account.name][o.symbol][0] >= 0:
|
||||
self.cash = float(self.cash + (o.qty * o.filled_avg_price))
|
||||
print("uprava cashe, jde o prodej longu")
|
||||
else:
|
||||
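update_internal_account above keeps a [size, avgp] pair per account and symbol, and recomputes the average price as a size-weighted mean whenever a fill adds to the position; fully closing resets the average to 0 and partially closing keeps the old one. A minimal sketch of the BUY case with a hypothetical helper:

```python
# Sketch of the per-account position update for a BUY fill (hypothetical helper mirroring the logic above)
def apply_buy_fill(position, qty, fill_price):
    """position: [size, avgp]; returns the new [size, avgp]."""
    size, avgp = position
    new_size = size + qty
    if new_size == 0:        # short fully closed
        new_avgp = 0
    elif new_size < 0:       # short partially closed - keep the original average
        new_avgp = avgp
    else:                    # adding to (or opening) a long - size-weighted average
        new_avgp = (size * avgp + qty * fill_price) / (size + qty)
    return [new_size, new_avgp]

print(apply_buy_fill([100, 30.00], 50, 30.30))   # -> [150, 30.10]
```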
@ -454,7 +485,7 @@ class Backtester:
|
||||
# #ic("get last price")
|
||||
# return self.btdata[i-1][1]
|
||||
|
||||
def submit_order(self, time: float, symbol: str, side: OrderSide, size: int, order_type: OrderType, price: float = None):
|
||||
def submit_order(self, time: float, symbol: str, side: OrderSide, size: int, order_type: OrderType, account: Account, price: float = None):
|
||||
"""submit order
|
||||
- zakladni validace
|
||||
- vloží do self.open_orders s daným časem
|
||||
@ -467,11 +498,11 @@ class Backtester:
|
||||
print("BT: submit order entry")
|
||||
|
||||
if not time or time < 0:
|
||||
print("time musi byt vyplneny")
|
||||
printanyway("time musi byt vyplneny")
|
||||
return -1
|
||||
|
||||
if not size or int(size) < 0:
|
||||
print("size musi byt vetsi nez 0")
|
||||
printanyway("size musi byt vetsi nez 0")
|
||||
return -1
|
||||
|
||||
if (order_type != OrderType.MARKET) and (order_type != OrderType.LIMIT):
|
||||
@ -479,17 +510,17 @@ class Backtester:
|
||||
return -1
|
||||
|
||||
if not side == OrderSide.BUY and not side == OrderSide.SELL:
|
||||
print("side buy/sell required")
|
||||
printanyway("side buy/sell required")
|
||||
return -1
|
||||
|
||||
if order_type == OrderType.LIMIT and count_decimals(price) > 2:
|
||||
print("only 2 decimals supported", price)
|
||||
printanyway("only 2 decimals supported", price)
|
||||
return -1
|
||||
|
||||
#pokud neexistuje klic v accountu vytvorime si ho
|
||||
if symbol not in self.account:
|
||||
if symbol not in self.internal_account[account.name]:
|
||||
# { "BAC": [size, avgp] }
|
||||
self.account[symbol] = [0,0]
|
||||
self.internal_account[account.name][symbol] = [0,0]
|
||||
|
||||
#check for available quantity
|
||||
if side == OrderSide.SELL:
|
||||
@ -497,22 +528,22 @@ class Backtester:
|
||||
reserved_price = 0
|
||||
#with lock:
|
||||
for o in self.open_orders:
|
||||
if o.side == OrderSide.SELL and o.symbol == symbol and o.canceled_at is None:
|
||||
if o.side == OrderSide.SELL and o.symbol == symbol and o.canceled_at is None and o.account==account:
|
||||
reserved += o.qty
|
||||
cena = o.limit_price if o.limit_price else self.get_last_price(time, o.symbol)
|
||||
reserved_price += o.qty * cena
|
||||
print("blokovano v open orders pro sell qty: ", reserved, "celkem:", reserved_price)
|
||||
|
||||
actual_minus_reserved = int(self.account[symbol][0]) - reserved
|
||||
actual_minus_reserved = int(self.internal_account[account.name][symbol][0]) - reserved
|
||||
if actual_minus_reserved > 0 and actual_minus_reserved - int(size) < 0:
|
||||
print("not enough shares available to sell or shorting while long position",self.account[symbol][0],"reserved",reserved,"available",int(self.account[symbol][0]) - reserved,"selling",size)
|
||||
printanyway("not enough shares available to sell or shorting while long position",self.internal_account[account.name][symbol][0],"reserved",reserved,"available",int(self.internal_account[account.name][symbol][0]) - reserved,"selling",size)
|
||||
return -1
|
||||
|
||||
#if is shorting - check available cash to short
|
||||
if actual_minus_reserved <= 0:
|
||||
cena = price if price else self.get_last_price(time, self.symbol)
|
||||
if (self.cash - reserved_price - float(int(size)*float(cena))) < 0:
|
||||
print("not enough cash for shorting. cash",self.cash,"reserved",reserved,"available",self.cash-reserved,"needed",float(int(size)*float(cena)))
|
||||
printanyway("ERROR: not enough cash for shorting. cash",self.cash,"reserved",reserved,"available",self.cash-reserved,"needed",float(int(size)*float(cena)))
|
||||
return -1
|
||||
|
||||
#check for available cash
|
||||
@ -521,28 +552,29 @@ class Backtester:
|
||||
reserved_price = 0
|
||||
#with lock:
|
||||
for o in self.open_orders:
|
||||
if o.side == OrderSide.BUY and o.canceled_at is None:
|
||||
if o.side == OrderSide.BUY and o.canceled_at is None and o.account==account:
|
||||
cena = o.limit_price if o.limit_price else self.get_last_price(time, o.symbol)
|
||||
reserved_price += o.qty * cena
|
||||
reserved_qty += o.qty
|
||||
print("blokovano v open orders for buy: qty, price", reserved_qty, reserved_price)
|
||||
|
||||
actual_plus_reserved_qty = int(self.account[symbol][0]) + reserved_qty
|
||||
actual_plus_reserved_qty = int(self.internal_account[account.name][symbol][0]) + reserved_qty
|
||||
|
||||
#jde o uzavreni shortu
|
||||
if actual_plus_reserved_qty < 0 and (actual_plus_reserved_qty + int(size)) > 0:
|
||||
print("nejprve je treba uzavrit short pozici pro buy res_qty, size", actual_plus_reserved_qty, size)
|
||||
printanyway("nejprve je treba uzavrit short pozici pro buy res_qty, size", actual_plus_reserved_qty, size)
|
||||
return -1
|
||||
|
||||
#jde o standardni long, kontroluju cash
|
||||
if actual_plus_reserved_qty >= 0:
|
||||
cena = price if price else self.get_last_price(time, self.symbol)
|
||||
if (self.cash - reserved_price - float(int(size)*float(cena))) < 0:
|
||||
print("not enough cash to buy long. cash",self.cash,"reserved_qty",reserved_qty,"reserved_price",reserved_price, "available",self.cash-reserved_price,"needed",float(int(size)*float(cena)))
|
||||
printanyway("ERROR: not enough cash to buy long. cash",self.cash,"reserved_qty",reserved_qty,"reserved_price",reserved_price, "available",self.cash-reserved_price,"needed",float(int(size)*float(cena)))
|
||||
return -1
|
||||
|
||||
id = str(uuid4())
|
||||
order = Order(id=id,
|
||||
account=account,
|
||||
submitted_at = datetime.fromtimestamp(float(time)),
|
||||
symbol=symbol,
|
||||
order_type = order_type,
|
||||
@ -557,7 +589,7 @@ class Backtester:
|
||||
return id
|
||||
|
||||
|
||||
def replace_order(self, id: str, time: float, size: int = None, price: float = None):
|
||||
def replace_order(self, id: str, time: float, account: Account, size: int = None, price: float = None):
|
||||
"""replace order
|
||||
- zakladni validace vrací synchronně
|
||||
- vrací číslo nové objednávky
|
||||
@ -565,16 +597,16 @@ class Backtester:
|
||||
print("BT: replace order entry",id,size,price)
|
||||
|
||||
if not price and not size:
|
||||
print("size or price required")
|
||||
printanyway("size or price required")
|
||||
return -1
|
||||
|
||||
if len(self.open_orders) == 0:
|
||||
print("BT: order doesnt exist")
|
||||
printanyway("BT: order doesnt exist")
|
||||
return 0
|
||||
#with lock:
|
||||
for o in self.open_orders:
|
||||
print(o.id)
|
||||
if str(o.id) == str(id):
|
||||
if str(o.id) == str(id) and o.account == account:
|
||||
newid = str(uuid4())
|
||||
o.id = newid
|
||||
o.submitted_at = datetime.fromtimestamp(time)
|
||||
@ -585,7 +617,7 @@ class Backtester:
|
||||
print("BT: replacement order doesnt exist")
|
||||
return 0
|
||||
|
||||
def cancel_order(self, time: float, id: str):
|
||||
def cancel_order(self, time: float, id: str, account: Account):
|
||||
"""cancel order
|
||||
- základní validace vrací synchronně
|
||||
- vymaže objednávku z open orders
|
||||
@ -597,26 +629,26 @@ class Backtester:
|
||||
"""
|
||||
print("BT: cancel order entry",id)
|
||||
if len(self.open_orders) == 0:
|
||||
print("BTC: order doesnt exist")
|
||||
printanyway("BTC: order doesnt exist")
|
||||
return 0
|
||||
#with lock:
|
||||
for o in self.open_orders:
|
||||
if str(o.id) == id:
|
||||
if str(o.id) == id and o.account == account:
|
||||
o.canceled_at = time
|
||||
print("set as canceled in self.open_orders")
|
||||
return 1
|
||||
print("BTC: cantchange. open order doesnt exist")
|
||||
return 0
|
||||
|
||||
def get_open_position(self, symbol: str):
|
||||
def get_open_position(self, symbol: str, account: Account):
|
||||
"""get positions ->(avg,size)"""
|
||||
#print("BT:get open positions entry")
|
||||
try:
|
||||
return self.account[symbol][1], self.account[symbol][0]
|
||||
return self.internal_account[account.name][symbol][1], self.internal_account[account.name][symbol][0]
|
||||
except:
|
||||
return (0,0)
|
||||
|
||||
def get_open_orders(self, side: OrderSide, symbol: str):
|
||||
def get_open_orders(self, side: OrderSide, symbol: str, account: Account):
|
||||
"""get open orders ->list(OrderNotification)"""
|
||||
print("BT:get open orders entry")
|
||||
if len(self.open_orders) == 0:
|
||||
@ -626,7 +658,7 @@ class Backtester:
|
||||
#with lock:
|
||||
for o in self.open_orders:
|
||||
#print(o)
|
||||
if o.symbol == symbol and o.canceled_at is None:
|
||||
if o.symbol == symbol and o.canceled_at is None and o.account == account:
|
||||
if side is None or o.side == side:
|
||||
res.append(o)
|
||||
return res
|
||||
@ -817,10 +849,10 @@ class Backtester:
|
||||
Trades:''' + str(len(self.trades)))
|
||||
textik8 = html.Div('''
|
||||
Profit:''' + str(state.profit))
|
||||
textik9 = html.Div(f"{BT_FILL_CONS_TRADES_REQUIRED=}")
|
||||
textik10 = html.Div(f"{BT_FILL_LOG_SURROUNDING_TRADES=}")
|
||||
textik11 = html.Div(f"{BT_FILL_CONDITION_BUY_LIMIT=}")
|
||||
textik12 = html.Div(f"{BT_FILL_CONDITION_SELL_LIMIT=}")
|
||||
textik9 = html.Div(f"{cfh.config_handler.get_val('BT_FILL_CONS_TRADES_REQUIRED')=}")
|
||||
textik10 = html.Div(f"{cfh.config_handler.get_val('BT_FILL_LOG_SURROUNDING_TRADES')=}")
|
||||
textik11 = html.Div(f"{cfh.config_handler.get_val('BT_FILL_CONDITION_BUY_LIMIT')=}")
|
||||
textik12 = html.Div(f"{cfh.config_handler.get_val('BT_FILL_CONDITION_SELL_LIMIT')=}")
|
||||
|
||||
orders_title = dcc.Markdown('## Open orders')
|
||||
trades_title = dcc.Markdown('## Trades')
|
||||
|
||||
@ -1,41 +0,0 @@
|
||||
from enum import Enum
|
||||
from datetime import datetime
|
||||
from pydantic import BaseModel
|
||||
from typing import Any, Optional, List, Union
|
||||
from uuid import UUID
|
||||
class TradeStatus(str, Enum):
|
||||
READY = "ready"
|
||||
ACTIVATED = "activated"
|
||||
CLOSED = "closed"
|
||||
#FINISHED = "finished"
|
||||
|
||||
class TradeDirection(str, Enum):
|
||||
LONG = "long"
|
||||
SHORT = "short"
|
||||
|
||||
class TradeStoplossType(str, Enum):
|
||||
FIXED = "fixed"
|
||||
TRAILING = "trailing"
|
||||
|
||||
#Predpis obchodu vygenerovany signalem, je to zastresujici jednotka
|
||||
#ke kteremu jsou pak navazany jednotlivy FILLy (reprezentovany model.TradeUpdate) - napr. castecne exity atp.
|
||||
class Trade(BaseModel):
|
||||
id: UUID
|
||||
last_update: datetime
|
||||
entry_time: Optional[datetime] = None
|
||||
exit_time: Optional[datetime] = None
|
||||
status: TradeStatus
|
||||
generated_by: Optional[str] = None
|
||||
direction: TradeDirection
|
||||
entry_price: Optional[float] = None
|
||||
goal_price: Optional[float] = None
|
||||
size: Optional[int] = None
|
||||
# size_multiplier je pomocna promenna pro pocitani relativniho denniho profit
|
||||
size_multiplier: Optional[float] = None
|
||||
# stoploss_type: TradeStoplossType
|
||||
stoploss_value: Optional[float] = None
|
||||
profit: Optional[float] = 0
|
||||
profit_sum: Optional[float] = 0
|
||||
rel_profit: Optional[float] = 0
|
||||
rel_profit_cum: Optional[float] = 0
|
||||
|
||||
@ -1,11 +1,8 @@
|
||||
from v2realbot.config import DATA_DIR
|
||||
import sqlite3
|
||||
import queue
|
||||
import threading
|
||||
import time
|
||||
from v2realbot.common.model import RunArchive, RunArchiveView
|
||||
from datetime import datetime
|
||||
import json
|
||||
from v2realbot.config import DATA_DIR
|
||||
|
||||
sqlite_db_file = DATA_DIR + "/v2trading.db"
|
||||
# Define the connection pool
|
||||
@ -31,7 +28,7 @@ class ConnectionPool:
|
||||
return connection
|
||||
|
||||
|
||||
def execute_with_retry(cursor: sqlite3.Cursor, statement: str, params = None, retry_interval: int = 1) -> sqlite3.Cursor:
def execute_with_retry(cursor: sqlite3.Cursor, statement: str, params = None, retry_interval: int = 2) -> sqlite3.Cursor:
"""get connection from pool and execute SQL statement with retry logic if required.

Args:
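The docstring above describes retrying a SQLite statement when the database is busy. A minimal sketch of what such a retry loop can look like, assuming it targets sqlite3.OperationalError (for example "database is locked"); this is illustrative and not the file's actual body:

```python
# Illustrative retry loop for a busy SQLite database (error handling assumed, not the original body)
import sqlite3
import time

def execute_with_retry_sketch(cursor: sqlite3.Cursor, statement: str, params=None, retry_interval: int = 2) -> sqlite3.Cursor:
    while True:
        try:
            return cursor.execute(statement, params) if params else cursor.execute(statement)
        except sqlite3.OperationalError as e:
            # typically "database is locked" while another writer holds the lock
            print(f"sqlite busy ({e}), retrying in {retry_interval}s")
            time.sleep(retry_interval)
```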
@ -61,52 +58,3 @@ pool = ConnectionPool(10)
|
||||
#for one shared connection (used for writes only in WAL mode)
|
||||
insert_conn = sqlite3.connect(sqlite_db_file, check_same_thread=False)
|
||||
insert_queue = queue.Queue()
|
||||
|
||||
#prevede dict radku zpatky na objekt vcetme retypizace
|
||||
def row_to_runarchiveview(row: dict) -> RunArchiveView:
|
||||
return RunArchive(
|
||||
id=row['runner_id'],
|
||||
strat_id=row['strat_id'],
|
||||
batch_id=row['batch_id'],
|
||||
symbol=row['symbol'],
|
||||
name=row['name'],
|
||||
note=row['note'],
|
||||
started=datetime.fromisoformat(row['started']) if row['started'] else None,
|
||||
stopped=datetime.fromisoformat(row['stopped']) if row['stopped'] else None,
|
||||
mode=row['mode'],
|
||||
account=row['account'],
|
||||
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
|
||||
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
|
||||
ilog_save=bool(row['ilog_save']),
|
||||
profit=float(row['profit']),
|
||||
trade_count=int(row['trade_count']),
|
||||
end_positions=int(row['end_positions']),
|
||||
end_positions_avgp=float(row['end_positions_avgp']),
|
||||
metrics=json.loads(row['metrics']) if row['metrics'] else None
|
||||
)
|
||||
|
||||
#prevede dict radku zpatky na objekt vcetme retypizace
|
||||
def row_to_runarchive(row: dict) -> RunArchive:
|
||||
return RunArchive(
|
||||
id=row['runner_id'],
|
||||
strat_id=row['strat_id'],
|
||||
batch_id=row['batch_id'],
|
||||
symbol=row['symbol'],
|
||||
name=row['name'],
|
||||
note=row['note'],
|
||||
started=datetime.fromisoformat(row['started']) if row['started'] else None,
|
||||
stopped=datetime.fromisoformat(row['stopped']) if row['stopped'] else None,
|
||||
mode=row['mode'],
|
||||
account=row['account'],
|
||||
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
|
||||
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
|
||||
strat_json=json.loads(row['strat_json']),
|
||||
settings=json.loads(row['settings']),
|
||||
ilog_save=bool(row['ilog_save']),
|
||||
profit=float(row['profit']),
|
||||
trade_count=int(row['trade_count']),
|
||||
end_positions=int(row['end_positions']),
|
||||
end_positions_avgp=float(row['end_positions_avgp']),
|
||||
metrics=json.loads(row['metrics']),
|
||||
stratvars_toml=row['stratvars_toml']
|
||||
)
|
||||
@ -1,12 +1,80 @@
|
||||
from uuid import UUID
|
||||
from uuid import UUID, uuid4
|
||||
from alpaca.trading.enums import OrderSide, OrderStatus, TradeEvent,OrderType
|
||||
#from utils import AttributeDict
|
||||
from rich import print
|
||||
from typing import Any, Optional, List, Union
|
||||
from datetime import datetime, date
|
||||
from pydantic import BaseModel
|
||||
from v2realbot.enums.enums import Mode, Account
|
||||
from pydantic import BaseModel, Field
|
||||
from v2realbot.enums.enums import Mode, Account, SchedulerStatus, Moddus, Market, Followup
|
||||
from alpaca.data.enums import Exchange
|
||||
from enum import Enum
|
||||
from datetime import datetime
|
||||
from pydantic import BaseModel
|
||||
from typing import Any, Optional, List, Union
|
||||
from uuid import UUID
|
||||
|
||||
|
||||
#prescribed model
|
||||
#from prescribed model
|
||||
class InstantIndicator(BaseModel):
|
||||
name: str
|
||||
toml: str
|
||||
|
||||
|
||||
class TradeStatus(str, Enum):
|
||||
READY = "ready"
|
||||
ACTIVATED = "activated"
|
||||
CLOSED = "closed"
|
||||
#FINISHED = "finished"
|
||||
|
||||
class TradeDirection(str, Enum):
|
||||
LONG = "long"
|
||||
SHORT = "short"
|
||||
|
||||
class TradeStoplossType(str, Enum):
|
||||
FIXED = "fixed"
|
||||
TRAILING = "trailing"
|
||||
|
||||
#Predpis obchodu vygenerovany signalem, je to zastresujici jednotka
|
||||
#ke kteremu jsou pak navazany jednotlivy FILLy (reprezentovany model.TradeUpdate) - napr. castecne exity atp.
|
||||
class Trade(BaseModel):
|
||||
account: Account
|
||||
id: UUID
|
||||
last_update: datetime
|
||||
entry_time: Optional[datetime] = None
|
||||
exit_time: Optional[datetime] = None
|
||||
status: TradeStatus
|
||||
generated_by: Optional[str] = None
|
||||
direction: TradeDirection
|
||||
entry_price: Optional[float] = None
|
||||
goal_price: Optional[float] = None
|
||||
size: Optional[int] = None
|
||||
# size_multiplier je pomocna promenna pro pocitani relativniho denniho profit
|
||||
size_multiplier: Optional[float] = None
|
||||
# stoploss_type: TradeStoplossType
|
||||
stoploss_value: Optional[float] = None
|
||||
profit: Optional[float] = 0
|
||||
profit_sum: Optional[float] = 0
|
||||
rel_profit: Optional[float] = 0
|
||||
rel_profit_cum: Optional[float] = 0
|
||||
|
||||
#account variables that can be accessed by ACCOUNT key dictionary
|
||||
class AccountVariables(BaseModel):
|
||||
positions: float = 0
|
||||
avgp: float = 0
|
||||
pending: str = None
|
||||
blockbuy: int = 0
|
||||
wait_for_fill: float = None
|
||||
profit: float = 0
|
||||
docasny_rel_profit: list = []
|
||||
rel_profit_cum: list = []
|
||||
last_entry_index: int = None #acc varianta, mame taky obnecnou state.vars.last_entry_index
|
||||
requested_followup: Followup = None
|
||||
activeTrade: Trade = None
|
||||
dont_exit_already_activated: bool = False
|
||||
#activeTrade, prescribedTrades
|
||||
#tbd transferables?
|
||||
|
||||
|
||||
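AccountVariables above bundles the mutable per-account state (positions, avgp, pending, profit, the active Trade, ...), so a strategy can keep one instance per account key. A minimal sketch of such a map; the Account members are assumptions:

```python
# Hypothetical per-account state map keyed by Account (enum members assumed)
account_vars = {acc: AccountVariables() for acc in (Account.ACCOUNT1, Account.ACCOUNT2)}
account_vars[Account.ACCOUNT1].positions = 100
account_vars[Account.ACCOUNT1].avgp = 30.05
print(account_vars[Account.ACCOUNT1].profit)   # -> 0
```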
#models for server side datatables
|
||||
# Model for individual column data
|
||||
@ -88,15 +156,15 @@ class TestList(BaseModel):
|
||||
dates: List[Intervals]
|
||||
|
||||
#for GUI to fetch historical trades on given symbol
|
||||
class Trade(BaseModel):
|
||||
class TradeView(BaseModel):
|
||||
symbol: str
|
||||
timestamp: datetime
|
||||
exchange: Optional[Union[Exchange, str]]
|
||||
exchange: Optional[Union[Exchange, str]] = None
|
||||
price: float
|
||||
size: float
|
||||
id: int
|
||||
conditions: Optional[List[str]]
|
||||
tape: Optional[str]
|
||||
conditions: Optional[List[str]] = None
|
||||
tape: Optional[str] = None
|
||||
|
||||
|
||||
#persisted object in pickle
|
||||
@ -111,8 +179,20 @@ class StrategyInstance(BaseModel):
|
||||
close_rush: int = 0
|
||||
stratvars_conf: str
|
||||
add_data_conf: str
|
||||
note: Optional[str]
|
||||
history: Optional[str]
|
||||
note: Optional[str] = None
|
||||
history: Optional[str] = None
|
||||
|
||||
def __setstate__(self, state: dict[Any, Any]) -> None:
|
||||
"""
|
||||
Hack to allow unpickling models stored from pydantic V1
|
||||
"""
|
||||
state.setdefault("__pydantic_extra__", {})
|
||||
state.setdefault("__pydantic_private__", {})
|
||||
|
||||
if "__pydantic_fields_set__" not in state:
|
||||
state["__pydantic_fields_set__"] = state.get("__fields_set__")
|
||||
|
||||
super().__setstate__(state)
|
||||
|
||||
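The __setstate__ override above backfills the private attributes that Pydantic V2 expects (__pydantic_extra__, __pydantic_private__ and __pydantic_fields_set__), so StrategyInstance objects pickled under Pydantic V1 can still be unpickled. A hedged usage sketch; the file name is hypothetical:

```python
# Hypothetical example: loading a StrategyInstance pickled under Pydantic V1 (file name made up)
import pickle

with open("stratins_v1.pickle", "rb") as f:
    inst = pickle.load(f)   # __setstate__ above fills in the V2-only private attributes
print(type(inst).__name__, inst.note)
```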
class RunRequest(BaseModel):
|
||||
id: UUID
|
||||
@ -122,8 +202,8 @@ class RunRequest(BaseModel):
|
||||
debug: bool = False
|
||||
strat_json: Optional[str] = None
|
||||
ilog_save: bool = False
|
||||
bt_from: datetime = None
|
||||
bt_to: datetime = None
|
||||
bt_from: Optional[datetime] = None
|
||||
bt_to: Optional[datetime] = None
|
||||
#weekdays filter
|
||||
#pokud je uvedeny filtrujeme tyto dny
|
||||
weekdays_filter: Optional[list] = None
|
||||
@ -134,7 +214,34 @@ class RunRequest(BaseModel):
|
||||
cash: int = 100000
|
||||
skip_cache: Optional[bool] = False
|
||||
|
||||
|
||||
#Class that extends RunRequest and is used by the scheduler; it only adds a few extra fields
class RunManagerRecord(BaseModel):
moddus: Moddus
id: UUID = Field(default_factory=uuid4)
strat_id: UUID
symbol: Optional[str] = None
account: Account
mode: Mode
note: Optional[str] = None
ilog_save: bool = False
market: Optional[Market] = Market.US
bt_from: Optional[datetime] = None
bt_to: Optional[datetime] = None
#weekdays filter
#if provided, only these days are run
weekdays_filter: Optional[list] = None #list of strings 0-6 representing days to run
#GENERATED ID within a run, ties together all runners of a batch run
batch_id: Optional[str] = None
testlist_id: Optional[str] = None
start_time: str #time (HH:MM) that start function is called
stop_time: Optional[str] = None #time (HH:MM) that stop function is called
status: SchedulerStatus
last_processed: Optional[datetime] = None
history: Optional[str] = None
valid_from: Optional[datetime] = None # US East time zone datetime
valid_to: Optional[datetime] = None # US East time zone datetime
runner_id: Optional[UUID] = None #last runner_id from scheduler after strategy is started
strat_running: Optional[bool] = None #automatically updated field based on status of runner_id above, it is added by row_to_RunManagerRecord
class RunnerView(BaseModel):
|
||||
id: UUID
|
||||
strat_id: UUID
|
||||
@ -147,8 +254,8 @@ class RunnerView(BaseModel):
|
||||
run_symbol: Optional[str] = None
|
||||
run_trade_count: Optional[int] = 0
|
||||
run_profit: Optional[float] = 0
|
||||
run_positions: Optional[int] = 0
|
||||
run_avgp: Optional[float] = 0
|
||||
run_positions: Optional[dict] = 0
|
||||
run_avgp: Optional[dict] = 0
|
||||
run_stopped: Optional[datetime] = None
|
||||
run_paused: Optional[datetime] = None
|
||||
|
||||
@ -164,10 +271,10 @@ class Runner(BaseModel):
|
||||
run_name: Optional[str] = None
|
||||
run_note: Optional[str] = None
|
||||
run_ilog_save: Optional[bool] = False
|
||||
run_trade_count: Optional[int]
|
||||
run_profit: Optional[float]
|
||||
run_positions: Optional[int]
|
||||
run_avgp: Optional[float]
|
||||
run_trade_count: Optional[int] = None
|
||||
run_profit: Optional[float] = None
|
||||
run_positions: Optional[dict] = None
|
||||
run_avgp: Optional[dict] = None
|
||||
run_strat_json: Optional[str] = None
|
||||
run_stopped: Optional[datetime] = None
|
||||
run_paused: Optional[datetime] = None
|
||||
@ -201,41 +308,43 @@ class Bar(BaseModel):
|
||||
low: float
|
||||
close: float
|
||||
volume: float
|
||||
trade_count: Optional[float]
|
||||
vwap: Optional[float]
|
||||
trade_count: Optional[float] = 0
|
||||
vwap: Optional[float] = 0
|
||||
|
||||
class Order(BaseModel):
|
||||
account: Account
|
||||
id: UUID
|
||||
submitted_at: datetime
|
||||
filled_at: Optional[datetime]
|
||||
canceled_at: Optional[datetime]
|
||||
filled_at: Optional[datetime] = None
|
||||
canceled_at: Optional[datetime] = None
|
||||
symbol: str
|
||||
qty: int
|
||||
status: OrderStatus
|
||||
order_type: OrderType
|
||||
filled_qty: Optional[int]
|
||||
filled_avg_price: Optional[float]
|
||||
filled_qty: Optional[int] = None
|
||||
filled_avg_price: Optional[float] = None
|
||||
side: OrderSide
|
||||
limit_price: Optional[float]
|
||||
limit_price: Optional[float] = None
|
||||
|
||||
#entita pro kazdy kompletni FILL, je navazana na prescribed_trade
|
||||
class TradeUpdate(BaseModel):
|
||||
account: Account
|
||||
event: Union[TradeEvent, str]
|
||||
execution_id: Optional[UUID]
|
||||
execution_id: Optional[UUID] = None
|
||||
order: Order
|
||||
timestamp: datetime
|
||||
position_qty: Optional[float]
|
||||
price: Optional[float]
|
||||
qty: Optional[float]
|
||||
value: Optional[float]
|
||||
cash: Optional[float]
|
||||
pos_avg_price: Optional[float]
|
||||
profit: Optional[float]
|
||||
profit_sum: Optional[float]
|
||||
rel_profit: Optional[float]
|
||||
rel_profit_cum: Optional[float]
|
||||
signal_name: Optional[str]
|
||||
prescribed_trade_id: Optional[str]
|
||||
position_qty: Optional[float] = None
|
||||
price: Optional[float] = None
|
||||
qty: Optional[float] = None
|
||||
value: Optional[float] = None
|
||||
cash: Optional[float] = None
|
||||
pos_avg_price: Optional[float] = None
|
||||
profit: Optional[float] = None
|
||||
profit_sum: Optional[float] = None
|
||||
rel_profit: Optional[float] = None
|
||||
rel_profit_cum: Optional[float] = None
|
||||
signal_name: Optional[str] = None
|
||||
prescribed_trade_id: Optional[str] = None
|
||||
|
||||
|
||||
class RunArchiveChange(BaseModel):
|
||||
@ -260,14 +369,13 @@ class RunArchive(BaseModel):
|
||||
bt_from: Optional[datetime] = None
|
||||
bt_to: Optional[datetime] = None
|
||||
strat_json: Optional[str] = None
|
||||
##bude decomiss, misto toho stratvars_toml
|
||||
stratvars: Optional[dict] = None
|
||||
transferables: Optional[dict] = None #varaibles that are transferrable to next run
|
||||
settings: Optional[dict] = None
|
||||
ilog_save: Optional[bool] = False
|
||||
profit: float = 0
|
||||
trade_count: int = 0
|
||||
end_positions: int = 0
|
||||
end_positions_avgp: float = 0
|
||||
end_positions: Union[dict,str] = None
|
||||
end_positions_avgp: Union[dict,str] = None
|
||||
metrics: Union[dict, str] = None
|
||||
stratvars_toml: Optional[str] = None
|
||||
|
||||
@ -288,9 +396,11 @@ class RunArchiveView(BaseModel):
|
||||
ilog_save: Optional[bool] = False
|
||||
profit: float = 0
|
||||
trade_count: int = 0
|
||||
end_positions: int = 0
|
||||
end_positions_avgp: float = 0
|
||||
end_positions: Union[dict,int] = None
|
||||
end_positions_avgp: Union[dict,float] = None
|
||||
metrics: Union[dict, str] = None
|
||||
batch_profit: float = 0 # Total profit for the batch - now calculated during query
|
||||
batch_count: int = 0 # Count of runs in the batch - now calculated during query
|
||||
|
||||
#same but with pagination
|
||||
class RunArchiveViewPagination(BaseModel):
|
||||
@ -301,9 +411,11 @@ class RunArchiveViewPagination(BaseModel):
|
||||
|
||||
#trida pro ukladani historie stoplossy do ext_data
|
||||
class SLHistory(BaseModel):
|
||||
id: Optional[UUID]
|
||||
id: Optional[UUID] = None
|
||||
time: datetime
|
||||
sl_val: float
|
||||
direction: TradeDirection
|
||||
account: Account
|
||||
|
||||
#Contains archive of running strategies (runner) - detail data
|
||||
class RunArchiveDetail(BaseModel):
|
||||
@ -314,11 +426,5 @@ class RunArchiveDetail(BaseModel):
|
||||
indicators: List[dict]
|
||||
statinds: dict
|
||||
trades: List[TradeUpdate]
|
||||
ext_data: Optional[dict]
|
||||
|
||||
|
||||
class InstantIndicator(BaseModel):
|
||||
name: str
|
||||
toml: str
|
||||
|
||||
ext_data: Optional[dict] = None
|
||||
|
||||
|
||||
87
v2realbot/common/transform.py
Normal file
@ -0,0 +1,87 @@
|
||||
from v2realbot.common.model import RunArchive, RunArchiveView, RunManagerRecord
|
||||
from datetime import datetime
|
||||
import orjson
|
||||
import v2realbot.controller.services as cs
|
||||
|
||||
#prevede dict radku zpatky na objekt vcetme retypizace
|
||||
def row_to_runmanager(row: dict) -> RunManagerRecord:
|
||||
is_running = cs.is_runner_running(row['runner_id']) if row['runner_id'] else False
|
||||
res = RunManagerRecord(
|
||||
moddus=row['moddus'],
|
||||
id=row['id'],
|
||||
strat_id=row['strat_id'],
|
||||
symbol=row['symbol'],
|
||||
mode=row['mode'],
|
||||
account=row['account'],
|
||||
note=row['note'],
|
||||
ilog_save=bool(row['ilog_save']),
|
||||
market=row['market'] if row['market'] is not None else None,
|
||||
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
|
||||
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
|
||||
weekdays_filter=[int(x) for x in row['weekdays_filter'].split(',')] if row['weekdays_filter'] else [],
|
||||
batch_id=row['batch_id'],
|
||||
testlist_id=row['testlist_id'],
|
||||
start_time=row['start_time'],
|
||||
stop_time=row['stop_time'],
|
||||
status=row['status'],
|
||||
#last_started=zoneNY.localize(datetime.fromisoformat(row['last_started'])) if row['last_started'] else None,
|
||||
last_processed=datetime.fromisoformat(row['last_processed']) if row['last_processed'] else None,
|
||||
history=row['history'],
|
||||
valid_from=datetime.fromisoformat(row['valid_from']) if row['valid_from'] else None,
|
||||
valid_to=datetime.fromisoformat(row['valid_to']) if row['valid_to'] else None,
|
||||
runner_id = row['runner_id'] if row['runner_id'] and is_running else None, #runner_id is only present if it is running
|
||||
strat_running = is_running) #cant believe this when called from separate process as not current
|
||||
return res
|
||||
|
||||
#prevede dict radku zpatky na objekt vcetme retypizace
|
||||
def row_to_runarchiveview(row: dict) -> RunArchiveView:
|
||||
a = RunArchiveView(
|
||||
id=row['runner_id'],
|
||||
strat_id=row['strat_id'],
|
||||
batch_id=row['batch_id'],
|
||||
symbol=row['symbol'],
|
||||
name=row['name'],
|
||||
note=row['note'],
|
||||
started=datetime.fromisoformat(row['started']) if row['started'] else None,
|
||||
stopped=datetime.fromisoformat(row['stopped']) if row['stopped'] else None,
|
||||
mode=row['mode'],
|
||||
account=row['account'],
|
||||
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
|
||||
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
|
||||
ilog_save=bool(row['ilog_save']),
|
||||
profit=float(row['profit']),
|
||||
trade_count=int(row['trade_count']),
|
||||
end_positions=orjson.loads(row['end_positions']),
|
||||
end_positions_avgp=orjson.loads(row['end_positions_avgp']),
|
||||
metrics=orjson.loads(row['metrics']) if row['metrics'] else None,
|
||||
batch_profit=int(row['batch_profit']) if row['batch_profit'] and row['batch_id'] else 0,
|
||||
batch_count=int(row['batch_count']) if row['batch_count'] and row['batch_id'] else 0,
|
||||
)
|
||||
return a
|
||||
|
||||
#prevede dict radku zpatky na objekt vcetme retypizace
|
||||
def row_to_runarchive(row: dict) -> RunArchive:
|
||||
return RunArchive(
|
||||
id=row['runner_id'],
|
||||
strat_id=row['strat_id'],
|
||||
batch_id=row['batch_id'],
|
||||
symbol=row['symbol'],
|
||||
name=row['name'],
|
||||
note=row['note'],
|
||||
started=datetime.fromisoformat(row['started']) if row['started'] else None,
|
||||
stopped=datetime.fromisoformat(row['stopped']) if row['stopped'] else None,
|
||||
mode=row['mode'],
|
||||
account=row['account'],
|
||||
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
|
||||
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
|
||||
strat_json=orjson.loads(row['strat_json']),
|
||||
settings=orjson.loads(row['settings']),
|
||||
ilog_save=bool(row['ilog_save']),
|
||||
profit=float(row['profit']),
|
||||
trade_count=int(row['trade_count']),
|
||||
end_positions=str(row['end_positions']),
|
||||
end_positions_avgp=str(row['end_positions_avgp']),
|
||||
metrics=orjson.loads(row['metrics']),
|
||||
stratvars_toml=row['stratvars_toml'],
|
||||
transferables=orjson.loads(row['transferables']) if row['transferables'] else None
|
||||
)
|
||||
@ -2,64 +2,58 @@ from alpaca.data.enums import DataFeed
|
||||
from v2realbot.enums.enums import Mode, Account, FillCondition
|
||||
from appdirs import user_data_dir
|
||||
from pathlib import Path
|
||||
import os
|
||||
from collections import defaultdict
|
||||
from dotenv import load_dotenv
|
||||
# Global flag to track if the ml module has been imported (solution for long import times of tensorflow)
|
||||
#the first occurence of using it will load it globally
|
||||
_ml_module_loaded = False
|
||||
|
||||
#directory for generated images and basic reports
|
||||
MEDIA_DIRECTORY = Path(__file__).parent.parent.parent / "media"
|
||||
RUNNER_DETAIL_DIRECTORY = Path(__file__).parent.parent.parent / "runner_detail"
|
||||
|
||||
#location of strat.log - it is used to fetch by gui
|
||||
LOG_PATH = Path(__file__).parent.parent
|
||||
LOG_FILE = Path(__file__).parent.parent / "strat.log"
|
||||
JOB_LOG_FILE = Path(__file__).parent.parent / "job.log"
|
||||
|
||||
#'0.0.0.0',
|
||||
#currently only prod server has acces to LIVE
|
||||
PROD_SERVER_HOSTNAMES = ['tradingeastcoast','David-MacBook-Pro.local'] #,'David-MacBook-Pro.local'
|
||||
TEST_SERVER_HOSTNAMES = ['tradingtest']
|
||||
|
||||
#TODO vybrane dat do config db a managovat pres GUI
|
||||
|
||||
#AGGREGATOR filter trades
|
||||
#NOTE pridana F - Inter Market Sweep Order - obcas vytvarela spajky
|
||||
AGG_EXCLUDED_TRADES = ['C','O','4','B','7','V','P','W','U','Z','F']
|
||||
|
||||
OFFLINE_MODE = False
|
||||
|
||||
# ilog lvls = 0,1 - 0 debug, 1 info
|
||||
ILOG_SAVE_LEVEL_FROM = 1
|
||||
|
||||
#minimalni vzdalenost mezi trady, kterou agregator pousti pro CBAR(0.001 - blokuje mensi nez 1ms)
|
||||
GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN = 0.003
|
||||
#normalized price for tick 0.01
|
||||
NORMALIZED_TICK_BASE_PRICE = 30.00
|
||||
LOG_RUNNER_EVENTS = False
|
||||
#no print in console
|
||||
QUIET_MODE = True
|
||||
#how many consecutive trades with the fill price are necessary for LIMIT fill to happen in backtesting
|
||||
#0 - optimistic, every knot high will fill the order
|
||||
#N - N consecutive trades required
|
||||
#not impl.yet
|
||||
#minimum is 1, na alpace live to vetsinou vychazi 7-8 u BAC, je to hodne podobne tomu, nez je cena překonaná pul centu. tzn. 7-8 a nebo FillCondition.SLOW
|
||||
BT_FILL_CONS_TRADES_REQUIRED = 2
|
||||
#during bt trade execution logs X-surrounding trades of the one that triggers the fill
|
||||
BT_FILL_LOG_SURROUNDING_TRADES = 10
|
||||
#fill condition for limit order in bt
|
||||
# fast - price has to be equal or bigger <=
|
||||
# slow - price has to be bigger <
|
||||
BT_FILL_CONDITION_BUY_LIMIT = FillCondition.SLOW
|
||||
BT_FILL_CONDITION_SELL_LIMIT = FillCondition.SLOW
|
||||
#TBD TODO not implemented yet
|
||||
BT_FILL_PRICE_MARKET_ORDER_PREMIUM = 0.005
|
||||
#backend counter of api requests
|
||||
COUNT_API_REQUESTS = False
|
||||
#stratvars that cannot be changed in gui
|
||||
STRATVARS_UNCHANGEABLES = ['pendingbuys', 'blockbuy', 'jevylozeno', 'limitka']
|
||||
DATA_DIR = user_data_dir("v2realbot")
|
||||
DATA_DIR = user_data_dir("v2realbot", False)
|
||||
MODEL_DIR = Path(DATA_DIR)/"models"
|
||||
#BT DELAYS
|
||||
#profiling
|
||||
PROFILING_NEXT_ENABLED = False
|
||||
PROFILING_OUTPUT_DIR = DATA_DIR
|
||||
|
||||
#FILL CONFIGURATION CLASS FOR BACKTESTING
|
||||
def find_dotenv(start_path):
|
||||
"""
|
||||
Searches for a .env file in the given directory or its parents and returns the path.
|
||||
|
||||
#WIP
|
||||
Args:
|
||||
start_path: The directory to start searching from.
|
||||
|
||||
Returns:
|
||||
Path to the .env file if found, otherwise None.
|
||||
"""
|
||||
current_path = Path(start_path)
|
||||
for _ in range(6): # Limit search depth to 5 levels
|
||||
dotenv_path = current_path / '.env'
|
||||
if dotenv_path.exists():
|
||||
return dotenv_path
|
||||
current_path = current_path.parent
|
||||
return None
|
||||
|
||||
ENV_FILE = find_dotenv(__file__)
|
||||
|
||||
#NALOADUJEME DOTENV ENV VARIABLES
|
||||
if load_dotenv(ENV_FILE, verbose=True) is False:
|
||||
print(f"Error loading.env file {ENV_FILE}. Now depending on ENV VARIABLES set externally.")
|
||||
else:
|
||||
print(f"Loaded env variables from file {ENV_FILE}")
|
||||
|
||||
#WIP - FILL CONFIGURATION CLASS FOR BACKTESTING
|
||||
class BT_FILL_CONF:
|
||||
""""
|
||||
Trida pro konfiguraci backtesting fillu pro dany symbol, pokud neexistuje tak fallback na obecny viz vyse-
|
||||
@ -73,24 +67,6 @@ class BT_FILL_CONF:
|
||||
self.BT_FILL_CONDITION_SELL_LIMIT=BT_FILL_CONDITION_SELL_LIMIT
|
||||
self.BT_FILL_PRICE_MARKET_ORDER_PREMIUM=BT_FILL_PRICE_MARKET_ORDER_PREMIUM
|
||||
|
||||
|
||||
""""
|
||||
LATENCY DELAYS for LIVE eastcoast
|
||||
.000 trigger - last_trade_time (.4246266)
|
||||
+.020 vstup do strategie a BUY (.444606)
|
||||
+.023 submitted (.469198)
|
||||
+.008 filled (.476695552)
|
||||
+.023 fill not(.499888)
|
||||
"""
|
||||
#TODO změnit názvy delay promennych vystizneji a obecneji
|
||||
class BT_DELAYS:
|
||||
trigger_to_strat: float = 0.020
|
||||
strat_to_sub: float = 0.023
|
||||
sub_to_fill: float = 0.008
|
||||
fill_to_not: float = 0.023
|
||||
#doplnit dle live
|
||||
limit_order_offset: float = 0
|
||||
|
||||
class Keys:
|
||||
def __init__(self, api_key, secret_key, paper, feed) -> None:
|
||||
self.API_KEY = api_key
|
||||
@ -99,17 +75,18 @@ class Keys:
|
||||
self.FEED = feed
|
||||
|
||||
# depending on the mode (PAPER, LIVE) returns an object
# containing the keys for connecting to Alpaca
# containing the keys for connecting to Alpaca - used for the Trading API and order-update websockets (credentials relevant per strategy)
#for real-time data LIVE_DATA_API_KEY, LIVE_DATA_SECRET_KEY, LIVE_DATA_FEED below are used - since that is a server-wide setting
def get_key(mode: Mode, account: Account):
if mode not in [Mode.PAPER, Mode.LIVE]:
print("has to be LIVE or PAPER only")
return None
dict = globals()
try:
API_KEY = dict[str.upper(str(account.value)) + "_" + str.upper(str(mode.value)) + "_API_KEY" ]
SECRET_KEY = dict[str.upper(str(account.value)) + "_" + str.upper(str(mode.value)) + "_SECRET_KEY" ]
PAPER = dict[str.upper(str(account.value)) + "_" + str.upper(str(mode.value)) + "_PAPER" ]
FEED = dict[str.upper(str(account.value)) + "_" + str.upper(str(mode.value)) + "_FEED" ]
API_KEY = dict[str.upper(str(account.name)) + "_" + str.upper(str(mode.name)) + "_API_KEY" ]
SECRET_KEY = dict[str.upper(str(account.name)) + "_" + str.upper(str(mode.name)) + "_SECRET_KEY" ]
PAPER = dict[str.upper(str(account.name)) + "_" + str.upper(str(mode.name)) + "_PAPER" ]
FEED = dict[str.upper(str(account.name)) + "_" + str.upper(str(mode.name)) + "_FEED" ]
return Keys(API_KEY, SECRET_KEY, PAPER, FEED)
except KeyError:
print("Not valid combination to get keys for", mode, account)
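get_key above derives module-level variable names from the Account and Mode enum members, so ACCOUNT1 with PAPER resolves to ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, ACCOUNT1_PAPER_PAPER and ACCOUNT1_PAPER_FEED. A short illustration (the exact enum members are assumptions):

```python
# Illustration of the naming convention used by get_key (enum members assumed)
# Account.ACCOUNT1.name == "ACCOUNT1", Mode.PAPER.name == "PAPER"
# -> looks up globals()["ACCOUNT1_PAPER_API_KEY"] and the matching _SECRET_KEY, _PAPER, _FEED
keys = get_key(Mode.PAPER, Account.ACCOUNT1)
if keys is not None:
    print(keys.PAPER, keys.FEED)
```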
@ -118,36 +95,85 @@ def get_key(mode: Mode, account: Account):
|
||||
#strategy instance main loop heartbeat
|
||||
HEARTBEAT_TIMEOUT=5
|
||||
|
||||
WEB_API_KEY="david"
|
||||
WEB_API_KEY=os.environ.get('WEB_API_KEY')
|
||||
|
||||
#PRIMARY PAPER
|
||||
ACCOUNT1_PAPER_API_KEY = 'PKGGEWIEYZOVQFDRY70L'
|
||||
ACCOUNT1_PAPER_SECRET_KEY = 'O5Kt8X4RLceIOvM98i5LdbalItsX7hVZlbPYHy8Y'
|
||||
ACCOUNT1_PAPER_API_KEY = os.environ.get('ACCOUNT1_PAPER_API_KEY')
|
||||
ACCOUNT1_PAPER_SECRET_KEY = os.environ.get('ACCOUNT1_PAPER_SECRET_KEY')
|
||||
ACCOUNT1_PAPER_MAX_BATCH_SIZE = 1
|
||||
ACCOUNT1_PAPER_PAPER = True
|
||||
#ACCOUNT1_PAPER_FEED = DataFeed.SIP
|
||||
|
||||
# Load the data feed type from environment variable
data_feed_type_str = os.environ.get('ACCOUNT1_PAPER_FEED', 'iex') # Default to 'iex' if not set

# Convert the string to DataFeed enum
try:
ACCOUNT1_PAPER_FEED = DataFeed(data_feed_type_str)
except ValueError:
# Handle the case where the environment variable does not match any enum member
print(f"Invalid data feed type: {data_feed_type_str} in ACCOUNT1_PAPER_FEED defaulting to 'iex'")
ACCOUNT1_PAPER_FEED = DataFeed.IEX

#PRIMARY LIVE
|
||||
ACCOUNT1_LIVE_API_KEY = 'AKB5HD32LPDZC9TPUWJT'
|
||||
ACCOUNT1_LIVE_SECRET_KEY = 'Xq1wPSNOtwmlMTAd4cEmdKvNDgfcUYfrOaCccaAs'
|
||||
ACCOUNT1_LIVE_API_KEY = os.environ.get('ACCOUNT1_LIVE_API_KEY')
|
||||
ACCOUNT1_LIVE_SECRET_KEY = os.environ.get('ACCOUNT1_LIVE_SECRET_KEY')
|
||||
ACCOUNT1_LIVE_MAX_BATCH_SIZE = 1
|
||||
ACCOUNT1_LIVE_PAPER = False
|
||||
ACCOUNT1_LIVE_FEED = DataFeed.SIP
|
||||
#ACCOUNT1_LIVE_FEED = DataFeed.SIP
|
||||
|
||||
# Load the data feed type from environment variable
data_feed_type_str = os.environ.get('ACCOUNT1_LIVE_FEED', 'iex') # Default to 'iex' if not set

# Convert the string to DataFeed enum
try:
ACCOUNT1_LIVE_FEED = DataFeed(data_feed_type_str)
except ValueError:
# Handle the case where the environment variable does not match any enum member
print(f"Invalid data feed type: {data_feed_type_str} in ACCOUNT1_LIVE_FEED defaulting to 'iex'")
ACCOUNT1_LIVE_FEED = DataFeed.IEX

#SECONDARY PAPER - Martin
|
||||
ACCOUNT2_PAPER_API_KEY = 'PKPDTCQLNHCBC2D9GQFB'
|
||||
ACCOUNT2_PAPER_SECRET_KEY = 'c1Z2V0gBleQmwHYCreqqTs45Jy33RqPGrofuSayz'
|
||||
ACCOUNT2_PAPER_API_KEY = os.environ.get('ACCOUNT2_PAPER_API_KEY')
|
||||
ACCOUNT2_PAPER_SECRET_KEY = os.environ.get('ACCOUNT2_PAPER_SECRET_KEY')
|
||||
ACCOUNT2_PAPER_MAX_BATCH_SIZE = 1
|
||||
ACCOUNT2_PAPER_PAPER = True
|
||||
#ACCOUNT2_PAPER_FEED = DataFeed.IEX
|
||||
|
||||
# Load the data feed type from environment variable
data_feed_type_str = os.environ.get('ACCOUNT2_PAPER_FEED', 'iex') # Default to 'iex' if not set

# Convert the string to DataFeed enum
try:
ACCOUNT2_PAPER_FEED = DataFeed(data_feed_type_str)
except ValueError:
# Handle the case where the environment variable does not match any enum member
print(f"Invalid data feed type: {data_feed_type_str} in ACCOUNT2_PAPER_FEED defaulting to 'iex'")
ACCOUNT2_PAPER_FEED = DataFeed.IEX

# #SECONDARY PAPER
|
||||
# ACCOUNT2_PAPER_API_KEY = 'PK0OQHZG03PUZ1SC560V'
|
||||
# ACCOUNT2_PAPER_SECRET_KEY = 'cTglhm7kwRcZfFT27fQWz31sXaxadzQApFDW6Lat'
|
||||
# ACCOUNT2_PAPER_MAX_BATCH_SIZE = 1
|
||||
# ACCOUNT2_PAPER_PAPER = True
|
||||
# ACCOUNT2_PAPER_FEED = DataFeed.IEX
|
||||
|
||||
#SECONDARY LIVE - Martin
|
||||
# ACCOUNT2_LIVE_API_KEY = os.environ.get('ACCOUNT2_LIVE_API_KEY')
|
||||
# ACCOUNT2_LIVE_SECRET_KEY = os.environ.get('ACCOUNT2_LIVE_SECRET_KEY')
|
||||
# ACCOUNT2_LIVE_MAX_BATCH_SIZE = 1
|
||||
# ACCOUNT2_LIVE_PAPER = True
|
||||
# #ACCOUNT2_LIVE_FEED = DataFeed.IEX
|
||||
|
||||
# # Load the data feed type from environment variable
|
||||
# data_feed_type_str = os.environ.get('ACCOUNT2_LIVE_FEED', 'iex') # Default to 'sip' if not set
|
||||
|
||||
# # Convert the string to DataFeed enum
|
||||
# try:
|
||||
# ACCOUNT2_LIVE_FEED = DataFeed(data_feed_type_str)
|
||||
# except nameError:
|
||||
# # Handle the case where the environment variable does not match any enum member
|
||||
# print(f"Invalid data feed type: {data_feed_type_str} in ACCOUNT2_LIVE_FEED defaulting to 'iex'")
|
||||
# ACCOUNT2_LIVE_FEED = DataFeed.IEX
|
||||
|
||||
#zatim jsou LIVE_DATA nastaveny jako z account1_paper
|
||||
LIVE_DATA_API_KEY = ACCOUNT1_PAPER_API_KEY
|
||||
LIVE_DATA_SECRET_KEY = ACCOUNT1_PAPER_SECRET_KEY
|
||||
#LIVE_DATA_FEED je nastaveny v config_handleru
|
||||
|
||||
class KW:
|
||||
activate: str = "activate"
|
||||
|
||||
112
v2realbot/controller/configs.py
Normal file
@ -0,0 +1,112 @@
|
||||
|
||||
import v2realbot.common.db as db
|
||||
from v2realbot.common.model import ConfigItem
|
||||
import v2realbot.utils.config_handler as ch
|
||||
|
||||
# region CONFIG db services
|
||||
#TODO vytvorit modul pro dotahovani z pythonu (get_from_config(var_name, def_value) {)- stejne jako v js
|
||||
#TODO zvazit presunuti do TOML z JSONu
|
||||
def get_all_config_items():
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT id, item_name, json_data FROM config_table')
|
||||
config_items = [{"id": row[0], "item_name": row[1], "json_data": row[2]} for row in cursor.fetchall()]
|
||||
finally:
|
||||
db.pool.release_connection(conn)
|
||||
return 0, config_items
|
||||
|
||||
# Function to get a config item by ID
|
||||
def get_config_item_by_id(item_id):
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT item_name, json_data FROM config_table WHERE id = ?', (item_id,))
|
||||
row = cursor.fetchone()
|
||||
finally:
|
||||
db.pool.release_connection(conn)
|
||||
if row is None:
|
||||
return -2, "not found"
|
||||
else:
|
||||
return 0, {"item_name": row[0], "json_data": row[1]}
|
||||
|
||||
# Function to get a config item by name
def get_config_item_by_name(item_name):
#print(item_name)
conn = db.pool.get_connection()
try:
cursor = conn.cursor()
query = "SELECT item_name, json_data FROM config_table WHERE item_name = ?"
#print(query)
cursor.execute(query, (item_name,)) #parameterized to avoid SQL injection via item_name
row = cursor.fetchone()
#print(row)
finally:
db.pool.release_connection(conn)
if row is None:
return -2, "not found"
else:
return 0, {"item_name": row[0], "json_data": row[1]}

# Function to create a new config item
def create_config_item(config_item: ConfigItem):
    conn = db.pool.get_connection()
    try:
        try:
            cursor = conn.cursor()
            cursor.execute('INSERT INTO config_table (item_name, json_data) VALUES (?, ?)', (config_item.item_name, config_item.json_data))
            item_id = cursor.lastrowid
            conn.commit()
            print(item_id)
        finally:
            db.pool.release_connection(conn)

        return 0, {"id": item_id, "item_name": config_item.item_name, "json_data": config_item.json_data}
    except Exception as e:
        return -2, str(e)

# Function to update a config item by ID
def update_config_item(item_id, config_item: ConfigItem):
    conn = db.pool.get_connection()
    try:
        try:
            cursor = conn.cursor()
            cursor.execute('UPDATE config_table SET item_name = ?, json_data = ? WHERE id = ?', (config_item.item_name, config_item.json_data, item_id))
            conn.commit()

            # refreshing the active item is currently hardcoded here: it runs on update of the "active_profile" item and at application startup
            if config_item.item_name == "active_profile":
                ch.config_handler.activate_profile()
        finally:
            db.pool.release_connection(conn)
        return 0, {"id": item_id, **config_item.dict()}
    except Exception as e:
        return -2, str(e)

# Function to delete a config item by ID
def delete_config_item(item_id):
    conn = db.pool.get_connection()
    try:
        cursor = conn.cursor()
        cursor.execute('DELETE FROM config_table WHERE id = ?', (item_id,))
        conn.commit()
    finally:
        db.pool.release_connection(conn)
    return 0, {"id": item_id}

# endregion
# Example of using a config directive
# config_directive = "overrides"
# ret, res = get_config_item_by_name(config_directive)
# if ret < 0:
#     print(f"CONFIG OVERRIDE {config_directive} Error {res}")
# else:
#     config = orjson.loads(res["json_data"])
#     print("OVERRIDDEN CFG:", config)
#     for key, value in config.items():
#         if hasattr(cfg, key):
#             print(f"Overriding {key} with {value}")
#             setattr(cfg, key, value)
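A minimal usage sketch of the config services above. This is an illustration only: it assumes `ConfigItem` is a simple model exposing `item_name` and `json_data` string fields (as the INSERT/UPDATE statements suggest) and relies on the `(status, payload)` return convention used throughout; the config key in the JSON payload is hypothetical.

```python
from v2realbot.common.model import ConfigItem
from v2realbot.controller.configs import (
    create_config_item, get_config_item_by_name, update_config_item)

# create a new item; json_data is stored as a plain JSON string
# ("MY_SETTING" is a hypothetical key used only for illustration)
ret, created = create_config_item(ConfigItem(item_name="overrides", json_data='{"MY_SETTING": true}'))
if ret < 0:
    print("create failed:", created)
else:
    # read it back by name and patch it
    ret, res = get_config_item_by_name("overrides")
    if ret == 0:
        patched = ConfigItem(item_name="overrides", json_data='{"MY_SETTING": false}')
        update_config_item(created["id"], patched)
```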
463  v2realbot/controller/run_manager.py  Normal file
@@ -0,0 +1,463 @@
|
||||
from typing import Any, List, Tuple
|
||||
from uuid import UUID, uuid4
|
||||
from v2realbot.common.model import RunManagerRecord, StrategyInstance, RunDay, Runner, RunRequest, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, RunArchiveChange, Bar, TradeEvent, TestList, Intervals, ConfigItem, InstantIndicator, DataTablesRequest
|
||||
from v2realbot.utils.utils import validate_and_format_time, AttributeDict, zoneNY, zonePRG, safe_get, dict_replace_value, Store, parse_toml_string, json_serial, is_open_hours, send_to_telegram, concatenate_weekdays, transform_data
|
||||
from v2realbot.utils.ilog import delete_logs
|
||||
from v2realbot.common.model import Trade, TradeDirection, TradeStatus, TradeStoplossType
|
||||
from datetime import datetime
|
||||
from v2realbot.loader.trade_offline_streamer import Trade_Offline_Streamer
|
||||
from threading import Thread, current_thread, Event, enumerate
|
||||
from v2realbot.config import STRATVARS_UNCHANGEABLES, ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, ACCOUNT1_LIVE_API_KEY, ACCOUNT1_LIVE_SECRET_KEY, DATA_DIR,MEDIA_DIRECTORY, RUNNER_DETAIL_DIRECTORY
|
||||
import importlib
|
||||
from alpaca.trading.requests import GetCalendarRequest
|
||||
from alpaca.trading.client import TradingClient
|
||||
#from alpaca.trading.models import Calendar
|
||||
from queue import Queue
|
||||
from tinydb import TinyDB, Query, where
|
||||
from tinydb.operations import set
|
||||
import orjson
|
||||
import numpy as np
|
||||
from rich import print
|
||||
import pandas as pd
|
||||
from traceback import format_exc
|
||||
from datetime import timedelta, time
|
||||
from threading import Lock
|
||||
import v2realbot.common.db as db
|
||||
import v2realbot.common.transform as tr
|
||||
from sqlite3 import OperationalError, Row
|
||||
import v2realbot.strategyblocks.indicators.custom as ci
|
||||
from v2realbot.strategyblocks.inits.init_indicators import initialize_dynamic_indicators
|
||||
from v2realbot.strategyblocks.indicators.indicators_hub import populate_dynamic_indicators
|
||||
from v2realbot.interfaces.backtest_interface import BacktestInterface
|
||||
import os
|
||||
import v2realbot.reporting.metricstoolsimage as mt
|
||||
import gzip
|
||||
import os
|
||||
import msgpack
|
||||
import v2realbot.controller.services as cs
|
||||
import v2realbot.scheduler.ap_scheduler as aps
|
||||
|
||||
# Functions for your 'run_manager' table
|
||||
|
||||
# CREATE TABLE "run_manager" (
|
||||
# "moddus" TEXT NOT NULL,
|
||||
# "id" varchar(32),
|
||||
# "strat_id" varchar(32) NOT NULL,
|
||||
# "symbol" TEXT,
|
||||
# "account" TEXT NOT NULL,
|
||||
# "mode" TEXT NOT NULL,
|
||||
# "note" TEXT,
|
||||
# "ilog_save" BOOLEAN,
|
||||
# "bt_from" TEXT,
|
||||
# "bt_to" TEXT,
|
||||
# "weekdays_filter" TEXT,
|
||||
# "batch_id" TEXT,
|
||||
# "start_time" TEXT NOT NULL,
|
||||
# "stop_time" TEXT NOT NULL,
|
||||
# "status" TEXT NOT NULL,
|
||||
# "last_processed" TEXT,
|
||||
# "history" TEXT,
|
||||
# "valid_from" TEXT,
|
||||
# "valid_to" TEXT,
|
||||
# "testlist_id" TEXT,
|
||||
# "runner_id" varchar2(32),
|
||||
# PRIMARY KEY("id")
|
||||
# )
|
||||
|
||||
# CREATE INDEX idx_moddus ON run_manager (moddus);
|
||||
# CREATE INDEX idx_status ON run_manager (status);
|
||||
# CREATE INDEX idx_status_moddus ON run_manager (status, moddus);
|
||||
# CREATE INDEX idx_valid_from_to ON run_manager (valid_from, valid_to);
|
||||
# CREATE INDEX idx_stopped_batch_id ON runner_header (stopped, batch_id);
|
||||
# CREATE INDEX idx_search_value ON runner_header (strat_id, batch_id);
|
||||
|
||||
|
||||
##weekdays are stored as comma separated values
|
||||
# Fetching (assume 'weekdays' field is a comma-separated string)
|
||||
# weekday_str = record['weekdays']
|
||||
# weekdays = [int(x) for x in weekday_str.split(',')]
|
||||
|
||||
# # ... logic to check whether today's weekday is in 'weekdays'
|
||||
|
||||
# # Storing
|
||||
# weekdays = [1, 2, 5] # Example
|
||||
# weekday_str = ",".join(str(x) for x in weekdays)
|
||||
# update_data = {'weekdays': weekday_str}
|
||||
# # ... use in an SQL UPDATE statement
|
||||
|
||||
# for row in records:
|
||||
# row['weekdays_filter'] = [int(x) for x in row['weekdays_filter'].split(',')] if row['weekdays_filter'] else []
|
||||
|
||||
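A small self-contained version of the weekdays round-trip sketched in the comments above (a sketch only; the variable names follow the comments, and the Monday=0 weekday convention is an assumption):

```python
from datetime import datetime

# Storing: weekdays selected for the run, serialized as a comma-separated string
weekdays = [1, 2, 5]
weekday_str = ",".join(str(x) for x in weekdays)          # -> "1,2,5"

# Fetching: parse the stored string back into a list of ints (empty/NULL -> empty list)
parsed = [int(x) for x in weekday_str.split(",")] if weekday_str else []

# Filtering: run only if today's weekday is in the stored filter
run_today = datetime.now().weekday() in parsed
```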
|
||||
#get stratin info return
|
||||
# strat : StrategyInstance = None
|
||||
# result, strat = cs.get_stratin("625760ac-6376-47fa-8989-1e6a3f6ab66a")
|
||||
# if result == 0:
|
||||
# print(strat)
|
||||
# else:
|
||||
# print("Error:", strat)
|
||||
|
||||
|
||||
# Fetch all
|
||||
#result, records = fetch_all_run_manager_records()
|
||||
|
||||
#TODO consider extending the output with strat_status (running/stopped)
|
||||
|
||||
|
||||
def fetch_all_run_manager_records() -> list[RunManagerRecord]:
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
conn.row_factory = Row
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT * FROM run_manager')
|
||||
rows = cursor.fetchall()
|
||||
results = []
|
||||
#Transform row to object
|
||||
for row in rows:
|
||||
#add transformed object into result list
|
||||
results.append(tr.row_to_runmanager(row))
|
||||
|
||||
return 0, results
|
||||
finally:
|
||||
conn.row_factory = None
|
||||
db.pool.release_connection(conn)
|
||||
|
||||
# Fetch by strategy_id
|
||||
# result, record = fetch_run_manager_record_by_id('625760ac-6376-47fa-8989-1e6a3f6ab66a')
|
||||
def fetch_run_manager_record_by_id(strategy_id) -> RunManagerRecord:
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
conn.row_factory = Row
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT * FROM run_manager WHERE id = ?', (str(strategy_id),))
|
||||
row = cursor.fetchone()
|
||||
if row is None:
|
||||
return -2, "not found"
|
||||
else:
|
||||
return 0, tr.row_to_runmanager(row)
|
||||
|
||||
except Exception as e:
|
||||
print("ERROR while fetching all records:", str(e) + format_exc())
|
||||
return -2, str(e) + format_exc()
|
||||
finally:
|
||||
conn.row_factory = None
|
||||
db.pool.release_connection(conn)
|
||||
|
||||
def add_run_manager_record(new_record: RunManagerRecord):
|
||||
#validation/standardization of time
|
||||
new_record.start_time = validate_and_format_time(new_record.start_time)
|
||||
if new_record.start_time is None:
|
||||
return -2, f"Invalid start_time format {new_record.start_time}"
|
||||
|
||||
if new_record.stop_time is not None:
|
||||
new_record.stop_time = validate_and_format_time(new_record.stop_time)
|
||||
if new_record.stop_time is None:
|
||||
return -2, f"Invalid stop_time format {new_record.stop_time}"
|
||||
|
||||
if new_record.batch_id is None:
|
||||
new_record.batch_id = str(uuid4())[:8]
|
||||
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
|
||||
strat : StrategyInstance = None
|
||||
result, strat = cs.get_stratin(id=str(new_record.strat_id))
|
||||
if result == 0:
|
||||
new_record.symbol = strat.symbol
|
||||
else:
|
||||
return -1, f"Strategy {new_record.strat_id} not found"
|
||||
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Construct a suitable INSERT query based on your RunManagerRecord fields
|
||||
insert_query = """
|
||||
INSERT INTO run_manager (moddus, id, strat_id, symbol,account, mode, note,ilog_save,
|
||||
market, bt_from, bt_to, weekdays_filter, batch_id,
|
||||
start_time, stop_time, status, last_processed,
|
||||
history, valid_from, valid_to, testlist_id)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,?)
|
||||
"""
|
||||
values = [
|
||||
new_record.moddus, str(new_record.id), str(new_record.strat_id), new_record.symbol, new_record.account, new_record.mode, new_record.note,
|
||||
int(new_record.ilog_save), new_record.market,
|
||||
new_record.bt_from.isoformat() if new_record.bt_from is not None else None,
|
||||
new_record.bt_to.isoformat() if new_record.bt_to is not None else None,
|
||||
",".join(str(x) for x in new_record.weekdays_filter) if new_record.weekdays_filter else None,
|
||||
new_record.batch_id, new_record.start_time,
|
||||
new_record.stop_time, new_record.status,
|
||||
new_record.last_processed.isoformat() if new_record.last_processed is not None else None,
|
||||
new_record.history,
|
||||
new_record.valid_from.isoformat() if new_record.valid_from is not None else None,
|
||||
new_record.valid_to.isoformat() if new_record.valid_to is not None else None,
|
||||
new_record.testlist_id
|
||||
]
|
||||
db.execute_with_retry(cursor, insert_query, values)
|
||||
conn.commit()
|
||||
|
||||
#Add APS scheduler job refresh
|
||||
res, result = aps.initialize_jobs()
|
||||
if res < 0:
|
||||
return -2, f"Error initializing jobs: {res} {result}"
|
||||
|
||||
return 0, new_record.id # Assuming success, you might return something more descriptive
|
||||
except Exception as e:
|
||||
print("ERROR while adding record:", str(e) + format_exc())
|
||||
return -2, str(e) + format_exc()
|
||||
finally:
|
||||
db.pool.release_connection(conn)
|
||||
|
||||
# Update (example)
|
||||
# update_data = {'last_started': '2024-02-13 10:35:00'}
|
||||
# result, message = update_run_manager_record('625760ac-6376-47fa-8989-1e6a3f6ab66a', update_data)
|
||||
def update_run_manager_record(record_id, updated_record: RunManagerRecord):
|
||||
#validation/standardization of time
|
||||
updated_record.start_time = validate_and_format_time(updated_record.start_time)
|
||||
if updated_record.start_time is None:
|
||||
return -2, f"Invalid start_time format {updated_record.start_time}"
|
||||
|
||||
if updated_record.stop_time is not None:
|
||||
updated_record.stop_time = validate_and_format_time(updated_record.stop_time)
|
||||
if updated_record.stop_time is None:
|
||||
return -2, f"Invalid stop_time format {updated_record.stop_time}"
|
||||
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
|
||||
#strategy lookup check, if strategy still exists
|
||||
strat : StrategyInstance = None
|
||||
result, strat = cs.get_stratin(id=str(updated_record.strat_id))
|
||||
if result == 0:
|
||||
updated_record.symbol = strat.symbol
|
||||
else:
|
||||
return -1, f"Strategy {updated_record.strat_id} not found"
|
||||
|
||||
#remove values with None, so they are not updated
|
||||
#updated_record_dict = updated_record.dict(exclude_none=True)
|
||||
|
||||
# Construct update query and handle weekdays conversion
|
||||
update_query = 'UPDATE run_manager SET '
|
||||
update_params = []
|
||||
for key, value in updated_record.dict().items(): # Iterate over model attributes
|
||||
if key in ['id', 'strat_running']: # Skip updating the primary key
|
||||
continue
|
||||
update_query += f"{key} = ?, "
|
||||
if key == "ilog_save":
|
||||
value = int(value)
|
||||
elif key in ["strat_id", "runner_id"]:
|
||||
value = str(value) if value else None
|
||||
elif key == "weekdays_filter":
|
||||
value = ",".join(str(x) for x in value) if value else None
|
||||
elif key in ['valid_from', 'valid_to', 'bt_from', 'bt_to', 'last_processed']:
|
||||
value = value.isoformat() if value else None
|
||||
update_params.append(value)
|
||||
# if 'weekdays_filter' in updated_record.dict():
|
||||
# updated_record.weekdays_filter = ",".join(str(x) for x in updated_record.weekdays_filter)
|
||||
update_query = update_query[:-2] # Remove trailing comma and space
|
||||
update_query += ' WHERE id = ?'
|
||||
update_params.append(str(record_id))
|
||||
|
||||
db.execute_with_retry(cursor, update_query, update_params)
|
||||
#cursor.execute(update_query, update_params)
|
||||
conn.commit()
|
||||
|
||||
#Add APS scheduler job refresh
|
||||
res, result = aps.initialize_jobs()
|
||||
if res < 0:
|
||||
return -2, f"Error initializing jobs: {res} {result}"
|
||||
|
||||
except Exception as e:
|
||||
print("ERROR while updating record:", str(e) + format_exc())
|
||||
return -2, str(e) + format_exc()
|
||||
finally:
|
||||
db.pool.release_connection(conn)
|
||||
return 0, record_id
|
||||
|
||||
# result, message = delete_run_manager_record('625760ac-6376-47fa-8989-1e6a3f6ab66a')
|
||||
def delete_run_manager_record(record_id):
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
db.execute_with_retry(cursor, 'DELETE FROM run_manager WHERE id = ?', (str(record_id),))
|
||||
#cursor.execute('DELETE FROM run_manager WHERE id = ?', (str(strategy_id),))
|
||||
conn.commit()
|
||||
except Exception as e:
|
||||
print("ERROR while deleting record:", str(e) + format_exc())
|
||||
return -2, str(e) + format_exc()
|
||||
finally:
|
||||
db.pool.release_connection(conn)
|
||||
return 0, record_id
|
||||
|
||||
def fetch_scheduled_candidates_for_start_and_stop(market_datetime_now, market) -> tuple[int, dict]:
|
||||
"""
|
||||
Fetches all active records from the 'run_manager' table where the mode is 'schedule'. It checks if the current
|
||||
time in the America/New_York timezone is within the operational intervals specified by 'start_time' and 'stop_time'
|
||||
for each record. This function is designed to correctly handle scenarios where the operational interval crosses
|
||||
midnight, as well as intervals contained within a single day.
|
||||
|
||||
The function localizes 'valid_from', 'valid_to', 'start_time', and 'stop_time' using the 'zoneNY' timezone object
|
||||
for accurate comparison with the current time.
|
||||
|
||||
Parameters:
|
||||
market_datetime_now (datetime): The current date and time in the America/New_York timezone.
|
||||
market (str): The market identifier.
|
||||
|
||||
Returns:
|
||||
Tuple[int, dict]: A tuple where the first element is a status code (0 for success, -2 for error), and the
|
||||
second element is a dictionary. This dictionary has keys 'start' and 'stop', each containing a list of
|
||||
RunManagerRecord objects meeting the respective criteria. If an error occurs, the second element is a
|
||||
descriptive error message.
|
||||
|
||||
Note:
|
||||
- This function assumes that the 'zoneNY' pytz timezone object is properly defined and configured to represent
|
||||
the America/New_York timezone.
|
||||
- It also assumes that the 'run_manager' table exists in the database with the required columns.
|
||||
- 'start_time' and 'stop_time' are expected to be strings representing times in 24-hour format.
|
||||
- If 'valid_from', 'valid_to', 'start_time', or 'stop_time' are NULL in the database, they are considered as
|
||||
having unlimited boundaries.
|
||||
|
||||
Caveat: there is one more edge case where this may not work: when the times are configured so that
the strategy runs across midnight, but it is only switched on later, after midnight
(https://chat.openai.com/c/3c77674a-8a2c-45aa-afbd-ab140f473e07)
|
||||
|
||||
"""
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
conn.row_factory = Row
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Get current datetime in America/New York timezone
|
||||
market_datetime_now_str = market_datetime_now.strftime('%Y-%m-%d %H:%M:%S')
|
||||
current_time_str = market_datetime_now.strftime('%H:%M')
|
||||
print("current_market_datetime_str:", market_datetime_now_str)
|
||||
print("current_time_str:", current_time_str)
|
||||
|
||||
# Select also supports scenarios where strategy runs overnight
|
||||
# SQL query to fetch records with active status and date constraints for both start and stop times
|
||||
query = """
|
||||
SELECT *,
|
||||
CASE
|
||||
WHEN start_time <= stop_time AND (? >= start_time AND ? < stop_time) OR
|
||||
start_time > stop_time AND (? >= start_time OR ? < stop_time) THEN 1
|
||||
ELSE 0
|
||||
END as is_start_time,
|
||||
CASE
|
||||
WHEN start_time <= stop_time AND (? >= stop_time OR ? < start_time) OR
|
||||
start_time > stop_time AND (? >= stop_time AND ? < start_time) THEN 1
|
||||
ELSE 0
|
||||
END as is_stop_time
|
||||
FROM run_manager
|
||||
WHERE status = 'active' AND moddus = 'schedule' AND
|
||||
((valid_from IS NULL OR strftime('%Y-%m-%d %H:%M:%S', valid_from) <= ?) AND
|
||||
(valid_to IS NULL OR strftime('%Y-%m-%d %H:%M:%S', valid_to) >= ?))
|
||||
"""
|
||||
cursor.execute(query, (current_time_str, current_time_str, current_time_str, current_time_str,
|
||||
current_time_str, current_time_str, current_time_str, current_time_str,
|
||||
market_datetime_now_str, market_datetime_now_str))
|
||||
rows = cursor.fetchall()
|
||||
|
||||
start_candidates = []
|
||||
stop_candidates = []
|
||||
for row in rows:
|
||||
run_manager_record = tr.row_to_runmanager(row)
|
||||
if row['is_start_time']:
|
||||
start_candidates.append(run_manager_record)
|
||||
if row['is_stop_time']:
|
||||
stop_candidates.append(run_manager_record)
|
||||
|
||||
results = {'start': start_candidates, 'stop': stop_candidates}
|
||||
|
||||
return 0, results
|
||||
except Exception as e:
|
||||
msg_err = f"ERROR while fetching records for start and stop times with datetime {market_datetime_now_str}: {str(e)} {format_exc()}"
|
||||
print(msg_err)
|
||||
return -2, msg_err
|
||||
finally:
|
||||
conn.row_factory = None
|
||||
db.pool.release_connection(conn)
|
||||
|
||||
|
||||
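A plain-Python restatement of the interval test encoded in the SQL CASE expressions above (a sketch only; it compares zero-padded HH:MM strings exactly as the query does and is not part of the module):

```python
def in_start_window(now_hhmm: str, start: str, stop: str) -> bool:
    """True if now falls inside [start, stop), handling windows that cross midnight."""
    if start <= stop:
        # same-day window, e.g. 09:30 -> 16:00
        return start <= now_hhmm < stop
    # overnight window, e.g. 22:00 -> 04:00
    return now_hhmm >= start or now_hhmm < stop

def in_stop_window(now_hhmm: str, start: str, stop: str) -> bool:
    """Complement of the start window: the range where a running strategy should be stopped."""
    return not in_start_window(now_hhmm, start, stop)

# an overnight schedule 22:00-04:00 checked at 01:15 is still a start candidate
assert in_start_window("01:15", "22:00", "04:00")
assert in_stop_window("12:00", "22:00", "04:00")
```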
def fetch_startstop_scheduled_candidates(market_datetime_now, time_check, market = "US") -> tuple[int, list[RunManagerRecord]]:
|
||||
"""
|
||||
Fetches all active records from the 'run_manager' table where moddus is schedule, the current date and time
|
||||
in the America/New_York timezone falls between the 'valid_from' and 'valid_to' datetime
|
||||
fields, and either 'start_time' or 'stop_time' matches the specified condition with the current time.
|
||||
If 'valid_from', 'valid_to', or the time column ('start_time'/'stop_time') are NULL, they are considered
|
||||
as having unlimited boundaries.
|
||||
|
||||
The function localizes the 'valid_from', 'valid_to', and the time column times using the 'zoneNY'
|
||||
timezone object for accurate comparison with the current time.
|
||||
|
||||
Parameters:
|
||||
market_datetime_now (datetime): Current datetime in the market timezone.
|
||||
market (str): The market for which to fetch candidates.
|
||||
time_check (str): Either 'start' or 'stop', indicating which time condition to check.
|
||||
|
||||
Returns:
|
||||
Tuple[int, list[RunManagerRecord]]: A tuple where the first element is a status code
|
||||
(0 for success, -2 for error), and the second element is a list of RunManagerRecord
|
||||
objects meeting the criteria. If an error occurs, the second element is a descriptive
|
||||
error message.
|
||||
|
||||
Note:
|
||||
This function assumes that the 'zoneNY' pytz timezone object is properly defined and
|
||||
configured to represent the America/New York timezone. It also assumes that the
|
||||
'run_manager' table exists in the database with the columns as described in the
|
||||
provided schema.
|
||||
"""
|
||||
if time_check not in ['start', 'stop']:
|
||||
return -2, "Invalid time_check parameter. Must be 'start' or 'stop'."
|
||||
|
||||
conn = db.pool.get_connection()
|
||||
try:
|
||||
conn.row_factory = Row
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Get current datetime in America/New York timezone
|
||||
market_datetime_now_str = market_datetime_now.strftime('%Y-%m-%d %H:%M:%S')
|
||||
current_time_str = market_datetime_now.strftime('%H:%M')
|
||||
print("current_market_datetime_str:", market_datetime_now_str)
|
||||
print("current_time_str:", current_time_str)
|
||||
|
||||
# SQL query to fetch records with active status, date constraints, and time condition
|
||||
time_column = 'start_time' if time_check == 'start' else 'stop_time'
|
||||
query = f"""
|
||||
SELECT * FROM run_manager
|
||||
WHERE status = 'active' AND moddus = 'schedule' AND
|
||||
((valid_from IS NULL OR strftime('%Y-%m-%d %H:%M:%S', valid_from) <= ?) AND
|
||||
(valid_to IS NULL OR strftime('%Y-%m-%d %H:%M:%S', valid_to) >= ?)) AND
|
||||
({time_column} IS NULL OR {time_column} <= ?)
|
||||
"""
|
||||
cursor.execute(query, (market_datetime_now_str, market_datetime_now_str, current_time_str))
|
||||
rows = cursor.fetchall()
|
||||
results = [tr.row_to_runmanager(row) for row in rows]
|
||||
|
||||
return 0, results
|
||||
except Exception as e:
|
||||
msg_err = f"ERROR while fetching records based on {time_check} time with datetime {market_datetime_now_str}: {str(e)} {format_exc()}"
|
||||
print(msg_err)
|
||||
return -2, msg_err
|
||||
finally:
|
||||
conn.row_factory = None
|
||||
db.pool.release_connection(conn)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
res, sada = fetch_startstop_scheduled_candidates(datetime.now().astimezone(zoneNY), "start")
|
||||
if res == 0:
|
||||
print(sada)
|
||||
else:
|
||||
print("Error:", sada)
|
||||
|
||||
# from apscheduler.schedulers.background import BackgroundScheduler
|
||||
# import time
|
||||
|
||||
# def print_hello():
|
||||
# print("Hello")
|
||||
|
||||
# def schedule_job():
|
||||
# scheduler = BackgroundScheduler()
|
||||
# scheduler.add_job(print_hello, 'interval', seconds=10)
|
||||
# scheduler.start()
|
||||
|
||||
# schedule_job()
|
||||
1  v2realbot/controller/runner_details.py  Normal file
@@ -0,0 +1 @@
|
||||
#PLACEHOLDER TO RUNNER_DETAILS SERVICES - refactored
|
||||
File diff suppressed because it is too large
0  v2realbot/endpoints/__init__.py  Normal file
0  v2realbot/endpoints/archived_runners.py  Normal file
0  v2realbot/endpoints/batches.py  Normal file
0  v2realbot/endpoints/configs.py  Normal file
0  v2realbot/endpoints/models.py  Normal file
0  v2realbot/endpoints/runners.py  Normal file
0  v2realbot/endpoints/stratins.py  Normal file
0  v2realbot/endpoints/testlists.py  Normal file
@ -1,6 +1,11 @@
|
||||
from enum import Enum
|
||||
from alpaca.trading.enums import OrderSide, OrderStatus, OrderType
|
||||
|
||||
class BarType(str, Enum):
|
||||
TIME = "time"
|
||||
VOLUME = "volume"
|
||||
DOLLAR = "dollar"
|
||||
|
||||
class Env(str, Enum):
|
||||
PROD = "prod"
|
||||
TEST = "test"
|
||||
@ -52,6 +57,16 @@ class Account(str, Enum):
|
||||
"""
|
||||
ACCOUNT1 = "ACCOUNT1"
|
||||
ACCOUNT2 = "ACCOUNT2"
|
||||
|
||||
class Moddus(str, Enum):
|
||||
"""
|
||||
Moddus for RunManager record
|
||||
|
||||
schedule - scheduled record
|
||||
queue - queued record
|
||||
"""
|
||||
SCHEDULE = "schedule"
|
||||
QUEUE = "queue"
|
||||
class RecordType(str, Enum):
|
||||
"""
|
||||
Represents output of aggregator
|
||||
@ -60,9 +75,19 @@ class RecordType(str, Enum):
|
||||
BAR = "bar"
|
||||
CBAR = "cbar"
|
||||
CBARVOLUME = "cbarvolume"
|
||||
CBARDOLLAR = "cbardollar"
|
||||
CBARRENKO = "cbarrenko"
|
||||
TRADE = "trade"
|
||||
|
||||
class SchedulerStatus(str, Enum):
|
||||
"""
|
||||
ACTIVE - active scheduling
|
||||
SUSPENDED - suspended for scheduling
|
||||
"""
|
||||
|
||||
ACTIVE = "active"
|
||||
SUSPENDED = "suspended"
|
||||
|
||||
class Mode(str, Enum):
|
||||
"""
|
||||
LIVE - live on production
|
||||
@ -76,7 +101,6 @@ class Mode(str, Enum):
|
||||
BT = "backtest"
|
||||
PREP = "prep"
|
||||
|
||||
|
||||
class StartBarAlign(str, Enum):
|
||||
"""
|
||||
Represents first bar start time alignment according to timeframe
|
||||
@ -85,3 +109,9 @@ class StartBarAlign(str, Enum):
|
||||
"""
|
||||
ROUND = "round"
|
||||
RANDOM = "random"
|
||||
|
||||
class Market(str, Enum):
|
||||
US = "US"
|
||||
CRYPTO = "CRYPTO"
|
||||
|
||||
|
||||
@ -2,9 +2,10 @@ from alpaca.trading.enums import OrderSide, OrderType
|
||||
from threading import Lock
|
||||
from v2realbot.interfaces.general_interface import GeneralInterface
|
||||
from v2realbot.backtesting.backtester import Backtester
|
||||
from v2realbot.config import BT_DELAYS, COUNT_API_REQUESTS
|
||||
from datetime import datetime
|
||||
from v2realbot.utils.utils import zoneNY
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
from v2realbot.common.model import Account
|
||||
|
||||
""""
|
||||
backtester methods can be called
|
||||
@ -16,10 +17,11 @@ both should be backtestable
|
||||
if methods are called for the past, self.time must be set accordingly
|
||||
"""
|
||||
class BacktestInterface(GeneralInterface):
|
||||
def __init__(self, symbol, bt: Backtester) -> None:
|
||||
def __init__(self, symbol, bt: Backtester, account: Account) -> None:
|
||||
self.symbol = symbol
|
||||
self.account = account
|
||||
self.bt = bt
|
||||
self.count_api_requests = COUNT_API_REQUESTS
|
||||
self.count_api_requests = cfh.config_handler.get_val('COUNT_API_REQUESTS')
|
||||
self.mincnt = list([dict(minute=0,count=0)])
|
||||
#TODO time v API nejspis muzeme dat pryc a BT bude si to brat primo ze self.time (nezapomenout na + BT_DELAYS)
|
||||
# self.time = self.bt.time
|
||||
@ -43,48 +45,48 @@ class BacktestInterface(GeneralInterface):
|
||||
def buy(self, size = 1, repeat: bool = False):
|
||||
self.count()
|
||||
#add REST API latency
|
||||
return self.bt.submit_order(time=self.bt.time + BT_DELAYS.strat_to_sub,symbol=self.symbol,side=OrderSide.BUY,size=size,order_type = OrderType.MARKET)
|
||||
return self.bt.submit_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'),symbol=self.symbol,side=OrderSide.BUY,size=size,order_type = OrderType.MARKET, account=self.account)
|
||||
|
||||
"""buy limit"""
|
||||
def buy_l(self, price: float, size: int = 1, repeat: bool = False, force: int = 0):
|
||||
self.count()
|
||||
return self.bt.submit_order(time=self.bt.time + BT_DELAYS.strat_to_sub,symbol=self.symbol,side=OrderSide.BUY,size=size,price=price,order_type = OrderType.LIMIT)
|
||||
return self.bt.submit_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'),symbol=self.symbol,side=OrderSide.BUY,size=size,price=price,order_type = OrderType.LIMIT, account=self.account)
|
||||
|
||||
"""sell market"""
|
||||
def sell(self, size = 1, repeat: bool = False):
|
||||
self.count()
|
||||
return self.bt.submit_order(time=self.bt.time + BT_DELAYS.strat_to_sub,symbol=self.symbol,side=OrderSide.SELL,size=size,order_type = OrderType.MARKET)
|
||||
return self.bt.submit_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'),symbol=self.symbol,side=OrderSide.SELL,size=size,order_type = OrderType.MARKET, account=self.account)
|
||||
|
||||
"""sell limit"""
|
||||
async def sell_l(self, price: float, size = 1, repeat: bool = False):
|
||||
self.count()
|
||||
return self.bt.submit_order(time=self.bt.time + BT_DELAYS.strat_to_sub,symbol=self.symbol,side=OrderSide.SELL,size=size,price=price,order_type = OrderType.LIMIT)
|
||||
return self.bt.submit_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'),symbol=self.symbol,side=OrderSide.SELL,size=size,price=price,order_type = OrderType.LIMIT, account=self.account)
|
||||
|
||||
"""replace order"""
|
||||
async def repl(self, orderid: str, price: float = None, size: int = None, repeat: bool = False):
|
||||
self.count()
|
||||
return self.bt.replace_order(time=self.bt.time + BT_DELAYS.strat_to_sub,id=orderid,size=size,price=price)
|
||||
return self.bt.replace_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'),id=orderid,size=size,price=price, account=self.account)
|
||||
|
||||
"""cancel order"""
|
||||
#TBD exec predtim?
|
||||
def cancel(self, orderid: str):
|
||||
self.count()
|
||||
return self.bt.cancel_order(time=self.bt.time + BT_DELAYS.strat_to_sub, id=orderid)
|
||||
return self.bt.cancel_order(time=self.bt.time + cfh.config_handler.get_val('BT_DELAYS','strat_to_sub'), id=orderid, account=self.account)
|
||||
|
||||
"""get positions ->(size,avgp)"""
|
||||
#TBD exec predtim?
|
||||
def pos(self):
|
||||
self.count()
|
||||
return self.bt.get_open_position(symbol=self.symbol)
|
||||
return self.bt.get_open_position(symbol=self.symbol, account=self.account)
|
||||
|
||||
"""get open orders ->list(Order)"""
|
||||
def get_open_orders(self, side: OrderSide, symbol: str):
|
||||
self.count()
|
||||
return self.bt.get_open_orders(side=side, symbol=symbol)
|
||||
return self.bt.get_open_orders(side=side, symbol=symbol, account=self.account)
|
||||
|
||||
def get_last_price(self, symbol: str):
|
||||
self.count()
|
||||
return self.bt.get_last_price(time=self.bt.time)
|
||||
return self.bt.get_last_price(time=self.bt.time, account=self.account)
|
||||
|
||||
|
||||
|
||||
|
||||
@ -40,7 +40,9 @@ class LiveInterface(GeneralInterface):
|
||||
|
||||
return market_order.id
|
||||
except Exception as e:
|
||||
print("Nepodarilo se odeslat buy", str(e))
|
||||
reason = "Nepodarilo se market buy:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
return -1
|
||||
|
||||
"""buy limit"""
|
||||
@ -65,7 +67,9 @@ class LiveInterface(GeneralInterface):
|
||||
|
||||
return limit_order.id
|
||||
except Exception as e:
|
||||
print("Nepodarilo se odeslat limitku", str(e))
|
||||
reason = "Nepodarilo se odeslat buy limitku:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
return -1
|
||||
|
||||
"""sell market"""
|
||||
@ -87,11 +91,13 @@ class LiveInterface(GeneralInterface):
|
||||
|
||||
return market_order.id
|
||||
except Exception as e:
|
||||
print("Nepodarilo se odeslat sell", str(e))
|
||||
reason = "Nepodarilo se odeslat sell:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
return -1
|
||||
|
||||
"""sell limit"""
|
||||
async def sell_l(self, price: float, size = 1, repeat: bool = False):
|
||||
def sell_l(self, price: float, size = 1, repeat: bool = False):
|
||||
self.size = size
|
||||
self.repeat = repeat
|
||||
|
||||
@ -112,12 +118,13 @@ class LiveInterface(GeneralInterface):
|
||||
return limit_order.id
|
||||
|
||||
except Exception as e:
|
||||
print("Nepodarilo se odeslat sell_l", str(e))
|
||||
#raise Exception(e)
|
||||
reason = "Nepodarilo se odeslat sell limitku:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
return -1
|
||||
|
||||
"""order replace"""
|
||||
async def repl(self, orderid: str, price: float = None, size: int = None, repeatl: bool = False):
|
||||
def repl(self, orderid: str, price: float = None, size: int = None, repeatl: bool = False):
|
||||
|
||||
if not price and not size:
|
||||
print("price or size has to be filled")
|
||||
@ -136,7 +143,9 @@ class LiveInterface(GeneralInterface):
|
||||
if e.code == 42210000: return orderid
|
||||
else:
|
||||
##mozna tady proste vracet vzdy ok
|
||||
print("Neslo nahradit profitku. Problem",str(e))
|
||||
reason = "Neslo nahradit profitku. Problem:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
return -1
|
||||
#raise Exception(e)
|
||||
|
||||
@ -150,7 +159,9 @@ class LiveInterface(GeneralInterface):
|
||||
#order doesnt exist
|
||||
if e.code == 40410000: return 0
|
||||
else:
|
||||
print("nepovedlo se zrusit objednavku", str(e))
|
||||
reason = "Nepovedlo se zrusit objednavku:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
#raise Exception(e)
|
||||
return -1
|
||||
|
||||
@ -162,7 +173,7 @@ class LiveInterface(GeneralInterface):
|
||||
return a.avg_entry_price, a.qty
|
||||
except (APIError, Exception) as e:
|
||||
#no position
|
||||
if e.code == 40410000: return 0,0
|
||||
if hasattr(e, 'code') and e.code == 40410000: return 0,0
|
||||
else:
|
||||
reason = "Exception when calling LIVE interface pos, REPEATING:" + str(e) + format_exc()
|
||||
print("API ERROR: Nepodarilo se ziskat pozici.", reason)
|
||||
@ -178,7 +189,9 @@ class LiveInterface(GeneralInterface):
|
||||
#list of Orders (orderlist[0].id)
|
||||
return orderlist
|
||||
except Exception as e:
|
||||
print("Chyba pri dotazeni objednávek.", str(e))
|
||||
reason = "Chyba pri dotazeni objednávek:" + str(e) + format_exc()
|
||||
print(reason)
|
||||
send_to_telegram(reason)
|
||||
#raise Exception (e)
|
||||
return -1
|
||||
|
||||
|
||||
1411  v2realbot/loader/agg_vect.ipynb  Normal file
File diff suppressed because it is too large
@ -3,7 +3,7 @@
|
||||
"""
|
||||
from v2realbot.enums.enums import RecordType, StartBarAlign
|
||||
from datetime import datetime, timedelta
|
||||
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, Queue,is_open_hours,zoneNY
|
||||
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, Queue,is_open_hours,zoneNY, zoneUTC
|
||||
from queue import Queue
|
||||
from rich import print
|
||||
from v2realbot.enums.enums import Mode
|
||||
@ -11,9 +11,10 @@ import threading
|
||||
from copy import deepcopy
|
||||
from msgpack import unpackb
|
||||
import os
|
||||
from v2realbot.config import DATA_DIR, GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN, AGG_EXCLUDED_TRADES
|
||||
import pickle
|
||||
from v2realbot.config import DATA_DIR
|
||||
import dill
|
||||
import gzip
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
|
||||
class TradeAggregator:
|
||||
def __init__(self,
|
||||
@ -24,7 +25,7 @@ class TradeAggregator:
|
||||
align: StartBarAlign = StartBarAlign.ROUND,
|
||||
mintick: int = 0,
|
||||
exthours: bool = False,
|
||||
excludes: list = AGG_EXCLUDED_TRADES,
|
||||
excludes: list = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES'),
|
||||
skip_cache: bool = False):
|
||||
"""
|
||||
UPDATED VERSION - vrací více záznamů
|
||||
@ -47,7 +48,7 @@ class TradeAggregator:
|
||||
self.excludes = excludes
|
||||
self.skip_cache = skip_cache
|
||||
|
||||
if mintick >= resolution:
|
||||
if resolution > 0 and mintick >= resolution:
|
||||
print("Mintick musi byt mensi nez resolution")
|
||||
raise Exception
|
||||
|
||||
@ -149,7 +150,7 @@ class TradeAggregator:
|
||||
# else:
|
||||
data['t'] = parse_alpaca_timestamp(data['t'])
|
||||
|
||||
if not is_open_hours(datetime.fromtimestamp(data['t'])) and self.exthours is False:
|
||||
if not is_open_hours(datetime.fromtimestamp(data['t'], tz=zoneUTC)) and self.exthours is False:
|
||||
#print("AGG: trade not in open hours skipping", datetime.fromtimestamp(data['t']).astimezone(zoneNY))
|
||||
return []
|
||||
|
||||
@ -178,13 +179,29 @@ class TradeAggregator:
|
||||
# return
|
||||
# else: pass
|
||||
|
||||
if self.rectype in (RecordType.BAR, RecordType.CBAR):
|
||||
# if self.rectype in (RecordType.BAR, RecordType.CBAR):
|
||||
# return await self.calculate_time_bar(data, symbol)
|
||||
|
||||
# if self.rectype == RecordType.CBARVOLUME:
|
||||
# return await self.calculate_volume_bar(data, symbol)
|
||||
|
||||
# if self.rectype == RecordType.CBARVOLUME:
|
||||
# return await self.calculate_volume_bar(data, symbol)
|
||||
|
||||
# if self.rectype == RecordType.CBARRENKO:
|
||||
# return await self.calculate_renko_bar(data, symbol)
|
||||
|
||||
match self.rectype:
|
||||
case RecordType.BAR | RecordType.CBAR:
|
||||
return await self.calculate_time_bar(data, symbol)
|
||||
|
||||
if self.rectype == RecordType.CBARVOLUME:
|
||||
case RecordType.CBARVOLUME:
|
||||
return await self.calculate_volume_bar(data, symbol)
|
||||
|
||||
if self.rectype == RecordType.CBARRENKO:
|
||||
case RecordType.CBARDOLLAR:
|
||||
return await self.calculate_dollar_bar(data, symbol)
|
||||
|
||||
case RecordType.CBARRENKO:
|
||||
return await self.calculate_renko_bar(data, symbol)
|
||||
|
||||
async def calculate_time_bar(self, data, symbol):
|
||||
@ -276,7 +293,7 @@ class TradeAggregator:
|
||||
self.diff_price = True
|
||||
self.last_price = data['p']
|
||||
|
||||
if float(data['t']) - float(self.lasttimestamp) < GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN:
|
||||
if float(data['t']) - float(self.lasttimestamp) < cfh.config_handler.get_val('GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN'):
|
||||
self.trades_too_close = True
|
||||
else:
|
||||
self.trades_too_close = False
|
||||
@ -303,13 +320,13 @@ class TradeAggregator:
|
||||
#TODO: do budoucna vymyslet, kdyz bude mene tradu, tak to radit vzdy do spravneho intervalu
|
||||
#zarovname time prvniho baru podle timeframu kam patří (např. 5, 10, 15 ...) (ROUND)
|
||||
if self.align == StartBarAlign.ROUND and self.bar_start == 0:
|
||||
t = datetime.fromtimestamp(data['t'])
|
||||
t = datetime.fromtimestamp(data['t'], tz=zoneUTC)
|
||||
t = t - timedelta(seconds=t.second % self.resolution,microseconds=t.microsecond)
|
||||
self.bar_start = datetime.timestamp(t)
|
||||
#nebo pouzijeme datum tradu zaokrouhlene na vteriny (RANDOM)
|
||||
else:
|
||||
#ulozime si jeho timestamp (odtum pocitame resolution)
|
||||
t = datetime.fromtimestamp(int(data['t']))
|
||||
t = datetime.fromtimestamp(int(data['t']), tz=zoneUTC)
|
||||
#timestamp
|
||||
self.bar_start = int(data['t'])
|
||||
|
||||
@ -359,7 +376,7 @@ class TradeAggregator:
|
||||
if self.mintick != 0 and self.lastBarConfirmed:
|
||||
#d zacatku noveho baru musi ubehnout x sekund nez posilame updazte
|
||||
#pocatek noveho baru + Xs musi byt vetsi nez aktualni trade
|
||||
if (self.newBar['time'] + timedelta(seconds=self.mintick)) > datetime.fromtimestamp(data['t']):
|
||||
if (self.newBar['time'] + timedelta(seconds=self.mintick)) > datetime.fromtimestamp(data['t'], tz=zoneUTC):
|
||||
#print("waiting for mintick")
|
||||
return []
|
||||
else:
|
||||
@ -426,7 +443,7 @@ class TradeAggregator:
|
||||
"trades": 1,
|
||||
"hlcc4": data['p'],
|
||||
"confirmed": 0,
|
||||
"time": datetime.fromtimestamp(data['t']),
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
@ -460,7 +477,7 @@ class TradeAggregator:
|
||||
"trades": 1,
|
||||
"hlcc4":data['p'],
|
||||
"confirmed": 1,
|
||||
"time": datetime.fromtimestamp(data['t']),
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
@ -523,7 +540,7 @@ class TradeAggregator:
|
||||
self.diff_price = True
|
||||
self.last_price = data['p']
|
||||
|
||||
if float(data['t']) - float(self.lasttimestamp) < GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN:
|
||||
if float(data['t']) - float(self.lasttimestamp) < cfh.config_handler.get_val('GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN'):
|
||||
self.trades_too_close = True
|
||||
else:
|
||||
self.trades_too_close = False
|
||||
@ -551,6 +568,179 @@ class TradeAggregator:
|
||||
else:
|
||||
return []
|
||||
|
||||
#WIP - review the code and test it
async def calculate_dollar_bar(self, data, symbol):
"""
Aggregates DOLLAR BARS.
Main variables:
- self.openedBar (dict) = stateful, holds the active unconfirmed bar
- confirmedBars (list) = stateless, holds the confirmed bars which are flushed at the end of the function
"""
|
||||
#volume_bucket = 10000 #daily MA volume z emackova na 30 deleno 50ti - dat do configu
|
||||
dollar_bucket = self.resolution
|
||||
#potvrzene pripravene k vraceni
|
||||
confirmedBars = []
|
||||
#potvrdi existujici a nastavi k vraceni
|
||||
def confirm_existing():
|
||||
self.openedBar['confirmed'] = 1
|
||||
self.openedBar['vwap'] = self.vwaphelper / self.openedBar['volume']
|
||||
self.vwaphelper = 0
|
||||
|
||||
#ulozime zacatek potvrzeneho baru
|
||||
#self.lastBarConfirmed = self.openedBar['time']
|
||||
|
||||
self.openedBar['updated'] = data['t']
|
||||
confirmedBars.append(deepcopy(self.openedBar))
|
||||
self.openedBar = None
|
||||
#TBD po každém potvrzení zvýšíme čas o nanosekundu (pro zobrazení v gui)
|
||||
#data['t'] = data['t'] + 0.000001
|
||||
|
||||
#init unconfirmed - velikost bucketu kontrolovana predtim
|
||||
def initialize_unconfirmed(size):
|
||||
#inicializuji pro nový bar
|
||||
self.vwaphelper += (data['p'] * size)
|
||||
self.barindex +=1
|
||||
self.openedBar = {
|
||||
"close": data['p'],
|
||||
"high": data['p'],
|
||||
"low": data['p'],
|
||||
"open": data['p'],
|
||||
"volume": size,
|
||||
"trades": 1,
|
||||
"hlcc4": data['p'],
|
||||
"confirmed": 0,
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
"resolution":dollar_bucket
|
||||
}
|
||||
|
||||
def update_unconfirmed(size):
|
||||
#spočteme vwap - potřebujeme předchozí hodnoty
|
||||
self.vwaphelper += (data['p'] * size)
|
||||
self.openedBar['updated'] = data['t']
|
||||
self.openedBar['close'] = data['p']
|
||||
self.openedBar['high'] = max(self.openedBar['high'],data['p'])
|
||||
self.openedBar['low'] = min(self.openedBar['low'],data['p'])
|
||||
self.openedBar['volume'] = self.openedBar['volume'] + size
|
||||
self.openedBar['trades'] = self.openedBar['trades'] + 1
|
||||
self.openedBar['vwap'] = self.vwaphelper / self.openedBar['volume']
|
||||
#pohrat si s timto round
|
||||
self.openedBar['hlcc4'] = round((self.openedBar['high']+self.openedBar['low']+self.openedBar['close']+self.openedBar['close'])/4,3)
|
||||
|
||||
#init new - confirmed
|
||||
def initialize_confirmed(size):
|
||||
#ulozime zacatek potvrzeneho baru
|
||||
#self.lastBarConfirmed = datetime.fromtimestamp(data['t'])
|
||||
self.barindex +=1
|
||||
confirmedBars.append({
|
||||
"close": data['p'],
|
||||
"high": data['p'],
|
||||
"low": data['p'],
|
||||
"open": data['p'],
|
||||
"volume": size,
|
||||
"trades": 1,
|
||||
"hlcc4":data['p'],
|
||||
"confirmed": 1,
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
"resolution": dollar_bucket
|
||||
})
|
||||
|
||||
#current trade dollar value
|
||||
trade_dollar_val = int(data['s'])*float(data['p'])
|
||||
|
||||
#existuje stávající bar a vejdeme se do nej
|
||||
if self.openedBar is not None and trade_dollar_val + self.openedBar['volume']*self.openedBar['close'] < dollar_bucket:
|
||||
#vejdeme se do stávajícího baru (tzn. neprekracujeme bucket)
|
||||
update_unconfirmed(int(data['s']))
|
||||
#updatujeme stávající nepotvrzeny bar
|
||||
#nevejdem se do nej nebo neexistuje predchozi bar
|
||||
else:
|
||||
#1)existuje predchozi bar - doplnime zbytkem do valikosti bucketu a nastavime confirmed
|
||||
if self.openedBar is not None:
|
||||
|
||||
#doplnime je zbytkem (v bucket left-je zbyvajici volume)
|
||||
opened_bar_dollar_val = self.openedBar['volume']*self.openedBar['close']
|
||||
bucket_left = int((dollar_bucket - opened_bar_dollar_val)/float(data['p']))
|
||||
# - update and confirm bar
|
||||
update_unconfirmed(bucket_left)
|
||||
confirm_existing()
|
||||
|
||||
#zbytek mnozství jde do dalsiho zpracovani
|
||||
data['s'] = int(data['s']) - bucket_left
|
||||
#nastavime cas o nanosekundu vyssi
|
||||
data['t'] = round((data['t']) + 0.000001,6)
|
||||
|
||||
#2 vytvarime novy bar (bary) a vejdeme se do nej
|
||||
if int(data['s'])*float(data['p']) < dollar_bucket:
|
||||
#vytvarime novy nepotvrzeny bar
|
||||
initialize_unconfirmed(int(data['s']))
|
||||
#nevejdeme se do nej - pak vytvarime 1 až N dalsich baru (posledni nepotvrzený)
|
||||
else:
|
||||
# >>> for i in range(0, 550, 500):
|
||||
# ... print(i)
|
||||
# ...
|
||||
# 0
|
||||
# 500
|
||||
|
||||
#vytvarime plne potvrzene buckety (kolik se jich plne vejde)
|
||||
for size in range(int(dollar_bucket/float(data['p'])), int(data['s']), int(dollar_bucket/float(data['p']))):
|
||||
initialize_confirmed(dollar_bucket/float(data['p']))
|
||||
#nastavime cas o nanosekundu vyssi
|
||||
data['t'] = round((data['t']) + 0.000001,6)
|
||||
#create complete full bucket with same prices and size
|
||||
#naplnit do return pole
|
||||
|
||||
#pokud je zbytek vytvorime z nej nepotvrzeny bar
|
||||
zbytek = int(data['s'])*float(data['p']) % dollar_bucket
|
||||
|
||||
#ze zbytku vytvorime nepotvrzeny bar
|
||||
if zbytek > 0:
|
||||
#prevedeme zpatky na volume
|
||||
zbytek = int(zbytek/float(data['p']))
|
||||
initialize_unconfirmed(zbytek)
|
||||
#create new open bar with size zbytek s otevrenym
|
||||
|
||||
#je cena stejna od predchoziho tradu? pro nepotvrzeny cbar vracime jen pri zmene ceny
|
||||
if self.last_price == data['p']:
|
||||
self.diff_price = False
|
||||
else:
|
||||
self.diff_price = True
|
||||
self.last_price = data['p']
|
||||
|
||||
if float(data['t']) - float(self.lasttimestamp) < cfh.config_handler.get_val('GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN'):
|
||||
self.trades_too_close = True
|
||||
else:
|
||||
self.trades_too_close = False
|
||||
|
||||
#uložíme do předchozí hodnoty (poznáme tak open a close)
|
||||
self.lasttimestamp = data['t']
|
||||
self.iterace += 1
|
||||
# print(self.iterace, data)
|
||||
|
||||
#pokud mame confirm bary, tak FLUSHNEME confirm a i případný open (zrejme se pak nejaky vytvoril)
|
||||
if len(confirmedBars) > 0:
|
||||
return_set = confirmedBars + ([self.openedBar] if self.openedBar is not None else [])
|
||||
confirmedBars = []
|
||||
return return_set
|
||||
|
||||
#nemame confirm, FLUSHUJEME CBARVOLUME open - neresime zmenu ceny, ale neposilame kulomet (pokud nam nevytvari conf. bar)
|
||||
if self.openedBar is not None and self.rectype == RecordType.CBARDOLLAR:
|
||||
|
||||
#zkousime pustit i stejnou cenu(potrebujeme kvuli MYSELLU), ale blokoval kulomet,tzn. trady mensi nez GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN (1ms)
|
||||
#if self.diff_price is True:
|
||||
if self.trades_too_close is False:
|
||||
return [self.openedBar]
|
||||
else:
|
||||
return []
|
||||
else:
|
||||
return []
|
||||
|
||||
|
||||
async def calculate_renko_bar(self, data, symbol):
|
||||
""""
|
||||
Agreguje RENKO BARS - dle brick size
|
||||
@ -566,8 +756,14 @@ class TradeAggregator:
|
||||
Ve strategii je třeba počítat s tím, že open v nepotvrzeném baru není finální.
|
||||
"""""
|
||||
|
||||
#pocet ticku např. 10ticků, případně pak na procenta
|
||||
if self.resolution < 0: # Treat as percentage
|
||||
reference_price = self.lastConfirmedBar['close'] if self.lastConfirmedBar is not None else float(data['p'])
|
||||
brick_size = abs(self.resolution) * reference_price / 100.0
|
||||
else: # Treat as absolute value pocet ticku
|
||||
brick_size = self.resolution
|
||||
|
||||
#pocet ticku např. 10ticků, případně pak na procenta
|
||||
#brick_size = self.resolution
|
||||
#potvrzene pripravene k vraceni
|
||||
confirmedBars = []
|
||||
#potvrdi existujici a nastavi k vraceni
|
||||
@ -598,7 +794,7 @@ class TradeAggregator:
|
||||
"trades": 1,
|
||||
"hlcc4": data['p'],
|
||||
"confirmed": 0,
|
||||
"time": datetime.fromtimestamp(data['t']),
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
@ -633,7 +829,7 @@ class TradeAggregator:
|
||||
"trades": 1,
|
||||
"hlcc4":data['p'],
|
||||
"confirmed": 1,
|
||||
"time": datetime.fromtimestamp(data['t']),
|
||||
"time": datetime.fromtimestamp(data['t'], tz=zoneUTC),
|
||||
"updated": data['t'],
|
||||
"vwap": data['p'],
|
||||
"index": self.barindex,
|
||||
@ -676,7 +872,7 @@ class TradeAggregator:
|
||||
self.diff_price = True
|
||||
self.last_price = data['p']
|
||||
|
||||
if float(data['t']) - float(self.lasttimestamp) < GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN:
|
||||
if float(data['t']) - float(self.lasttimestamp) < cfh.config_handler.get_val('GROUP_TRADES_WITH_TIMESTAMP_LESS_THAN'):
|
||||
self.trades_too_close = True
|
||||
else:
|
||||
self.trades_too_close = False
|
||||
@ -709,7 +905,7 @@ class TradeAggregator:
|
||||
#a take excludes result = ''.join(self.excludes.sort())
|
||||
self.excludes.sort() # Sorts the list in place
|
||||
excludes_str = ''.join(map(str, self.excludes)) # Joins the sorted elements after converting them to strings
|
||||
cache_file = self.__class__.__name__ + '-' + self.symbol + '-' + str(int(date_from.timestamp())) + '-' + str(int(date_to.timestamp())) + '-' + str(self.rectype) + "-" + str(self.resolution) + "-" + str(self.minsize) + "-" + str(self.align) + '-' + str(self.mintick) + str(self.exthours) + excludes_str + '.cache'
|
||||
cache_file = self.__class__.__name__ + '-' + self.symbol + '-' + str(int(date_from.timestamp())) + '-' + str(int(date_to.timestamp())) + '-' + str(self.rectype) + "-" + str(self.resolution) + "-" + str(self.minsize) + "-" + str(self.align) + '-' + str(self.mintick) + str(self.exthours) + excludes_str + '.cache.gz'
|
||||
file_path = DATA_DIR + "/aggcache/" + cache_file
|
||||
#print(file_path)
|
||||
return file_path
|
||||
@ -719,7 +915,7 @@ class TradeAggregator:
|
||||
file_path = self.populate_file_name(date_from, date_to)
|
||||
if self.skip_cache is False and os.path.exists(file_path):
|
||||
##daily aggregated file exists
|
||||
with open (file_path, 'rb') as fp:
|
||||
with gzip.open (file_path, 'rb') as fp:
|
||||
cachedobject = dill.load(fp)
|
||||
print("AGG CACHE loaded ", file_path)
|
||||
|
||||
@ -752,7 +948,7 @@ class TradeAggregator:
|
||||
|
||||
file_path = self.populate_file_name(self.cache_from, self.cache_to)
|
||||
|
||||
with open(file_path, 'wb') as fp:
|
||||
with gzip.open(file_path, 'wb') as fp:
|
||||
dill.dump(self.cached_object, fp)
|
||||
print(f"AGG CACHE stored ({num}) :{file_path}")
|
||||
print(f"DATES from:{self.cache_from.strftime('%d.%m.%Y %H:%M')} to:{self.cache_to.strftime('%d.%m.%Y %H:%M')}")
|
||||
@ -772,7 +968,7 @@ class TradeAggregator2Queue(TradeAggregator):
|
||||
Child of TradeAggregator - sends items to given queue
|
||||
In the future others will be added - TradeAggToTxT etc.
|
||||
"""
|
||||
def __init__(self, symbol: str, queue: Queue, rectype: RecordType = RecordType.BAR, resolution: int = 5, minsize: int = 100, update_ltp: bool = False, align: StartBarAlign = StartBarAlign.ROUND, mintick: int = 0, exthours: bool = False, excludes: list = AGG_EXCLUDED_TRADES, skip_cache: bool = False):
|
||||
def __init__(self, symbol: str, queue: Queue, rectype: RecordType = RecordType.BAR, resolution: int = 5, minsize: int = 100, update_ltp: bool = False, align: StartBarAlign = StartBarAlign.ROUND, mintick: int = 0, exthours: bool = False, excludes: list = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES'), skip_cache: bool = False):
|
||||
super().__init__(rectype=rectype, resolution=resolution, minsize=minsize, update_ltp=update_ltp, align=align, mintick=mintick, exthours=exthours, excludes=excludes, skip_cache=skip_cache)
|
||||
self.queue = queue
|
||||
self.symbol = symbol
|
||||
@ -817,7 +1013,7 @@ class TradeAggregator2List(TradeAggregator):
|
||||
""""
|
||||
stores records to the list
|
||||
"""
|
||||
def __init__(self, symbol: str, btdata: list, rectype: RecordType = RecordType.BAR, resolution: int = 5, minsize: int = 100, update_ltp: bool = False, align: StartBarAlign = StartBarAlign.ROUND, mintick: int = 0, exthours: bool = False, excludes: list = AGG_EXCLUDED_TRADES, skip_cache: bool = False):
|
||||
def __init__(self, symbol: str, btdata: list, rectype: RecordType = RecordType.BAR, resolution: int = 5, minsize: int = 100, update_ltp: bool = False, align: StartBarAlign = StartBarAlign.ROUND, mintick: int = 0, exthours: bool = False, excludes: list = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES'), skip_cache: bool = False):
|
||||
super().__init__(rectype=rectype, resolution=resolution, minsize=minsize, update_ltp=update_ltp, align=align, mintick=mintick, exthours=exthours, excludes=excludes, skip_cache=skip_cache)
|
||||
self.btdata = btdata
|
||||
self.symbol = symbol
|
||||
|
||||
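The dollar-bar logic above splits one incoming trade into a fill for the currently open bucket, zero or more fully confirmed buckets, and an unconfirmed remainder. A stripped-down sketch of that splitting step (illustration only; it works on plain numbers, uses hypothetical names, and is not the aggregator's API):

```python
def split_trade_into_dollar_buckets(price: float, size: int, open_value: float, dollar_bucket: float):
    """Return (fill_for_open_bar, full_confirmed_buckets, remainder_size) for one trade.

    open_value is the dollar value already accumulated in the currently open bar.
    """
    fill = 0
    if open_value > 0:
        # how many shares still fit into the open bucket at this price
        fill = min(size, int((dollar_bucket - open_value) / price))
        size -= fill
    bucket_shares = int(dollar_bucket / price)            # shares per fully confirmed bucket
    full_buckets = size // bucket_shares if bucket_shares else 0
    remainder = size - full_buckets * bucket_shares       # opens the next unconfirmed bar
    return fill, full_buckets, remainder

# e.g. a 10,000 USD bucket, 1,500 USD already open, a 2,000-share trade at 25 USD
print(split_trade_into_dollar_buckets(25.0, 2000, 1500.0, 10_000))  # -> (340, 4, 60)
```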
570  v2realbot/loader/aggregator_vectorized.py  Normal file
@@ -0,0 +1,570 @@
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from numba import jit
|
||||
from alpaca.data.historical import StockHistoricalDataClient
|
||||
from sqlalchemy import column
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR
|
||||
from alpaca.data.requests import StockTradesRequest
|
||||
import time as time_module
|
||||
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY, send_to_telegram, fetch_calendar_data
|
||||
import pyarrow
|
||||
from traceback import format_exc
|
||||
from datetime import timedelta, datetime, time
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
import os
|
||||
import gzip
|
||||
import pickle
|
||||
import random
|
||||
from alpaca.data.models import BarSet, QuoteSet, TradeSet
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
from v2realbot.enums.enums import BarType
|
||||
from tqdm import tqdm
|
||||
""""
|
||||
Module used for vectorized aggregation of trades.
|
||||
|
||||
Includes fetch (remote/cached) methods and numba aggregator function for TIME BASED, VOLUME BASED and DOLLAR BARS
|
||||
|
||||
"""""
|
||||
|
||||
def aggregate_trades(symbol: str, trades_df: pd.DataFrame, resolution: int, type: BarType = BarType.TIME):
"""
Accepts a dataframe with trades keyed by symbol. Prepares the dataframe as
numpy and calls the Numba-optimized aggregator for the given bar type (time/volume/dollar).
"""
|
||||
trades_df = trades_df.loc[symbol]
|
||||
trades_df= trades_df.reset_index()
|
||||
ticks = trades_df[['timestamp', 'price', 'size']].to_numpy()
|
||||
# Extract the timestamps column (assuming it's the first column)
|
||||
timestamps = ticks[:, 0]
|
||||
# Convert the timestamps to Unix timestamps in seconds with microsecond precision
|
||||
unix_timestamps_s = np.array([ts.timestamp() for ts in timestamps], dtype='float64')
|
||||
# Replace the original timestamps in the NumPy array with the converted Unix timestamps
|
||||
ticks[:, 0] = unix_timestamps_s
|
||||
ticks = ticks.astype(np.float64)
|
||||
#based on type, specific aggregator function is called
|
||||
match type:
|
||||
case BarType.TIME:
|
||||
ohlcv_bars = generate_time_bars_nb(ticks, resolution)
|
||||
case BarType.VOLUME:
|
||||
ohlcv_bars = generate_volume_bars_nb(ticks, resolution)
|
||||
case BarType.DOLLAR:
|
||||
ohlcv_bars = generate_dollar_bars_nb(ticks, resolution)
|
||||
case _:
|
||||
raise ValueError("Invalid bar type. Supported types are 'time', 'volume' and 'dollar'.")
|
||||
# Convert the resulting array back to a DataFrame
|
||||
columns = ['time', 'open', 'high', 'low', 'close', 'volume', 'trades']
|
||||
if type == BarType.DOLLAR:
|
||||
columns.append('amount')
|
||||
columns.append('updated')
|
||||
if type == BarType.TIME:
|
||||
columns.append('vwap')
|
||||
columns.append('buyvolume')
|
||||
columns.append('sellvolume')
|
||||
if type == BarType.VOLUME:
|
||||
columns.append('buyvolume')
|
||||
columns.append('sellvolume')
|
||||
ohlcv_df = pd.DataFrame(ohlcv_bars, columns=columns)
|
||||
ohlcv_df['time'] = pd.to_datetime(ohlcv_df['time'], unit='s').dt.tz_localize('UTC').dt.tz_convert(zoneNY)
|
||||
#print(ohlcv_df['updated'])
|
||||
ohlcv_df['updated'] = pd.to_datetime(ohlcv_df['updated'], unit="s").dt.tz_localize('UTC').dt.tz_convert(zoneNY)
|
||||
# Round to microseconds to maintain six decimal places
|
||||
ohlcv_df['updated'] = ohlcv_df['updated'].dt.round('us')
|
||||
|
||||
ohlcv_df.set_index('time', inplace=True)
|
||||
#ohlcv_df.index = ohlcv_df.index.tz_localize('UTC').tz_convert(zoneNY)
|
||||
return ohlcv_df
|
||||
|
||||
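# --- Illustrative usage (a sketch, not part of the original module) ---
# Assumes a MultiIndex trades DataFrame as produced by convert_dict_to_multiindex_df below;
# the symbol "BAC" and the resolution values are hypothetical examples.
def _example_aggregate_trades_usage():
    start = zoneNY.localize(datetime(2024, 2, 5, 9, 30))
    end = zoneNY.localize(datetime(2024, 2, 5, 16, 0))
    trades_df = fetch_daily_stock_trades("BAC", start, end)  # MultiIndex (symbol, timestamp)
    time_bars = aggregate_trades("BAC", trades_df, resolution=10, type=BarType.TIME)            # 10-second bars
    volume_bars = aggregate_trades("BAC", trades_df, resolution=50_000, type=BarType.VOLUME)    # 50k-share bars
    dollar_bars = aggregate_trades("BAC", trades_df, resolution=1_000_000, type=BarType.DOLLAR) # $1M bars
    return time_bars, volume_bars, dollar_bars
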
# Function to ensure fractional seconds are present
def ensure_fractional_seconds(timestamp):
    if '.' not in timestamp:
        # Inserting .000000 before the timezone indicator 'Z'
        return timestamp.replace('Z', '.000000Z')
    else:
        return timestamp

def convert_dict_to_multiindex_df(tradesResponse):
    """
    Converts a dictionary from cache or from remote (raw input) to a MultiIndex dataframe
    with microsecond precision (from nanoseconds in the raw data).
    """
    # Create a DataFrame for each key and add the key as part of the MultiIndex
    dfs = []
    for key, values in tradesResponse.items():
        df = pd.DataFrame(values)
        # Select and order columns explicitly, then rename them
        #print(df)
        df = df[['t', 'x', 'p', 's', 'i', 'c', 'z']]
        df.rename(columns={'t': 'timestamp', 'c': 'conditions', 'p': 'price', 's': 'size', 'x': 'exchange', 'z': 'tape', 'i': 'id'}, inplace=True)
        df['symbol'] = key  # Add ticker as a column

        # Apply the function to ensure all timestamps have fractional seconds
        # consider whether to keep this, or apply it only on a specific to_datetime error
        # possibly add a more efficient approach later (replacing NaT) - https://chatgpt.com/c/d2be6f87-b38f-4050-a1c6-541d100b1474
        df['timestamp'] = df['timestamp'].apply(ensure_fractional_seconds)

        df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')  # Convert 't' from string to datetime before setting it as an index

        # Adjust to microsecond precision
        df.loc[df['timestamp'].notna(), 'timestamp'] = df['timestamp'].dt.floor('us')

        df.set_index(['symbol', 'timestamp'], inplace=True)  # Set the multi-level index using both 'symbol' and 'timestamp'
        df = df.tz_convert(zoneNY, level='timestamp')
        dfs.append(df)

    # Concatenate all DataFrames into a single DataFrame with MultiIndex
    final_df = pd.concat(dfs)

    return final_df

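# --- Illustrative input shape (a sketch, not part of the original module) ---
# The raw dict is keyed by symbol; each trade carries the Alpaca short field names selected above
# ('t','x','p','s','i','c','z'). The sample values below are hypothetical.
def _example_convert_dict_to_multiindex_df():
    raw = {
        "BAC": [
            {"t": "2024-02-05T14:30:00.000123456Z", "x": "V", "p": 33.51, "s": 100, "i": 1, "c": ["@"], "z": "A"},
            {"t": "2024-02-05T14:30:01Z", "x": "V", "p": 33.52, "s": 200, "i": 2, "c": ["@"], "z": "A"},
        ]
    }
    df = convert_dict_to_multiindex_df(raw)
    # -> MultiIndex (symbol, timestamp in zoneNY); columns: exchange, price, size, id, conditions, tape
    return df
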
def dict_to_df(tradesResponse, start, end, exclude_conditions = None, minsize = None):
    """
    Transforms the raw trades dict to a timezone-aware MultiIndex DataFrame.
    Also filters to start and end if necessary (e.g. only 9:30 to 15:40 is required).

    NOTE: we assume tradesResponse is a dict of raw data (cached/remote).
    """
    df = convert_dict_to_multiindex_df(tradesResponse)

    # REQUIRED FILTERING
    # if the requested start is later or the requested end is earlier, trim the data
    if (start.time() > time(9, 30) or end.time() < time(16, 0)):
        print(f"filtering {start.time()} {end.time()}")
        # Define the time range
        # start_time = pd.Timestamp(start.time(), tz=zoneNY).time()
        # end_time = pd.Timestamp(end.time(), tz=zoneNY).time()

        # Create a mask to filter rows within the specified time range
        mask = (df.index.get_level_values('timestamp') >= start) & \
               (df.index.get_level_values('timestamp') <= end)
        # Apply the mask to the DataFrame
        df = df[mask]

    if exclude_conditions is not None:
        print(f"excluding conditions {exclude_conditions}")
        # Create a mask to exclude rows with any of the specified conditions
        mask = df['conditions'].apply(lambda x: any(cond in exclude_conditions for cond in x))
        # Filter out the rows with specified conditions
        df = df[~mask]

    if minsize is not None:
        print(f"minsize {minsize}")
        # Exclude trades below the minimum size
        df = df[df['size'] >= minsize]
    return df

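# --- Illustrative filtering (a sketch, not part of the original module) ---
# The condition codes below are placeholders; the real defaults come from the
# AGG_EXCLUDED_TRADES config value used by fetch_trades_parallel further down.
def _example_dict_to_df_filtering(tradesResponse):
    start = zoneNY.localize(datetime(2024, 2, 5, 9, 30))
    end = zoneNY.localize(datetime(2024, 2, 5, 15, 40))
    return dict_to_df(tradesResponse, start, end, exclude_conditions=["C", "O"], minsize=100)
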
def fetch_daily_stock_trades(symbol, start, end, exclude_conditions=None, minsize=None, force_remote=False, max_retries=5, backoff_factor=1):
    """
    Attempts to fetch stock trades either from cache or remote. When remote, it uses a retry mechanism with exponential backoff.
    It also stores the data to cache if it is not already there.
    Using force_remote forces remote data to be used always, thus refreshing the cache for these dates.

    Attributes:
    :param symbol: The stock symbol to fetch trades for.
    :param start: The start time for the trade data.
    :param end: The end time for the trade data.
    :param exclude_conditions: List of string conditions to exclude from the data.
    :param minsize: Minimum size of a trade to be included in the data.
    :param force_remote: Always use remote data and refresh the cache.
    :param max_retries: Maximum number of retries.
    :param backoff_factor: Factor to determine the next sleep time.
    :return: TradesResponse object.
    :raises: ConnectionError if all retries fail.

    We use the trade cache only for main session requests (9:30 to 16:00).
    In the future, store the whole day (e.g. BAC-20240203.cache.gz) and filter the main or extended session from it.
    For now only the main session is stored as BAC-timestampopen-timestampclose.cache.gz.
    """
    is_same_day = start.date() == end.date()
    # Determine if the requested times fall within the main session
    in_main_session = (time(9, 30) <= start.time() < time(16, 0)) and (time(9, 30) <= end.time() <= time(16, 0))
    file_path = ''

    if in_main_session:
        filename_start = zoneNY.localize(datetime.combine(start.date(), time(9, 30)))
        filename_end = zoneNY.localize(datetime.combine(end.date(), time(16, 0)))
        daily_file = f"{symbol}-{int(filename_start.timestamp())}-{int(filename_end.timestamp())}.cache.gz"
        file_path = f"{DATA_DIR}/tradecache/{daily_file}"
        if not force_remote and os.path.exists(file_path):
            print(f"Searching {str(start.date())} cache: " + daily_file)
            with gzip.open(file_path, 'rb') as fp:
                tradesResponse = pickle.load(fp)
            print("FOUND in CACHE", daily_file)
            return dict_to_df(tradesResponse, start, end, exclude_conditions, minsize)

    print("NOT FOUND. Fetching from remote")
    client = StockHistoricalDataClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=True)
    stockTradeRequest = StockTradesRequest(symbol_or_symbols=symbol, start=start, end=end)
    last_exception = None

    for attempt in range(max_retries):
        try:
            tradesResponse = client.get_stock_trades(stockTradeRequest)
            is_empty = not tradesResponse[symbol]
            print(f"Remote fetched: {is_empty=}", start, end)
            if in_main_session and not is_empty:
                current_time = datetime.now().astimezone(zoneNY)
                if not (start < current_time < end):
                    with gzip.open(file_path, 'wb') as fp:
                        pickle.dump(tradesResponse, fp)
                    print("Saving to Trade CACHE", file_path)
                else:  # Don't save the cache if the market is still open
                    print("Not saving trade cache, market still open today")
            return pd.DataFrame() if is_empty else dict_to_df(tradesResponse, start, end, exclude_conditions, minsize)
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            last_exception = e
            time_module.sleep(backoff_factor * (2 ** attempt) + random.uniform(0, 1))  # Adding random jitter

    print("All attempts to fetch data failed.")
    raise ConnectionError(f"Failed to fetch stock trades after {max_retries} retries. Last exception: {str(last_exception)} and {format_exc()}")

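# --- Retry timing sketch (illustrative, not part of the original module) ---
# With the defaults above (backoff_factor=1), the sleep before retry k is backoff_factor * 2**k
# seconds plus up to 1 second of random jitter, i.e. roughly 1, 2, 4, 8 ... seconds between attempts.
def _example_backoff_schedule(max_retries: int = 5, backoff_factor: float = 1.0) -> list[float]:
    # Deterministic part of the schedule (jitter excluded)
    return [backoff_factor * (2 ** attempt) for attempt in range(max_retries)]
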
def fetch_trades_parallel(symbol, start_date, end_date, exclude_conditions = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES'), minsize = 100, force_remote = False, max_workers=None):
    """
    Fetches trades for each day between start_date and end_date during market hours (9:30-16:00) in parallel and concatenates them into a single DataFrame.

    :param symbol: Stock symbol.
    :param start_date: Start date as datetime.
    :param end_date: End date as datetime.
    :return: DataFrame containing all trades from start_date to end_date.
    """
    futures = []
    results = []

    market_open_days = fetch_calendar_data(start_date, end_date)
    day_count = len(market_open_days)
    print("Contains", day_count, " market days")
    max_workers = min(10, max(2, day_count // 2)) if max_workers is None else max_workers  # Heuristic: half the number of market days, but at least 2 and no more than 10

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        #for single_date in (start_date + timedelta(days=i) for i in range((end_date - start_date).days + 1)):
        for market_day in tqdm(market_open_days, desc="Processing market days"):
            #start = datetime.combine(single_date, time(9, 30)) # Market opens at 9:30 AM
            #end = datetime.combine(single_date, time(16, 0)) # Market closes at 4:00 PM

            interval_from = zoneNY.localize(market_day.open)
            interval_to = zoneNY.localize(market_day.close)

            # trim if a later start or an earlier end is requested
            start = start_date if interval_from < start_date else interval_from
            #start = max(start_date, interval_from)
            end = end_date if interval_to > end_date else interval_to
            #end = min(end_date, interval_to)

            future = executor.submit(fetch_daily_stock_trades, symbol, start, end, exclude_conditions, minsize, force_remote)
            futures.append(future)

        for future in tqdm(futures, desc="Fetching data"):
            try:
                result = future.result()
                results.append(result)
            except Exception as e:
                print(f"Error fetching data for a day: {e}")

    # Batch concatenation to improve speed
    batch_size = 10
    batches = [results[i:i + batch_size] for i in range(0, len(results), batch_size)]
    final_df = pd.concat([pd.concat(batch, ignore_index=False) for batch in batches], ignore_index=False)

    return final_df

    #original version
    #return pd.concat(results, ignore_index=False)

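# --- Illustrative usage (a sketch, not part of the original module) ---
# The symbol and date range below are hypothetical example values.
def _example_fetch_trades_parallel():
    start_date = zoneNY.localize(datetime(2024, 2, 1, 9, 30))
    end_date = zoneNY.localize(datetime(2024, 2, 9, 16, 0))
    # one worker per ~2 market days, trades smaller than 100 shares dropped,
    # excluded conditions taken from the AGG_EXCLUDED_TRADES config default
    trades_df = fetch_trades_parallel("BAC", start_date, end_date, minsize=100)
    return aggregate_trades("BAC", trades_df, resolution=60, type=BarType.TIME)
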
@jit(nopython=True)
def generate_dollar_bars_nb(ticks, amount_per_bar):
    """
    Generates dollar-based bars from ticks.

    There is also a simple prevention of aggregating across different days,
    as described here https://chatgpt.com/c/17804fc1-a7bc-495d-8686-b8392f3640a2
    Downside: days are split by UTC (which is ok for the main session, but for extended hours
    this should be reworked by preprocessing a new column identifying the session).

    When a trade is split into multiple bars it is counted as a trade in each of those bars.
    Other option: the trade count could be proportionally distributed by weight (0.2 to the 1st bar, 0.8 to the 2nd bar) - but this is not implemented yet.
    https://chatgpt.com/c/ff4802d9-22a2-4b72-8ab7-97a91e7a515f
    """
    ohlcv_bars = []
    remaining_amount = amount_per_bar

    # Initialize bar values based on the first tick to avoid uninitialized values
    open_price = ticks[0, 1]
    high_price = ticks[0, 1]
    low_price = ticks[0, 1]
    close_price = ticks[0, 1]
    volume = 0
    trades_count = 0
    current_day = np.floor(ticks[0, 0] / 86400)  # Calculate the initial day from the first tick timestamp
    bar_time = ticks[0, 0]  # Initialize bar time with the time of the first tick

    for tick in ticks:
        tick_time = tick[0]
        price = tick[1]
        tick_volume = tick[2]
        tick_amount = price * tick_volume
        tick_day = np.floor(tick_time / 86400)  # Calculate the day of the current tick

        # Check if the new tick is from a different day, then close the current bar
        if tick_day != current_day:
            if trades_count > 0:
                ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, amount_per_bar, tick_time])
            # Reset for the new day using the current tick data
            open_price = price
            high_price = price
            low_price = price
            close_price = price
            volume = 0
            trades_count = 0
            remaining_amount = amount_per_bar
            current_day = tick_day
            bar_time = tick_time

        # Start a new bar if needed because of the dollar value
        while tick_amount > 0:
            if tick_amount < remaining_amount:
                # Add the entire tick to the current bar
                high_price = max(high_price, price)
                low_price = min(low_price, price)
                close_price = price
                volume += tick_volume
                remaining_amount -= tick_amount
                trades_count += 1
                tick_amount = 0
            else:
                # Calculate the amount of volume that fits within the remaining dollar amount
                volume_to_add = remaining_amount / price
                volume += volume_to_add  # Update the volume here before appending and resetting

                # Append the partially filled bar to the list
                ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count + 1, amount_per_bar, tick_time])

                # Fill the current bar and continue with a new bar
                tick_volume -= volume_to_add
                tick_amount -= remaining_amount

                # Reset bar values for the new bar using the current tick data
                open_price = price
                high_price = price
                low_price = price
                close_price = price
                volume = 0  # Reset volume for the new bar
                trades_count = 0
                remaining_amount = amount_per_bar

                # Increment bar time if splitting a trade
                if tick_volume > 0:  # if part of the trade remains, bump the new bar's time by a microsecond
                    bar_time = tick_time + 1e-6
                else:
                    bar_time = tick_time  # otherwise use the tick time
                #bar_time = tick_time

    # Add the last bar if it contains any trades
    if trades_count > 0:
        ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, amount_per_bar, tick_time])

    return np.array(ohlcv_bars)

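# --- Worked example (a sketch): splitting one trade across dollar bars ---
# With amount_per_bar = $10,000, a single 400-share trade at $30 carries $12,000:
# the first $10,000 (about 333.33 shares) fills and closes the current bar, and the remaining
# $2,000 (about 66.67 shares) opens the next bar, whose bar time is the trade time + 1e-6 s.
# Per the docstring above, the trade is counted once in each bar it touches.
def _example_dollar_bar_split():
    ts = 1_707_143_400.0  # hypothetical Unix timestamp (seconds)
    ticks = np.array([[ts, 30.0, 400.0]])
    return generate_dollar_bars_nb(ticks, 10_000.0)  # -> two bars
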
@jit(nopython=True)
def generate_volume_bars_nb(ticks, volume_per_bar):
    """
    Generates volume-based bars from ticks.

    NOTE: days are split here as well (trades from different days are not aggregated together),
    but the split is based on UTC (ok for the main session) - needs rework for extended hours
    by preprocessing ticks_df and introducing a session column.

    When a trade is split into multiple bars it is counted as a trade in each of those bars.
    Other option: the trade count could be proportionally distributed by weight (0.2 to the 1st bar, 0.8 to the 2nd bar) - but this is not implemented yet.
    https://chatgpt.com/c/ff4802d9-22a2-4b72-8ab7-97a91e7a515f
    """
    ohlcv_bars = []
    remaining_volume = volume_per_bar

    # Initialize bar values based on the first tick to avoid uninitialized values
    open_price = ticks[0, 1]
    high_price = ticks[0, 1]
    low_price = ticks[0, 1]
    close_price = ticks[0, 1]
    volume = 0
    trades_count = 0
    current_day = np.floor(ticks[0, 0] / 86400)  # Calculate the initial day from the first tick timestamp
    bar_time = ticks[0, 0]  # Initialize bar time with the time of the first tick
    buy_volume = 0  # Volume of buy trades
    sell_volume = 0  # Volume of sell trades
    prev_price = ticks[0, 1]  # Initialize previous price for the first tick

    for tick in ticks:
        tick_time = tick[0]
        price = tick[1]
        tick_volume = tick[2]
        tick_day = np.floor(tick_time / 86400)  # Calculate the day of the current tick

        # Check if the new tick is from a different day, then close the current bar
        if tick_day != current_day:
            if trades_count > 0:
                ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])
            # Reset for the new day using the current tick data
            open_price = price
            high_price = price
            low_price = price
            close_price = price
            volume = 0
            trades_count = 0
            remaining_volume = volume_per_bar
            current_day = tick_day
            bar_time = tick_time  # Update bar time to the current tick time
            buy_volume = 0
            sell_volume = 0
            # Reset previous tick price (the imbalance is calculated from the start of each day)
            prev_price = price

        # Start a new bar if needed because of the volume
        while tick_volume > 0:
            if tick_volume < remaining_volume:
                # Add the entire tick to the current bar
                high_price = max(high_price, price)
                low_price = min(low_price, price)
                close_price = price
                volume += tick_volume
                remaining_volume -= tick_volume
                trades_count += 1

                # Update buy and sell volumes
                if price > prev_price:
                    buy_volume += tick_volume
                elif price < prev_price:
                    sell_volume += tick_volume

                tick_volume = 0
            else:
                # Fill the current bar and continue with a new bar
                volume_to_add = remaining_volume
                volume += volume_to_add
                tick_volume -= volume_to_add
                trades_count += 1

                # Update buy and sell volumes
                if price > prev_price:
                    buy_volume += volume_to_add
                elif price < prev_price:
                    sell_volume += volume_to_add

                # Append the completed bar to the list
                ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])

                # Reset bar values for the new bar using the current tick data
                open_price = price
                high_price = price
                low_price = price
                close_price = price
                volume = 0
                trades_count = 0
                remaining_volume = volume_per_bar
                buy_volume = 0
                sell_volume = 0

                # Increment bar time if splitting a trade
                if tick_volume > 0:  # If there's remaining volume in the trade, set bar time slightly later
                    bar_time = tick_time + 1e-6
                else:
                    bar_time = tick_time  # Otherwise, set bar time to the tick time

        prev_price = price

    # Add the last bar if it contains any trades
    if trades_count > 0:
        ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])

    return np.array(ohlcv_bars)

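# --- Buy/sell volume classification (a sketch, not part of the original module) ---
# The volume and time aggregators above use a simple tick rule: a trade's volume counts as buy
# volume when its price is above the previous trade's price, as sell volume when below, and is
# left unclassified when the price is unchanged. A minimal standalone illustration:
def _example_tick_rule(prices, sizes):
    buy = sell = 0.0
    prev = prices[0]
    for price, size in zip(prices, sizes):
        if price > prev:
            buy += size
        elif price < prev:
            sell += size
        prev = price
    return buy, sell  # e.g. prices [10, 10.01, 10.0, 10.0], sizes [100]*4 -> (100.0, 100.0)
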
@jit(nopython=True)
def generate_time_bars_nb(ticks, resolution):
    # Initialize the start and end time
    start_time = np.floor(ticks[0, 0] / resolution) * resolution
    end_time = np.floor(ticks[-1, 0] / resolution) * resolution

    # # Calculate number of bars
    # num_bars = int((end_time - start_time) // resolution + 1)

    # Using a list to append data only when trades exist
    ohlcv_bars = []

    # Variables to track the current bar
    current_bar_index = -1
    open_price = 0
    high_price = -np.inf
    low_price = np.inf
    close_price = 0
    volume = 0
    trades_count = 0
    vwap_cum_volume_price = 0  # Cumulative volume * price
    cum_volume = 0  # Cumulative volume for VWAP
    buy_volume = 0  # Volume of buy trades
    sell_volume = 0  # Volume of sell trades
    prev_price = ticks[0, 1]  # Initialize previous price for the first tick
    prev_day = np.floor(ticks[0, 0] / 86400)  # Calculate the initial day from the first tick timestamp

    for tick in ticks:
        curr_time = tick[0]  # updated time
        tick_time = np.floor(tick[0] / resolution) * resolution
        price = tick[1]
        tick_volume = tick[2]
        tick_day = np.floor(tick_time / 86400)  # Calculate the day of the current tick

        # if the new tick is from a new day, reset previous tick price (calculating imbalance starts over)
        if tick_day != prev_day:
            prev_price = price
            prev_day = tick_day

        # Check if the tick belongs to a new bar
        if tick_time != start_time + current_bar_index * resolution:
            if current_bar_index >= 0 and trades_count > 0:  # Save the previous bar if trades happened
                vwap = vwap_cum_volume_price / cum_volume if cum_volume > 0 else 0
                ohlcv_bars.append([start_time + current_bar_index * resolution, open_price, high_price, low_price, close_price, volume, trades_count, curr_time, vwap, buy_volume, sell_volume])

            # Reset bar values
            current_bar_index = int((tick_time - start_time) / resolution)
            open_price = price
            high_price = price
            low_price = price
            volume = 0
            trades_count = 0
            vwap_cum_volume_price = 0
            cum_volume = 0
            buy_volume = 0
            sell_volume = 0

        # Update the OHLCV values for the current bar
        high_price = max(high_price, price)
        low_price = min(low_price, price)
        close_price = price
        volume += tick_volume
        trades_count += 1
        vwap_cum_volume_price += price * tick_volume
        cum_volume += tick_volume

        # Update buy and sell volumes
        if price > prev_price:
            buy_volume += tick_volume
        elif price < prev_price:
            sell_volume += tick_volume

        prev_price = price

    # Save the last processed bar
    if trades_count > 0:
        vwap = vwap_cum_volume_price / cum_volume if cum_volume > 0 else 0
        ohlcv_bars.append([start_time + current_bar_index * resolution, open_price, high_price, low_price, close_price, volume, trades_count, curr_time, vwap, buy_volume, sell_volume])

    return np.array(ohlcv_bars)

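# --- Time bucketing (a sketch, not part of the original module) ---
# Each tick is assigned to the bar starting at floor(t / resolution) * resolution, so with
# resolution=60 a trade at 09:30:37 falls into the 09:30:00 bar. Buckets without trades are
# skipped (no row is emitted for them), which is why the result is built as a list.
def _example_bucket_start(unix_ts: float, resolution: int = 60) -> float:
    return np.floor(unix_ts / resolution) * resolution
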
# Example usage
if __name__ == '__main__':
    pass
    # example in agg_vect.ipynb

@ -1,6 +1,7 @@
|
||||
from threading import Thread
|
||||
from threading import Thread, current_thread
|
||||
from alpaca.trading.stream import TradingStream
|
||||
from v2realbot.config import Keys
|
||||
from v2realbot.common.model import Account
|
||||
|
||||
#since Alpaca supports connecting any number of websocket instances to order updates,
#we create a separate websocket service for each running instance (otherwise we would have to create an instance per combination ACCOUNT1 - LIVE, ACCOUNT1 - PAPER, ACCOUNT2 - PAPER ..)
|
||||
@ -14,15 +15,16 @@ As Alpaca supports connecting of any number of trade updates clients
|
||||
new instance of this websocket thread is created for each strategy instance.
|
||||
"""""
|
||||
class LiveOrderUpdatesStreamer(Thread):
|
||||
def __init__(self, key: Keys, name: str) -> None:
|
||||
def __init__(self, key: Keys, name: str, account: Account) -> None:
|
||||
self.key = key
|
||||
self.account = account
|
||||
self.strategy = None
|
||||
self.client = TradingStream(api_key=key.API_KEY, secret_key=key.SECRET_KEY, paper=key.PAPER)
|
||||
Thread.__init__(self, name=name)
|
||||
|
||||
#notification dispatcher - only 1 strategy
|
||||
async def distributor(self,data):
|
||||
if self.strategy.symbol == data.order.symbol: await self.strategy.order_updates(data)
|
||||
if self.strategy.symbol == data.order.symbol: await self.strategy.order_updates(data, self.account)
|
||||
|
||||
# connects callback to interface object - responses for given symbol are routed to interface callback
|
||||
def connect_callback(self, st):
|
||||
@ -39,6 +41,6 @@ class LiveOrderUpdatesStreamer(Thread):
|
||||
print("connect strategy first")
|
||||
return
|
||||
self.client.subscribe_trade_updates(self.distributor)
|
||||
print("*"*10, "WS Order Update Streamer started for", self.strategy.name, "*"*10)
|
||||
print("*"*10, "WS Order Update Streamer started for", current_thread().name,"*"*10)
|
||||
self.client.run()
|
||||
|
||||
|
||||
@ -1,14 +1,13 @@
|
||||
from v2realbot.loader.aggregator import TradeAggregator, TradeAggregator2List, TradeAggregator2Queue
|
||||
#from v2realbot.loader.cacher import get_cached_agg_data
|
||||
from alpaca.trading.requests import GetCalendarRequest
|
||||
from alpaca.trading.client import TradingClient
|
||||
from alpaca.data.live import StockDataStream
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR, OFFLINE_MODE
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR
|
||||
from alpaca.data.enums import DataFeed
|
||||
from alpaca.data.historical import StockHistoricalDataClient
|
||||
from alpaca.data.requests import StockLatestQuoteRequest, StockBarsRequest, StockTradesRequest
|
||||
from threading import Thread, current_thread
|
||||
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY
|
||||
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY, send_to_telegram, fetch_calendar_data
|
||||
from v2realbot.utils.tlog import tlog
|
||||
from datetime import datetime, timedelta, date
|
||||
from threading import Thread
|
||||
@ -16,6 +15,7 @@ import asyncio
|
||||
from msgpack.ext import Timestamp
|
||||
from msgpack import packb
|
||||
from pandas import to_datetime
|
||||
import gzip
|
||||
import pickle
|
||||
import os
|
||||
from rich import print
|
||||
@ -25,13 +25,15 @@ from tqdm import tqdm
|
||||
import time
|
||||
from traceback import format_exc
|
||||
from collections import defaultdict
|
||||
import requests
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
"""
|
||||
Trade offline data streamer, based on Alpaca historical data.
|
||||
"""
|
||||
class Trade_Offline_Streamer(Thread):
|
||||
#for backtests we always connect to the primary account - we only pull historical data + the calendar
|
||||
client = StockHistoricalDataClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=True)
|
||||
clientTrading = TradingClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=False)
|
||||
#clientTrading = TradingClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=False)
|
||||
def __init__(self, time_from: datetime, time_to: datetime, btdata) -> None:
|
||||
# Call the Thread class's init function
|
||||
Thread.__init__(self)
|
||||
@ -63,6 +65,35 @@ class Trade_Offline_Streamer(Thread):
|
||||
def stop(self):
|
||||
pass
|
||||
|
||||
def fetch_stock_trades(self, symbol, start, end, max_retries=5, backoff_factor=1):
|
||||
"""
|
||||
Attempts to fetch stock trades with exponential backoff. Raises an exception if all retries fail.
|
||||
|
||||
:param symbol: The stock symbol to fetch trades for.
|
||||
:param start: The start time for the trade data.
|
||||
:param end: The end time for the trade data.
|
||||
:param max_retries: Maximum number of retries.
|
||||
:param backoff_factor: Factor to determine the next sleep time.
|
||||
:return: TradesResponse object.
|
||||
:raises: ConnectionError if all retries fail.
|
||||
"""
|
||||
stockTradeRequest = StockTradesRequest(symbol_or_symbols=symbol, start=start, end=end)
|
||||
last_exception = None
|
||||
|
||||
for attempt in range(max_retries):
|
||||
try:
|
||||
tradesResponse = self.client.get_stock_trades(stockTradeRequest)
|
||||
print("Remote Fetch DAY DATA Complete", start, end)
|
||||
return tradesResponse
|
||||
except Exception as e:
|
||||
print(f"Attempt {attempt + 1} failed: {e}")
|
||||
last_exception = e
|
||||
time.sleep(backoff_factor * (2 ** attempt))
|
||||
|
||||
print("All attempts to fetch data failed.")
|
||||
send_to_telegram(f"Failed to fetch stock trades after {max_retries} retries. Last exception: {str(last_exception)} and {format_exc()}")
|
||||
raise ConnectionError(f"Failed to fetch stock trades after {max_retries} retries. Last exception: {str(last_exception)} and {format_exc()}")
|
||||
|
||||
# Override the run() function of Thread class
|
||||
#async removed
|
||||
def main(self):
|
||||
@ -73,6 +104,8 @@ class Trade_Offline_Streamer(Thread):
|
||||
print("call add streams to queue first")
|
||||
return 0
|
||||
|
||||
cfh.config_handler.print_current_config()
|
||||
|
||||
#iterate over the streams
|
||||
for i in self.streams:
|
||||
self.uniquesymbols.add(i.symbol)
|
||||
@ -107,24 +140,20 @@ class Trade_Offline_Streamer(Thread):
|
||||
#REFACTOR STARTS HERE
|
||||
#print(f"{self.time_from=} {self.time_to=}")
|
||||
|
||||
if OFFLINE_MODE:
|
||||
if cfh.config_handler.get_val('OFFLINE_MODE'):
|
||||
#just one day - same like time_from
|
||||
den = str(self.time_to.date())
|
||||
bt_day = Calendar(date=den,open="9:30",close="16:00")
|
||||
cal_dates = [bt_day]
|
||||
else:
|
||||
calendar_request = GetCalendarRequest(start=self.time_from,end=self.time_to)
|
||||
|
||||
#workaround for now - move into a retry function and design exception handling in general, so I get notified and it is immediately visible in the log and on the frontend
|
||||
try:
|
||||
cal_dates = self.clientTrading.get_calendar(calendar_request)
|
||||
except Exception as e:
|
||||
print("CHYBA - retrying in 4s: " + str(e) + format_exc())
|
||||
time.sleep(5)
|
||||
cal_dates = self.clientTrading.get_calendar(calendar_request)
|
||||
start_date = self.time_from # Assuming this is your start date
|
||||
end_date = self.time_to # Assuming this is your end date
|
||||
cal_dates = fetch_calendar_data(start_date, end_date)
|
||||
|
||||
#only the main session is supported for now
|
||||
|
||||
live_data_feed = cfh.config_handler.get_val('LIVE_DATA_FEED')
|
||||
|
||||
#only 1 symbol is supported for now, rework into a for loop over all symbols in symbpole
#the minimum unit for the CACHE is 1 day - and only market open to market close (extended hours not supported yet)
|
||||
for day in cal_dates:
|
||||
@ -167,9 +196,10 @@ class Trade_Offline_Streamer(Thread):
|
||||
# stream.send_cache_to_output(cache)
|
||||
# to_rem.append(stream)
|
||||
|
||||
#we only deal with the cache when backtesting a whole day
#we only deal with the cache when backtesting a whole day and have the SIP datapoint (IEX is not cached)
#otherwise we neither read from nor write to the cache
|
||||
if self.time_to >= day.close:
|
||||
|
||||
if (self.time_to >= day.close and self.time_from <= day.open) and live_data_feed == DataFeed.SIP:
|
||||
#this block is skipped if "dont_use_cache" is set
|
||||
stream_btdata = self.to_run[symbpole[0]][0]
|
||||
cache_btdata, file_btdata = stream_btdata.get_cache(day.open, day.close)
|
||||
@ -197,7 +227,7 @@ class Trade_Offline_Streamer(Thread):
|
||||
stream_main.enable_cache_output(day.open, day.close)
|
||||
|
||||
#trade daily file
|
||||
daily_file = str(symbpole[0]) + '-' + str(int(day.open.timestamp())) + '-' + str(int(day.close.timestamp())) + '.cache'
|
||||
daily_file = str(symbpole[0]) + '-' + str(int(day.open.timestamp())) + '-' + str(int(day.close.timestamp())) + '.cache.gz'
|
||||
print(daily_file)
|
||||
file_path = DATA_DIR + "/tradecache/"+daily_file
|
||||
|
||||
@ -207,23 +237,31 @@ class Trade_Offline_Streamer(Thread):
|
||||
#if start_time < trade < end_time
#we send it to the queue
#otherwise pass
|
||||
with open (file_path, 'rb') as fp:
|
||||
with gzip.open (file_path, 'rb') as fp:
|
||||
tradesResponse = pickle.load(fp)
|
||||
print("Loading from Trade CACHE", file_path)
|
||||
#daily file doesnt exist
|
||||
else:
|
||||
# TODO refactor pro zpracovani vice symbolu najednou(multithreads), nyni predpokladame pouze 1
|
||||
stockTradeRequest = StockTradesRequest(symbol_or_symbols=symbpole[0], start=day.open,end=day.close)
|
||||
tradesResponse = self.client.get_stock_trades(stockTradeRequest)
|
||||
|
||||
#implement retry mechanism
|
||||
symbol = symbpole[0] # Assuming symbpole[0] is your target symbol
|
||||
day_open = day.open # Assuming day.open is the start time
|
||||
day_close = day.close # Assuming day.close is the end time
|
||||
|
||||
tradesResponse = self.fetch_stock_trades(symbol, day_open, day_close)
|
||||
|
||||
# # TODO refactor pro zpracovani vice symbolu najednou(multithreads), nyni predpokladame pouze 1
|
||||
# stockTradeRequest = StockTradesRequest(symbol_or_symbols=symbpole[0], start=day.open,end=day.close)
|
||||
# tradesResponse = self.client.get_stock_trades(stockTradeRequest)
|
||||
print("Remote Fetch DAY DATA Complete", day.open, day.close)
|
||||
|
||||
#if it is today and the market has not closed yet, we do not save the cache
|
||||
if day.open < datetime.now().astimezone(zoneNY) < day.close:
|
||||
print("not saving trade cache, market still open today")
|
||||
#if it is today and the market has not closed yet, we do not save the cache; also do not cache when using the IEX datapoint
|
||||
if (day.open < datetime.now().astimezone(zoneNY) < day.close) or live_data_feed == DataFeed.IEX:
|
||||
print("not saving trade cache, market still open today or IEX datapoint")
|
||||
#ic(datetime.now().astimezone(zoneNY))
|
||||
#ic(day.open, day.close)
|
||||
else:
|
||||
with open(file_path, 'wb') as fp:
|
||||
with gzip.open(file_path, 'wb') as fp:
|
||||
pickle.dump(tradesResponse, fp)
|
||||
|
||||
#at this point we already have the daily data
|
||||
@ -257,7 +295,7 @@ class Trade_Offline_Streamer(Thread):
|
||||
cnt = 1
|
||||
|
||||
|
||||
for t in tqdm(tradesResponse[symbol]):
|
||||
for t in tqdm(tradesResponse[symbol], desc="Loading Trades"):
|
||||
|
||||
#since the whole day is here, we only pass on the relevant trades
#if start_time < trade < end_time
|
||||
@ -270,6 +308,9 @@ class Trade_Offline_Streamer(Thread):
|
||||
#tmp = to_datetime(t['t'], utc=True).timestamp()
|
||||
|
||||
|
||||
#occasionally a None row appeared in the response
|
||||
if t is None:
|
||||
continue
|
||||
|
||||
datum = to_datetime(t['t'], utc=True)
|
||||
|
||||
|
||||
@ -4,7 +4,7 @@
|
||||
"""
|
||||
from v2realbot.loader.aggregator import TradeAggregator2Queue
|
||||
from alpaca.data.live import StockDataStream
|
||||
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, ACCOUNT1_PAPER_FEED
|
||||
from v2realbot.config import LIVE_DATA_API_KEY, LIVE_DATA_SECRET_KEY
|
||||
from alpaca.data.historical import StockHistoricalDataClient
|
||||
from alpaca.data.requests import StockLatestQuoteRequest, StockBarsRequest, StockTradesRequest
|
||||
from threading import Thread, current_thread
|
||||
@ -12,6 +12,7 @@ from v2realbot.utils.utils import parse_alpaca_timestamp, ltp
|
||||
from datetime import datetime, timedelta
|
||||
from threading import Thread, Lock
|
||||
from msgpack import packb
|
||||
import v2realbot.utils.config_handler as cfh
|
||||
|
||||
"""
|
||||
Shared streamer (can be shared amongst concurrently running strategies)
|
||||
@ -19,9 +20,12 @@ from msgpack import packb
|
||||
by strategies
|
||||
"""
|
||||
class Trade_WS_Streamer(Thread):
|
||||
|
||||
live_data_feed = cfh.config_handler.get_val('LIVE_DATA_FEED')
|
||||
##this ws streamer is a single shared one for all strategies, i.e. we hard-code the paid data of the primary account (regardless of paper or live)
|
||||
client = StockDataStream(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=True, websocket_params={}, feed=ACCOUNT1_PAPER_FEED)
|
||||
msg = f"Realtime Websocket connection will use FEED: {live_data_feed} and credential of ACCOUNT1"
|
||||
print(msg)
|
||||
#cfh.config_handler.print_current_config()
|
||||
client = StockDataStream(LIVE_DATA_API_KEY, LIVE_DATA_SECRET_KEY, raw_data=True, websocket_params={}, feed=live_data_feed)
|
||||
#uniquesymbols = set()
|
||||
_streams = []
|
||||
#to_run = dict()
|
||||
@ -38,10 +42,23 @@ class Trade_WS_Streamer(Thread):
|
||||
return False
|
||||
|
||||
def add_stream(self, obj: TradeAggregator2Queue):
|
||||
print(Trade_WS_Streamer.msg)
|
||||
print("stav pred pridavanim", Trade_WS_Streamer._streams)
|
||||
Trade_WS_Streamer._streams.append(obj)
|
||||
if Trade_WS_Streamer.client._running is False:
|
||||
print("websocket zatim nebezi, pouze pridavame do pole")
|
||||
|
||||
#here we refresh the client (if live_data_feed changed)
|
||||
|
||||
# live_data_feed = cfh.config_handler.get_val('LIVE_DATA_FEED')
|
||||
# #po otestování přepnout jen pokud se live_data_feed změnil
|
||||
# #if live_data_feed != Trade_WS_Streamer.live_data_feed:
|
||||
# # Trade_WS_Streamer.live_data_feed = live_data_feed
|
||||
# msg = f"REFRESH OF CLIENT! Realtime Websocket connection will use FEED: {live_data_feed} and credential of ACCOUNT1"
|
||||
# print(msg)
|
||||
# #cfh.config_handler.print_current_config()
|
||||
# Trade_WS_Streamer.client = StockDataStream(LIVE_DATA_API_KEY, LIVE_DATA_SECRET_KEY, raw_data=True, websocket_params={}, feed=live_data_feed)
|
||||
|
||||
else:
|
||||
print("websocket client bezi")
|
||||
if self.symbol_exists(obj.symbol):
|
||||
@ -59,7 +76,12 @@ class Trade_WS_Streamer(Thread):
|
||||
#if it is the last item at all, stop the client from running
|
||||
if len(Trade_WS_Streamer._streams) == 0:
|
||||
print("removed last item from WS, stopping the client")
|
||||
Trade_WS_Streamer.client.stop()
|
||||
#Trade_WS_Streamer.client.stop_ws()
|
||||
#Trade_WS_Streamer.client.stop()
|
||||
#try to explicitly perform the steps to disconnect from the ws
|
||||
if Trade_WS_Streamer.client._stop_stream_queue.empty():
|
||||
Trade_WS_Streamer.client._stop_stream_queue.put_nowait({"should_stop": True})
|
||||
Trade_WS_Streamer.client._should_run = False
|
||||
return
|
||||
|
||||
if not self.symbol_exists(obj.symbol):
|
||||
|
||||
@ -1,26 +1,26 @@
|
||||
import os,sys
|
||||
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY, LOG_FILE
|
||||
os.environ["KERAS_BACKEND"] = "jax"
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY, LOG_PATH, MODEL_DIR
|
||||
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
|
||||
from datetime import datetime
|
||||
import os
|
||||
from rich import print
|
||||
from fastapi import FastAPI, Depends, HTTPException, status
|
||||
from fastapi import FastAPI, Depends, HTTPException, status, File, UploadFile, Response
|
||||
from fastapi.security import APIKeyHeader
|
||||
import uvicorn
|
||||
from uuid import UUID
|
||||
import v2realbot.controller.services as cs
|
||||
from v2realbot.utils.ilog import get_log_window
|
||||
from v2realbot.common.model import StrategyInstance, RunnerView, RunRequest, Trade, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, Bar, RunArchiveChange, TestList, ConfigItem, InstantIndicator, DataTablesRequest, AnalyzerInputs
|
||||
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, HTTPException, status, WebSocketException, Cookie, Query
|
||||
from fastapi.responses import FileResponse, StreamingResponse
|
||||
from v2realbot.common.model import RunManagerRecord, StrategyInstance, RunnerView, RunRequest, TradeView, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, Bar, RunArchiveChange, TestList, ConfigItem, InstantIndicator, DataTablesRequest, AnalyzerInputs
|
||||
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, HTTPException, status, WebSocketException, Cookie, Query, Request
|
||||
from fastapi.responses import FileResponse, StreamingResponse, JSONResponse
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from fastapi.security import HTTPBasic, HTTPBasicCredentials
|
||||
from v2realbot.enums.enums import Env, Mode
|
||||
from typing import Annotated
|
||||
import os
|
||||
import psutil
|
||||
import uvicorn
|
||||
import json
|
||||
import orjson
|
||||
from queue import Queue, Empty
|
||||
from threading import Thread
|
||||
import asyncio
|
||||
@ -35,6 +35,16 @@ from v2realbot.reporting.metricstoolsimage import generate_trading_report_image
|
||||
from traceback import format_exc
|
||||
#from v2realbot.reporting.optimizecutoffs import find_optimal_cutoff
|
||||
import v2realbot.reporting.analyzer as ci
|
||||
import shutil
|
||||
from starlette.responses import JSONResponse, HTMLResponse, FileResponse, RedirectResponse
|
||||
import mlroom
|
||||
import mlroom.utils.mlutils as ml
|
||||
from typing import List
|
||||
import v2realbot.controller.run_manager as rm
|
||||
import v2realbot.scheduler.ap_scheduler as aps
|
||||
import re
|
||||
import v2realbot.controller.configs as cf
|
||||
import v2realbot.controller.services as cs
|
||||
#from async io import Queue, QueueEmpty
|
||||
#
|
||||
# install()
|
||||
@ -66,13 +76,51 @@ def api_key_auth(api_key: str = Depends(X_API_KEY)):
|
||||
detail="Forbidden"
|
||||
)
|
||||
|
||||
def authenticate_user(credentials: HTTPBasicCredentials = Depends(HTTPBasic())):
|
||||
correct_username = "david"
|
||||
correct_password = "david"
|
||||
|
||||
if credentials.username == correct_username and credentials.password == correct_password:
|
||||
return True
|
||||
else:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="Incorrect username or password",
|
||||
headers={"WWW-Authenticate": "Basic"},
|
||||
)
|
||||
|
||||
|
||||
app = FastAPI()
|
||||
root = os.path.dirname(os.path.abspath(__file__))
|
||||
app.mount("/static", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="static")
|
||||
#app.mount("/static", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="static")
|
||||
app.mount("/media", StaticFiles(directory=str(MEDIA_DIRECTORY)), name="media")
|
||||
#app.mount("/", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="www")
|
||||
|
||||
security = HTTPBasic()
|
||||
@app.get("/static/{path:path}")
|
||||
async def static_files(request: Request, path: str, authenticated: bool = Depends(authenticate_user)):
|
||||
root = os.path.dirname(os.path.abspath(__file__))
|
||||
static_dir = os.path.join(root, 'static')
|
||||
|
||||
if not path or path == "/":
|
||||
file_path = os.path.join(static_dir, 'index.html')
|
||||
else:
|
||||
file_path = os.path.join(static_dir, path)
|
||||
|
||||
# Check if path is a directory
|
||||
if os.path.isdir(file_path):
|
||||
# If it's a directory, try to serve index.html within that directory
|
||||
index_path = os.path.join(file_path, 'index.html')
|
||||
if os.path.exists(index_path):
|
||||
return FileResponse(index_path)
|
||||
else:
|
||||
# Optionally, you can return a directory listing or a custom 404 page here
|
||||
return HTMLResponse("Directory listing not enabled.", status_code=403)
|
||||
|
||||
if not os.path.exists(file_path):
|
||||
raise HTTPException(status_code=404, detail="File not found")
|
||||
|
||||
return FileResponse(file_path)
|
||||
|
||||
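# --- Illustrative client call (a sketch, not part of the original module) ---
# The /static/{path} route above is protected by HTTP Basic auth, so a client must send
# credentials. Host, port and the credentials below are placeholders.
def _example_fetch_static(path: str = "index.html"):
    import requests
    from requests.auth import HTTPBasicAuth
    return requests.get(f"http://localhost:8000/static/{path}", auth=HTTPBasicAuth("user", "password"))
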
def get_current_username(
|
||||
credentials: Annotated[HTTPBasicCredentials, Depends(security)]
|
||||
@ -94,9 +142,9 @@ async def get_api_key(
|
||||
return session or api_key
|
||||
|
||||
#TODO predelat z Async?
|
||||
@app.get("/static")
|
||||
async def get(username: Annotated[str, Depends(get_current_username)]):
|
||||
return FileResponse("index.html")
|
||||
# @app.get("/static")
|
||||
# async def get(username: Annotated[str, Depends(get_current_username)]):
|
||||
# return FileResponse("index.html")
|
||||
|
||||
@app.websocket("/runners/{runner_id}/ws")
|
||||
async def websocket_endpoint(
|
||||
@ -245,11 +293,13 @@ def _run_stratin(stratin_id: UUID, runReq: RunRequest):
|
||||
runReq.bt_to = zoneNY.localize(runReq.bt_to)
|
||||
#if we run over test intervals or more than one day is requested - run as a batch, day by day
#in the future expose this as a flag on the FE
|
||||
if runReq.mode != Mode.LIVE and runReq.test_batch_id is not None or (runReq.bt_from.date() != runReq.bt_to.date()):
|
||||
#print(runReq)
|
||||
if runReq.mode not in [Mode.LIVE, Mode.PAPER] and (runReq.test_batch_id is not None or (runReq.bt_from is not None and runReq.bt_to is not None and runReq.bt_from.date() != runReq.bt_to.date())):
|
||||
res, id = cs.run_batch_stratin(id=stratin_id, runReq=runReq)
|
||||
else:
|
||||
if runReq.weekdays_filter is not None:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Weekday only for backtest mode with batch (not single day)")
|
||||
#not necessary for live/paper; the weekdays are simply ignored. In the future maybe add validation if weekdays are present
|
||||
#if runReq.weekdays_filter is not None:
|
||||
# raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Weekday only for backtest mode with batch (not single day)")
|
||||
res, id = cs.run_stratin(id=stratin_id, runReq=runReq)
|
||||
if res == 0: return id
|
||||
elif res < 0:
|
||||
@ -284,7 +334,7 @@ def stop_all_runners():
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Error: {res}:{id}")
|
||||
|
||||
@app.get("/tradehistory/{symbol}", dependencies=[Depends(api_key_auth)])
|
||||
def get_trade_history(symbol: str, timestamp_from: float, timestamp_to:float) -> list[Trade]:
|
||||
def get_trade_history(symbol: str, timestamp_from: float, timestamp_to:float) -> list[TradeView]:
|
||||
res, set = cs.get_trade_history(symbol, timestamp_from, timestamp_to)
|
||||
if res == 0:
|
||||
return set
|
||||
@ -325,14 +375,14 @@ def migrate():
|
||||
end_positions=row.get('end_positions'),
|
||||
end_positions_avgp=row.get('end_positions_avgp'),
|
||||
metrics=row.get('open_orders'),
|
||||
#metrics=json.loads(row.get('metrics')) if row.get('metrics') else None,
|
||||
#metrics=orjson.loads(row.get('metrics')) if row.get('metrics') else None,
|
||||
stratvars_toml=row.get('stratvars_toml')
|
||||
)
|
||||
|
||||
def get_all_archived_runners():
|
||||
conn = pool.get_connection()
|
||||
try:
|
||||
conn.row_factory = lambda c, r: json.loads(r[0])
|
||||
conn.row_factory = lambda c, r: orjson.loads(r[0])
|
||||
c = conn.cursor()
|
||||
res = c.execute(f"SELECT data FROM runner_header")
|
||||
finally:
|
||||
@ -377,7 +427,7 @@ def migrate():
|
||||
SET strat_id=?, batch_id=?, symbol=?, name=?, note=?, started=?, stopped=?, mode=?, account=?, bt_from=?, bt_to=?, strat_json=?, settings=?, ilog_save=?, profit=?, trade_count=?, end_positions=?, end_positions_avgp=?, metrics=?, stratvars_toml=?
|
||||
WHERE runner_id=?
|
||||
''',
|
||||
(str(ra.strat_id), ra.batch_id, ra.symbol, ra.name, ra.note, ra.started, ra.stopped, ra.mode, ra.account, ra.bt_from, ra.bt_to, json.dumps(ra.strat_json), json.dumps(ra.settings), ra.ilog_save, ra.profit, ra.trade_count, ra.end_positions, ra.end_positions_avgp, json.dumps(ra.metrics), ra.stratvars_toml, str(ra.id)))
|
||||
(str(ra.strat_id), ra.batch_id, ra.symbol, ra.name, ra.note, ra.started, ra.stopped, ra.mode, ra.account, ra.bt_from, ra.bt_to, orjson.dumps(ra.strat_json).decode('utf-8'), orjson.dumps(ra.settings).decode('utf-8'), ra.ilog_save, ra.profit, ra.trade_count, ra.end_positions, ra.end_positions_avgp, orjson.dumps(ra.metrics).decode('utf-8'), ra.stratvars_toml, str(ra.id)))
|
||||
|
||||
conn.commit()
|
||||
finally:
|
||||
@ -455,6 +505,16 @@ def _delete_archived_runners_byIDs(runner_ids: list[UUID]):
|
||||
elif res < 0:
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Error: {res}:{id}")
|
||||
|
||||
#get runners list based on batch_id
|
||||
@app.get("/archived_runners/batch/{batch_id}", dependencies=[Depends(api_key_auth)])
|
||||
def _get_archived_runnerslist_byBatchID(batch_id: str) -> list[UUID]:
|
||||
res, set =cs.get_archived_runnerslist_byBatchID(batch_id)
|
||||
if res == 0:
|
||||
return set
|
||||
else:
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"No data found")
|
||||
|
||||
|
||||
#delete archive runner from header and detail
|
||||
@app.delete("/archived_runners/batch/{batch_id}", dependencies=[Depends(api_key_auth)], status_code=status.HTTP_200_OK)
|
||||
def _delete_archived_runners_byBatchID(batch_id: str):
|
||||
@ -466,10 +526,11 @@ def _delete_archived_runners_byBatchID(batch_id: str):
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error not changed: {res}:{batch_id}:{id}")
|
||||
|
||||
|
||||
#WIP - TOM indicator preview from frontend
|
||||
#return indicator value for archived runner
|
||||
#WIP - TOM indicator preview from frontend
|
||||
#return indicator value for archived runner, return values list0 - bar indicators, list1 - ticks indicators
|
||||
#TBD maybe rework into a dict for clarity
|
||||
@app.put("/archived_runners/{runner_id}/previewindicator", dependencies=[Depends(api_key_auth)], status_code=status.HTTP_200_OK)
|
||||
def _preview_indicator_byTOML(runner_id: UUID, indicator: InstantIndicator) -> list[float]:
|
||||
def _preview_indicator_byTOML(runner_id: UUID, indicator: InstantIndicator) -> list[dict]:
|
||||
#maybe add name later
|
||||
res, vals = cs.preview_indicator_byTOML(id=runner_id, indicator=indicator)
|
||||
if res == 0: return vals
|
||||
@ -510,13 +571,23 @@ def _get_all_archived_runners_detail() -> list[RunArchiveDetail]:
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"No data found")
|
||||
|
||||
#get archived runners detail by id
|
||||
# @app.get("/archived_runners_detail/{runner_id}", dependencies=[Depends(api_key_auth)])
|
||||
# def _get_archived_runner_details_byID(runner_id) -> RunArchiveDetail:
|
||||
# res, set = cs.get_archived_runner_details_byID(runner_id)
|
||||
# if res == 0:
|
||||
# return set
|
||||
# else:
|
||||
# raise HTTPException(status_code=404, detail=f"No runner with id: {runner_id} a {set}")
|
||||
|
||||
#this is the variant of above that skips parsing of json and returns JSON string returned from db
|
||||
@app.get("/archived_runners_detail/{runner_id}", dependencies=[Depends(api_key_auth)])
|
||||
def _get_archived_runner_details_byID(runner_id) -> RunArchiveDetail:
|
||||
res, set = cs.get_archived_runner_details_byID(runner_id)
|
||||
def _get_archived_runner_details_byID(runner_id: UUID):
|
||||
res, data = cs.get_archived_runner_details_byID(id=runner_id, parsed=False)
|
||||
if res == 0:
|
||||
return set
|
||||
# Return the raw JSON string as a plain Response
|
||||
return Response(content=data, media_type="application/json")
|
||||
else:
|
||||
raise HTTPException(status_code=404, detail=f"No runner with id: {runner_id} a {set}")
|
||||
raise HTTPException(status_code=404, detail=f"No runner with id: {runner_id}. {data}")
|
||||
|
||||
#get archived runners detail by id
|
||||
@app.get("/archived_runners_log/{runner_id}", dependencies=[Depends(api_key_auth)])
|
||||
@ -527,30 +598,68 @@ def _get_archived_runner_log_byID(runner_id: UUID, timestamp_from: float, timest
|
||||
else:
|
||||
raise HTTPException(status_code=404, detail=f"No logs found with id: {runner_id} and between {timestamp_from} and {timestamp_to}")
|
||||
|
||||
def remove_ansi_codes(text):
|
||||
ansi_escape = re.compile(r'\x1B[@-_][0-?]*[ -/]*[@-~]')
|
||||
return ansi_escape.sub('', text)
|
||||
|
||||
# endregion
|
||||
# A simple function to read the last lines of a file
|
||||
def tail(file_path, n=10, buffer_size=1024):
|
||||
# def tail(file_path, n=10, buffer_size=1024):
|
||||
# try:
|
||||
# with open(file_path, 'rb') as f:
|
||||
# f.seek(0, 2) # Move to the end of the file
|
||||
# file_size = f.tell()
|
||||
# lines = []
|
||||
# buffer = bytearray()
|
||||
|
||||
# for i in range(file_size // buffer_size + 1):
|
||||
# read_start = max(-buffer_size * (i + 1), -file_size)
|
||||
# f.seek(read_start, 2)
|
||||
# read_size = min(buffer_size, file_size - buffer_size * i)
|
||||
# buffer[0:0] = f.read(read_size) # Prepend to buffer
|
||||
|
||||
# if buffer.count(b'\n') >= n + 1:
|
||||
# break
|
||||
|
||||
# lines = buffer.decode(errors='ignore').splitlines()[-n:]
|
||||
# lines = [remove_ansi_codes(line) for line in lines]
|
||||
# return lines
|
||||
# except Exception as e:
|
||||
# return [str(e) + format_exc()]
|
||||
|
||||
#updated version that reads lines line by line
|
||||
def tail(file_path, n=10):
|
||||
try:
|
||||
with open(file_path, 'rb') as f:
|
||||
f.seek(0, 2) # Move to the end of the file
|
||||
file_size = f.tell()
|
||||
lines = []
|
||||
buffer = bytearray()
|
||||
line = b''
|
||||
|
||||
for i in range(file_size // buffer_size + 1):
|
||||
read_start = max(-buffer_size * (i + 1), -file_size)
|
||||
f.seek(read_start, 2)
|
||||
read_size = min(buffer_size, file_size - buffer_size * i)
|
||||
buffer[0:0] = f.read(read_size) # Prepend to buffer
|
||||
f.seek(-1, 2) # Start at the last byte
|
||||
while len(lines) < n and f.tell() != 0:
|
||||
byte = f.read(1)
|
||||
if byte == b'\n':
|
||||
# Decode, remove ANSI codes, and append the line
|
||||
lines.append(remove_ansi_codes(line.decode(errors='ignore')))
|
||||
line = b''
|
||||
else:
|
||||
line = byte + line
|
||||
f.seek(-2, 1) # Move backwards by two bytes
|
||||
|
||||
if line:
|
||||
# Append any remaining line after removing ANSI codes
|
||||
lines.append(remove_ansi_codes(line.decode(errors='ignore')))
|
||||
|
||||
return lines[::-1] # Reverse the list to get the lines in correct order
|
||||
except Exception as e:
|
||||
return [str(e)]
|
||||
|
||||
if buffer.count(b'\n') >= n + 1:
|
||||
break
|
||||
|
||||
lines = buffer.decode(errors='ignore').splitlines()[-n:]
|
||||
return lines
|
||||
|
||||
@app.get("/log", dependencies=[Depends(api_key_auth)])
|
||||
def read_log(lines: int = 10):
|
||||
log_path = LOG_FILE
|
||||
def read_log(lines: int = 700, logfile: str = "strat.log"):
|
||||
log_path = LOG_PATH / logfile
|
||||
return {"lines": tail(log_path, lines)}
|
||||
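# --- Illustrative client call (a sketch, not part of the original module) ---
# The /log endpoint above requires the API key dependency; the header name "X-API-KEY",
# host and port are assumptions for the example, the key itself comes from WEB_API_KEY.
def _example_read_log(lines: int = 50):
    import requests
    headers = {"X-API-KEY": "<WEB_API_KEY>"}  # assumed header name used by api_key_auth
    return requests.get("http://localhost:8000/log", params={"lines": lines, "logfile": "strat.log"}, headers=headers)
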
|
||||
#get alpaca history bars
|
||||
@ -584,7 +693,7 @@ def _generate_report_image(runner_ids: list[UUID]):
|
||||
res, stream = generate_trading_report_image(runner_ids=runner_ids,stream=True)
|
||||
if res == 0: return StreamingResponse(stream, media_type="image/png",headers={"Content-Disposition": "attachment; filename=report.png"})
|
||||
elif res < 0:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {res}:{id}")
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {res}:{stream}")
|
||||
except Exception as e:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {str(e)}" + format_exc())
|
||||
|
||||
@ -620,7 +729,8 @@ def _generate_analysis(analyzerInputs: AnalyzerInputs):
|
||||
|
||||
if res == 0: return StreamingResponse(stream, media_type="image/png")
|
||||
elif res < 0:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {res}:{id}")
|
||||
print("Error when generating analysis: ",str(stream))
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {res}:{stream}")
|
||||
except Exception as e:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error: {str(e)}" + format_exc())
|
||||
|
||||
@ -633,7 +743,7 @@ def create_record(testlist: TestList):
|
||||
# Insert the record into the database
|
||||
conn = pool.get_connection()
|
||||
cursor = conn.cursor()
|
||||
cursor.execute("INSERT INTO test_list (id, name, dates) VALUES (?, ?, ?)", (testlist.id, testlist.name, json.dumps(testlist.dates, default=json_serial)))
|
||||
cursor.execute("INSERT INTO test_list (id, name, dates) VALUES (?, ?, ?)", (testlist.id, testlist.name, orjson.dumps(testlist.dates, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME).decode('utf-8')))
|
||||
conn.commit()
|
||||
pool.release_connection(conn)
|
||||
return testlist
|
||||
@ -649,7 +759,7 @@ def get_testlists():
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"No data found")
|
||||
|
||||
# API endpoint to retrieve a single record by ID
|
||||
@app.get('/testlists/{record_id}')
|
||||
@app.get('/testlists/{record_id}', dependencies=[Depends(api_key_auth)])
|
||||
def get_testlist(record_id: str):
|
||||
res, testlist = cs.get_testlist_byID(record_id=record_id)
|
||||
|
||||
@ -659,7 +769,7 @@ def get_testlist(record_id: str):
|
||||
raise HTTPException(status_code=404, detail='Record not found')
|
||||
|
||||
# API endpoint to update a record
|
||||
@app.put('/testlists/{record_id}')
|
||||
@app.put('/testlists/{record_id}', dependencies=[Depends(api_key_auth)])
|
||||
def update_testlist(record_id: str, testlist: TestList):
|
||||
# Check if the record exists
|
||||
conn = pool.get_connection()
|
||||
@ -671,7 +781,7 @@ def update_testlist(record_id: str, testlist: TestList):
|
||||
raise HTTPException(status_code=404, detail='Record not found')
|
||||
|
||||
# Update the record in the database
|
||||
cursor.execute("UPDATE test_list SET name = ?, dates = ? WHERE id = ?", (testlist.name, json.dumps(testlist.dates, default=json_serial), record_id))
|
||||
cursor.execute("UPDATE test_list SET name = ?, dates = ? WHERE id = ?", (testlist.name, orjson.dumps(testlist.dates, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME).decode('utf-8'), record_id))
|
||||
conn.commit()
|
||||
pool.release_connection(conn)
|
||||
|
||||
@ -679,7 +789,7 @@ def update_testlist(record_id: str, testlist: TestList):
|
||||
return testlist
|
||||
|
||||
# API endpoint to delete a record
|
||||
@app.delete('/testlists/{record_id}')
|
||||
@app.delete('/testlists/{record_id}', dependencies=[Depends(api_key_auth)])
|
||||
def delete_testlist(record_id: str):
|
||||
# Check if the record exists
|
||||
conn = pool.get_connection()
|
||||
@ -702,7 +812,7 @@ def delete_testlist(record_id: str):
|
||||
# Get all config items
|
||||
@app.get("/config-items/", dependencies=[Depends(api_key_auth)])
|
||||
def get_all_items() -> list[ConfigItem]:
|
||||
res, sada = cs.get_all_config_items()
|
||||
res, sada = cf.get_all_config_items()
|
||||
if res == 0:
|
||||
return sada
|
||||
else:
|
||||
@ -712,7 +822,7 @@ def get_all_items() -> list[ConfigItem]:
|
||||
# Get a config item by ID
|
||||
@app.get("/config-items/{item_id}", dependencies=[Depends(api_key_auth)])
|
||||
def get_item(item_id: int)-> ConfigItem:
|
||||
res, sada = cs.get_config_item_by_id(item_id)
|
||||
res, sada = cf.get_config_item_by_id(item_id)
|
||||
if res == 0:
|
||||
return sada
|
||||
else:
|
||||
@ -721,7 +831,7 @@ def get_item(item_id: int)-> ConfigItem:
|
||||
# Get a config item by Name
|
||||
@app.get("/config-items-by-name/", dependencies=[Depends(api_key_auth)])
|
||||
def get_item(item_name: str)-> ConfigItem:
|
||||
res, sada = cs.get_config_item_by_name(item_name)
|
||||
res, sada = cf.get_config_item_by_name(item_name)
|
||||
if res == 0:
|
||||
return sada
|
||||
else:
|
||||
@ -730,7 +840,7 @@ def get_item(item_name: str)-> ConfigItem:
|
||||
# Create a new config item
|
||||
@app.post("/config-items/", dependencies=[Depends(api_key_auth)], status_code=status.HTTP_200_OK)
|
||||
def create_item(config_item: ConfigItem) -> ConfigItem:
|
||||
res, sada = cs.create_config_item(config_item)
|
||||
res, sada = cf.create_config_item(config_item)
|
||||
if res == 0: return sada
|
||||
else:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error not created: {res}: {sada}")
|
||||
@ -739,11 +849,11 @@ def create_item(config_item: ConfigItem) -> ConfigItem:
|
||||
# Update a config item by ID
|
||||
@app.put("/config-items/{item_id}", dependencies=[Depends(api_key_auth)])
|
||||
def update_item(item_id: int, config_item: ConfigItem) -> ConfigItem:
|
||||
res, sada = cs.get_config_item_by_id(item_id)
|
||||
res, sada = cf.get_config_item_by_id(item_id)
|
||||
if res != 0:
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"No data found")
|
||||
|
||||
res, sada = cs.update_config_item(item_id, config_item)
|
||||
res, sada = cf.update_config_item(item_id, config_item)
|
||||
if res == 0: return sada
|
||||
else:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error not updated: {res}:{item_id}")
|
||||
@ -752,17 +862,189 @@ def update_item(item_id: int, config_item: ConfigItem) -> ConfigItem:
|
||||
# Delete a config item by ID
|
||||
@app.delete("/config-items/{item_id}", dependencies=[Depends(api_key_auth)])
|
||||
def delete_item(item_id: int) -> dict:
|
||||
res, sada = cs.get_config_item_by_id(item_id)
|
||||
res, sada = cf.get_config_item_by_id(item_id)
|
||||
if res != 0:
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"No data found")
|
||||
|
||||
res, sada = cs.delete_config_item(item_id)
|
||||
res, sada = cf.delete_config_item(item_id)
|
||||
if res == 0: return sada
|
||||
else:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error not deleted: {res}:{item_id}")
|
||||
|
||||
# endregion
|
||||
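All of the config-item routes above are protected by the `api_key_auth` dependency. A minimal client-side sketch; the `X-API-Key` header name, base URL, and the ConfigItem field names are assumptions, not taken from the codebase:

```python
import requests

BASE_URL = "http://localhost:8000"           # assumed host/port
HEADERS = {"X-API-Key": "your-web-api-key"}  # assumed header name - check api_key_auth

# Create a config item, then read it back by name.
item = {"item_name": "max_positions", "json_data": "{\"value\": 3}"}  # hypothetical fields
created = requests.post(f"{BASE_URL}/config-items/", json=item, headers=HEADERS)
created.raise_for_status()

fetched = requests.get(f"{BASE_URL}/config-items-by-name/",
                       params={"item_name": "max_positions"}, headers=HEADERS)
print(fetched.json())
```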
|
||||
# region scheduler
|
||||
# 1. Fetch All RunManagerRecords
|
||||
@app.get("/run_manager_records/", dependencies=[Depends(api_key_auth)], response_model=List[RunManagerRecord])
|
||||
#TODO consider extending the output with strat_status (running/stopped)
|
||||
def get_all_run_manager_records():
|
||||
result, records = rm.fetch_all_run_manager_records()
|
||||
if result != 0:
|
||||
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail="Error fetching records")
|
||||
return records
|
||||
|
||||
# 2. Fetch RunManagerRecord by ID
|
||||
@app.get("/run_manager_records/{record_id}", dependencies=[Depends(api_key_auth)], response_model=RunManagerRecord)
|
||||
#TODO consider extending the output with strat_status (running/stopped)
|
||||
def get_run_manager_record(record_id: UUID):
|
||||
result, record = rm.fetch_run_manager_record_by_id(record_id)
|
||||
if result == -2: # Record not found
|
||||
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Record not found")
|
||||
elif result != 0:
|
||||
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail="Error fetching record")
|
||||
return record
|
||||
|
||||
# 3. Update RunManagerRecord
|
||||
@app.patch("/run_manager_records/{record_id}", dependencies=[Depends(api_key_auth)], status_code=status.HTTP_200_OK)
|
||||
def update_run_manager_record(record_id: UUID, update_data: RunManagerRecord):
|
||||
#make dates timezone-aware (zoneNY)
|
||||
# if update_data.valid_from is not None:
|
||||
# update_data.valid_from = zoneNY.localize(update_data.valid_from)
|
||||
# if update_data.valid_to is not None:
|
||||
# update_data.valid_to = zoneNY.localize(update_data.valid_to)
|
||||
result, message = rm.update_run_manager_record(record_id, update_data)
|
||||
if result == -2: # Update failed
|
||||
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=message)
|
||||
elif result != 0:
|
||||
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Error during update {result} {message}")
|
||||
return {"message": "Record updated successfully"}
|
||||
|
||||
# 4. Delete RunManagerRecord
|
||||
@app.delete("/run_manager_records/{record_id}", dependencies=[Depends(api_key_auth)], status_code=status.HTTP_200_OK)
|
||||
def delete_run_manager_record(record_id: UUID):
|
||||
result, message = rm.delete_run_manager_record(record_id)
|
||||
if result == -2: # Delete failed
|
||||
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=message)
|
||||
elif result != 0:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error during deletion {result} {message}")
|
||||
return {"message": "Record deleted successfully"}
|
||||
|
||||
@app.post("/run_manager_records/", status_code=status.HTTP_201_CREATED)
|
||||
def create_run_manager_record(new_record: RunManagerRecord, api_key_auth: Depends = Depends(api_key_auth)):
|
||||
#make dates timezone-aware - convert to zoneNY
|
||||
# if new_record.valid_from is not None:
|
||||
# new_record.valid_from = zoneNY.localize(new_record.valid_from)
|
||||
# if new_record.valid_to is not None:
|
||||
# new_record.valid_to = zoneNY.localize(new_record.valid_to)
|
||||
|
||||
result, record_id = rm.add_run_manager_record(new_record)
|
||||
if result != 0:
|
||||
raise HTTPException(status_code=status.HTTP_406_NOT_ACCEPTABLE, detail=f"Error during record creation: {result} {record_id}")
|
||||
return {"id": record_id}
|
||||
# endregion
|
||||
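The scheduler endpoints above all repeat the same `(result, payload)` convention: `0` means success, `-2` maps to a client error, anything else to a server error. A small helper in that spirit could collapse the branching; this is a sketch only, not part of the codebase, and the exact status codes per endpoint differ slightly:

```python
from fastapi import HTTPException, status

def unwrap(result: int, payload):
    """Translate the project's (result, payload) tuples into a response.

    Sketch of the convention used by the run_manager_records endpoints:
    0 -> return payload, -2 -> 400 with the service message, else -> 500.
    """
    if result == 0:
        return payload
    if result == -2:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(payload))
    raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                        detail=f"Error {result}: {payload}")

# Hypothetical usage: return unwrap(*rm.fetch_run_manager_record_by_id(record_id))
```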
|
||||
#model section
|
||||
#UPLOAD MODEL
|
||||
@app.post("/model/upload_model", dependencies=[Depends(api_key_auth)])
|
||||
async def _upload_model(file: UploadFile = File(...)):
|
||||
# Specify the directory to save the file
|
||||
#save_directory = DATA_DIR+'/models/'
|
||||
save_directory = MODEL_DIR
|
||||
|
||||
os.makedirs(save_directory, exist_ok=True)
|
||||
|
||||
# Extract just the filename, discarding any path information
|
||||
base_filename = os.path.basename(file.filename)
|
||||
file_path = os.path.join(save_directory, base_filename)
|
||||
|
||||
# Save the uploaded file
|
||||
with open(file_path, "wb") as buffer:
|
||||
while True:
|
||||
data = await file.read(1024) # Read in chunks
|
||||
if not data:
|
||||
break
|
||||
buffer.write(data)
|
||||
|
||||
print(f"saved to {file_path=} file:{base_filename=}")
|
||||
|
||||
return {"filename": base_filename, "location": file_path}
|
||||
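A matching client-side call for the chunked upload endpoint above might look like the following sketch; the `X-API-Key` header name and the localhost base URL are assumptions:

```python
import os
import requests

def upload_model(path: str, base_url: str = "http://localhost:8000",
                 api_key: str = "your-web-api-key") -> dict:
    """Stream a local model file to /model/upload_model and return its JSON reply."""
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{base_url}/model/upload_model",
            files={"file": (os.path.basename(path), fh)},  # multipart field must be named "file"
            headers={"X-API-Key": api_key},                # assumed header name
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()  # {"filename": ..., "location": ...}
```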
|
||||
#LIST MODELS
|
||||
@app.get("/model/list-models", dependencies=[Depends(api_key_auth)])
|
||||
def list_models():
|
||||
#models_directory = DATA_DIR + '/models/'
|
||||
models_directory = MODEL_DIR
|
||||
# Ensure the directory exists
|
||||
if not os.path.exists(models_directory):
|
||||
return {"error": "Models directory does not exist."}
|
||||
|
||||
# List all files in the directory
|
||||
model_files = sorted(os.listdir(models_directory))
|
||||
return {"models": model_files}
|
||||
|
||||
@app.post("/model/upload-model", dependencies=[Depends(api_key_auth)])
|
||||
def upload_model(file: UploadFile = File(...)):
|
||||
if not file:
|
||||
raise HTTPException(status_code=400, detail="No file uploaded.")
|
||||
file_location = os.path.join(MODEL_DIR, file.filename)
|
||||
with open(file_location, "wb+") as file_object:
|
||||
shutil.copyfileobj(file.file, file_object)
|
||||
|
||||
return JSONResponse(status_code=200, content={"message": "Model uploaded successfully."})
|
||||
|
||||
@app.delete("/model/delete-model/{model_name}", dependencies=[Depends(api_key_auth)])
|
||||
def delete_model(model_name: str):
|
||||
model_path = os.path.join(MODEL_DIR, model_name)
|
||||
if os.path.exists(model_path):
|
||||
os.remove(model_path)
|
||||
return {"message": "Model deleted successfully."}
|
||||
else:
|
||||
raise HTTPException(status_code=404, detail="Model not found.")
|
||||
|
||||
@app.get("/model/download-model/{model_name}", dependencies=[Depends(api_key_auth)])
|
||||
def download_model(model_name: str):
|
||||
model_path = os.path.join(MODEL_DIR, model_name)
|
||||
if os.path.exists(model_path):
|
||||
return FileResponse(path=model_path, filename=model_name, media_type='application/octet-stream')
|
||||
else:
|
||||
raise HTTPException(status_code=404, detail="Model not found.")
|
||||
|
||||
@app.get("/model/metadata/{model_name}", dependencies=[Depends(api_key_auth)])
|
||||
def get_metadata(model_name: str):
|
||||
try:
|
||||
#load only in cfg-only mode
|
||||
model_instance = ml.load_model(file=model_name, directory=MODEL_DIR, cfg_only = True)
|
||||
try:
|
||||
metadata = model_instance.metadata
|
||||
except AttributeError:
|
||||
metadata = model_instance.__dict__
|
||||
del metadata["scalerX"]
|
||||
del metadata["scalerY"]
|
||||
del metadata["model"]
|
||||
except Exception as e:
|
||||
metadata = "No Metada" + str(e) + format_exc()
|
||||
return metadata
|
||||
except Exception as e:
|
||||
raise HTTPException(status_code=404, detail="Model not found."+str(e) + format_exc())
|
||||
|
||||
# model_path = os.path.join(MODEL_DIR, model_name)
|
||||
# if os.path.exists(model_path):
|
||||
# # Example: Retrieve metadata from a file or generate it
|
||||
# metadata = {
|
||||
# "name": model_name,
|
||||
# "size": os.path.getsize(model_path),
|
||||
# "last_modified": os.path.getmtime(model_path),
|
||||
# # ... other metadata fields ...
|
||||
# }
|
||||
@app.get("/system-info")
|
||||
def get_system_info():
|
||||
"""Get system info, e.g. disk free space, used percentage ... """
|
||||
disk_total = round(psutil.disk_usage('/').total / 1024**3, 1)
|
||||
disk_used = round(psutil.disk_usage('/').used / 1024**3, 1)
|
||||
disk_free = round(psutil.disk_usage('/').free / 1024**3, 1)
|
||||
disk_used_percentage = round(psutil.disk_usage('/').percent, 1)
|
||||
# memory_total = round(psutil.virtual_memory().total / 1024**3, 1)
|
||||
# memory_perc = round(psutil.virtual_memory().percent, 1)
|
||||
# cpu_time_user = round(psutil.cpu_times().user,1)
|
||||
# cpu_time_system = round(psutil.cpu_times().system,1)
|
||||
# cpu_time_idle = round(psutil.cpu_times().idle,1)
|
||||
# network_sent = round(psutil.net_io_counters().bytes_sent / 1024**3, 6)
|
||||
# network_recv = round(psutil.net_io_counters().bytes_recv / 1024**3, 6)
|
||||
return {"disk_space": {"total": disk_total, "used": disk_used, "free" : disk_free, "used_percentage" : disk_used_percentage},
|
||||
# "memory": {"total": memory_total, "used_percentage": memory_perc},
|
||||
# "cpu_time" : {"user": cpu_time_user, "system": cpu_time_system, "idle": cpu_time_idle},
|
||||
# "network": {"sent": network_sent, "received": network_recv}
|
||||
}
|
||||
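If the commented-out memory, CPU, and network metrics were ever re-enabled, the psutil calls would follow the same pattern as the disk figures above. A standalone sketch of that fuller payload, purely illustrative rather than the endpoint's current behaviour:

```python
import psutil

def system_snapshot() -> dict:
    """Collect the same style of rounded system metrics as /system-info."""
    disk = psutil.disk_usage('/')
    mem = psutil.virtual_memory()
    net = psutil.net_io_counters()
    return {
        "disk_space": {
            "total": round(disk.total / 1024**3, 1),
            "used": round(disk.used / 1024**3, 1),
            "free": round(disk.free / 1024**3, 1),
            "used_percentage": round(disk.percent, 1),
        },
        "memory": {"total_gb": round(mem.total / 1024**3, 1),
                   "used_percentage": round(mem.percent, 1)},
        "network": {"sent_gb": round(net.bytes_sent / 1024**3, 6),
                    "received_gb": round(net.bytes_recv / 1024**3, 6)},
    }

print(system_snapshot())
```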
|
||||
# Thread function to insert data from the queue into the database
|
||||
def insert_queue2db():
|
||||
@ -777,7 +1059,7 @@ def insert_queue2db():
|
||||
c = insert_conn.cursor()
|
||||
insert_data = []
|
||||
for i in loglist:
|
||||
row = (str(runner_id), i["time"], json.dumps(i, default=json_serial))
|
||||
row = (str(runner_id), i["time"], orjson.dumps(i, default=json_serial, option=orjson.OPT_PASSTHROUGH_DATETIME|orjson.OPT_NON_STR_KEYS).decode('utf-8'))
|
||||
insert_data.append(row)
|
||||
c.executemany("INSERT INTO runner_logs VALUES (?,?,?)", insert_data)
|
||||
insert_conn.commit()
|
||||
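The insert thread drains a log queue and writes each runner's accumulated log records with a single `executemany`. A self-contained sketch of that drain-and-batch pattern; the queue contents, table schema, and names here are assumptions, and the datetime handling done by `json_serial`/orjson in the real code is omitted:

```python
import json
import queue
import sqlite3

log_queue: "queue.Queue[tuple[str, list[dict]]]" = queue.Queue()
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runner_logs (runner_id TEXT, time REAL, data TEXT)")

def drain_once() -> None:
    """Take one (runner_id, loglist) item off the queue and batch-insert it."""
    runner_id, loglist = log_queue.get()
    rows = [(runner_id, rec["time"], json.dumps(rec)) for rec in loglist]
    conn.executemany("INSERT INTO runner_logs VALUES (?,?,?)", rows)
    conn.commit()

log_queue.put(("runner-1", [{"time": 1.0, "msg": "filled"}, {"time": 2.0, "msg": "closed"}]))
drain_once()
print(conn.execute("SELECT COUNT(*) FROM runner_logs").fetchone())  # (2,)
```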
@ -790,6 +1072,9 @@ def insert_queue2db():
|
||||
sleep(1) # You can adjust the sleep duration
|
||||
else:
|
||||
raise # If it's another error, raise it
|
||||
except Exception as e:
|
||||
print("ERROR INSERT LOGQUEUE MODULE:" + str(e)+format_exc())
|
||||
print(data)
|
||||
|
||||
#join - wait for all runners to finish
|
||||
for i in cs.db.runners:
|
||||
@ -802,15 +1087,25 @@ if __name__ == "__main__":
|
||||
insert_thread = Thread(target=insert_queue2db)
|
||||
insert_thread.start()
|
||||
|
||||
#attach debugger to be able to debug scheduler jobs (run in separate threads)
|
||||
# debugpy.listen(('localhost', 5678))
|
||||
# print("Waiting for debugger to attach...")
|
||||
# debugpy.wait_for_client() # Script will pause here until debugger is attached
|
||||
|
||||
#init scheduled tasks from schedule table
|
||||
#Add APS scheduler job refresh
|
||||
res, result = aps.initialize_jobs()
|
||||
if res < 0:
|
||||
#raise exception
|
||||
raise Exception(f"Error {res} initializing APS jobs, error {result}")
|
||||
|
||||
uvicorn.run("__main__:app", host="0.0.0.0", port=8000, reload=False)
|
||||
except Exception as e:
|
||||
print("Error intializing app: " + str(e) + format_exc())
|
||||
aps.scheduler.shutdown(wait=False)
|
||||
finally:
|
||||
print("closing insert_conn connection")
|
||||
insert_conn.close()
|
||||
print("closed")
|
||||
##TODO add the option to run on PAPER and LIVE per strategy
|
||||
|
||||
# check whether the order notification websocket can run on both accounts at the same time
|
||||
# if not, I could use live data only
|
||||
# and for paper trading (live interface) and notifications I would use a separate paper account
|
||||
# that should probably work
|
||||
|
||||
|
||||
@ -1,389 +0,0 @@
|
||||
# from sklearn.preprocessing import StandardScaler
|
||||
# # from keras.models import Sequential
|
||||
# from v2realbot.enums.enums import PredOutput, Source, TargetTRFM
|
||||
# from v2realbot.config import DATA_DIR
|
||||
# from joblib import dump
|
||||
# # import v2realbot.ml.mlutils as mu
|
||||
# from v2realbot.utils.utils import slice_dict_lists
|
||||
# import numpy as np
|
||||
# from copy import deepcopy
|
||||
# import v2realbot.controller.services as cs
|
||||
# #Basic classes for machine learning
|
||||
# #holds the model and its basic settings
|
||||
|
||||
# #Sample Data
|
||||
# sample_bars = {
|
||||
# 'time': [1, 2, 3, 4, 5,6,7,8,9,10,11,12,13,14,15],
|
||||
# 'high': [10, 11, 12, 13, 14,10, 11, 12, 13, 14,10, 11, 12, 13, 14],
|
||||
# 'low': [8, 9, 7, 6, 8,8, 9, 7, 6, 8,8, 9, 7, 6, 8],
|
||||
# 'volume': [1000, 1200, 900, 1100, 1300,1000, 1200, 900, 1100, 1300,1000, 1200, 900, 1100, 1300],
|
||||
# 'close': [9, 10, 11, 12, 13,9, 10, 11, 12, 13,9, 10, 11, 12, 13],
|
||||
# 'open': [9, 10, 8, 8, 8,9, 10, 8, 8, 8,9, 10, 8, 8, 8],
|
||||
# 'resolution': [1, 1, 1, 1, 1,1, 1, 1, 1, 1,1, 1, 1, 1, 1]
|
||||
# }
|
||||
|
||||
# sample_indicators = {
|
||||
# 'time': [1, 2, 3, 4, 5,6,7,8,9,10,11,12,13,14,15],
|
||||
# 'fastslope': [90, 95, 100, 110, 115,90, 95, 100, 110, 115,90, 95, 100, 110, 115],
|
||||
# 'fsdelta': [90, 95, 100, 110, 115,90, 95, 100, 110, 115,90, 95, 100, 110, 115],
|
||||
# 'fastslope2': [90, 95, 100, 110, 115,90, 95, 100, 110, 115,90, 95, 100, 110, 115],
|
||||
# 'ema': [1000, 1200, 900, 1100, 1300,1000, 1200, 900, 1100, 1300,1000, 1200, 900, 1100, 1300]
|
||||
# }
|
||||
|
||||
# #class that holds an ML model instance and its configuration
|
||||
# #also used as a tool to prepare data for training and prediction
|
||||
# #note: the class does not hold the data itself, only the configuration and the model itself
|
||||
# class ModelML:
|
||||
# def __init__(self, name: str,
|
||||
# pred_output: PredOutput,
|
||||
# bar_features: list,
|
||||
# ind_features: list,
|
||||
# input_sequences: int,
|
||||
# target: str,
|
||||
# target_reference: str,
|
||||
# train_target_steps: int, #train
|
||||
# train_target_transformation: TargetTRFM, #train
|
||||
# train_epochs: int, #train
|
||||
# train_runner_ids: list = None, #train
|
||||
# train_batch_id: str = None, #train
|
||||
# version: str = "1",
|
||||
# note : str = None,
|
||||
# use_bars: bool = True,
|
||||
# train_remove_cross_sequences: bool = False, #train
|
||||
# #standardne StandardScaler
|
||||
# scalerX: StandardScaler = StandardScaler(),
|
||||
# scalerY: StandardScaler = StandardScaler(),
|
||||
# model, #Sequential = Sequential()
|
||||
# )-> None:
|
||||
|
||||
# self.name = name
|
||||
# self.version = version
|
||||
# self.note = note
|
||||
# self.pred_output: PredOutput = pred_output
|
||||
# #model muze byt take bez barů, tzn. jen indikatory
|
||||
# self.use_bars = use_bars
|
||||
# #zajistime poradi
|
||||
# bar_features.sort()
|
||||
# ind_features.sort()
|
||||
# self.bar_features = bar_features
|
||||
# self.ind_features = ind_features
|
||||
# if (train_runner_ids is None or len(train_runner_ids) == 0) and train_batch_id is None:
|
||||
# raise Exception("train_runner_ids nebo train_batch_id musi byt vyplnene")
|
||||
# self.train_runner_ids = train_runner_ids
|
||||
# self.train_batch_id = train_batch_id
|
||||
# #target cílový sloupec, který je používám přímo nebo transformován na binary
|
||||
# self.target = target
|
||||
# self.target_reference = target_reference
|
||||
# self.train_target_steps = train_target_steps
|
||||
# self.train_target_transformation = train_target_transformation
|
||||
# self.input_sequences = input_sequences
|
||||
# self.train_epochs = train_epochs
|
||||
# #keep cross sequences between runners
|
||||
# self.train_remove_cross_sequences = train_remove_cross_sequences
|
||||
# self.scalerX = scalerX
|
||||
# self.scalerY = scalerY
|
||||
# self.model = model
|
||||
|
||||
# def save(self):
|
||||
# filename = mu.get_full_filename(self.name,self.version)
|
||||
# dump(self, filename)
|
||||
# print(f"model {self.name} save")
|
||||
|
||||
# #create X data with features
|
||||
# def column_stack_source(self, bars, indicators, verbose = 1) -> np.array:
|
||||
# #create SOURCE DATA with features
|
||||
# # bars and indicators dictionary and features as input
|
||||
# poradi_sloupcu_inds = [feature for feature in self.ind_features if feature in indicators]
|
||||
# indicator_data = np.column_stack([indicators[feature] for feature in self.ind_features if feature in indicators])
|
||||
|
||||
# if len(bars)>0:
|
||||
# bar_data = np.column_stack([bars[feature] for feature in self.bar_features if feature in bars])
|
||||
# poradi_sloupcu_bars = [feature for feature in self.bar_features if feature in bars]
|
||||
# if verbose == 1:
|
||||
# print("poradi sloupce v source_data", str(poradi_sloupcu_bars + poradi_sloupcu_inds))
|
||||
# combined_day_data = np.column_stack([bar_data,indicator_data])
|
||||
# else:
|
||||
# combined_day_data = indicator_data
|
||||
# if verbose == 1:
|
||||
# print("poradi sloupce v source_data", str(poradi_sloupcu_inds))
|
||||
# return combined_day_data
|
||||
|
||||
# #create TARGET(Y) data
|
||||
# def column_stack_target(self, bars, indicators) -> np.array:
|
||||
# target_base = []
|
||||
# target_reference = []
|
||||
# try:
|
||||
# try:
|
||||
# target_base = bars[self.target]
|
||||
# except KeyError:
|
||||
# target_base = indicators[self.target]
|
||||
# try:
|
||||
# target_reference = bars[self.target_reference]
|
||||
# except KeyError:
|
||||
# target_reference = indicators[self.target_reference]
|
||||
# except KeyError:
|
||||
# pass
|
||||
# target_day_data = np.column_stack([target_base, target_reference])
|
||||
# return target_day_data
|
||||
|
||||
# def load_runners_as_list(self, runner_id_list = None, batch_id = None):
|
||||
# """Loads all runners data (bars, indicators) for given runners into list of dicts.
|
||||
|
||||
# List of runners/train_batch_id may be provided, or self.train_runner_ids/train_batch_id is taken instead.
|
||||
|
||||
# Returns:
|
||||
# tuple (barslist, indicatorslist,) - lists with dictionaries for each runner
|
||||
# """
|
||||
# if runner_id_list is not None:
|
||||
# runner_ids = runner_id_list
|
||||
# print("loading runners for ",str(runner_id_list))
|
||||
# elif batch_id is not None:
|
||||
# print("Loading runners for train_batch_id:", batch_id)
|
||||
# res, runner_ids = cs.get_archived_runnerslist_byBatchID(batch_id)
|
||||
# elif self.train_batch_id is not None:
|
||||
# print("Loading runners for TRAINING BATCH self.train_batch_id:", self.train_batch_id)
|
||||
# res, runner_ids = cs.get_archived_runnerslist_byBatchID(self.train_batch_id)
|
||||
# #pripadne bereme z listu runneru
|
||||
# else:
|
||||
# runner_ids = self.train_runner_ids
|
||||
# print("loading runners for TRAINING runners ",str(self.train_runner_ids))
|
||||
|
||||
|
||||
# barslist = []
|
||||
# indicatorslist = []
|
||||
# ind_keys = None
|
||||
# for runner_id in runner_ids:
|
||||
# bars, indicators = mu.load_runner(runner_id)
|
||||
# print(f"runner:{runner_id}")
|
||||
# if self.use_bars:
|
||||
# barslist.append(bars)
|
||||
# print(f"bars keys {len(bars)} lng {len(bars[self.bar_features[0]])}")
|
||||
# indicatorslist.append(indicators)
|
||||
# print(f"indi keys {len(indicators)} lng {len(indicators[self.ind_features[0]])}")
|
||||
# if ind_keys is not None and ind_keys != len(indicators):
|
||||
# raise Exception("V runnerech musi byt stejny pocet indikatoru")
|
||||
# else:
|
||||
# ind_keys = len(indicators)
|
||||
|
||||
# return barslist, indicatorslist
|
||||
|
||||
# #toto nejspis rozdelit na TRAIN mod (kdy ma smysl si brat nataveni napr. remove cross)
|
||||
# def create_sequences(self, combined_data, target_data = None, remove_cross_sequences: bool = False, rows_in_day = None):
|
||||
# """Creates sequences of given length seq and optionally target N steps in the future.
|
||||
|
||||
# Returns X (source) and Y (transformed target) - also returns Y_untransformed - e.g. a reference target column for display in a chart (e.g. price)
|
||||
|
||||
# Options for target transformation:
|
||||
# - KEEPVAL (keep value as is)
|
||||
# - KEEPVAL_MOVE(keep value, move target N steps in the future)
|
||||
|
||||
# further ideas (most likely the data will be prepared in the strategy and only the KEEP variants above will be used)
|
||||
# - BINARY_prefix - a column based on a condition, the result is 0/1
|
||||
# - BINARY_TREND RISING - condition that values in the target column rise/fall over the target N steps
|
||||
# (podvarianty BINARY TREND RISING(0-1), FALLING(0-1), BOTH(-1 - ))
|
||||
# - BINARY_READY - a pre-prepared column (created in the strategy as an indicator), only needs to be shifted by target steps
|
||||
# - BINARY_READY_POSUNUTY - a pre-prepared column (already shifted by target M) - can be taken as is
|
||||
|
||||
# Args:
|
||||
# combined_data: A list of combined data.
|
||||
# target_data: A list of target data (0-target,1-target ref.column)
|
||||
# remove_cross_sequences: If to remove crossday sequences
|
||||
# rows_in_day: helper dict to remove crossday sequences
|
||||
# return_untr: whether to return untransformed reference column
|
||||
|
||||
# Returns:
|
||||
# A list of X sequences and a list of y sequences.
|
||||
# """
|
||||
|
||||
# if remove_cross_sequences is True and rows_in_day is None:
|
||||
# raise Exception("To remove crossday sequences, rows_in_day param required.")
|
||||
|
||||
# if target_data is not None and len(target_data) > 0:
|
||||
# target_data_untr = target_data[:,1]
|
||||
# target_data = target_data[:,0]
|
||||
# else:
|
||||
# target_data_untr = []
|
||||
# target_data = []
|
||||
|
||||
# X_train = []
|
||||
# y_train = []
|
||||
# y_untr = []
|
||||
# #comb data shape (4073, 13)
|
||||
# #target shape (4073, 1)
|
||||
# print("Start Sequencing")
|
||||
# #range sekvence podle toho jestli je pozadovan MOVE nebo NE
|
||||
# if self.train_target_transformation == TargetTRFM.KEEPVAL_MOVE:
|
||||
# right_offset = self.input_sequences + self.train_target_steps
|
||||
# else:
|
||||
# right_offset= self.input_sequences
|
||||
# for i in range(len(combined_data) - right_offset):
|
||||
|
||||
# #take neresime cross sekvence kdyz neni vyplneni target nebo neni vyplnena rowsinaday
|
||||
# if remove_cross_sequences is True and not self.is_same_day(i,i + right_offset, rows_in_day):
|
||||
# print(f"sekvence vyrazena. NEW Zacatek {combined_data[i, 0]} konec {combined_data[i + right_offset, 0]}")
|
||||
# continue
|
||||
|
||||
# #pridame sekvenci
|
||||
# X_train.append(combined_data[i:i + self.input_sequences])
|
||||
|
||||
# #target hodnotu bude ponecha (na radku mame jiz cilovy target)
|
||||
# #nebo vezme hodnotu z N(train_target_steps) baru vpredu a da jako target k radku
|
||||
# #je rizeno nastavenim right_offset vyse
|
||||
# if target_data is not None and len(target_data) > 0:
|
||||
# y_train.append(target_data[i + right_offset])
|
||||
|
||||
# #udela binary transformaci targetu
|
||||
# # elif self.target_transformation == TargetTRFM.BINARY_TREND_UP:
|
||||
# # #mini loop od 0 do počtu target steps - zda jsou successively rising
|
||||
# # #radeji budu resit vizualne conditional indikatorem pri priprave dat
|
||||
# # rising = False
|
||||
# # for step in range(0,self.train_target_steps):
|
||||
# # if target_data[i + self.input_sequences + step] < target_data[i + self.input_sequences + step + 1]:
|
||||
# # rising = True
|
||||
# # else:
|
||||
# # rising = False
|
||||
# # break
|
||||
# # y_train.append([1] if rising else [0])
|
||||
# # #tato zakomentovana varianta porovnava jen cenu ted a cenu na target baru
|
||||
# # #y_train.append([1] if target_data[i + self.input_sequences] < target_data[i + self.input_sequences + self.train_target_steps] else [0])
|
||||
# if target_data is not None and len(target_data) > 0:
|
||||
# y_untr.append(target_data_untr[i + self.input_sequences])
|
||||
# return np.array(X_train), np.array(y_train), np.array(y_untr)
|
||||
|
||||
# def is_same_day(self, idx_start, idx_end, rows_in_day):
|
||||
# """Helper for sequencing enables to recognize if the start/end index are from the same day.
|
||||
|
||||
# Used for sequences to remove cross runner(day) sequences.
|
||||
|
||||
# Args:
|
||||
# idx_start: Start index
|
||||
# idx_end: End index
|
||||
# rows_in_day: 1D array containing number of rows(bars,inds) for each day.
|
||||
# Cumsumed defines edges where each day ends. [10,30,60]
|
||||
|
||||
# Returns:
|
||||
# A boolean
|
||||
|
||||
# refactor to vectors if possible
|
||||
# i_b, i_e
|
||||
# podm_pole = i_b<pole and i_s >= pole
|
||||
# [10,30,60]
|
||||
# """
|
||||
# for i in rows_in_day:
|
||||
# #jde o polozku na pomezi - vyhazujeme
|
||||
# if idx_start < i and idx_end >= i:
|
||||
# return False
|
||||
# if idx_start < i and idx_end < i:
|
||||
# return True
|
||||
# return None
|
||||
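The docstring above asks whether `is_same_day` could be vectorised. One way, under the same assumptions (rows_in_day already cumsum-ed into day-end edges), is `np.searchsorted`; this is only a sketch and was never part of the module:

```python
import numpy as np

def is_same_day_vec(idx_start: np.ndarray, idx_end: np.ndarray, day_edges: np.ndarray) -> np.ndarray:
    """Return True where a sequence [idx_start, idx_end] stays within one day.

    day_edges is the cumsum of per-day row counts, e.g. [10, 30, 60]:
    a sequence crosses a day boundary when its start and end indices fall
    into different buckets of those edges.
    """
    start_day = np.searchsorted(day_edges, idx_start, side="right")
    end_day = np.searchsorted(day_edges, idx_end, side="right")
    return start_day == end_day

edges = np.array([10, 30, 60])
print(is_same_day_vec(np.array([2, 8]), np.array([7, 12]), edges))  # [ True False]
```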
|
||||
# #vytvori X a Y data z nastaveni self
|
||||
# #pro vybrane runnery stahne data, vybere sloupce dle faature a target
|
||||
# #a vrátí jako sloupce v numpy poli
|
||||
# #zaroven vraci i rows_in_day pro nasledny sekvencing
|
||||
# def load_data(self, runners_ids: list = None, batch_id: list = None, source: Source = Source.RUNNERS):
|
||||
# """Service to load data for the model. Can be used for training or for vector prediction.
|
||||
|
||||
# If input data are not provided, it will get the value from training model configuration (train_runners_ids, train_batch_id)
|
||||
|
||||
# Args:
|
||||
# runner_ids:
|
||||
# batch_id:
|
||||
# source: To load sample data.
|
||||
|
||||
# Returns:
|
||||
# source_data,target_data,rows_in_day
|
||||
# """
|
||||
# rows_in_day = []
|
||||
# indicatorslist = []
|
||||
# #bud natahneme samply
|
||||
# if source == Source.SAMPLES:
|
||||
# if self.use_bars:
|
||||
# bars = sample_bars
|
||||
# else:
|
||||
# bars = {}
|
||||
# indicators = sample_indicators
|
||||
# indicatorslist.append(indicators)
|
||||
# #nebo dotahneme pozadovane runnery
|
||||
# else:
|
||||
# #nalodujeme vsechny runnery jako listy (bud z runnerids nebo dle batchid)
|
||||
# barslist, indicatorslist = self.load_runners_as_list(runner_id_list=runners_ids, batch_id=batch_id)
|
||||
# #nerozumim
|
||||
# bl = deepcopy(barslist)
|
||||
# il = deepcopy(indicatorslist)
|
||||
# #a zmergujeme jejich data dohromady
|
||||
# bars = mu.merge_dicts(bl)
|
||||
# indicators = mu.merge_dicts(il)
|
||||
|
||||
# #zaroven vytvarime pomocny list, kde stale drzime pocet radku per day (pro nasledny sekvencing)
|
||||
# #zatim nad indikatory - v budoucnu zvazit, kdyby jelo neco jen nad barama
|
||||
# for i, val in enumerate(indicatorslist):
|
||||
# #pro prvni klic z indikatoru pocteme cnt
|
||||
# pocet = len(indicatorslist[i][self.ind_features[0]])
|
||||
# print("pro runner vkladame pocet", pocet)
|
||||
# rows_in_day.append(pocet)
|
||||
|
||||
# rows_in_day = np.array(rows_in_day)
|
||||
# rows_in_day = np.cumsum(rows_in_day)
|
||||
# print("celkove pole rows_in_day(cumsum):", rows_in_day)
|
||||
|
||||
# print("Data LOADED.")
|
||||
# print(f"number of indicators {len(indicators)}")
|
||||
# print(f"number of bar elements{len(bars)}")
|
||||
# print(f"ind list length {len(indicators['time'])}")
|
||||
# print(f"bar list length {len(bars['time'])}")
|
||||
|
||||
# self.validate_available_features(bars, indicators)
|
||||
|
||||
# print("Preparing FEATURES")
|
||||
# source_data, target_data = self.stack_bars_indicators(bars, indicators)
|
||||
# return source_data, target_data, rows_in_day
|
||||
|
||||
# def validate_available_features(self, bars, indicators):
|
||||
# for k in self.bar_features:
|
||||
# if not k in bars.keys():
|
||||
# raise Exception(f"Missing bar feature {k}")
|
||||
|
||||
# for k in self.ind_features:
|
||||
# if not k in indicators.keys():
|
||||
# raise Exception(f"Missing ind feature {k}")
|
||||
|
||||
# def stack_bars_indicators(self, bars, indicators):
|
||||
# print("Stacking dicts to numpy")
|
||||
# print("Source - X")
|
||||
# source_data = self.column_stack_source(bars, indicators)
|
||||
# print("shape", np.shape(source_data))
|
||||
# print("Target - Y", self.target)
|
||||
# target_data = self.column_stack_target(bars, indicators)
|
||||
# print("shape", np.shape(target_data))
|
||||
|
||||
# return source_data, target_data
|
||||
|
||||
# #pomocna sluzba, ktera provede vsechny transformace a inverzni scaling a vyleze z nej predikce
|
||||
# #vstupem je standardni format ve strategii (state.bars, state.indicators)
|
||||
# #vystupem je jedna hodnota
|
||||
# def predict(self, bars, indicators) -> float:
|
||||
# #oriznuti podle seqence - pokud je nastaveno v modelu
|
||||
# lastNbars = slice_dict_lists(bars, self.input_sequences)
|
||||
# lastNindicators = slice_dict_lists(indicators, self.input_sequences)
|
||||
# # print("last5bars", lastNbars)
|
||||
# # print("last5indicators",lastNindicators)
|
||||
|
||||
# combined_live_data = self.column_stack_source(lastNbars, lastNindicators, verbose=0)
|
||||
# #print("combined_live_data",combined_live_data)
|
||||
# combined_live_data = self.scalerX.transform(combined_live_data)
|
||||
# combined_live_data = np.array(combined_live_data)
|
||||
# #print("last 5 values combined data shape", np.shape(combined_live_data))
|
||||
|
||||
# #converts to 3D array
|
||||
# # 1 number of samples in the array.
|
||||
# # 2 represents the sequence length.
|
||||
# # 3 represents the number of features in the data.
|
||||
# combined_live_data = combined_live_data.reshape((1, self.input_sequences, combined_live_data.shape[1]))
|
||||
|
||||
# # Make a prediction
|
||||
# prediction = self.model(combined_live_data, training=False)
|
||||
# #prediction = prediction.reshape((1, 1))
|
||||
# # Convert the prediction back to the original scale
|
||||
# prediction = self.scalerY.inverse_transform(prediction)
|
||||
# return float(prediction)
|
||||
@ -1,55 +0,0 @@
|
||||
import numpy as np
|
||||
# import v2realbot.controller.services as cs
|
||||
from joblib import load
|
||||
from v2realbot.config import DATA_DIR
|
||||
|
||||
def get_full_filename(name, version = "1"):
|
||||
return DATA_DIR+'/models/'+name+'_v'+version+'.pkl'
|
||||
|
||||
def load_model(name, version = "1"):
|
||||
filename = get_full_filename(name, version)
|
||||
return load(filename)
|
||||
|
||||
#pomocne funkce na manipulaci s daty
|
||||
|
||||
def merge_dicts(dict_list):
|
||||
# Initialize an empty merged dictionary
|
||||
merged_dict = {}
|
||||
|
||||
# Iterate through the dictionaries in the list
|
||||
for i,d in enumerate(dict_list):
|
||||
for key, value in d.items():
|
||||
if key in merged_dict:
|
||||
merged_dict[key] += value
|
||||
else:
|
||||
merged_dict[key] = value
|
||||
#vlozime element s idenitfikaci runnera
|
||||
|
||||
return merged_dict
|
||||
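`merge_dicts` (removed in this diff) simply concatenates the per-runner lists key by key, so several runners' bars or indicators can be treated as one continuous series. A tiny usage sketch with toy data, assuming the function defined above:

```python
# Two runners' worth of already aligned indicator dicts...
day1 = {"time": [1, 2, 3], "ema": [10.0, 10.5, 10.2]}
day2 = {"time": [4, 5], "ema": [10.4, 10.8]}

merged = merge_dicts([day1, day2])
print(merged["time"])  # [1, 2, 3, 4, 5]
print(merged["ema"])   # [10.0, 10.5, 10.2, 10.4, 10.8]
```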
|
||||
# # Initialize the merged dictionary with the first dictionary in the list
|
||||
# merged_dict = dict_list[0].copy()
|
||||
# merged_dict["index"] = []
|
||||
|
||||
# # Iterate through the remaining dictionaries and concatenate their lists
|
||||
# for i, d in enumerate(dict_list[1:]):
|
||||
# merged_dict["index"] =
|
||||
# for key, value in d.items():
|
||||
# if key in merged_dict:
|
||||
# merged_dict[key] += value
|
||||
# else:
|
||||
# merged_dict[key] = value
|
||||
|
||||
# return merged_dict
|
||||
|
||||
def load_runner(runner_id):
|
||||
res, sada = cs.get_archived_runner_details_byID(runner_id)
|
||||
if res == 0:
|
||||
print("ok")
|
||||
else:
|
||||
print("error",res,sada)
|
||||
raise Exception(f"error loading runner {runner_id} : {res} {sada}")
|
||||
|
||||
bars = sada["bars"]
|
||||
indicators = sada["indicators"][0]
|
||||
return bars, indicators
|
||||
104
v2realbot/reporting/analyzer/WIP_daily_profit_distribution.py
Normal file
@ -0,0 +1,104 @@
|
||||
import matplotlib
|
||||
import matplotlib.dates as mdates
|
||||
matplotlib.use('Agg') # Set the Matplotlib backend to 'Agg'
|
||||
import matplotlib.pyplot as plt
|
||||
from matplotlib.ticker import MaxNLocator
|
||||
import seaborn as sns
|
||||
import pandas as pd
|
||||
from datetime import datetime
|
||||
from typing import List
|
||||
from enum import Enum
|
||||
import numpy as np
|
||||
import v2realbot.controller.services as cs
|
||||
from rich import print
|
||||
from v2realbot.common.model import AnalyzerInputs
|
||||
from v2realbot.common.model import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
|
||||
from pathlib import Path
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
|
||||
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account, OrderSide
|
||||
from io import BytesIO
|
||||
from v2realbot.utils.historicals import get_historical_bars
|
||||
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
|
||||
from collections import defaultdict
|
||||
from scipy.stats import zscore
|
||||
from io import BytesIO
|
||||
from v2realbot.reporting.load_trades import load_trades
|
||||
from typing import Tuple, Optional, List
|
||||
from traceback import format_exc
|
||||
import pandas as pd
|
||||
|
||||
def daily_profit_distribution(runner_ids: list = None, batch_id: str = None, stream: bool = False):
|
||||
try:
|
||||
res, trades, days_cnt = load_trades(runner_ids, batch_id)
|
||||
if res != 0:
|
||||
raise Exception("Error in loading trades")
|
||||
|
||||
#print(trades)
|
||||
|
||||
# Convert list of Trade objects to DataFrame
|
||||
trades_df = pd.DataFrame([t.__dict__ for t in trades if t.status == "closed"])
|
||||
|
||||
# Ensure 'exit_time' is a datetime and convert it to the New York timezone
|
||||
trades_df['exit_time'] = pd.to_datetime(trades_df['exit_time']).dt.tz_convert(zoneNY)
|
||||
trades_df['date'] = trades_df['exit_time'].dt.date
|
||||
|
||||
daily_profit = trades_df.groupby(['date', 'direction']).profit.sum().unstack(fill_value=0)
|
||||
#print("dp",daily_profit)
|
||||
daily_cumulative_profit = trades_df.groupby('date').profit.sum().cumsum()
|
||||
|
||||
# Create the plot
|
||||
fig, ax1 = plt.subplots(figsize=(10, 6))
|
||||
|
||||
# Bar chart for daily profit composition
|
||||
daily_profit.plot(kind='bar', stacked=True, ax=ax1, color=['green', 'red'], zorder=2)
|
||||
ax1.set_ylabel('Daily Profit')
|
||||
ax1.set_xlabel('Date')
|
||||
#ax1.xaxis.set_major_locator(MaxNLocator(10))
|
||||
|
||||
# Line chart for cumulative daily profit
|
||||
#ax2 = ax1.twinx()
|
||||
#print(daily_cumulative_profit)
|
||||
#print(daily_cumulative_profit.index)
|
||||
#ax2.plot(daily_cumulative_profit.index, daily_cumulative_profit, color='yellow', linestyle='-', linewidth=2, zorder=3)
|
||||
#ax2.set_ylabel('Cumulative Profit')
|
||||
|
||||
# Setting the secondary y-axis range dynamically based on cumulative profit values
|
||||
# ax2.set_ylim(daily_cumulative_profit.min() - (daily_cumulative_profit.std() * 2),
|
||||
# daily_cumulative_profit.max() + (daily_cumulative_profit.std() * 2))
|
||||
|
||||
# Dark mode settings
|
||||
ax1.set_facecolor('black')
|
||||
# ax1.grid(True)
|
||||
#ax2.set_facecolor('black')
|
||||
fig.patch.set_facecolor('black')
|
||||
ax1.tick_params(colors='white')
|
||||
#ax2.tick_params(colors='white')
|
||||
# ax1.xaxis_date()
|
||||
# ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.', tz=zoneNY))
|
||||
ax1.tick_params(axis='x', rotation=45)
|
||||
|
||||
# Footer
|
||||
footer_text = f'Days Count: {days_cnt} | Parameters: {{"runner_ids": {len(runner_ids) if runner_ids is not None else None}, "batch_id": {batch_id}, "stream": {stream}}}'
|
||||
plt.figtext(0.5, 0.01, footer_text, wrap=True, horizontalalignment='center', fontsize=8, color='white')
|
||||
|
||||
# Save or stream the plot
|
||||
if stream:
|
||||
img_stream = BytesIO()
|
||||
plt.savefig(img_stream, format='png', bbox_inches='tight', facecolor=fig.get_facecolor(), edgecolor='none')
|
||||
img_stream.seek(0)
|
||||
plt.close(fig)
|
||||
return (0, img_stream)
|
||||
else:
|
||||
plt.savefig(f'{__name__}.png', bbox_inches='tight', facecolor=fig.get_facecolor(), edgecolor='none')
|
||||
plt.close(fig)
|
||||
return (0, None)
|
||||
|
||||
except Exception as e:
|
||||
# Detailed error reporting
|
||||
return (-1, str(e) + format_exc())
|
||||
# Local debugging
|
||||
if __name__ == '__main__':
|
||||
batch_id = "6f9b012c"
|
||||
res, val = daily_profit_distribution(batch_id=batch_id)
|
||||
print(res, val)
|
||||
104
v2realbot/reporting/analyzer/daily_profit_distribution.py
Normal file
@ -0,0 +1,104 @@
|
||||
import matplotlib
|
||||
import matplotlib.dates as mdates
|
||||
matplotlib.use('Agg') # Set the Matplotlib backend to 'Agg'
|
||||
import matplotlib.pyplot as plt
|
||||
from matplotlib.ticker import MaxNLocator
|
||||
import seaborn as sns
|
||||
import pandas as pd
|
||||
from datetime import datetime
|
||||
from typing import List
|
||||
from enum import Enum
|
||||
import numpy as np
|
||||
import v2realbot.controller.services as cs
|
||||
from rich import print
|
||||
from v2realbot.common.model import AnalyzerInputs
|
||||
from v2realbot.common.model import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
|
||||
from pathlib import Path
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
|
||||
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account, OrderSide
|
||||
from io import BytesIO
|
||||
from v2realbot.utils.historicals import get_historical_bars
|
||||
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
|
||||
from collections import defaultdict
|
||||
from scipy.stats import zscore
|
||||
from io import BytesIO
|
||||
from v2realbot.reporting.load_trades import load_trades
|
||||
from typing import Tuple, Optional, List
|
||||
from traceback import format_exc
|
||||
import pandas as pd
|
||||
|
||||
def daily_profit_distribution(runner_ids: list = None, batch_id: str = None, stream: bool = False):
|
||||
try:
|
||||
res, trades, days_cnt = load_trades(runner_ids, batch_id)
|
||||
if res != 0:
|
||||
raise Exception("Error in loading trades")
|
||||
|
||||
#print(trades)
|
||||
|
||||
# Convert list of Trade objects to DataFrame
|
||||
trades_df = pd.DataFrame([t.__dict__ for t in trades if t.status == "closed"])
|
||||
|
||||
# Ensure 'exit_time' is a datetime and convert it to the New York timezone
|
||||
trades_df['exit_time'] = pd.to_datetime(trades_df['exit_time']).dt.tz_convert(zoneNY)
|
||||
trades_df['date'] = trades_df['exit_time'].dt.date
|
||||
|
||||
daily_profit = trades_df.groupby(['date', 'direction']).profit.sum().unstack(fill_value=0)
|
||||
#print("dp",daily_profit)
|
||||
daily_cumulative_profit = trades_df.groupby('date').profit.sum().cumsum()
|
||||
|
||||
# Create the plot
|
||||
fig, ax1 = plt.subplots(figsize=(10, 6))
|
||||
|
||||
# Bar chart for daily profit composition
|
||||
daily_profit.plot(kind='bar', stacked=True, ax=ax1, color=['green', 'red'], zorder=2)
|
||||
ax1.set_ylabel('Daily Profit')
|
||||
ax1.set_xlabel('Date')
|
||||
#ax1.xaxis.set_major_locator(MaxNLocator(10))
|
||||
|
||||
# Line chart for cumulative daily profit
|
||||
#ax2 = ax1.twinx()
|
||||
#print(daily_cumulative_profit)
|
||||
#print(daily_cumulative_profit.index)
|
||||
#ax2.plot(daily_cumulative_profit.index, daily_cumulative_profit, color='yellow', linestyle='-', linewidth=2, zorder=3)
|
||||
#ax2.set_ylabel('Cumulative Profit')
|
||||
|
||||
# Setting the secondary y-axis range dynamically based on cumulative profit values
|
||||
# ax2.set_ylim(daily_cumulative_profit.min() - (daily_cumulative_profit.std() * 2),
|
||||
# daily_cumulative_profit.max() + (daily_cumulative_profit.std() * 2))
|
||||
|
||||
# Dark mode settings
|
||||
ax1.set_facecolor('black')
|
||||
# ax1.grid(True)
|
||||
#ax2.set_facecolor('black')
|
||||
fig.patch.set_facecolor('black')
|
||||
ax1.tick_params(colors='white')
|
||||
#ax2.tick_params(colors='white')
|
||||
# ax1.xaxis_date()
|
||||
# ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.', tz=zoneNY))
|
||||
ax1.tick_params(axis='x', rotation=45)
|
||||
|
||||
# Footer
|
||||
footer_text = f'Days Count: {days_cnt} | Parameters: {{"runner_ids": {len(runner_ids) if runner_ids is not None else None}, "batch_id": {batch_id}, "stream": {stream}}}'
|
||||
plt.figtext(0.5, 0.01, footer_text, wrap=True, horizontalalignment='center', fontsize=8, color='white')
|
||||
|
||||
# Save or stream the plot
|
||||
if stream:
|
||||
img_stream = BytesIO()
|
||||
plt.savefig(img_stream, format='png', bbox_inches='tight', facecolor=fig.get_facecolor(), edgecolor='none')
|
||||
img_stream.seek(0)
|
||||
plt.close(fig)
|
||||
return (0, img_stream)
|
||||
else:
|
||||
plt.savefig(f'{__name__}.png', bbox_inches='tight', facecolor=fig.get_facecolor(), edgecolor='none')
|
||||
plt.close(fig)
|
||||
return (0, None)
|
||||
|
||||
except Exception as e:
|
||||
# Detailed error reporting
|
||||
return (-1, str(e) + format_exc())
|
||||
# Local debugging
|
||||
if __name__ == '__main__':
|
||||
batch_id = "6f9b012c"
|
||||
res, val = daily_profit_distribution(batch_id=batch_id)
|
||||
print(res, val)
|
||||
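The `groupby(...).unstack(fill_value=0)` call above is what turns per-trade rows into one stacked bar per day, with a column per direction. A minimal standalone illustration of that reshaping on toy data, not real trades:

```python
import pandas as pd

trades_df = pd.DataFrame({
    "date": ["2024-01-02", "2024-01-02", "2024-01-03"],
    "direction": ["long", "short", "long"],
    "profit": [120.0, -40.0, 75.0],
})

# One row per day, one column per direction; missing combinations become 0.
daily_profit = trades_df.groupby(["date", "direction"]).profit.sum().unstack(fill_value=0)
print(daily_profit)

# Running total of the per-day sums, as used for the cumulative line.
daily_cumulative_profit = trades_df.groupby("date").profit.sum().cumsum()
print(daily_cumulative_profit.tolist())  # [80.0, 155.0]
```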
@ -11,7 +11,7 @@ import numpy as np
|
||||
import v2realbot.controller.services as cs
|
||||
from rich import print
|
||||
from v2realbot.common.model import AnalyzerInputs
|
||||
from v2realbot.common.PrescribedTradeModel import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.common.model import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
|
||||
from pathlib import Path
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
|
||||
|
||||
@ -11,7 +11,7 @@ import numpy as np
|
||||
import v2realbot.controller.services as cs
|
||||
from rich import print
|
||||
from v2realbot.common.model import AnalyzerInputs
|
||||
from v2realbot.common.PrescribedTradeModel import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.common.model import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
|
||||
from pathlib import Path
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
|
||||
@ -25,7 +25,7 @@ from io import BytesIO
|
||||
# Assuming Trade, TradeStatus, TradeDirection, TradeStoplossType classes are defined elsewhere
|
||||
|
||||
#LOSS and PROFIT without GRAPH
|
||||
def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: bool = False, rem_outliers:bool = False, file: str = "optimalcutoff.png",steps:int = 50):
|
||||
def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: bool = False, mode:str="absolute", rem_outliers:bool = False, z_score_threshold:int = 3, file: str = "optimalcutoff.png",steps:int = 50):
|
||||
|
||||
#TODO add drawdown plus minimum and maximum (non-cumulative) profits, think this through
|
||||
#TODO list of runner_ids
|
||||
@ -115,7 +115,11 @@ def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: b
|
||||
for trade in trades:
|
||||
if trade.status == TradeStatus.CLOSED and trade.exit_time:
|
||||
day = trade.exit_time.date()
|
||||
if mode == "absolute":
|
||||
daily_cumulative_profits[day].append(trade.profit)
|
||||
#relative profit
|
||||
else:
|
||||
daily_cumulative_profits[day].append(trade.rel_profit)
|
||||
|
||||
for day in daily_cumulative_profits:
|
||||
daily_cumulative_profits[day] = np.cumsum(daily_cumulative_profits[day])
|
||||
@ -131,7 +135,7 @@ def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: b
|
||||
for day, profits in cumulative_profits.items():
|
||||
if len(profits) > 0:
|
||||
day_z_score = z_scores[list(cumulative_profits.keys()).index(day)]
|
||||
if abs(day_z_score) < 3: # Adjust threshold as needed
|
||||
if abs(day_z_score) < z_score_threshold: # Adjust threshold as needed
|
||||
filtered_profits[day] = profits
|
||||
return filtered_profits
|
||||
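The outlier filter above keys each day's z-score off that day's final cumulative profit. A small illustration of what `scipy.stats.zscore` does with such end-of-day figures (toy numbers; the threshold of 2 mirrors the `z_score_threshold=2` used in the `__main__` example further down):

```python
import numpy as np
from scipy.stats import zscore

# Hypothetical end-of-day profits, one value per trading day.
eod_profits = np.array([120.0, 95.0, 110.0, 105.0, 130.0, 88.0, 101.0, 117.0, 900.0])
z = zscore(eod_profits)
keep = np.abs(z) < 2          # same test as abs(day_z_score) < z_score_threshold
print(eod_profits[keep])      # the 900.0 day is filtered out, the rest survive
```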
|
||||
@ -145,26 +149,25 @@ def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: b
|
||||
# profit_range = (0, max_profit) if max_profit > 0 else (0, 0)
|
||||
# loss_range = (min_profit, 0) if min_profit < 0 else (0, 0)
|
||||
|
||||
if mode == "absolute":
|
||||
# OPT2 Calculate profit_range and loss_range based on all cumulative profits
|
||||
all_cumulative_profits = np.concatenate([profits for profits in daily_cumulative_profits.values()])
|
||||
max_cumulative_profit = np.max(all_cumulative_profits)
|
||||
min_cumulative_profit = np.min(all_cumulative_profits)
|
||||
profit_range = (0, max_cumulative_profit) if max_cumulative_profit > 0 else (0, 0)
|
||||
loss_range = (min_cumulative_profit, 0) if min_cumulative_profit < 0 else (0, 0)
|
||||
else:
|
||||
#for relative - hardcoded
|
||||
profit_range = (0, 1) # Adjust based on your data
|
||||
loss_range = (-1, 0)
|
||||
|
||||
print("Calculated ranges", profit_range, loss_range)
|
||||
print("Ranges", profit_range, loss_range)
|
||||
|
||||
num_points = steps # Adjust for speed vs accuracy
|
||||
profit_cutoffs = np.linspace(*profit_range, num_points)
|
||||
loss_cutoffs = np.linspace(*loss_range, num_points)
|
||||
|
||||
# OPT 3Statically define ranges for loss and profit cutoffs
|
||||
# profit_range = (0, 1000) # Adjust based on your data
|
||||
# loss_range = (-1000, 0)
|
||||
# num_points = 20 # Adjust for speed vs accuracy
|
||||
|
||||
profit_cutoffs = np.linspace(*profit_range, num_points)
|
||||
loss_cutoffs = np.linspace(*loss_range, num_points)
|
||||
|
||||
total_profits_matrix = np.zeros((len(profit_cutoffs), len(loss_cutoffs)))
|
||||
|
||||
@ -207,12 +210,12 @@ def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: b
|
||||
}
|
||||
plt.rcParams.update(params)
|
||||
plt.figure(figsize=(10, 8))
|
||||
sns.heatmap(total_profits_matrix, xticklabels=np.rint(loss_cutoffs).astype(int), yticklabels=np.rint(profit_cutoffs).astype(int), cmap="viridis")
|
||||
sns.heatmap(total_profits_matrix, xticklabels=np.rint(loss_cutoffs).astype(int) if mode == "absolute" else np.around(loss_cutoffs, decimals=3), yticklabels=np.rint(profit_cutoffs).astype(int) if mode == "absolute" else np.around(profit_cutoffs, decimals=3), cmap="viridis")
|
||||
plt.xticks(rotation=90) # Rotate x-axis labels to be vertical
|
||||
plt.yticks(rotation=0) # Keep y-axis labels horizontal
|
||||
plt.gca().invert_yaxis()
|
||||
plt.gca().invert_xaxis()
|
||||
plt.suptitle(f"Total Profit for Combinations of Profit/Loss Cutoffs ({cnt_max})", fontsize=16)
|
||||
plt.suptitle(f"Total {mode} Profit for Profit/Loss Cutoffs ({cnt_max})", fontsize=16)
|
||||
plt.title(f"Optimal Profit Cutoff: {optimal_profit_cutoff:.2f}, Optimal Loss Cutoff: {optimal_loss_cutoff:.2f}, Max Profit: {max_profit:.2f}", fontsize=10)
|
||||
plt.xlabel("Loss Cutoff")
|
||||
plt.ylabel("Profit Cutoff")
|
||||
@ -236,8 +239,8 @@ if __name__ == '__main__':
|
||||
# id_list = ["e8938b2e-8462-441a-8a82-d823c6a025cb"]
|
||||
# generate_trading_report_image(runner_ids=id_list)
|
||||
batch_id = "c76b4414"
|
||||
vstup = AnalyzerInputs(**params)
|
||||
res, val = find_optimal_cutoff(batch_id=batch_id, file="optimal_cutoff_vectorized.png",steps=20)
|
||||
#vstup = AnalyzerInputs(**params)
|
||||
res, val = find_optimal_cutoff(batch_id=batch_id, mode="relative", z_score_threshold=2, file="optimal_cutoff_vectorized.png",steps=20)
|
||||
#res, val = find_optimal_cutoff(batch_id=batch_id, rem_outliers=True, file="optimal_cutoff_vectorized_nooutliers.png")
|
||||
|
||||
print(res,val)
|
||||
244
v2realbot/reporting/analyzer/find_optimal_cutoff_REL.py
Normal file
@ -0,0 +1,244 @@
|
||||
import matplotlib
|
||||
import matplotlib.dates as mdates
|
||||
#matplotlib.use('Agg') # Set the Matplotlib backend to 'Agg'
|
||||
import matplotlib.pyplot as plt
|
||||
import seaborn as sns
|
||||
import pandas as pd
|
||||
from datetime import datetime
|
||||
from typing import List
|
||||
from enum import Enum
|
||||
import numpy as np
|
||||
import v2realbot.controller.services as cs
|
||||
from rich import print
|
||||
from v2realbot.common.model import AnalyzerInputs
|
||||
from v2realbot.common.model import TradeDirection, TradeStatus, Trade, TradeStoplossType
|
||||
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
|
||||
from pathlib import Path
|
||||
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
|
||||
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account, OrderSide
|
||||
from io import BytesIO
|
||||
from v2realbot.utils.historicals import get_historical_bars
|
||||
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
|
||||
from collections import defaultdict
|
||||
from scipy.stats import zscore
|
||||
from io import BytesIO
|
||||
# Assuming Trade, TradeStatus, TradeDirection, TradeStoplossType classes are defined elsewhere
|
||||
|
||||
#HEATMAP for RELATIVE PROFIT - WIP
|
||||
#once finished, merge into the same function with just a type parameter
|
||||
def find_optimal_cutoff(runner_ids: list = None, batch_id: str = None, stream: bool = False, rem_outliers:bool = False, z_score_threshold:int = 3, file: str = "optimalcutoff.png",steps:int = 50):
|
||||
|
||||
#TODO add drawdown plus minimum and maximum (non-cumulative) profits, think this through
|
||||
#TODO list of runner_ids
|
||||
#TODO add a separate REST API for creating a runner and a batch + for removing an archived runner
|
||||
|
||||
if runner_ids is None and batch_id is None:
|
||||
return -2, f"runner_id or batch_id must be present"
|
||||
|
||||
if batch_id is not None:
|
||||
res, runner_ids =cs.get_archived_runnerslist_byBatchID(batch_id)
|
||||
|
||||
if res != 0:
|
||||
print(f"no batch {batch_id} found")
|
||||
return -1, f"no batch {batch_id} found"
|
||||
|
||||
trades = []
|
||||
cnt_max = len(runner_ids)
|
||||
cnt = 0
|
||||
#for now we derive start and end from the min and max days - since a list of runner_ids may be provided, not just a batch
|
||||
end_date = None
|
||||
start_date = None
|
||||
for id in runner_ids:
|
||||
cnt += 1
|
||||
#get runner
|
||||
res, sada =cs.get_archived_runner_header_byID(id)
|
||||
if res != 0:
|
||||
print(f"no runner {id} found")
|
||||
return -1, f"no runner {id} found"
|
||||
|
||||
#print("archrunner")
|
||||
#print(sada)
|
||||
|
||||
if cnt == 1:
|
||||
start_date = sada.bt_from if sada.mode in [Mode.BT,Mode.PREP] else sada.started
|
||||
if cnt == cnt_max:
|
||||
end_date = sada.bt_to if sada.mode in [Mode.BT, Mode.PREP] else sada.stopped
|
||||
# Parse trades
|
||||
|
||||
trades_dicts = sada.metrics["prescr_trades"]
|
||||
|
||||
for trade_dict in trades_dicts:
|
||||
trade_dict['last_update'] = datetime.fromtimestamp(trade_dict.get('last_update')).astimezone(zoneNY) if trade_dict['last_update'] is not None else None
|
||||
trade_dict['entry_time'] = datetime.fromtimestamp(trade_dict.get('entry_time')).astimezone(zoneNY) if trade_dict['entry_time'] is not None else None
|
||||
trade_dict['exit_time'] = datetime.fromtimestamp(trade_dict.get('exit_time')).astimezone(zoneNY) if trade_dict['exit_time'] is not None else None
|
||||
trades.append(Trade(**trade_dict))
|
||||
|
||||
#print(trades)
|
||||
|
||||
# symbol = sada.symbol
|
||||
# #hour bars for backtested period
|
||||
# print(start_date,end_date)
|
||||
# bars= get_historical_bars(symbol, start_date, end_date, TimeFrame.Hour)
|
||||
# print("bars for given period",bars)
|
||||
# """Bars a dictionary with the following keys:
|
||||
# * high: A list of high prices
|
||||
# * low: A list of low prices
|
||||
# * volume: A list of volumes
|
||||
# * close: A list of close prices
|
||||
# * hlcc4: A list of HLCC4 indicators
|
||||
# * open: A list of open prices
|
||||
# * time: A list of times in UTC (ISO 8601 format)
|
||||
# * trades: A list of number of trades
|
||||
# * resolution: A list of resolutions (all set to 'D')
|
||||
# * confirmed: A list of booleans (all set to True)
|
||||
# * vwap: A list of VWAP indicator
|
||||
# * updated: A list of booleans (all set to True)
|
||||
# * index: A list of integers (from 0 to the length of the list of daily bars)
|
||||
# """
|
||||
|
||||
# Filter to only use trades with status 'CLOSED'
|
||||
closed_trades = [trade for trade in trades if trade.status == TradeStatus.CLOSED]
|
||||
|
||||
#print(closed_trades)
|
||||
|
||||
if len(closed_trades) == 0:
|
||||
return -1, "image generation no closed trades"
|
||||
|
||||
# # Group trades by date and calculate daily profits
|
||||
# trades_by_day = defaultdict(list)
|
||||
# for trade in trades:
|
||||
# if trade.status == TradeStatus.CLOSED and trade.exit_time:
|
||||
# trade_day = trade.exit_time.date()
|
||||
# trades_by_day[trade_day].append(trade)
|
||||
|
||||
# Precompute daily cumulative profits
|
||||
daily_cumulative_profits = defaultdict(list)
|
||||
for trade in trades:
|
||||
if trade.status == TradeStatus.CLOSED and trade.exit_time:
|
||||
day = trade.exit_time.date()
|
||||
daily_cumulative_profits[day].append(trade.profit)
|
||||
|
||||
for day in daily_cumulative_profits:
|
||||
daily_cumulative_profits[day] = np.cumsum(daily_cumulative_profits[day])
|
||||
|
||||
|
||||
if rem_outliers:
|
||||
# Remove outliers based on z-scores
|
||||
def remove_outliers(cumulative_profits):
|
||||
all_profits = [profit[-1] for profit in cumulative_profits.values() if len(profit) > 0]
|
||||
z_scores = zscore(all_profits)
|
||||
print(z_scores)
|
||||
filtered_profits = {}
|
||||
for day, profits in cumulative_profits.items():
|
||||
if len(profits) > 0:
|
||||
day_z_score = z_scores[list(cumulative_profits.keys()).index(day)]
|
||||
if abs(day_z_score) < z_score_threshold: # Adjust threshold as needed
|
||||
filtered_profits[day] = profits
|
||||
return filtered_profits
|
||||
|
||||
daily_cumulative_profits = remove_outliers(daily_cumulative_profits)

    # OPT1 Dynamically calculate profit_range and loss_range - based on EOD daily profit
    # all_final_profits = [profits[-1] for profits in daily_cumulative_profits.values() if len(profits) > 0]
    # max_profit = max(all_final_profits)
    # min_profit = min(all_final_profits)
    # profit_range = (0, max_profit) if max_profit > 0 else (0, 0)
    # loss_range = (min_profit, 0) if min_profit < 0 else (0, 0)

    # OPT2 Calculate profit_range and loss_range based on all cumulative profits
    all_cumulative_profits = np.concatenate([profits for profits in daily_cumulative_profits.values()])
    max_cumulative_profit = np.max(all_cumulative_profits)
    min_cumulative_profit = np.min(all_cumulative_profits)
    profit_range = (0, max_cumulative_profit) if max_cumulative_profit > 0 else (0, 0)
    loss_range = (min_cumulative_profit, 0) if min_cumulative_profit < 0 else (0, 0)

    print("Calculated ranges", profit_range, loss_range)

    num_points = steps  # Adjust for speed vs accuracy

    # OPT3 Statically define ranges for loss and profit cutoffs
    # profit_range = (0, 1000)  # Adjust based on your data
    # loss_range = (-1000, 0)
    # num_points = 20  # Adjust for speed vs accuracy

    profit_cutoffs = np.linspace(*profit_range, num_points)
    loss_cutoffs = np.linspace(*loss_range, num_points)
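    # Example (illustrative numbers): with profit_range = (0, 500), loss_range = (-400, 0) and
    # steps = 20, profit_cutoffs runs 0, 26.3, ..., 500 and loss_cutoffs runs -400, ..., -21.1, 0,
    # i.e. a 20x20 grid of candidate (take-profit, stop-loss) day-level exit rules to evaluate.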

    total_profits_matrix = np.zeros((len(profit_cutoffs), len(loss_cutoffs)))

    for i, profit_cutoff in enumerate(profit_cutoffs):
        for j, loss_cutoff in enumerate(loss_cutoffs):
            total_profit = 0
            for daily_profit in daily_cumulative_profits.values():
                cutoff_index = np.where((daily_profit >= profit_cutoff) | (daily_profit <= loss_cutoff))[0]
                if cutoff_index.size > 0:
                    total_profit += daily_profit[cutoff_index[0]]
                else:
                    total_profit += daily_profit[-1] if daily_profit.size > 0 else 0
            total_profits_matrix[i, j] = total_profit
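    # The nested loops above are a plain grid search (steps x steps x days x trades comparisons).
    # Below is a minimal sketch of how the same per-day scan could be vectorized with NumPy
    # broadcasting; it is illustrative only, is never called, and _total_profits_matrix_vectorized
    # is a made-up name, not part of the platform's API.
    def _total_profits_matrix_vectorized(daily_profits, profit_cutoffs, loss_cutoffs):
        profit_cutoffs = np.asarray(profit_cutoffs)
        loss_cutoffs = np.asarray(loss_cutoffs)
        matrix = np.zeros((len(profit_cutoffs), len(loss_cutoffs)))
        for d in daily_profits:
            d = np.asarray(d)
            if d.size == 0:
                continue
            # hit[i, j, k] is True when trade k of the day breaches profit_cutoffs[i] or loss_cutoffs[j]
            hit = (d[None, None, :] >= profit_cutoffs[:, None, None]) | (d[None, None, :] <= loss_cutoffs[None, :, None])
            first = hit.argmax(axis=2)  # index of the first breach; 0 when there is no breach at all
            locked = np.where(hit.any(axis=2), d[first], d[-1])
            matrix += locked
        return matrix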

    # Find the optimal combination
    optimal_idx = np.unravel_index(total_profits_matrix.argmax(), total_profits_matrix.shape)
    optimal_profit_cutoff = profit_cutoffs[optimal_idx[0]]
    optimal_loss_cutoff = loss_cutoffs[optimal_idx[1]]
    max_profit = total_profits_matrix[optimal_idx]
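    # argmax works on the flattened matrix, so unravel_index maps it back to (row, col) =
    # (index into profit_cutoffs, index into loss_cutoffs); max_profit is the total profit
    # across all days under that best cutoff pair.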

    # Plotting
    # Setting up dark mode for the plots
    plt.style.use('dark_background')

    # Optionally, further customize colors, labels, and axes
    params = {
        'axes.titlesize': 9,
        'axes.labelsize': 8,
        'xtick.labelsize': 9,
        'ytick.labelsize': 9,
        'axes.labelcolor': '#a9a9a9',  # alternative: '#a1a3aa'
        'axes.facecolor': '#121722',   # Dark background for plot area (alternatives: '#0e0e0e', '#202020')
        'axes.grid': False,            # Turn off the grid globally
        'grid.color': 'gray',          # If the grid is on, set grid line color
        'grid.linestyle': '--',        # Grid line style
        'grid.linewidth': 1,
        'xtick.color': '#a9a9a9',
        'ytick.color': '#a9a9a9',
        'axes.edgecolor': '#a9a9a9'
    }
    plt.rcParams.update(params)
    plt.figure(figsize=(10, 8))
    sns.heatmap(total_profits_matrix, xticklabels=np.rint(loss_cutoffs).astype(int), yticklabels=np.rint(profit_cutoffs).astype(int), cmap="viridis")
    plt.xticks(rotation=90)  # Rotate x-axis labels to be vertical
    plt.yticks(rotation=0)   # Keep y-axis labels horizontal
    plt.gca().invert_yaxis()
    plt.gca().invert_xaxis()
    plt.suptitle(f"Total Profit for Combinations of Profit/Loss Cutoffs ({cnt_max})", fontsize=16)
    plt.title(f"Optimal Profit Cutoff: {optimal_profit_cutoff:.2f}, Optimal Loss Cutoff: {optimal_loss_cutoff:.2f}, Max Profit: {max_profit:.2f}", fontsize=10)
    plt.xlabel("Loss Cutoff")
    plt.ylabel("Profit Cutoff")

    if stream is False:
        plt.savefig(file)
        plt.close()
        print(f"Optimal Profit Cutoff (rem_outliers: {rem_outliers}): {optimal_profit_cutoff}, Optimal Loss Cutoff: {optimal_loss_cutoff}, Max Profit: {max_profit}")
        return 0, None
    else:
        # Return the image as a BytesIO stream
        img_stream = BytesIO()
        plt.savefig(img_stream, format='png')
        plt.close()
        img_stream.seek(0)  # Rewind the stream to the beginning
        return 0, img_stream
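    # Return contract as implemented above: (-1, message) when there are no closed trades,
    # (0, None) after the heatmap has been written to `file`, and (0, BytesIO) when stream=True
    # so the caller can, for example, forward the PNG bytes without touching the filesystem.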


# Example usage
# trades = [list of Trade objects]
if __name__ == '__main__':
    # id_list = ["e8938b2e-8462-441a-8a82-d823c6a025cb"]
    # generate_trading_report_image(runner_ids=id_list)
    batch_id = "c76b4414"
    vstup = AnalyzerInputs(**params)
    res, val = find_optimal_cutoff(batch_id=batch_id, file="optimal_cutoff_vectorized.png", steps=20)
    #res, val = find_optimal_cutoff(batch_id=batch_id, rem_outliers=True, file="optimal_cutoff_vectorized_nooutliers.png")

    print(res, val)