48 Commits

Author SHA1 Message Date
f3e7229051 hard stop / soft stop for cutoff (#177) martingale base 2024-03-15 13:31:28 +01:00
a6343abe88 highlight logs on gui (#176) 2024-03-15 11:06:18 +01:00
075984fcff archrunner db query searches for symbol, name (#175) 2024-03-15 10:04:46 +01:00
5fce627fe3 toml validation to frontend (#174) 2024-03-14 17:39:52 +01:00
8de1356aa8 #163 transferables (#172) 2024-03-14 14:16:01 +01:00
7f47890cad #168 #166 and additional fixes (#169) 2024-03-13 12:31:06 +01:00
8cf1aea2a8 run update 2024-03-07 14:07:46 +01:00
9231c1d273 bugfix - maxloss check is performed only at the FILL event, when the total amount is known 2024-03-06 15:50:16 +01:00
9391d89aab #148 #158 config refactoring to support profiles/reloading (#165) 2024-03-06 14:30:24 +01:00
9cff5fe6a1 #155 + moved row_to from db.py to transform.py 2024-03-06 13:31:09 +01:00
0e5cf5f3e0 Merge pull request #161 from drew2323/local
Minor changes for installation on windows
2024-03-04 17:03:50 +01:00
90c33c0528 Delete run.sh 2024-03-04 17:01:47 +01:00
e9e6534d2b primary live account api and secret changed 2024-03-04 16:57:10 +01:00
5874528d23 removed integrity and crossorigin values from line 29 2024-02-28 08:08:21 +01:00
985445d814 user_data_dir function takes a second parameter author; ACCOUNT1_LIVE still has PAPER_API_KEY and SECRET_KEY 2024-02-28 08:04:02 +01:00
6c1f7f0e2e changed VIRTUAL_ENV_DIR and PYTHON_TO_USE 2024-02-27 18:15:35 +01:00
20aaa2ac23 #135 -> BT same period button 2024-02-27 12:03:57 +07:00
691514b102 all dates in gui are in market time zone (even start/stop) 2024-02-27 10:53:30 +07:00
84903aff77 batchprofit/batchcount columns hidden from archiverunners gui 2024-02-27 08:15:07 +07:00
4887e32665 #149 2024-02-26 22:42:03 +07:00
ce99448a48 moved config related services into a separate package 2024-02-26 19:35:19 +07:00
887ea0ef00 #147 2024-02-26 11:30:13 +07:00
af7b678699 debug condition reverted 2024-02-24 21:23:17 +07:00
04c63df045 temporary disable for testing 2024-02-24 21:17:10 +07:00
ebac207489 #143 2024-02-24 20:32:01 +07:00
9f99ddc86a live_data_feed stored in runner_archive 2024-02-23 21:20:07 +07:00
e75fbc7194 bugfix 2024-02-23 21:04:23 +07:00
c4d05f47ff #139 LIVE_DATA_FEED configuration 2024-02-23 12:35:02 +07:00
f6e31f45f9 #136 bugfix properly closing ws 2024-02-23 10:30:12 +07:00
c42b1c4e1e fix 2024-02-22 23:23:20 +07:00
1bf11d0dc4 fix 2024-02-22 23:20:54 +07:00
1abbb07390 Scheduler support #24sched 2024-02-22 23:05:49 +07:00
b58639454b unknown symbol msg 2024-02-12 10:45:23 +07:00
a7e83fe051 bugfix create batch image (check for None from Alpaca) 2024-02-11 15:26:15 +07:00
6795338eba createbatch image tool + send-to-telegram enrichment 2024-02-11 12:37:19 +07:00
9aa8b58877 updated requirements.txt 2024-02-10 21:35:53 +07:00
eff78e8157 keys to env variables, optimizations 2024-02-10 21:02:00 +07:00
d8bcc4bb8f Merge branch 'master' of https://github.com/drew2323/v2trading 2024-02-06 11:16:58 +07:00
7abdf47545 ok 2024-02-06 11:16:09 +07:00
1f8afef042 calendar wrapper with retry, histo bars with retry 2024-02-06 11:14:38 +07:00
df60d16eb4 Update README.md 2024-02-06 09:52:53 +07:00
535c2824b0 Update README.md 2024-02-06 09:34:33 +07:00
9cf936672d Update README.md 2024-02-06 09:30:56 +07:00
c1ad713a12 bugfix None in trade response 2024-02-05 10:22:20 +07:00
e9bb8b84ec fixes 2024-02-04 17:55:43 +07:00
603736d441 Merge branch 'master' of https://github.com/drew2323/v2trading 2024-02-04 17:54:09 +07:00
2c968691d1 Update README.md 2024-01-31 13:39:33 +07:00
435b4d899a Create README.md 2024-01-31 13:37:45 +07:00
56 changed files with 182 additions and 1832503 deletions

README.md

@@ -1,24 +1,29 @@
# V2TRADING - Algorithmic Trading Platform with Frontend
**README - V2TRADING - Advanced Algorithmic Trading Platform**
## Overview
Custom-built algorithmic trading platform for research, backtesting and live trading. A trading engine capable of processing tick data, providing custom aggregation, managing trades, and supporting backtesting in a highly accurate and efficient manner.
**Overview**
Custom-built algorithmic trading platform for research, backtesting and automated trading. A trading engine capable of processing tick data, managing trades, and supporting backtesting in a highly accurate and efficient manner.
## Key Features
- **Trading Engine**: Processes tick data in real time, aggregating data and managing trade execution.
**Key Features**
- **Trading Engine**: At the core of the platform is a trading engine that processes tick data in real time. This engine is responsible for aggregating data and managing the execution of trades, ensuring precision and speed in trade placement and execution.
- **Backtesting**: tick-by-tick backtesting, down to millisecond accuracy, mirrors live trading environments and is vital for developing and testing high(er)-frequency trading strategies.
- **High-Fidelity Backtesting Environment**: ability to backtest strategies with 1:1 precision - meaning a tick-by-tick backtesting. This level of precision in backtesting, down to millisecond accuracy, mirrors live trading environments and is vital for developing and testing high-frequency trading strategies.
- **Configuration**: robust configuration via TOML
- **Frontend**: Frontend supporting the research-to-backtesting-to-paper-trading workflow, including lightweight charts.
- **Custom Data Aggregation:** Custom time-based, volume-based, dollar-based and renko bar aggregators built on tick-by-tick data (see the sketch below).
- **Custom Data Aggregation:** The platform includes a data aggregator that allows for custom aggregation rules. This flexibility supports a variety of data analysis approaches, including non-time based bars and other unique criteria.
- **Indicators**: Contains built-in [tulipy](https://tulipindicators.org/list) and [ta-lib](https://ta-lib.github.io/ta-lib-python/) indicators, plus templates for custom-built multi-output stateful indicators.
- **Machine Learning Integration:** Includes modules for both training and inference, supporting the complete ML lifecycle.
- **Machine Learning Integration:** Recently, the platform has expanded to incorporate machine learning capabilities. This includes modules for both training and inference, supporting the complete ML lifecycle. These ML models can be utilized within trading strategies for classification and exploiting statistical advantages.
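For illustration, here is a minimal sketch of volume-bar aggregation from raw trades. `volume_bars` is a hypothetical helper assuming a `trades_df` with `price` and `size` columns and a timestamp index; the platform's own vectorized implementation is `aggregate_trades` in `v2realbot.loader.aggregator_vectorized`, which also covers TIME and DOLLAR bars:

```python
import pandas as pd

def volume_bars(trades_df: pd.DataFrame, bar_volume: int) -> pd.DataFrame:
    """Aggregate tick trades into bars of (roughly) equal traded volume.

    Hypothetical helper: assumes 'price' and 'size' columns and a timestamp index.
    """
    # Bar id = position of each trade in cumulative traded volume.
    bar_id = (trades_df["size"].cumsum() // bar_volume).astype(int)
    g = trades_df.groupby(bar_id.values)
    bars = g["price"].agg(open="first", high="max", low="min", close="last")
    bars["volume"] = g["size"].sum()
    # Each bar closes at the timestamp of its last trade.
    bars.index = trades_df.index.to_series().groupby(bar_id.values).last()
    return bars

# Dollar bars: replace size.cumsum() with (price * size).cumsum().
# Time bars: group by trades_df.index.floor(resolution) instead.
```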
**GUI examples**
**Technology Stack**
**Backend and API:** The backbone of the platform is built with Python, utilizing libraries such as FastAPI, NumPy, Keras, and JAX, ensuring high performance and scalability.
**Frontend:** The client-side is developed with Vanilla JavaScript and jQuery, employing LightweightCharts for charting purposes. Additional modules enhance the platform's functionality. The frontend is slated for a future refactoring to modern frameworks like Vue.js and Vuetify for a more robust user interface.
While the platform is fully functional and growing, ongoing development is planned, particularly in the realm of frontend enhancements and further integration of advanced machine learning techniques.
**Contributions**
Contributions to this project are welcome. Whether it's improving the frontend, enhancing the backend capabilities, or experimenting with new trading strategies and machine learning models, your input can help take this platform to the next level.
This repository represents a sophisticated and evolving tool for algorithmic traders, offering precision, speed, and a level of customization that is unparalleled in open-source systems. Join us in shaping the future of algorithmic trading.
<p align="center">
Main screen with entry/exit points and stoploss lines<br>
@@ -45,83 +50,4 @@ Custom-built algorithmic trading platform for research, backtesting and live tra
<img width="700" alt="Strategy analytical tools" src="https://github.com/drew2323/v2trading/assets/28433232/4bf8b3c3-e430-4250-831a-e5876bb6b743">
</p>
**Backend and API:** The backbone of the platform is built with Python, utilizing libraries such as FastAPI, NumPy, Keras, and JAX, ensuring high performance and scalability.
**Frontend:** The client-side is developed with Vanilla JavaScript and jQuery, employing LightweightCharts for charting purposes. Additional modules enhance the platform's functionality. The frontend is slated for a future refactoring to modern frameworks like Vue.js and Vuetify for a more robust user interface.
**Documentation:** Public docs are in progress. Some can be found in the [knowledge base](trading.mujdenik.eu) (please request access first). Some analysis documents are in a [shared Google Docs folder](https://drive.google.com/drive/folders/1WmYG8oDGXO-lVTLVs9knAmMTmQL4dZt6?usp=drive_link).
# Installation Instructions
This document outlines the steps for installing and setting up the necessary environment for the application. These instructions are applicable for both Windows and Linux operating systems. Please follow the steps carefully to ensure a smooth setup.
## Prerequisites
Before beginning the installation process, ensure the following prerequisites are met:
- TA-Lib Library:
- Windows: Download and build the TA-Lib library. Install Visual Studio Community with the Visual C++ feature. Navigate to `C:\ta-lib\c\make\cdr\win32\msvc` in the command prompt and build the library using the available makefile.
- Linux: Install TA-Lib using your distribution's package manager or compile from source following the instructions available on the TA-Lib GitHub repository.
- Alpaca Paper Trading Account: Create an account at [Alpaca Markets](https://alpaca.markets/) and generate `API_KEY` and `SECRET_KEY` for your paper trading account.
## Installation Steps
**Clone the Repository:** Clone the remote repository to your local machine.
`git clone git@github.com:drew2323/v2trading.git <name_of_local_folder>`
**Install Python:** Ensure Python 3.10.11 is installed on your system.
**Create a Virtual Environment:** Set up a Python virtual environment.
`python -m venv <path_to_venv_folder>`
**Activate Virtual Environment:**
- Windows: `source ./<venv_folder>/Scripts/activate`
- Linux: `source ./<venv_folder>/bin/activate`
**Install Dependencies:** Install the program requirements.
`pip install -r requirements.txt`
Note: It's permissible to comment out references to `keras` and `tensorflow` modules, as well as the `ml-room` repository in `requirements.txt`.
**Environment Variables:** In `run.sh`, modify the `VIRTUAL_ENV_DIR` and `PYTHON_TO_USE` variables as necessary.
**Data Directory:** Navigate to `DATA_DIR` and create folders: `aggcache`, `tradecache`, and `models`.
**Media and Static Folders:** Create `media` and `static` folders one level above the repository directory, and create the `.env` file there as well.
**Database Setup:** Create the `v2trading.db` file using SQL commands from `v2trading_create_db.sql`.
```
import sqlite3

with open("v2trading_create_db.sql", "r") as f:
    sql_statements = f.read()

conn = sqlite3.connect('v2trading.db')
cursor = conn.cursor()
cursor.executescript(sql_statements)
conn.commit()
conn.close()
```
Ensure the `config_table` is not empty by making an initial entry.
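For example, a minimal initial entry (the column names below are hypothetical; use the actual `config_table` schema from `v2trading_create_db.sql`):

```
import sqlite3

conn = sqlite3.connect("v2trading.db")
# Illustrative only: substitute the real config_table columns
# defined in v2trading_create_db.sql.
conn.execute(
    "INSERT INTO config_table (item_name, json_data) VALUES (?, ?)",
    ("default", "{}"),
)
conn.commit()
conn.close()
```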
**Start the Application:** Run `main.py` in VSCode to start the application.
**Accessing the Application:** If the uvicorn server runs successfully at `http://0.0.0.0:8000`, access the application at `http://localhost:8000/static/`.
**Database Configuration:** Add dynamic button and JS configurations to the `config_table` in `v2trading.db` via the "Config" section on the main page.
Please replace placeholders (e.g., `<name_of_local_folder>`, `<path_to_venv_folder>`) with your actual paths and details. Follow these instructions to ensure the application is set up correctly and ready for use.
## Environment variables
The trading platform can support N different accounts. Their API keys are stored as environment variables in a `.env` file in the root directory.
The trading-API account is selected when each strategy is run. However, for realtime websocket data, ACCOUNT1 is always used for all strategies. The data feed selection (iex vs. sip) is set by the LIVE_DATA_FEED environment variable.
The `.env` file should contain:
```
ACCOUNT1_LIVE_API_KEY=<ACCOUNT1_LIVE_API_KEY>
ACCOUNT1_LIVE_SECRET_KEY=<ACCOUNT1_LIVE_SECRET_KEY>
ACCOUNT1_LIVE_FEED=sip
ACCOUNT1_PAPER_API_KEY=<ACCOUNT1_PAPER_API_KEY>
ACCOUNT1_PAPER_SECRET_KEY=<ACCOUNT1_PAPER_SECRET_KEY>
ACCOUNT1_PAPER_FEED=sip
ACCOUNT2_PAPER_API_KEY=<ACCOUNT2_PAPER_API_KEY>
ACCOUNT2_PAPER_SECRET_KEY=<ACCOUNT2_PAPER_SECRET_KEY>
ACCOUNT2_PAPER_FEED=iex
WEB_API_KEY=<pass-for-webapi>
```
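A minimal sketch of reading these variables at startup, assuming `python-dotenv` (already in `requirements.txt`). `account_keys` is a hypothetical helper for illustration; the platform's actual config loading lives in `v2realbot.config` and may differ:

```
import os
from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # reads the .env file described above

def account_keys(account: str = "ACCOUNT1", mode: str = "PAPER") -> tuple:
    """Return (api_key, secret_key, feed) for an account/mode pair."""
    prefix = f"{account}_{mode}"
    return (
        os.environ[f"{prefix}_API_KEY"],
        os.environ[f"{prefix}_SECRET_KEY"],
        os.environ.get(f"{prefix}_FEED", "iex"),
    )

api_key, secret_key, feed = account_keys("ACCOUNT1", "PAPER")
```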

requirements.txt

@@ -1,34 +1,21 @@
absl-py==2.0.0
alpaca==1.0.0
alpaca-py==0.18.1
alpaca-py==0.7.1
altair==4.2.2
annotated-types==0.6.0
anyio==3.6.2
appdirs==1.4.4
appnope==0.1.3
APScheduler==3.10.4
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.2.1
astunparse==1.6.3
async-lru==2.0.4
attrs==22.2.0
Babel==2.15.0
beautifulsoup4==4.12.3
better-exceptions==0.3.3
bleach==6.0.0
blinker==1.5
bottle==0.12.25
cachetools==5.3.0
CD==1.1.0
certifi==2022.12.7
cffi==1.16.0
chardet==5.1.0
charset-normalizer==3.0.1
click==8.1.3
colorama==0.4.6
comm==0.1.4
contourpy==1.0.7
cycler==0.11.0
dash==2.9.1
@@ -36,189 +23,90 @@ dash-bootstrap-components==1.4.1
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
dateparser==1.1.8
debugpy==1.8.1
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.7
dm-tree==0.1.8
entrypoints==0.4
exceptiongroup==1.1.3
executing==1.2.0
fastapi==0.109.2
fastjsonschema==2.19.1
filelock==3.13.1
fastapi==0.95.0
Flask==2.2.3
flatbuffers==23.5.26
fonttools==4.39.0
fpdf2==2.7.6
fqdn==1.5.1
gast==0.4.0
gitdb==4.0.10
GitPython==3.1.31
google-auth==2.23.0
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
greenlet==3.0.3
grpcio==1.58.0
h11==0.14.0
h5py==3.10.0
html2text==2024.2.26
httpcore==1.0.5
httpx==0.27.0
humanize==4.9.0
h5py==3.9.0
icecream==2.1.3
idna==3.4
imageio==2.31.6
importlib-metadata==6.1.0
ipykernel==6.29.4
ipython==8.17.2
ipywidgets==8.1.1
isoduration==20.11.0
itables==2.0.1
itsdangerous==2.1.2
jax==0.4.23
jaxlib==0.4.23
jedi==0.19.1
Jinja2==3.1.2
joblib==1.3.2
json5==0.9.25
jsonpointer==2.4
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
jupyter-events==0.10.0
jupyter-lsp==2.2.5
jupyter_client==8.6.1
jupyter_core==5.7.2
jupyter_server==2.14.0
jupyter_server_terminals==0.5.3
jupyterlab==4.1.8
jupyterlab-widgets==3.0.9
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.1
kaleido==0.2.1
keras==3.0.2
keras-core==0.1.7
keras-nightly==3.0.3.dev2024010203
keras-nlp-nightly==0.7.0.dev2024010203
keras-tcn @ git+https://github.com/drew2323/keras-tcn.git@4bddb17a02cb2f31c9fe2e8f616b357b1ddb0e11
jsonschema==4.17.3
keras==2.13.1
kiwisolver==1.4.4
libclang==16.0.6
lightweight-charts @ git+https://github.com/drew2323/lightweight-charts-python@10fd42f785182edfbf6b46a19a4ef66e85985a23
llvmlite==0.39.1
Markdown==3.4.3
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.8.2
matplotlib-inline==0.1.6
matplotlib==3.7.1
mdurl==0.1.2
mistune==3.0.2
ml-dtypes==0.3.1
mlroom @ git+https://github.com/drew2323/mlroom.git@692900e274c4e0542d945d231645c270fc508437
mplfinance==0.12.10b0
msgpack==1.0.4
mypy-extensions==1.0.0
namex==0.0.7
nbclient==0.10.0
nbconvert==7.16.4
nbformat==5.10.4
nest-asyncio==1.6.0
newtulipy==0.4.6
notebook_shim==0.2.4
numba==0.56.4
numpy==1.23.5
numpy==1.24.2
oauthlib==3.2.2
opt-einsum==3.3.0
orjson==3.9.10
overrides==7.7.0
packaging==23.0
pandas==2.2.1
pandocfilters==1.5.1
pandas==1.5.3
param==1.13.0
parso==0.8.3
patsy==0.5.6
pexpect==4.8.0
Pillow==9.4.0
platformdirs==4.2.0
plotly==5.22.0
prometheus_client==0.20.0
prompt-toolkit==3.0.39
plotly==5.13.1
proto-plus==1.22.2
protobuf==3.20.3
proxy-tools==0.1.0
psutil==5.9.8
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==11.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.22
pyct==0.5.0
pydantic==2.6.4
pydantic_core==2.16.3
pydantic==1.10.5
pydeck==0.8.0
Pygments==2.14.0
pyinstrument==4.5.3
Pympler==1.0.1
pyobjc-core==10.3
pyobjc-framework-Cocoa==10.3
pyobjc-framework-Security==10.3
pyobjc-framework-WebKit==10.3
pyparsing==3.0.9
pyrsistent==0.19.3
pysos==1.3.0
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-multipart==0.0.6
pytz==2022.7.1
pytz-deprecation-shim==0.1.0.post0
pyviz-comms==2.2.1
PyWavelets==1.5.0
pywebview==5.1
PyYAML==6.0
pyzmq==25.1.2
referencing==0.35.1
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.3.1
rpds-py==0.18.0
rsa==4.9
schedule==1.2.1
scikit-learn==1.3.2
scikit-learn==1.3.1
scipy==1.11.2
seaborn==0.12.2
semver==2.13.0
Send2Trash==1.8.3
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.27
sseclient-py==1.7.2
stack-data==0.6.3
starlette==0.36.3
statsmodels==0.14.1
starlette==0.26.1
streamlit==1.20.0
structlog==23.1.0
TA-Lib==0.4.28
tb-nightly==2.16.0a20240102
tenacity==8.2.2
tensorboard==2.15.1
tensorboard==2.13.0
tensorboard-data-server==0.7.1
tensorflow-addons==0.23.0
tensorflow-estimator==2.15.0
tensorflow==2.13.0
tensorflow-estimator==2.13.0
tensorflow-io-gcs-filesystem==0.34.0
termcolor==2.3.0
terminado==0.18.1
tf-estimator-nightly==2.14.0.dev2023080308
tf-nightly==2.16.0.dev20240101
tf_keras-nightly==2.16.0.dev2023123010
threadpoolctl==3.2.0
tinycss2==1.3.0
tinydb==4.7.1
tinydb-serialization==2.1.0
tinyflux==0.4.0
@@ -227,24 +115,15 @@ tomli==2.0.1
toolz==0.12.0
tornado==6.2
tqdm==4.65.0
traitlets==5.13.0
typeguard==2.13.3
types-python-dateutil==2.9.0.20240316
typing_extensions==4.9.0
typing_extensions==4.5.0
tzdata==2023.2
tzlocal==4.3
uri-template==1.3.0
urllib3==1.26.14
uvicorn==0.21.1
-e git+https://github.com/drew2323/v2trading.git@1f85b271dba2b9baf2c61b591a08849e9d684374#egg=v2realbot
#-e git+https://github.com/drew2323/v2trading.git@940348412f67ecd551ef8d0aaedf84452abf1320#egg=v2realbot
validators==0.20.0
vectorbtpro @ file:///Users/davidbrazda/Downloads/vectorbt.pro-2024.2.22
wcwidth==0.2.9
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
websockets==11.0.3
websockets==10.4
Werkzeug==2.2.3
widgetsnbextension==4.0.9
wrapt==1.14.1
wrapt==1.15.0
zipp==3.15.0


@@ -1,243 +0,0 @@
absl-py
alpaca
alpaca-py
altair
annotated-types
anyio
appdirs
appnope
APScheduler
argon2-cffi
argon2-cffi-bindings
arrow
asttokens
astunparse
async-lru
attrs
Babel
beautifulsoup4
better-exceptions
bleach
blinker
bottle
cachetools
CD
certifi
cffi
chardet
charset-normalizer
click
colorama
comm
contourpy
cycler
dash
dash-bootstrap-components
dash-core-components
dash-html-components
dash-table
dateparser
debugpy
decorator
defusedxml
dill
dm-tree
entrypoints
exceptiongroup
executing
fastapi
fastjsonschema
filelock
Flask
flatbuffers
fonttools
fpdf2
fqdn
gast
gitdb
GitPython
google-auth
google-auth-oauthlib
google-pasta
greenlet
grpcio
h11
h5py
html2text
httpcore
httpx
humanize
icecream
idna
imageio
importlib-metadata
ipykernel
ipython
ipywidgets
isoduration
itables
itsdangerous
jax
jaxlib
jedi
Jinja2
joblib
json5
jsonpointer
jsonschema
jsonschema-specifications
jupyter-events
jupyter-lsp
jupyter_client
jupyter_core
jupyter_server
jupyter_server_terminals
jupyterlab
jupyterlab-widgets
jupyterlab_pygments
jupyterlab_server
kaleido
keras
keras-core
keras-nightly
keras-nlp-nightly
keras-tcn @ git+https://github.com/drew2323/keras-tcn.git
kiwisolver
libclang
lightweight-charts @ git+https://github.com/drew2323/lightweight-charts-python.git
llvmlite
Markdown
markdown-it-py
MarkupSafe
matplotlib
matplotlib-inline
mdurl
mistune
ml-dtypes
mlroom @ git+https://github.com/drew2323/mlroom.git
mplfinance
msgpack
mypy-extensions
namex
nbclient
nbconvert
nbformat
nest-asyncio
newtulipy
notebook_shim
numba
numpy
oauthlib
opt-einsum
orjson
overrides
packaging
pandas
pandocfilters
param
parso
patsy
pexpect
Pillow
platformdirs
plotly
prometheus_client
prompt-toolkit
proto-plus
protobuf
proxy-tools
psutil
ptyprocess
pure-eval
pyarrow
pyasn1
pyasn1-modules
pycparser
pyct
pydantic
pydantic_core
pydeck
Pygments
pyinstrument
pyparsing
pyrsistent
pysos
python-dateutil
python-dotenv
python-json-logger
python-multipart
pytz
pytz-deprecation-shim
pyviz-comms
PyWavelets
pywebview
PyYAML
pyzmq
referencing
regex
requests
requests-oauthlib
rfc3339-validator
rfc3986-validator
rich
rpds-py
rsa
schedule
scikit-learn
scipy
seaborn
semver
Send2Trash
six
smmap
sniffio
soupsieve
SQLAlchemy
sseclient-py
stack-data
starlette
statsmodels
streamlit
structlog
TA-Lib
tb-nightly
tenacity
tensorboard
tensorboard-data-server
tensorflow-addons
tensorflow-estimator
tensorflow-io-gcs-filesystem
termcolor
terminado
tf-estimator-nightly
tf-nightly
tf_keras-nightly
threadpoolctl
tinycss2
tinydb
tinydb-serialization
tinyflux
toml
tomli
toolz
tornado
tqdm
traitlets
typeguard
types-python-dateutil
typing_extensions
tzdata
tzlocal
uri-template
urllib3
uvicorn
validators
wcwidth
webcolors
webencodings
websocket-client
websockets
Werkzeug
widgetsnbextension
wrapt
zipp

File diff suppressed because it is too large


@@ -1,410 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Loading trades and vectorized aggregation\n",
"Describes how to fetch trades (remote/cached) and use new vectorized aggregation to aggregate bars of given type (time, volume, dollar) and resolution\n",
"\n",
"`fetch_trades_parallel` enables to fetch trades of given symbol and interval, also can filter conditions and minimum size. return `trades_df`\n",
"`aggregate_trades` acceptss `trades_df` and ressolution and type of bars (VOLUME, TIME, DOLLAR) and return aggregated ohlcv dataframe `ohlcv_df`"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Activating profile profile1\n",
"</pre>\n"
],
"text/plain": [
"Activating profile profile1\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"trades_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
"trades_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n",
"ohlcv_df-BAC-2024-01-11T09:30:00-2024-01-12T16:00:00.parquet\n",
"ohlcv_df-SPY-2024-01-01T09:30:00-2024-05-14T16:00:00.parquet\n"
]
}
],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"from numba import jit\n",
"from alpaca.data.historical import StockHistoricalDataClient\n",
"from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR\n",
"from alpaca.data.requests import StockTradesRequest\n",
"from v2realbot.enums.enums import BarType\n",
"import time\n",
"from datetime import datetime\n",
"from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY, send_to_telegram, fetch_calendar_data\n",
"import pyarrow\n",
"from v2realbot.loader.aggregator_vectorized import fetch_daily_stock_trades, fetch_trades_parallel, generate_time_bars_nb, aggregate_trades\n",
"import vectorbtpro as vbt\n",
"import v2realbot.utils.config_handler as cfh\n",
"\n",
"vbt.settings.set_theme(\"dark\")\n",
"vbt.settings['plotting']['layout']['width'] = 1280\n",
"vbt.settings.plotting.auto_rangebreaks = True\n",
"# Set the option to display with pagination\n",
"pd.set_option('display.notebook_repr_html', True)\n",
"pd.set_option('display.max_rows', 20) # Number of rows per page\n",
"# pd.set_option('display.float_format', '{:.9f}'.format)\n",
"\n",
"\n",
"#trade filtering\n",
"exclude_conditions = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES') #standard ['C','O','4','B','7','V','P','W','U','Z','F']\n",
"minsize = 100\n",
"\n",
"symbol = \"SPY\"\n",
"#datetime in zoneNY \n",
"day_start = datetime(2024, 1, 1, 9, 30, 0)\n",
"day_stop = datetime(2024, 1, 14, 16, 00, 0)\n",
"day_start = zoneNY.localize(day_start)\n",
"day_stop = zoneNY.localize(day_stop)\n",
"#filename of trades_df parquet, date are in isoformat but without time zone part\n",
"dir = DATA_DIR + \"/notebooks/\"\n",
"#parquet interval cache contains exclude conditions and minsize filtering\n",
"file_trades = dir + f\"trades_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}-{exclude_conditions}-{minsize}.parquet\"\n",
"#file_trades = dir + f\"trades_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}.parquet\"\n",
"file_ohlcv = dir + f\"ohlcv_df-{symbol}-{day_start.strftime('%Y-%m-%dT%H:%M:%S')}-{day_stop.strftime('%Y-%m-%dT%H:%M:%S')}-{exclude_conditions}-{minsize}.parquet\"\n",
"\n",
"#PRINT all parquet in directory\n",
"import os\n",
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
"for f in files:\n",
" print(f)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NOT FOUND. Fetching from remote\n"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[2], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m trades_df \u001b[38;5;241m=\u001b[39m \u001b[43mfetch_daily_stock_trades\u001b[49m\u001b[43m(\u001b[49m\u001b[43msymbol\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mday_start\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mday_stop\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mexclude_conditions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mexclude_conditions\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mminsize\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mminsize\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mforce_remote\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_retries\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m5\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbackoff_factor\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2\u001b[0m trades_df\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/v2realbot/loader/aggregator_vectorized.py:200\u001b[0m, in \u001b[0;36mfetch_daily_stock_trades\u001b[0;34m(symbol, start, end, exclude_conditions, minsize, force_remote, max_retries, backoff_factor)\u001b[0m\n\u001b[1;32m 198\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m attempt \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(max_retries):\n\u001b[1;32m 199\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 200\u001b[0m tradesResponse \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_stock_trades\u001b[49m\u001b[43m(\u001b[49m\u001b[43mstockTradeRequest\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 201\u001b[0m is_empty \u001b[38;5;241m=\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m tradesResponse[symbol]\n\u001b[1;32m 202\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRemote fetched: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mis_empty\u001b[38;5;132;01m=}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m, start, end)\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/data/historical/stock.py:144\u001b[0m, in \u001b[0;36mStockHistoricalDataClient.get_stock_trades\u001b[0;34m(self, request_params)\u001b[0m\n\u001b[1;32m 141\u001b[0m params \u001b[38;5;241m=\u001b[39m request_params\u001b[38;5;241m.\u001b[39mto_request_fields()\n\u001b[1;32m 143\u001b[0m \u001b[38;5;66;03m# paginated get request for market data api\u001b[39;00m\n\u001b[0;32m--> 144\u001b[0m raw_trades \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_data_get\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 145\u001b[0m \u001b[43m \u001b[49m\u001b[43mendpoint_data_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtrades\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 146\u001b[0m \u001b[43m \u001b[49m\u001b[43mendpoint_asset_class\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstocks\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 147\u001b[0m \u001b[43m \u001b[49m\u001b[43mapi_version\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mv2\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 148\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 149\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 151\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_use_raw_data:\n\u001b[1;32m 152\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m raw_trades\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/data/historical/stock.py:338\u001b[0m, in \u001b[0;36mStockHistoricalDataClient._data_get\u001b[0;34m(self, endpoint_asset_class, endpoint_data_type, api_version, symbol_or_symbols, limit, page_limit, extension, **kwargs)\u001b[0m\n\u001b[1;32m 335\u001b[0m params[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlimit\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m actual_limit\n\u001b[1;32m 336\u001b[0m params[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpage_token\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m page_token\n\u001b[0;32m--> 338\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mapi_version\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mapi_version\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 340\u001b[0m \u001b[38;5;66;03m# TODO: Merge parsing if possible\u001b[39;00m\n\u001b[1;32m 341\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m extension \u001b[38;5;241m==\u001b[39m DataExtensionType\u001b[38;5;241m.\u001b[39mSNAPSHOT:\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:221\u001b[0m, in \u001b[0;36mRESTClient.get\u001b[0;34m(self, path, data, **kwargs)\u001b[0m\n\u001b[1;32m 210\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mget\u001b[39m(\u001b[38;5;28mself\u001b[39m, path: \u001b[38;5;28mstr\u001b[39m, data: Union[\u001b[38;5;28mdict\u001b[39m, \u001b[38;5;28mstr\u001b[39m] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m HTTPResult:\n\u001b[1;32m 211\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Performs a single GET request\u001b[39;00m\n\u001b[1;32m 212\u001b[0m \n\u001b[1;32m 213\u001b[0m \u001b[38;5;124;03m Args:\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 219\u001b[0m \u001b[38;5;124;03m dict: The response\u001b[39;00m\n\u001b[1;32m 220\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 221\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mGET\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:129\u001b[0m, in \u001b[0;36mRESTClient._request\u001b[0;34m(self, method, path, data, base_url, api_version)\u001b[0m\n\u001b[1;32m 127\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m retry \u001b[38;5;241m>\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m 128\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 129\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_one_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mopts\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mretry\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 130\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m RetryException:\n\u001b[1;32m 131\u001b[0m time\u001b[38;5;241m.\u001b[39msleep(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_retry_wait)\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/alpaca/common/rest.py:193\u001b[0m, in \u001b[0;36mRESTClient._one_request\u001b[0;34m(self, method, url, opts, retry)\u001b[0m\n\u001b[1;32m 174\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_one_request\u001b[39m(\u001b[38;5;28mself\u001b[39m, method: \u001b[38;5;28mstr\u001b[39m, url: \u001b[38;5;28mstr\u001b[39m, opts: \u001b[38;5;28mdict\u001b[39m, retry: \u001b[38;5;28mint\u001b[39m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28mdict\u001b[39m:\n\u001b[1;32m 175\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Perform one request, possibly raising RetryException in the case\u001b[39;00m\n\u001b[1;32m 176\u001b[0m \u001b[38;5;124;03m the response is 429. Otherwise, if error text contain \"code\" string,\u001b[39;00m\n\u001b[1;32m 177\u001b[0m \u001b[38;5;124;03m then it decodes to json object and returns APIError.\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 191\u001b[0m \u001b[38;5;124;03m dict: The response data\u001b[39;00m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m--> 193\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_session\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mopts\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 195\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 196\u001b[0m response\u001b[38;5;241m.\u001b[39mraise_for_status()\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/sessions.py:589\u001b[0m, in \u001b[0;36mSession.request\u001b[0;34m(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)\u001b[0m\n\u001b[1;32m 584\u001b[0m send_kwargs \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 585\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtimeout\u001b[39m\u001b[38;5;124m\"\u001b[39m: timeout,\n\u001b[1;32m 586\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mallow_redirects\u001b[39m\u001b[38;5;124m\"\u001b[39m: allow_redirects,\n\u001b[1;32m 587\u001b[0m }\n\u001b[1;32m 588\u001b[0m send_kwargs\u001b[38;5;241m.\u001b[39mupdate(settings)\n\u001b[0;32m--> 589\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mprep\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43msend_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 591\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m resp\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/sessions.py:703\u001b[0m, in \u001b[0;36mSession.send\u001b[0;34m(self, request, **kwargs)\u001b[0m\n\u001b[1;32m 700\u001b[0m start \u001b[38;5;241m=\u001b[39m preferred_clock()\n\u001b[1;32m 702\u001b[0m \u001b[38;5;66;03m# Send the request\u001b[39;00m\n\u001b[0;32m--> 703\u001b[0m r \u001b[38;5;241m=\u001b[39m \u001b[43madapter\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 705\u001b[0m \u001b[38;5;66;03m# Total elapsed time of the request (approximately)\u001b[39;00m\n\u001b[1;32m 706\u001b[0m elapsed \u001b[38;5;241m=\u001b[39m preferred_clock() \u001b[38;5;241m-\u001b[39m start\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/requests/adapters.py:486\u001b[0m, in \u001b[0;36mHTTPAdapter.send\u001b[0;34m(self, request, stream, timeout, verify, cert, proxies)\u001b[0m\n\u001b[1;32m 483\u001b[0m timeout \u001b[38;5;241m=\u001b[39m TimeoutSauce(connect\u001b[38;5;241m=\u001b[39mtimeout, read\u001b[38;5;241m=\u001b[39mtimeout)\n\u001b[1;32m 485\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 486\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[43mconn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43murlopen\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 487\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 488\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 489\u001b[0m \u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 490\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 491\u001b[0m \u001b[43m \u001b[49m\u001b[43mredirect\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 492\u001b[0m \u001b[43m \u001b[49m\u001b[43massert_same_host\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 493\u001b[0m \u001b[43m \u001b[49m\u001b[43mpreload_content\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 494\u001b[0m \u001b[43m \u001b[49m\u001b[43mdecode_content\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 495\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmax_retries\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 496\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 497\u001b[0m \u001b[43m \u001b[49m\u001b[43mchunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mchunked\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 498\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 500\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (ProtocolError, \u001b[38;5;167;01mOSError\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m err:\n\u001b[1;32m 501\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mConnectionError\u001b[39;00m(err, request\u001b[38;5;241m=\u001b[39mrequest)\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:703\u001b[0m, in \u001b[0;36mHTTPConnectionPool.urlopen\u001b[0;34m(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\u001b[0m\n\u001b[1;32m 700\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_prepare_proxy(conn)\n\u001b[1;32m 702\u001b[0m \u001b[38;5;66;03m# Make the request on the httplib connection object.\u001b[39;00m\n\u001b[0;32m--> 703\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_make_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 704\u001b[0m \u001b[43m \u001b[49m\u001b[43mconn\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 705\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 706\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 707\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout_obj\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 708\u001b[0m \u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mbody\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 709\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 710\u001b[0m \u001b[43m \u001b[49m\u001b[43mchunked\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mchunked\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 711\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 713\u001b[0m \u001b[38;5;66;03m# If we're going to release the connection in ``finally:``, then\u001b[39;00m\n\u001b[1;32m 714\u001b[0m \u001b[38;5;66;03m# the response doesn't need to know about the connection. Otherwise\u001b[39;00m\n\u001b[1;32m 715\u001b[0m \u001b[38;5;66;03m# it will also try to release it and we'll have a double-release\u001b[39;00m\n\u001b[1;32m 716\u001b[0m \u001b[38;5;66;03m# mess.\u001b[39;00m\n\u001b[1;32m 717\u001b[0m response_conn \u001b[38;5;241m=\u001b[39m conn \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m release_conn \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:449\u001b[0m, in \u001b[0;36mHTTPConnectionPool._make_request\u001b[0;34m(self, conn, method, url, timeout, chunked, **httplib_request_kw)\u001b[0m\n\u001b[1;32m 444\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m conn\u001b[38;5;241m.\u001b[39mgetresponse()\n\u001b[1;32m 445\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 446\u001b[0m \u001b[38;5;66;03m# Remove the TypeError from the exception chain in\u001b[39;00m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;66;03m# Python 3 (including for exceptions like SystemExit).\u001b[39;00m\n\u001b[1;32m 448\u001b[0m \u001b[38;5;66;03m# Otherwise it looks like a bug in the code.\u001b[39;00m\n\u001b[0;32m--> 449\u001b[0m \u001b[43msix\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mraise_from\u001b[49m\u001b[43m(\u001b[49m\u001b[43me\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\n\u001b[1;32m 450\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (SocketTimeout, BaseSSLError, SocketError) \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 451\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_raise_timeout(err\u001b[38;5;241m=\u001b[39me, url\u001b[38;5;241m=\u001b[39murl, timeout_value\u001b[38;5;241m=\u001b[39mread_timeout)\n",
"File \u001b[0;32m<string>:3\u001b[0m, in \u001b[0;36mraise_from\u001b[0;34m(value, from_value)\u001b[0m\n",
"File \u001b[0;32m~/Documents/Development/python/v2trading/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:444\u001b[0m, in \u001b[0;36mHTTPConnectionPool._make_request\u001b[0;34m(self, conn, method, url, timeout, chunked, **httplib_request_kw)\u001b[0m\n\u001b[1;32m 441\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m:\n\u001b[1;32m 442\u001b[0m \u001b[38;5;66;03m# Python 3\u001b[39;00m\n\u001b[1;32m 443\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 444\u001b[0m httplib_response \u001b[38;5;241m=\u001b[39m \u001b[43mconn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgetresponse\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 445\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 446\u001b[0m \u001b[38;5;66;03m# Remove the TypeError from the exception chain in\u001b[39;00m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;66;03m# Python 3 (including for exceptions like SystemExit).\u001b[39;00m\n\u001b[1;32m 448\u001b[0m \u001b[38;5;66;03m# Otherwise it looks like a bug in the code.\u001b[39;00m\n\u001b[1;32m 449\u001b[0m six\u001b[38;5;241m.\u001b[39mraise_from(e, \u001b[38;5;28;01mNone\u001b[39;00m)\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:1375\u001b[0m, in \u001b[0;36mHTTPConnection.getresponse\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 1373\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 1374\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m-> 1375\u001b[0m \u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbegin\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1376\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mConnectionError\u001b[39;00m:\n\u001b[1;32m 1377\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclose()\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:318\u001b[0m, in \u001b[0;36mHTTPResponse.begin\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 316\u001b[0m \u001b[38;5;66;03m# read until we get a non-100 response\u001b[39;00m\n\u001b[1;32m 317\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[0;32m--> 318\u001b[0m version, status, reason \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_read_status\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 319\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m status \u001b[38;5;241m!=\u001b[39m CONTINUE:\n\u001b[1;32m 320\u001b[0m \u001b[38;5;28;01mbreak\u001b[39;00m\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:279\u001b[0m, in \u001b[0;36mHTTPResponse._read_status\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 278\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_read_status\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[0;32m--> 279\u001b[0m line \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mreadline\u001b[49m\u001b[43m(\u001b[49m\u001b[43m_MAXLINE\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124miso-8859-1\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 280\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(line) \u001b[38;5;241m>\u001b[39m _MAXLINE:\n\u001b[1;32m 281\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m LineTooLong(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstatus line\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py:705\u001b[0m, in \u001b[0;36mSocketIO.readinto\u001b[0;34m(self, b)\u001b[0m\n\u001b[1;32m 703\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[1;32m 704\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 705\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_sock\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrecv_into\u001b[49m\u001b[43m(\u001b[49m\u001b[43mb\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 706\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m timeout:\n\u001b[1;32m 707\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_timeout_occurred \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1274\u001b[0m, in \u001b[0;36mSSLSocket.recv_into\u001b[0;34m(self, buffer, nbytes, flags)\u001b[0m\n\u001b[1;32m 1270\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m flags \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m 1271\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 1272\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnon-zero flags not allowed in calls to recv_into() on \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;241m%\u001b[39m\n\u001b[1;32m 1273\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m)\n\u001b[0;32m-> 1274\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[43mnbytes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbuffer\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1275\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1276\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28msuper\u001b[39m()\u001b[38;5;241m.\u001b[39mrecv_into(buffer, nbytes, flags)\n",
"File \u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1130\u001b[0m, in \u001b[0;36mSSLSocket.read\u001b[0;34m(self, len, buffer)\u001b[0m\n\u001b[1;32m 1128\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 1129\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m buffer \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_sslobj\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mlen\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbuffer\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1131\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1132\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_sslobj\u001b[38;5;241m.\u001b[39mread(\u001b[38;5;28mlen\u001b[39m)\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"trades_df = fetch_daily_stock_trades(symbol, day_start, day_stop, exclude_conditions=exclude_conditions, minsize=minsize, force_remote=False, max_retries=5, backoff_factor=1)\n",
"trades_df"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"#Either load trades or ohlcv from parquet if exists\n",
"\n",
"#trades_df = fetch_trades_parallel(symbol, day_start, day_stop, exclude_conditions=exclude_conditions, minsize=50, max_workers=20) #exclude_conditions=['C','O','4','B','7','V','P','W','U','Z','F'])\n",
"# trades_df.to_parquet(file_trades, engine='pyarrow', compression='gzip')\n",
"\n",
"trades_df = pd.read_parquet(file_trades,engine='pyarrow')\n",
"ohlcv_df = aggregate_trades(symbol=symbol, trades_df=trades_df, resolution=1, type=BarType.TIME)\n",
"ohlcv_df.to_parquet(file_ohlcv, engine='pyarrow', compression='gzip')\n",
"\n",
"# ohlcv_df = pd.read_parquet(file_ohlcv,engine='pyarrow')\n",
"# trades_df = pd.read_parquet(file_trades,engine='pyarrow')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#list all files is dir directory with parquet extension\n",
"dir = DATA_DIR + \"/notebooks/\"\n",
"import os\n",
"files = [f for f in os.listdir(dir) if f.endswith(\".parquet\")]\n",
"file_name = \"\"\n",
"ohlcv_df = pd.read_parquet(file_ohlcv,engine='pyarrow')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ohlcv_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"# Calculate daily returns\n",
"ohlcv_df['returns'] = ohlcv_df['close'].pct_change().dropna()\n",
"#same as above but pct_change is from 3 datapoints back, but only if it is the same date, else na\n",
"\n",
"\n",
"# Plot the probability distribution curve\n",
"plt.figure(figsize=(10, 6))\n",
"sns.histplot(df['returns'].dropna(), kde=True, stat='probability', bins=30)\n",
"plt.title('Probability Distribution of Daily Returns')\n",
"plt.xlabel('Daily Returns')\n",
"plt.ylabel('Probability')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.metrics import accuracy_score\n",
"\n",
"# Define the intervals from 5 to 20 s, returns for each interval\n",
"#maybe use rolling window?\n",
"intervals = range(5, 21, 5)\n",
"\n",
"# Create columns for percentage returns\n",
"rolling_window = 50\n",
"\n",
"# Normalize the returns using rolling mean and std\n",
"for N in intervals:\n",
" column_name = f'returns_{N}'\n",
" rolling_mean = ohlcv_df[column_name].rolling(window=rolling_window).mean()\n",
" rolling_std = ohlcv_df[column_name].rolling(window=rolling_window).std()\n",
" ohlcv_df[f'norm_{column_name}'] = (ohlcv_df[column_name] - rolling_mean) / rolling_std\n",
"\n",
"# Display the dataframe with normalized return columns\n",
"ohlcv_df\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Calculate the sum of the normalized return columns for each row\n",
"ohlcv_df['sum_norm_returns'] = ohlcv_df[[f'norm_returns_{N}' for N in intervals]].sum(axis=1)\n",
"\n",
"# Sort the DataFrame based on the sum of normalized returns in descending order\n",
"df_sorted = ohlcv_df.sort_values(by='sum_norm_returns', ascending=False)\n",
"\n",
"# Display the top rows with the highest sum of normalized returns\n",
"df_sorted\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Drop initial rows with NaN values due to pct_change\n",
"ohlcv_df.dropna(inplace=True)\n",
"\n",
"# Plotting the probability distribution curves\n",
"plt.figure(figsize=(14, 8))\n",
"for N in intervals:\n",
" sns.kdeplot(ohlcv_df[f'returns_{N}'].dropna(), label=f'Returns {N}', fill=True)\n",
"\n",
"plt.title('Probability Distribution of Percentage Returns')\n",
"plt.xlabel('Percentage Return')\n",
"plt.ylabel('Density')\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"# Plot the probability distribution curve\n",
"plt.figure(figsize=(10, 6))\n",
"sns.histplot(ohlcv_df['returns'].dropna(), kde=True, stat='probability', bins=30)\n",
"plt.title('Probability Distribution of Daily Returns')\n",
"plt.xlabel('Daily Returns')\n",
"plt.ylabel('Probability')\n",
"plt.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#show only rows from ohlcv_df where returns > 0.005\n",
"ohlcv_df[ohlcv_df['returns'] > 0.0005]\n",
"\n",
"#ohlcv_df[ohlcv_df['returns'] < -0.005]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#ohlcv where index = date 2024-03-13 and between hour 12\n",
"\n",
"a = ohlcv_df.loc['2024-03-13 12:00:00':'2024-03-13 13:00:00']\n",
"a"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ohlcv_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trades_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ohlcv_df.info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trades_df.to_parquet(\"trades_df-spy-0111-0111.parquett\", engine='pyarrow', compression='gzip')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trades_df.to_parquet(\"trades_df-spy-111-0516.parquett\", engine='pyarrow', compression='gzip', allow_truncated_timestamps=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ohlcv_df.to_parquet(\"ohlcv_df-spy-111-0516.parquett\", engine='pyarrow', compression='gzip')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"basic_data = vbt.Data.from_data(vbt.symbol_dict({symbol: ohlcv_df}), tz_convert=zoneNY)\n",
"vbt.settings['plotting']['auto_rangebreaks'] = True\n",
"basic_data.ohlcv.plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#access just BCA\n",
"#df_filtered = df.loc[\"BAC\"]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@@ -1,421 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from v2realbot.tools.loadbatch import load_batch\n",
"from v2realbot.utils.utils import zoneNY\n",
"import pandas as pd\n",
"import numpy as np\n",
"import vectorbtpro as vbt\n",
"from itables import init_notebook_mode, show\n",
"\n",
"init_notebook_mode(all_interactive=True)\n",
"\n",
"vbt.settings.set_theme(\"dark\")\n",
"vbt.settings['plotting']['layout']['width'] = 1280\n",
"vbt.settings.plotting.auto_rangebreaks = True\n",
"# Set the option to display with pagination\n",
"pd.set_option('display.notebook_repr_html', True)\n",
"pd.set_option('display.max_rows', 10) # Number of rows per page\n",
"\n",
"res, df = load_batch(batch_id=\"0fb5043a\", #46 days 1.3 - 6.5.\n",
" space_resolution_evenly=False,\n",
" indicators_columns=[\"Rsi14\"],\n",
" main_session_only=True,\n",
" verbose = False)\n",
"if res < 0:\n",
" print(\"Error\" + str(res) + str(df))\n",
"df = df[\"bars\"]\n",
"\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# filter dates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#filter na dny\n",
"# dates_of_interest = pd.to_datetime(['2024-04-22', '2024-04-23']).tz_localize('US/Eastern')\n",
"# filtered_df = df.loc[df.index.normalize().isin(dates_of_interest)]\n",
"\n",
"# df = filtered_df\n",
"# df.info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import plotly.io as pio\n",
"pio.renderers.default = 'notebook'\n",
"\n",
"#naloadujeme do vbt symbol as column\n",
"basic_data = vbt.Data.from_data({\"BAC\": df}, tz_convert=zoneNY)\n",
"start_date = pd.Timestamp('2024-03-12 09:30', tz=zoneNY)\n",
"end_date = pd.Timestamp('2024-03-13 16:00', tz=zoneNY)\n",
"\n",
"#basic_data = basic_data.transform(lambda df: df[df.index.date == start_date.date()])\n",
"#basic_data = basic_data.transform(lambda df: df[(df.index >= start_date) & (df.index <= end_date)])\n",
"#basic_data.data[\"BAC\"].info()\n",
"\n",
"# fig = basic_data.plot(plot_volume=False)\n",
"# pivot_info = basic_data.run(\"pivotinfo\", up_th=0.003, down_th=0.002)\n",
"# #pivot_info.plot()\n",
"# pivot_info.plot(fig=fig, conf_value_trace_kwargs=dict(visible=True))\n",
"# fig.show()\n",
"\n",
"\n",
"# rsi14 = basic_data.data[\"BAC\"][\"Rsi14\"].rename(\"Rsi14\")\n",
"\n",
"# rsi14.vbt.plot().show()\n",
"#basic_data.xloc[\"09:30\":\"10:00\"].data[\"BAC\"].vbt.ohlcv.plot().show()\n",
"\n",
"vbt.settings.plotting.auto_rangebreaks = True\n",
"#basic_data.data[\"BAC\"].vbt.ohlcv.plot()\n",
"\n",
"#basic_data.data[\"BAC\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"m1_data = basic_data[['Open', 'High', 'Low', 'Close', 'Volume']]\n",
"\n",
"m1_data.data[\"BAC\"]\n",
"#m5_data = m1_data.resample(\"5T\")\n",
"\n",
"#m5_data.data[\"BAC\"].head(10)\n",
"\n",
"# m15_data = m1_data.resample(\"15T\")\n",
"\n",
"# m15 = m15_data.data[\"BAC\"]\n",
"\n",
"# m15.vbt.ohlcv.plot()\n",
"\n",
"# m1_data.wrapper.index\n",
"\n",
"# m1_resampler = m1_data.wrapper.get_resampler(\"1T\")\n",
"# m1_resampler.index_difference(reverse=True)\n",
"\n",
"\n",
"# m5_resampler.prettify()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# defining ENTRY WINDOW and forced EXIT window"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#m1_data.data[\"BAC\"].info()\n",
"import datetime\n",
"# Define the market open and close times\n",
"market_open = datetime.time(9, 30)\n",
"market_close = datetime.time(16, 0)\n",
"entry_window_opens = 1\n",
"entry_window_closes = 350\n",
"\n",
"forced_exit_start = 380\n",
"forced_exit_end = 390\n",
"\n",
"forced_exit = m1_data.symbol_wrapper.fill(False)\n",
"entry_window_open= m1_data.symbol_wrapper.fill(False)\n",
"\n",
"# Calculate the time difference in minutes from market open for each timestamp\n",
"elapsed_min_from_open = (forced_exit.index.hour - market_open.hour) * 60 + (forced_exit.index.minute - market_open.minute)\n",
"\n",
"entry_window_open[(elapsed_min_from_open >= entry_window_opens) & (elapsed_min_from_open < entry_window_closes)] = True\n",
"forced_exit[(elapsed_min_from_open >= forced_exit_start) & (elapsed_min_from_open < forced_exit_end)] = True\n",
"\n",
"#entry_window_open.info()\n",
"# forced_exit.tail(100)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"close = m1_data.close\n",
"\n",
"rsi = vbt.RSI.run(close, window=14)\n",
"\n",
"long_entries = (rsi.rsi.vbt.crossed_below(20) & entry_window_open)\n",
"long_exits = (rsi.rsi.vbt.crossed_above(70) | forced_exit)\n",
"#long_entries.info()\n",
"#number of trues and falses in long_entries\n",
"long_entries.value_counts()\n",
"#long_exits.value_counts()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def plot_rsi(rsi, close, entries, exits):\n",
" fig = vbt.make_subplots(rows=1, cols=1, shared_xaxes=True, specs=[[{\"secondary_y\": True}]], vertical_spacing=0.02, subplot_titles=(\"RSI\", \"Price\" ))\n",
" close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=True))\n",
" rsi.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False))\n",
" entries.vbt.signals.plot_as_entries(rsi.rsi, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
" exits.vbt.signals.plot_as_exits(rsi.rsi, fig=fig, add_trace_kwargs=dict(secondary_y=False)) \n",
" return fig\n",
"\n",
"plot_rsi(rsi, close, long_entries, long_exits)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"vbt.phelp(vbt.Portfolio.from_signals)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sl_stop = np.arange(0.03/100, 0.2/100, 0.02/100).tolist()\n",
"# Using the round function\n",
"sl_stop = [round(val, 4) for val in sl_stop]\n",
"print(sl_stop)\n",
"sl_stop = vbt.Param(sl_stop) #np.nan mean s no stoploss\n",
"\n",
"pf = vbt.Portfolio.from_signals(close=close, entries=long_entries, sl_stop=sl_stop, tp_stop = sl_stop, exits=long_exits,fees=0.0167/100, freq=\"1s\") #sl_stop=sl_stop, tp_stop = sl_stop, \n",
"\n",
"#pf.stats()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pf.plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pf[(0.0015,0.0013)].plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pf[0.03].plot_trade_signals()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# pristup k pf jako multi index"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#pf[0.03].plot()\n",
"#pf.order_records\n",
"pf[(0.03)].stats()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#zgrupovane statistiky\n",
"stats_df = pf.stats([\n",
" 'total_return',\n",
" 'total_trades',\n",
" 'win_rate',\n",
" 'expectancy'\n",
"], agg_func=None)\n",
"stats_df\n",
"\n",
"\n",
"stats_df.nlargest(50, 'Total Return [%]')\n",
"#stats_df.info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pf[(0.0011,0.0013)].plot()\n",
"\n",
"#pf[(0.0011,0.0013000000000000002)].plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas.tseries.offsets import DateOffset\n",
"\n",
"temp_data = basic_data['2024-4-22']\n",
"temp_data\n",
"res1m = temp_data[[\"Open\", \"High\", \"Low\", \"Close\", \"Volume\"]]\n",
"\n",
"# Define a custom date offset that starts at 9:30 AM and spans 4 hours\n",
"custom_offset = DateOffset(hours=4, minutes=30)\n",
"\n",
"# res1m = res1m.get().resample(\"4H\").agg({ \n",
"# \"Open\": \"first\",\n",
"# \"High\": \"max\",\n",
"# \"Low\": \"min\",\n",
"# \"Close\": \"last\",\n",
"# \"Volume\": \"sum\"\n",
"# })\n",
"\n",
"res4h = res1m.resample(\"1h\", resample_kwargs=dict(origin=\"start\"))\n",
"\n",
"res4h.data\n",
"\n",
"res15m = res1m.resample(\"15T\", resample_kwargs=dict(origin=\"start\"))\n",
"\n",
"res15m.data[\"BAC\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@vbt.njit\n",
"def long_entry_place_func_nb(c, low, close, time_in_ns, rsi14, window_open, window_close):\n",
" market_open_minutes = 570 # 9 hours * 60 minutes + 30 minutes\n",
"\n",
" for out_i in range(len(c.out)):\n",
" i = c.from_i + out_i\n",
"\n",
" current_minutes = vbt.dt_nb.hour_nb(time_in_ns[i]) * 60 + vbt.dt_nb.minute_nb(time_in_ns[i])\n",
" #print(\"current_minutes\", current_minutes)\n",
" # Calculate elapsed minutes since market open at 9:30 AM\n",
" elapsed_from_open = current_minutes - market_open_minutes\n",
" elapsed_from_open = elapsed_from_open if elapsed_from_open >= 0 else 0\n",
" #print( \"elapsed_from_open\", elapsed_from_open)\n",
"\n",
" #elapsed_from_open = elapsed_minutes_from_open_nb(time_in_ns) \n",
" in_window = elapsed_from_open > window_open and elapsed_from_open < window_close\n",
" #print(\"in_window\", in_window)\n",
" # if in_window:\n",
" # print(\"in window\")\n",
"\n",
" if in_window and rsi14[i] > 60: # and low[i, c.col] <= hit_price: # and hour == 9: # (4)!\n",
" return out_i\n",
" return -1\n",
"\n",
"@vbt.njit\n",
"def long_exit_place_func_nb(c, high, close, time_index, tp, sl): # (5)!\n",
" entry_i = c.from_i - c.wait\n",
" entry_price = close[entry_i, c.col]\n",
" hit_price = entry_price * (1 + tp)\n",
" stop_price = entry_price * (1 - sl)\n",
" for out_i in range(len(c.out)):\n",
" i = c.from_i + out_i\n",
" last_bar_of_day = vbt.dt_nb.day_changed_nb(time_index[i], time_index[i + 1])\n",
"\n",
" #print(next_day)\n",
" if last_bar_of_day: #pokud je dalsi next day, tak zavirame posledni\n",
" print(\"ted\",out_i)\n",
" return out_i\n",
" if close[i, c.col] >= hit_price or close[i, c.col] <= stop_price :\n",
" return out_i\n",
" return -1\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.sum()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

View File

@@ -1,107 +0,0 @@
# Plotly
* MAKE_SUBPLOT defines the layout (needed if more than a 1x1 grid or a secondary y-axis is required)
```python
fig = vbt.make_subplots(rows=2, cols=1, shared_xaxes=True,
specs=[[{"secondary_y": True}], [{"secondary_y": False}]],
vertical_spacing=0.02, subplot_titles=("Row 1 title", "Row 2 title"))
```
Then the different [sr/df generic accessors](http://5.161.179.223:8000/static/js/vbt/api/generic/accessors/index.html#vectorbtpro.generic.accessors.GenericAccessor.areaplot) are added with ADD_TRACE_KWARGS and TRACE_KWARGS. Other plot types are available in the [plotting module](http://5.161.179.223:8000/static/js/vbt/api/generic/plotting/index.html).
```python
#using accessor
close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False,row=1, col=1), trace_kwargs=dict(line=dict(color="blue")))
indvolume.vbt.barplot(fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
#using plotting module
vbt.Bar(indvolume, fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
```
* ADD_TRACE_KWARGS - determines positioning within the subplot grid
```python
add_trace_kwargs=dict(secondary_y=False,row=1, col=1)
```
* TRACE_KWARGS - other styling of trace
```python
trace_kwargs=dict(name="LONGS",
line=dict(color="#ffe476"),
marker=dict(color="limegreen"),
fill=None,
connectgaps=True)
```
## Example
```python
fig = vbt.make_subplots(rows=2, cols=1, shared_xaxes=True,
specs=[[{"secondary_y": True}], [{"secondary_y": False}]],
vertical_spacing=0.02, subplot_titles=("Price and Indicators", "Volume"))
# Plotting the close price
close.vbt.plot(fig=fig, add_trace_kwargs=dict(secondary_y=False,row=1, col=1), trace_kwargs=dict(line=dict(color="blue")))
```
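A possible continuation of the example (a sketch assuming the `indvolume` series from the earlier snippet and the 2x1 layout created above):
```python
# Volume bars on the second row of the grid defined by make_subplots
indvolume.vbt.barplot(fig=fig, add_trace_kwargs=dict(secondary_y=False, row=2, col=1))
fig.show()
```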
# Data
## Resampling
```python
t1data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','sellvolume']].resample("1T")
t1data = t1data.transform(lambda df: df.between_time('09:30', '16:00').dropna()) #main session data only, no nans
t5data = basic_data[['open', 'high', 'low', 'close', 'volume','vwap','buyvolume','sellvolume']].resample("5T")
t5data = t5data.transform(lambda df: df.between_time('09:30', '16:00').dropna())
dailydata = basic_data[['open', 'high', 'low', 'close', 'volume', 'vwap']].resample("D").dropna()
#realign 5min close to 1min so it can be compared with 1min
t5data_close_realigned = t5data.close.vbt.realign_closing("1T").between_time('09:30', '16:00').dropna()
#same with open
t5data.open.vbt.realign_opening("1h")
```
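To sanity-check a realigned series (a minimal sketch, assuming the `t1data` and `t5data_close_realigned` objects defined above): once the 5-minute close lives on the 1-minute index, the two series can be compared element-wise without lookahead.
```python
# align the realigned 5-min close to the 1-min index, then compare
t5_on_t1 = t5data_close_realigned.reindex(t1data.close.index)
above_5m = t1data.close > t5_on_t1  # 1-min bars trading above the last known 5-min close
above_5m.sum()
```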
### Define resample function for custom column
Example of custom feature config [Binance Data](http://5.161.179.223:8000/static/js/vbt/api/data/custom/binance/index.html#vectorbtpro.data.custom.binance.BinanceData.feature_config).
Other [reduce functions are available](http://5.161.179.223:8000/static/js/vbt/api/generic/nb/apply_reduce/index.html) (mean, min, max, median, nth, ...).
```python
from vectorbtpro.utils.config import merge_dicts, Config, HybridConfig
from vectorbtpro import _typing as tp
from vectorbtpro.generic import nb as generic_nb
_feature_config: tp.ClassVar[Config] = HybridConfig(
{
"buyvolume": dict(
resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(
resampler,
generic_nb.sum_reduce_nb,
)
),
"sellvolume": dict(
resample_func=lambda self, obj, resampler: obj.vbt.resample_apply(
resampler,
generic_nb.sum_reduce_nb,
)
)
}
)
basic_data._feature_config = _feature_config
```
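With the feature config registered, a downsample should now aggregate the custom columns with the sum reducers instead of dropping them (a sketch, assuming the BAC symbol used elsewhere in these notes):
```python
# buyvolume/sellvolume are summed per 5-minute bucket thanks to _feature_config
t5data = basic_data.resample("5T")
t5data.data["BAC"][["buyvolume", "sellvolume"]].head()
```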
### Validate resample
```python
t2dataclose = t2data.close.rename("15MIN - realigned").vbt.realign_closing("1T")
fig = t1data.close.rename("1MIN").vbt.plot()
t2data.close.rename("15MIN").vbt.plot(fig=fig)
t2dataclose.vbt.plot(fig=fig)
```
## Persisting
```python
basic_data.to_parquet(partition_by="day", compression="gzip")
day_data = vbt.ParquetData.pull("BAC", filters=[("group", "==", "2024-05-03")])
vbt.print_dir_tree("BTC-USD")#overeni directory structure
```
# Discover
```python
vbt.phelp(vbt.talib("atr").run) #parameters it accepts
vbt.pdir(pf) #get available properties and methods
vbt.pprint(basic_data) #to get correct shape, info about instance
```

View File

@@ -1,3 +1,3 @@
API_KEY = ''
SECRET_KEY = ''
API_KEY = 'PKGGEWIEYZOVQFDRY70L'
SECRET_KEY = 'O5Kt8X4RLceIOvM98i5LdbalItsX7hVZlbPYHy8Y'
MAX_BATCH_SIZE = 1

View File

@@ -1,9 +1,7 @@
import os,sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
print(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import pandas as pd
import numpy as np
from alpaca.data.historical import StockHistoricalDataClient
from alpaca.data.historical import CryptoHistoricalDataClient, StockHistoricalDataClient
from alpaca.data.requests import CryptoLatestTradeRequest, StockLatestTradeRequest, StockLatestBarRequest, StockTradesRequest
from alpaca.data.enums import DataFeed
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY

View File

@@ -1,66 +0,0 @@
import os
from bs4 import BeautifulSoup
import html2text
def convert_html_to_markdown(html_content, link_mapping):
h = html2text.HTML2Text()
h.ignore_links = False
# Update internal links to point to the relevant sections in the Markdown
soup = BeautifulSoup(html_content, 'html.parser')
for a in soup.find_all('a', href=True):
href = a['href']
if href in link_mapping:
a['href'] = f"#{link_mapping[href]}"
return h.handle(str(soup))
def create_link_mapping(root_dir):
link_mapping = {}
for subdir, _, files in os.walk(root_dir):
for file in files:
if file == "index.html":
relative_path = os.path.relpath(os.path.join(subdir, file), root_dir)
chapter_id = relative_path.replace(os.sep, '-').replace('index.html', '')
link_mapping[relative_path] = chapter_id
link_mapping[relative_path.replace(os.sep, '/')] = chapter_id # for URLs with slashes
return link_mapping
def read_html_files(root_dir, link_mapping):
markdown_content = []
for subdir, _, files in os.walk(root_dir):
relative_path = os.path.relpath(subdir, root_dir)
if files and any(file == "index.html" for file in files):
# Add directory as a heading based on its depth
heading_level = relative_path.count(os.sep) + 1
markdown_content.append(f"{'#' * heading_level} {relative_path}\n")
for file in files:
if file == "index.html":
file_path = os.path.join(subdir, file)
with open(file_path, 'r', encoding='utf-8') as f:
html_content = f.read()
soup = BeautifulSoup(html_content, 'html.parser')
title = soup.title.string if soup.title else "No Title"
chapter_id = os.path.relpath(file_path, root_dir).replace(os.sep, '-').replace('index.html', '')
markdown_content.append(f"<a id='{chapter_id}'></a>\n")
markdown_content.append(f"{'#' * (heading_level + 1)} {title}\n")
markdown_content.append(convert_html_to_markdown(html_content, link_mapping))
return "\n".join(markdown_content)
def save_to_markdown_file(content, output_file):
with open(output_file, 'w', encoding='utf-8') as f:
f.write(content)
def main():
root_dir = "./v2realbot/static/js/vbt/"
output_file = "output.md"
link_mapping = create_link_mapping(root_dir)
markdown_content = read_html_files(root_dir, link_mapping)
save_to_markdown_file(markdown_content, output_file)
print(f"Markdown document created at {output_file}")
if __name__ == "__main__":
main()

View File

@@ -5,7 +5,7 @@ from rich import print
from typing import Any, Optional, List, Union
from datetime import datetime, date
from pydantic import BaseModel, Field
from v2realbot.enums.enums import Mode, Account, SchedulerStatus, Moddus, Market
from v2realbot.enums.enums import Mode, Account, SchedulerStatus, Moddus
from alpaca.data.enums import Exchange
@@ -159,7 +159,6 @@ class RunManagerRecord(BaseModel):
mode: Mode
note: Optional[str] = None
ilog_save: bool = False
market: Optional[Market] = Market.US
bt_from: Optional[datetime] = None
bt_to: Optional[datetime] = None
#weekdays filter

View File

@@ -5,7 +5,9 @@ import v2realbot.controller.services as cs
#converts a row dict back into an object, including re-typing
def row_to_runmanager(row: dict) -> RunManagerRecord:
is_running = cs.is_runner_running(row['runner_id']) if row['runner_id'] else False
res = RunManagerRecord(
moddus=row['moddus'],
id=row['id'],
@@ -15,7 +17,6 @@ def row_to_runmanager(row: dict) -> RunManagerRecord:
account=row['account'],
note=row['note'],
ilog_save=bool(row['ilog_save']),
market=row['market'] if row['market'] is not None else None,
bt_from=datetime.fromisoformat(row['bt_from']) if row['bt_from'] else None,
bt_to=datetime.fromisoformat(row['bt_to']) if row['bt_to'] else None,
weekdays_filter=[int(x) for x in row['weekdays_filter'].split(',')] if row['weekdays_filter'] else [],

View File

@@ -4,23 +4,18 @@ from appdirs import user_data_dir
from pathlib import Path
import os
from collections import defaultdict
from dotenv import load_dotenv
# Global flag to track if the ml module has been imported (solution for long import times of tensorflow)
#the first use will load it globally
_ml_module_loaded = False
#directory for generated images and basic reports
MEDIA_DIRECTORY = Path(__file__).parent.parent.parent / "media"
VBT_DOC_DIRECTORY = Path(__file__).parent.parent.parent / "vbt-doc" #directory for vbt doc
RUNNER_DETAIL_DIRECTORY = Path(__file__).parent.parent.parent / "runner_detail"
#location of strat.log - the gui fetches logs from here
LOG_PATH = Path(__file__).parent.parent
LOG_FILE = Path(__file__).parent.parent / "strat.log"
JOB_LOG_FILE = Path(__file__).parent.parent / "job.log"
DOTENV_DIRECTORY = Path(__file__).parent.parent.parent
ENV_FILE = DOTENV_DIRECTORY / '.env'
#stratvars that cannot be changed in gui
STRATVARS_UNCHANGEABLES = ['pendingbuys', 'blockbuy', 'jevylozeno', 'limitka']
@@ -31,12 +26,6 @@ MODEL_DIR = Path(DATA_DIR)/"models"
PROFILING_NEXT_ENABLED = False
PROFILING_OUTPUT_DIR = DATA_DIR
#LOAD DOTENV ENV VARIABLES
if load_dotenv(ENV_FILE, verbose=True) is False:
print(f"Error loading .env file {ENV_FILE}. Falling back to externally set ENV VARIABLES.")
else:
print(f"Loaded env variables from file {ENV_FILE}")
#WIP - FILL CONFIGURATION CLASS FOR BACKTESTING
class BT_FILL_CONF:
""""
@@ -79,7 +68,7 @@ def get_key(mode: Mode, account: Account):
#strategy instance main loop heartbeat
HEARTBEAT_TIMEOUT=5
WEB_API_KEY=os.environ.get('WEB_API_KEY')
WEB_API_KEY="david"
#PRIMARY PAPER
ACCOUNT1_PAPER_API_KEY = os.environ.get('ACCOUNT1_PAPER_API_KEY')

View File

@@ -172,14 +172,14 @@ def add_run_manager_record(new_record: RunManagerRecord):
# Construct a suitable INSERT query based on your RunManagerRecord fields
insert_query = """
INSERT INTO run_manager (moddus, id, strat_id, symbol,account, mode, note,ilog_save,
market, bt_from, bt_to, weekdays_filter, batch_id,
bt_from, bt_to, weekdays_filter, batch_id,
start_time, stop_time, status, last_processed,
history, valid_from, valid_to, testlist_id)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,?)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
"""
values = [
new_record.moddus, str(new_record.id), str(new_record.strat_id), new_record.symbol, new_record.account, new_record.mode, new_record.note,
int(new_record.ilog_save), new_record.market,
int(new_record.ilog_save),
new_record.bt_from.isoformat() if new_record.bt_from is not None else None,
new_record.bt_to.isoformat() if new_record.bt_to is not None else None,
",".join(str(x) for x in new_record.weekdays_filter) if new_record.weekdays_filter else None,

View File

@@ -1,11 +1,6 @@
from enum import Enum
from alpaca.trading.enums import OrderSide, OrderStatus, OrderType
class BarType(str, Enum):
TIME = "time"
VOLUME = "volume"
DOLLAR = "dollar"
class Env(str, Enum):
PROD = "prod"
TEST = "test"
@@ -109,9 +104,3 @@ class StartBarAlign(str, Enum):
"""
ROUND = "round"
RANDOM = "random"
class Market(str, Enum):
US = "US"
CRYPTO = "CRYPTO"

File diff suppressed because it is too large

View File

@@ -1,570 +0,0 @@
import pandas as pd
import numpy as np
from numba import jit
from alpaca.data.historical import StockHistoricalDataClient
from sqlalchemy import column
from v2realbot.config import ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, DATA_DIR
from alpaca.data.requests import StockTradesRequest
import time as time_module
from v2realbot.utils.utils import parse_alpaca_timestamp, ltp, zoneNY, send_to_telegram, fetch_calendar_data
import pyarrow
from traceback import format_exc
from datetime import timedelta, datetime, time
from concurrent.futures import ThreadPoolExecutor
import os
import gzip
import pickle
import random
from alpaca.data.models import BarSet, QuoteSet, TradeSet
import v2realbot.utils.config_handler as cfh
from v2realbot.enums.enums import BarType
from tqdm import tqdm
""""
Module used for vectorized aggregation of trades.
Includes fetch (remote/cached) methods and numba aggregator function for TIME BASED, VOLUME BASED and DOLLAR BARS
"""""
def aggregate_trades(symbol: str, trades_df: pd.DataFrame, resolution: int, type: BarType = BarType.TIME):
""""
Accepts dataframe with trades keyed by symbol. Preparess dataframe to
numpy and calls Numba optimized aggregator for given bar type. (time/volume/dollar)
"""""
trades_df = trades_df.loc[symbol]
trades_df= trades_df.reset_index()
ticks = trades_df[['timestamp', 'price', 'size']].to_numpy()
# Extract the timestamps column (assuming it's the first column)
timestamps = ticks[:, 0]
# Convert the timestamps to Unix timestamps in seconds with microsecond precision
unix_timestamps_s = np.array([ts.timestamp() for ts in timestamps], dtype='float64')
# Replace the original timestamps in the NumPy array with the converted Unix timestamps
ticks[:, 0] = unix_timestamps_s
ticks = ticks.astype(np.float64)
#based on type, specific aggregator function is called
match type:
case BarType.TIME:
ohlcv_bars = generate_time_bars_nb(ticks, resolution)
case BarType.VOLUME:
ohlcv_bars = generate_volume_bars_nb(ticks, resolution)
case BarType.DOLLAR:
ohlcv_bars = generate_dollar_bars_nb(ticks, resolution)
case _:
raise ValueError("Invalid bar type. Supported types are 'time', 'volume' and 'dollar'.")
# Convert the resulting array back to a DataFrame
columns = ['time', 'open', 'high', 'low', 'close', 'volume', 'trades']
if type == BarType.DOLLAR:
columns.append('amount')
columns.append('updated')
if type == BarType.TIME:
columns.append('vwap')
columns.append('buyvolume')
columns.append('sellvolume')
if type == BarType.VOLUME:
columns.append('buyvolume')
columns.append('sellvolume')
ohlcv_df = pd.DataFrame(ohlcv_bars, columns=columns)
ohlcv_df['time'] = pd.to_datetime(ohlcv_df['time'], unit='s').dt.tz_localize('UTC').dt.tz_convert(zoneNY)
#print(ohlcv_df['updated'])
ohlcv_df['updated'] = pd.to_datetime(ohlcv_df['updated'], unit="s").dt.tz_localize('UTC').dt.tz_convert(zoneNY)
# Round to microseconds to maintain six decimal places
ohlcv_df['updated'] = ohlcv_df['updated'].dt.round('us')
ohlcv_df.set_index('time', inplace=True)
#ohlcv_df.index = ohlcv_df.index.tz_localize('UTC').tz_convert(zoneNY)
return ohlcv_df
# Function to ensure fractional seconds are present
def ensure_fractional_seconds(timestamp):
if '.' not in timestamp:
# Inserting .000000 before the timezone indicator 'Z'
return timestamp.replace('Z', '.000000Z')
else:
return timestamp
def convert_dict_to_multiindex_df(tradesResponse):
""""
Converts dictionary from cache or from remote (raw input) to multiindex dataframe.
with microsecond precision (from nanoseconds in the raw data)
"""""
# Create a DataFrame for each key and add the key as part of the MultiIndex
dfs = []
for key, values in tradesResponse.items():
df = pd.DataFrame(values)
# Rename columns
# Select and order columns explicitly
#print(df)
df = df[['t', 'x', 'p', 's', 'i', 'c','z']]
df.rename(columns={'t': 'timestamp', 'c': 'conditions', 'p': 'price', 's': 'size', 'x': 'exchange', 'z':'tape', 'i':'id'}, inplace=True)
df['symbol'] = key # Add ticker as a column
# Apply the function to ensure all timestamps have fractional seconds
#consider keeping this, or applying it only on a specific to_datetime error
#possibly add a more efficient approach later, i.e. replacing NaT - https://chatgpt.com/c/d2be6f87-b38f-4050-a1c6-541d100b1474
df['timestamp'] = df['timestamp'].apply(ensure_fractional_seconds)
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce') # Convert 't' from string to datetime before setting it as an index
#Adjust to microsecond precision
df.loc[df['timestamp'].notna(), 'timestamp'] = df['timestamp'].dt.floor('us')
df.set_index(['symbol', 'timestamp'], inplace=True) # Set the multi-level index using both 'ticker' and 't'
df = df.tz_convert(zoneNY, level='timestamp')
dfs.append(df)
# Concatenate all DataFrames into a single DataFrame with MultiIndex
final_df = pd.concat(dfs)
return final_df
def dict_to_df(tradesResponse, start, end, exclude_conditions = None, minsize = None):
""""
Transforms dict to Tradeset, then df and to zone aware
Also filters to start and end if necessary (ex. 9:30 to 15:40 is required only)
NOTE: prepodkladame, ze tradesResponse je dict from Raw data (cached/remote)
"""""
df = convert_dict_to_multiindex_df(tradesResponse)
#REQUIRED FILTERING
#if the requested start is later or the end earlier, trim accordingly
if (start.time() > time(9, 30) or end.time() < time(16, 0)):
print(f"filtering {start.time()} {end.time()}")
# Define the time range
# start_time = pd.Timestamp(start.time(), tz=zoneNY).time()
# end_time = pd.Timestamp(end.time(), tz=zoneNY).time()
# Create a mask to filter rows within the specified time range
mask = (df.index.get_level_values('timestamp') >= start) & \
(df.index.get_level_values('timestamp') <= end)
# Apply the mask to the DataFrame
df = df[mask]
if exclude_conditions is not None:
print(f"excluding conditions {exclude_conditions}")
# Create a mask to exclude rows with any of the specified conditions
mask = df['conditions'].apply(lambda x: any(cond in exclude_conditions for cond in x))
# Filter out the rows with specified conditions
df = df[~mask]
if minsize is not None:
print(f"minsize {minsize}")
#filter by minimum trade size
df = df[df['size'] >= minsize]
return df
def fetch_daily_stock_trades(symbol, start, end, exclude_conditions=None, minsize=None, force_remote=False, max_retries=5, backoff_factor=1):
#doc for this function
"""
Attempts to fetch stock trades either from cache or remote. When remote, it uses retry mechanism with exponential backoff.
Also it stores the data to cache if it is not already there.
Using force_remote forces remote data always and thus refreshes the cache for these dates.
Parameters:
:param symbol: The stock symbol to fetch trades for.
:param start: The start time for the trade data.
:param end: The end time for the trade data.
:param exclude_conditions: list of string conditions to exclude from the data
:param minsize: minimum size of trade to be included in the data
:param force_remote: always use remote data and refresh the cache
:param max_retries: Maximum number of retries.
:param backoff_factor: Factor to determine the next sleep time.
:return: TradesResponse object.
:raises: ConnectionError if all retries fail.
We use the trade cache only for main session requests = 9:30 to 16:00.
In the future, store the whole day as BAC-20240203.cache.gz and filter main session or extended hours from it.
For now, only the main session is stored in BAC-timestampopen-timestampclose.cache.gz.
"""
is_same_day = start.date() == end.date()
# Determine if the requested times fall within the main session
in_main_session = (time(9, 30) <= start.time() < time(16, 0)) and (time(9, 30) <= end.time() <= time(16, 0))
file_path = ''
if in_main_session:
filename_start = zoneNY.localize(datetime.combine(start.date(), time(9, 30)))
filename_end = zoneNY.localize(datetime.combine(end.date(), time(16, 0)))
daily_file = f"{symbol}-{int(filename_start.timestamp())}-{int(filename_end.timestamp())}.cache.gz"
file_path = f"{DATA_DIR}/tradecache/{daily_file}"
if not force_remote and os.path.exists(file_path):
print(f"Searching {str(start.date())} cache: " + daily_file)
with gzip.open(file_path, 'rb') as fp:
tradesResponse = pickle.load(fp)
print("FOUND in CACHE", daily_file)
return dict_to_df(tradesResponse, start, end, exclude_conditions, minsize)
print("NOT FOUND. Fetching from remote")
client = StockHistoricalDataClient(ACCOUNT1_PAPER_API_KEY, ACCOUNT1_PAPER_SECRET_KEY, raw_data=True)
stockTradeRequest = StockTradesRequest(symbol_or_symbols=symbol, start=start, end=end)
last_exception = None
for attempt in range(max_retries):
try:
tradesResponse = client.get_stock_trades(stockTradeRequest)
is_empty = not tradesResponse[symbol]
print(f"Remote fetched: {is_empty=}", start, end)
if in_main_session and not is_empty:
current_time = datetime.now().astimezone(zoneNY)
if not (start < current_time < end):
with gzip.open(file_path, 'wb') as fp:
pickle.dump(tradesResponse, fp)
print("Saving to Trade CACHE", file_path)
else: # Don't save the cache if the market is still open
print("Not saving trade cache, market still open today")
return pd.DataFrame() if is_empty else dict_to_df(tradesResponse, start, end, exclude_conditions, minsize)
except Exception as e:
print(f"Attempt {attempt + 1} failed: {e}")
last_exception = e
time_module.sleep(backoff_factor * (2 ** attempt) + random.uniform(0, 1)) # Adding random jitter
print("All attempts to fetch data failed.")
raise ConnectionError(f"Failed to fetch stock trades after {max_retries} retries. Last exception: {str(last_exception)} and {format_exc()}")
def fetch_trades_parallel(symbol, start_date, end_date, exclude_conditions = cfh.config_handler.get_val('AGG_EXCLUDED_TRADES'), minsize = 100, force_remote = False, max_workers=None):
"""
Fetches trades for each day between start_date and end_date during market hours (9:30-16:00) in parallel and concatenates them into a single DataFrame.
:param symbol: Stock symbol.
:param start_date: Start date as datetime.
:param end_date: End date as datetime.
:return: DataFrame containing all trades from start_date to end_date.
"""
futures = []
results = []
market_open_days = fetch_calendar_data(start_date, end_date)
day_count = len(market_open_days)
print("Contains", day_count, " market days")
max_workers = min(10, max(2, day_count // 2)) if max_workers is None else max_workers # Heuristic: half the days to process, but at least 2 and no more than 10
with ThreadPoolExecutor(max_workers=max_workers) as executor:
#for single_date in (start_date + timedelta(days=i) for i in range((end_date - start_date).days + 1)):
for market_day in tqdm(market_open_days, desc="Processing market days"):
#start = datetime.combine(single_date, time(9, 30)) # Market opens at 9:30 AM
#end = datetime.combine(single_date, time(16, 0)) # Market closes at 4:00 PM
interval_from = zoneNY.localize(market_day.open)
interval_to = zoneNY.localize(market_day.close)
#trim if a later start or an earlier end is requested
start = start_date if interval_from < start_date else interval_from
#start = max(start_date, interval_from)
end = end_date if interval_to > end_date else interval_to
#end = min(end_date, interval_to)
future = executor.submit(fetch_daily_stock_trades, symbol, start, end, exclude_conditions, minsize, force_remote)
futures.append(future)
for future in tqdm(futures, desc="Fetching data"):
try:
result = future.result()
results.append(result)
except Exception as e:
print(f"Error fetching data for a day: {e}")
# Batch concatenation to improve speed
batch_size = 10
batches = [results[i:i + batch_size] for i in range(0, len(results), batch_size)]
final_df = pd.concat([pd.concat(batch, ignore_index=False) for batch in batches], ignore_index=False)
return final_df
#original version
#return pd.concat(results, ignore_index=False)
@jit(nopython=True)
def generate_dollar_bars_nb(ticks, amount_per_bar):
""""
Generates Dollar based bars from ticks.
There is also simple prevention of aggregation from different days
as described here https://chatgpt.com/c/17804fc1-a7bc-495d-8686-b8392f3640a2
Downside: split days by UTC (which is ok for main session, but when extended hours it should be reworked by preprocessing new column identifying session)
When trade is split into multiple bars it is counted as trade in each of the bars.
Other option: trade count can be proportionally distributed by weight (0.2 to 1st bar, 0.8 to 2nd bar) - but this is not implemented yet
https://chatgpt.com/c/ff4802d9-22a2-4b72-8ab7-97a91e7a515f
"""""
ohlcv_bars = []
remaining_amount = amount_per_bar
# Initialize bar values based on the first tick to avoid uninitialized values
open_price = ticks[0, 1]
high_price = ticks[0, 1]
low_price = ticks[0, 1]
close_price = ticks[0, 1]
volume = 0
trades_count = 0
current_day = np.floor(ticks[0, 0] / 86400) # Calculate the initial day from the first tick timestamp
bar_time = ticks[0, 0] # Initialize bar time with the time of the first tick
for tick in ticks:
tick_time = tick[0]
price = tick[1]
tick_volume = tick[2]
tick_amount = price * tick_volume
tick_day = np.floor(tick_time / 86400) # Calculate the day of the current tick
# Check if the new tick is from a different day, then close the current bar
if tick_day != current_day:
if trades_count > 0:
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, amount_per_bar, tick_time])
# Reset for the new day using the current tick data
open_price = price
high_price = price
low_price = price
close_price = price
volume = 0
trades_count = 0
remaining_amount = amount_per_bar
current_day = tick_day
bar_time = tick_time
# Start new bar if needed because of the dollar value
while tick_amount > 0:
if tick_amount < remaining_amount:
# Add the entire tick to the current bar
high_price = max(high_price, price)
low_price = min(low_price, price)
close_price = price
volume += tick_volume
remaining_amount -= tick_amount
trades_count += 1
tick_amount = 0
else:
# Calculate the amount of volume that fits within the remaining dollar amount
volume_to_add = remaining_amount / price
volume += volume_to_add # Update the volume here before appending and resetting
# Append the partially filled bar to the list
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count + 1, amount_per_bar, tick_time])
# Fill the current bar and continue with a new bar
tick_volume -= volume_to_add
tick_amount -= remaining_amount
# Reset bar values for the new bar using the current tick data
open_price = price
high_price = price
low_price = price
close_price = price
volume = 0 # Reset volume for the new bar
trades_count = 0
remaining_amount = amount_per_bar
# Increment bar time if splitting a trade
if tick_volume > 0: # if the trade still has remaining volume, set the bar time a microsecond later
bar_time = tick_time + 1e-6
else:
bar_time = tick_time # otherwise use the tick time
#bar_time = tick_time
# Add the last bar if it contains any trades
if trades_count > 0:
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, amount_per_bar, tick_time])
return np.array(ohlcv_bars)
@jit(nopython=True)
def generate_volume_bars_nb(ticks, volume_per_bar):
""""
Generates Volume based bars from ticks.
NOTE: UTC day split here (doesnt aggregate trades from different days)
but realized from UTC (ok for main session) - but needs rework for extension by preprocessing ticks_df and introduction sesssion column
When trade is split into multiple bars it is counted as trade in each of the bars.
Other option: trade count can be proportionally distributed by weight (0.2 to 1st bar, 0.8 to 2nd bar) - but this is not implemented yet
https://chatgpt.com/c/ff4802d9-22a2-4b72-8ab7-97a91e7a515f
"""""
ohlcv_bars = []
remaining_volume = volume_per_bar
# Initialize bar values based on the first tick to avoid uninitialized values
open_price = ticks[0, 1]
high_price = ticks[0, 1]
low_price = ticks[0, 1]
close_price = ticks[0, 1]
volume = 0
trades_count = 0
current_day = np.floor(ticks[0, 0] / 86400) # Calculate the initial day from the first tick timestamp
bar_time = ticks[0, 0] # Initialize bar time with the time of the first tick
buy_volume = 0 # Volume of buy trades
sell_volume = 0 # Volume of sell trades
prev_price = ticks[0, 1] # Initialize previous price for the first tick
for tick in ticks:
tick_time = tick[0]
price = tick[1]
tick_volume = tick[2]
tick_day = np.floor(tick_time / 86400) # Calculate the day of the current tick
# Check if the new tick is from a different day, then close the current bar
if tick_day != current_day:
if trades_count > 0:
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])
# Reset for the new day using the current tick data
open_price = price
high_price = price
low_price = price
close_price = price
volume = 0
trades_count = 0
remaining_volume = volume_per_bar
current_day = tick_day
bar_time = tick_time # Update bar time to the current tick time
buy_volume = 0
sell_volume = 0
# Reset previous tick price (calculating imbalance for each day from the start)
prev_price = price
# Start new bar if needed because of the volume
while tick_volume > 0:
if tick_volume < remaining_volume:
# Add the entire tick to the current bar
high_price = max(high_price, price)
low_price = min(low_price, price)
close_price = price
volume += tick_volume
remaining_volume -= tick_volume
trades_count += 1
# Update buy and sell volumes
if price > prev_price:
buy_volume += tick_volume
elif price < prev_price:
sell_volume += tick_volume
tick_volume = 0
else:
# Fill the current bar and continue with a new bar
volume_to_add = remaining_volume
volume += volume_to_add
tick_volume -= volume_to_add
trades_count += 1
# Update buy and sell volumes
if price > prev_price:
buy_volume += volume_to_add
elif price < prev_price:
sell_volume += volume_to_add
# Append the completed bar to the list
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])
# Reset bar values for the new bar using the current tick data
open_price = price
high_price = price
low_price = price
close_price = price
volume = 0
trades_count = 0
remaining_volume = volume_per_bar
buy_volume = 0
sell_volume = 0
# Increment bar time if splitting a trade
if tick_volume > 0: # If there's remaining volume in the trade, set bar time slightly later
bar_time = tick_time + 1e-6
else:
bar_time = tick_time # Otherwise, set bar time to the tick time
prev_price = price
# Add the last bar if it contains any trades
if trades_count > 0:
ohlcv_bars.append([bar_time, open_price, high_price, low_price, close_price, volume, trades_count, tick_time, buy_volume, sell_volume])
return np.array(ohlcv_bars)
@jit(nopython=True)
def generate_time_bars_nb(ticks, resolution):
# Initialize the start and end time
start_time = np.floor(ticks[0, 0] / resolution) * resolution
end_time = np.floor(ticks[-1, 0] / resolution) * resolution
# # Calculate number of bars
# num_bars = int((end_time - start_time) // resolution + 1)
# Using a list to append data only when trades exist
ohlcv_bars = []
# Variables to track the current bar
current_bar_index = -1
open_price = 0
high_price = -np.inf
low_price = np.inf
close_price = 0
volume = 0
trades_count = 0
vwap_cum_volume_price = 0 # Cumulative volume * price
cum_volume = 0 # Cumulative volume for VWAP
buy_volume = 0 # Volume of buy trades
sell_volume = 0 # Volume of sell trades
prev_price = ticks[0, 1] # Initialize previous price for the first tick
prev_day = np.floor(ticks[0, 0] / 86400) # Calculate the initial day from the first tick timestamp
for tick in ticks:
curr_time = tick[0] #updated time
tick_time = np.floor(tick[0] / resolution) * resolution
price = tick[1]
tick_volume = tick[2]
tick_day = np.floor(tick_time / 86400) # Calculate the day of the current tick
#if the new tick is from a new day, reset previous tick price (calculating imbalance starts over)
if tick_day != prev_day:
prev_price = price
prev_day = tick_day
# Check if the tick belongs to a new bar
if tick_time != start_time + current_bar_index * resolution:
if current_bar_index >= 0 and trades_count > 0: # Save the previous bar if trades happened
vwap = vwap_cum_volume_price / cum_volume if cum_volume > 0 else 0
ohlcv_bars.append([start_time + current_bar_index * resolution, open_price, high_price, low_price, close_price, volume, trades_count, curr_time, vwap, buy_volume, sell_volume])
# Reset bar values
current_bar_index = int((tick_time - start_time) / resolution)
open_price = price
high_price = price
low_price = price
volume = 0
trades_count = 0
vwap_cum_volume_price = 0
cum_volume = 0
buy_volume = 0
sell_volume = 0
# Update the OHLCV values for the current bar
high_price = max(high_price, price)
low_price = min(low_price, price)
close_price = price
volume += tick_volume
trades_count += 1
vwap_cum_volume_price += price * tick_volume
cum_volume += tick_volume
# Update buy and sell volumes
if price > prev_price:
buy_volume += tick_volume
elif price < prev_price:
sell_volume += tick_volume
prev_price = price
# Save the last processed bar
if trades_count > 0:
vwap = vwap_cum_volume_price / cum_volume if cum_volume > 0 else 0
ohlcv_bars.append([start_time + current_bar_index * resolution, open_price, high_price, low_price, close_price, volume, trades_count, curr_time, vwap, buy_volume, sell_volume])
return np.array(ohlcv_bars)
# Example usage
if __name__ == '__main__':
pass
#example in agg_vect.ipynb
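# A minimal usage sketch (symbol and dates are illustrative, not from the source;
# see agg_vect.ipynb for the real examples):
# start = zoneNY.localize(datetime(2024, 3, 1, 9, 30))
# end = zoneNY.localize(datetime(2024, 3, 1, 16, 0))
# trades_df = fetch_trades_parallel("BAC", start, end)
# bars = aggregate_trades("BAC", trades_df, resolution=60, type=BarType.TIME)
# print(bars.head())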

View File

@@ -1,7 +1,7 @@
import os,sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
os.environ["KERAS_BACKEND"] = "jax"
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY, LOG_PATH, MODEL_DIR, VBT_DOC_DIRECTORY
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY, LOG_PATH, MODEL_DIR
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
from datetime import datetime
from rich import print
@@ -11,14 +11,13 @@ import uvicorn
from uuid import UUID
from v2realbot.utils.ilog import get_log_window
from v2realbot.common.model import RunManagerRecord, StrategyInstance, RunnerView, RunRequest, Trade, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, Bar, RunArchiveChange, TestList, ConfigItem, InstantIndicator, DataTablesRequest, AnalyzerInputs
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, HTTPException, status, WebSocketException, Cookie, Query, Request
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, HTTPException, status, WebSocketException, Cookie, Query
from fastapi.responses import FileResponse, StreamingResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from v2realbot.enums.enums import Env, Mode
from typing import Annotated
import os
import psutil
import uvicorn
import orjson
from queue import Queue, Empty
@@ -36,7 +35,7 @@ from traceback import format_exc
#from v2realbot.reporting.optimizecutoffs import find_optimal_cutoff
import v2realbot.reporting.analyzer as ci
import shutil
from starlette.responses import JSONResponse, HTMLResponse, FileResponse, RedirectResponse
from starlette.responses import JSONResponse
import mlroom
import mlroom.utils.mlutils as ml
from typing import List
@@ -76,67 +75,13 @@ def api_key_auth(api_key: str = Depends(X_API_KEY)):
detail="Forbidden"
)
def authenticate_user(credentials: HTTPBasicCredentials = Depends(HTTPBasic())):
correct_username = "david"
correct_password = "david"
if credentials.username == correct_username and credentials.password == correct_password:
return True
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Basic"},
)
app = FastAPI()
root = os.path.dirname(os.path.abspath(__file__))
#app.mount("/static", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="static")
app.mount("/static", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="static")
app.mount("/media", StaticFiles(directory=str(MEDIA_DIRECTORY)), name="media")
#app.mount("/", StaticFiles(html=True, directory=os.path.join(root, 'static')), name="www")
security = HTTPBasic()
@app.get("/static/{path:path}")
async def static_files(request: Request, path: str, authenticated: bool = Depends(authenticate_user)):
root = os.path.dirname(os.path.abspath(__file__))
static_dir = os.path.join(root, 'static')
if not path or path == "/":
file_path = os.path.join(static_dir, 'index.html')
else:
file_path = os.path.join(static_dir, path)
# Check if path is a directory
if os.path.isdir(file_path):
# If it's a directory, try to serve index.html within that directory
index_path = os.path.join(file_path, 'index.html')
if os.path.exists(index_path):
return FileResponse(index_path)
else:
# Optionally, you can return a directory listing or a custom 404 page here
return HTMLResponse("Directory listing not enabled.", status_code=403)
if not os.path.exists(file_path):
raise HTTPException(status_code=404, detail="File not found")
return FileResponse(file_path)
@app.get("/vbt-doc/{file_path:path}")
async def serve_protected_docs(file_path: str, credentials: HTTPBasicCredentials = Depends(authenticate_user)):
file_location = VBT_DOC_DIRECTORY / file_path
if file_location.is_dir(): # If it's a directory, serve index.html
index_file = file_location / "index.html"
if index_file.exists():
return FileResponse(index_file)
else:
raise HTTPException(status_code=404, detail="Index file not found")
elif file_location.exists():
return FileResponse(file_location)
else:
raise HTTPException(status_code=404, detail="File not found")
def get_current_username(
credentials: Annotated[HTTPBasicCredentials, Depends(security)]
@@ -158,9 +103,9 @@ async def get_api_key(
return session or api_key
#TODO rework from async?
# @app.get("/static")
# async def get(username: Annotated[str, Depends(get_current_username)]):
# return FileResponse("index.html")
@app.get("/static")
async def get(username: Annotated[str, Depends(get_current_username)]):
return FileResponse("index.html")
@app.websocket("/runners/{runner_id}/ws")
async def websocket_endpoint(
@@ -1042,25 +987,7 @@ def get_metadata(model_name: str):
# "last_modified": os.path.getmtime(model_path),
# # ... other metadata fields ...
# }
@app.get("/system-info")
def get_system_info():
"""Get system info, e.g. disk free space, used percentage ... """
disk_total = round(psutil.disk_usage('/').total / 1024**3, 1)
disk_used = round(psutil.disk_usage('/').used / 1024**3, 1)
disk_free = round(psutil.disk_usage('/').free / 1024**3, 1)
disk_used_percentage = round(psutil.disk_usage('/').percent, 1)
# memory_total = round(psutil.virtual_memory().total / 1024**3, 1)
# memory_perc = round(psutil.virtual_memory().percent, 1)
# cpu_time_user = round(psutil.cpu_times().user,1)
# cpu_time_system = round(psutil.cpu_times().system,1)
# cpu_time_idle = round(psutil.cpu_times().idle,1)
# network_sent = round(psutil.net_io_counters().bytes_sent / 1024**3, 6)
# network_recv = round(psutil.net_io_counters().bytes_recv / 1024**3, 6)
return {"disk_space": {"total": disk_total, "used": disk_used, "free" : disk_free, "used_percentage" : disk_used_percentage},
# "memory": {"total": memory_total, "used_percentage": memory_perc},
# "cpu_time" : {"user": cpu_time_user, "system": cpu_time_system, "idle": cpu_time_idle},
# "network": {"sent": network_sent, "received": network_recv}
}
# Thread function to insert data from the queue into the database
def insert_queue2db():

View File

@@ -2,7 +2,7 @@ from uuid import UUID
from typing import Any, List, Tuple
from uuid import UUID, uuid4
from v2realbot.enums.enums import Moddus, SchedulerStatus, RecordType, StartBarAlign, Mode, Account, OrderSide
from v2realbot.common.model import RunManagerRecord, StrategyInstance, RunDay, StrategyInstance, Runner, RunRequest, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, RunArchiveChange, Bar, TradeEvent, TestList, Intervals, ConfigItem, InstantIndicator, DataTablesRequest, Market
from v2realbot.common.model import RunManagerRecord, StrategyInstance, RunDay, StrategyInstance, Runner, RunRequest, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, RunArchiveChange, Bar, TradeEvent, TestList, Intervals, ConfigItem, InstantIndicator, DataTablesRequest
from v2realbot.utils.utils import validate_and_format_time, AttributeDict, zoneNY, zonePRG, safe_get, dict_replace_value, Store, parse_toml_string, json_serial, is_open_hours, send_to_telegram, concatenate_weekdays, transform_data
from v2realbot.common.PrescribedTradeModel import Trade, TradeDirection, TradeStatus, TradeStoplossType
from datetime import datetime
@@ -116,8 +116,7 @@ def initialize_jobs(run_manager_records: RunManagerRecord = None):
scheduler.add_job(start_runman_record, start_trigger, id=f"scheduler_start_{record.id}", args=[record.id])
scheduler.add_job(stop_runman_record, stop_trigger, id=f"scheduler_stop_{record.id}", args=[record.id])
#scheduler.add_job(print_hello, 'interval', seconds=10, id=
# f"scheduler_testinterval")
#scheduler.add_job(print_hello, 'interval', seconds=10, id=f"scheduler_testinterval")
scheduled_jobs = scheduler.get_jobs()
print(f"APS jobs refreshed ({len(scheduled_jobs)})")
current_jobs_dict = format_apscheduler_jobs(scheduled_jobs)
@@ -125,9 +124,9 @@ def initialize_jobs(run_manager_records: RunManagerRecord = None):
return 0, current_jobs_dict
#wrapper function that handles error handling and printing
def start_runman_record(id: UUID, debug_date = None):
def start_runman_record(id: UUID, market = "US", debug_date = None):
record = None
res, record, msg = _start_runman_record(id=id, debug_date=debug_date)
res, record, msg = _start_runman_record(id=id, market=market, debug_date=debug_date)
if record is not None:
market_time_now = datetime.now().astimezone(zoneNY) if debug_date is None else debug_date
@@ -166,8 +165,8 @@ def update_runman_record(record: RunManagerRecord):
err_msg= f"STOP: Error updating {record.id} errir {set} with values {record}"
return -2, err_msg#toto stopne zpracovani dalsich zaznamu pri chybe, zvazit continue
def stop_runman_record(id: UUID, debug_date = None):
res, record, msg = _stop_runman_record(id=id, debug_date=debug_date)
def stop_runman_record(id: UUID, market = "US", debug_date = None):
res, record, msg = _stop_runman_record(id=id, market=market, debug_date=debug_date)
#results : 0 - ok, -1 not running/already running/not specific, -2 error
#the report is always written to history if record is not None; any error then happened after the record was fetched
@@ -197,7 +196,7 @@ def stop_runman_record(id: UUID, debug_date = None):
print(f"STOP JOB: {id} FINISHED")
#start function that is called from the job
def _start_runman_record(id: UUID, debug_date = None):
def _start_runman_record(id: UUID, market = "US", debug_date = None):
print(f"Start scheduled record {id}")
record : RunManagerRecord = None
@@ -208,16 +207,15 @@ def _start_runman_record(id: UUID, debug_date = None):
record = result
if record.market == Market.US or record.market == Market.CRYPTO:
res, sada = sch.get_todays_market_times(market=record.market, debug_date=debug_date)
if market is not None and market == "US":
res, sada = sch.get_todays_market_times(market=market, debug_date=debug_date)
if res == 0:
market_time_now, market_open_datetime, market_close_datetime = sada
print(f"OPEN:{market_open_datetime} CLOSE:{market_close_datetime}")
else:
sada = f"Market {record.market} Error getting market times (CLOSED): " + str(sada)
sada = f"Market {market} Error getting market times (CLOSED): " + str(sada)
return res, record, sada
else:
print("Market type is unknown.")
if cs.is_stratin_running(record.strat_id):
return -1, record, f"Stratin {record.strat_id} is already running"
@@ -231,7 +229,7 @@ def _start_runman_record(id: UUID, debug_date = None):
return 0, record, record.runner_id
#stop function that is called from the job
def _stop_runman_record(id: UUID, debug_date = None):
def _stop_runman_record(id: UUID, market = "US", debug_date = None):
record = None
#get all records
print(f"Stopping record {id}")
@@ -306,5 +304,5 @@ if __name__ == "__main__":
# print(f"CALL FINISHED, with {debug_date} RESULT: {res}, {result}")
res, result = stop_runman_record(id=id, debug_date = debug_date)
res, result = stop_runman_record(id=id, market = "US", debug_date = debug_date)
print(f"CALL FINISHED, with {debug_date} RESULT: {res}, {result}")

View File

@@ -2,10 +2,10 @@ import json
import datetime
import v2realbot.controller.services as cs
import v2realbot.controller.run_manager as rm
from v2realbot.common.model import RunnerView, RunManagerRecord, StrategyInstance, Runner, RunRequest, Trade, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, Bar, RunArchiveChange, TestList, ConfigItem, InstantIndicator, DataTablesRequest, AnalyzerInputs, Market
from v2realbot.common.model import RunnerView, RunManagerRecord, StrategyInstance, Runner, RunRequest, Trade, RunArchive, RunArchiveView, RunArchiveViewPagination, RunArchiveDetail, Bar, RunArchiveChange, TestList, ConfigItem, InstantIndicator, DataTablesRequest, AnalyzerInputs
from uuid import uuid4, UUID
from v2realbot.utils.utils import json_serial, send_to_telegram, zoneNY, zonePRG, zoneUTC, fetch_calendar_data
from datetime import datetime, timedelta, time
from v2realbot.utils.utils import json_serial, send_to_telegram, zoneNY, zonePRG, fetch_calendar_data
from datetime import datetime, timedelta
from traceback import format_exc
from rich import print
import requests
@@ -18,18 +18,9 @@ from v2realbot.config import WEB_API_KEY
#scheduled as a standalone job, triggered only once at the given time for start and stop
#the new code is in aps_scheduler.py
def is_US_market_day(date):
cal_dates = fetch_calendar_data(date, date)
if len(cal_dates) == 0:
print("Today is not a market day.")
return False, cal_dates
else:
print("Market is open")
return True, cal_dates
def get_todays_market_times(market, debug_date = None):
def get_todays_market_times(market = "US", debug_date = None):
try:
if market == Market.US:
if market == "US":
#check all the conditions - possibly in a loop - the conditions are on the left
if debug_date is not None:
nowNY = debug_date
@@ -37,20 +28,17 @@ def get_todays_market_times(market, debug_date = None):
nowNY = datetime.now().astimezone(zoneNY)
nowNY_date = nowNY.date()
#is market open - currently US only
stat, calendar_dates = is_US_market_day(nowNY_date)
if stat:
cal_dates = fetch_calendar_data(nowNY_date, nowNY_date)
if len(cal_dates) == 0:
print("No Market Day today")
return -1, "Market Closed"
#only the main session is supported for now
#main session only
market_open_datetime = zoneNY.localize(calendar_dates[0].open)
market_close_datetime = zoneNY.localize(calendar_dates[0].close)
return 0, (nowNY, market_open_datetime, market_close_datetime)
else:
return -1, "Market is closed."
elif market == Market.CRYPTO:
now_market_datetime = datetime.now().astimezone(zoneUTC)
market_open_datetime = datetime.combine(datetime.now(), time.min)
market_close_datetime = datetime.combine(datetime.now(), time.max)
return 0, (now_market_datetime, market_open_datetime, market_close_datetime)
market_open_datetime = zoneNY.localize(cal_dates[0].open)
market_close_datetime = zoneNY.localize(cal_dates[0].close)
return 0, (nowNY, market_open_datetime, market_close_datetime)
else:
return -1, "Market not supported"
except Exception as e:
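The hunk is cut off here. For illustration, a hedged usage sketch of the simplified helper above (zoneNY is the timezone already imported in this file; the date is illustrative):

from datetime import datetime

res, sada = get_todays_market_times(market="US", debug_date=zoneNY.localize(datetime(2024, 3, 15, 10, 0)))
if res == 0:
    market_time_now, market_open_datetime, market_close_datetime = sada
    print(f"OPEN:{market_open_datetime} CLOSE:{market_close_datetime}")
else:
    print(f"Market closed or unsupported: {sada}")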

View File

@@ -131,29 +131,9 @@
<!-- <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/monaco-editor/0.41.0/min/vs/editor/editor.main.js"></script> -->
<!-- <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/monaco-editor/0.41.0/min/vs/loader.min.js"></script> -->
<script src="/static/js/systeminfo.js"> </script>
</head>
<body>
<div id="main" class="mainConteiner flex-container content">
<div id="system-info" class="flex-items">
<label data-bs-toggle="collapse" data-bs-target="#system-info-inner" aria-expanded="true">
<h4>System Info </h4>
</label>
<div id="system-info-inner" class="collapse">
<div id="system-info-output"></div>
<div id="graphical-output">
<div id="disk-gauge-container">
<span id="title"> Disk Space: </span>
<span id="free-space">Free: -- GB</span> |
<span id="total-space">Total: -- GB</span> |
<span id="used-percent">Used: -- %</span>
<div id="disk-gauge">
<div id="disk-gauge-bar"></div>
</div>
</div>
</div>
</div>
</div>
<div id="chartContainer" class="flex-items">
<label data-bs-toggle="collapse" data-bs-target="#chartContainerInner" aria-expanded="true">
<h4>Chart</h4>
@@ -250,7 +230,6 @@
<!-- <table id="trades-data-table" class="dataTable no-footer" style="width: 300px;display: contents;"></table> -->
</div>
</div>
<div id="runner-table" class="flex-items">
<label data-bs-toggle="collapse" data-bs-target="#runner-table-inner">
<h4>Running Strategies</h4>
@@ -368,7 +347,6 @@
<th>testlist_id</th>
<th>Running</th>
<th>RunnerId</th>
<th>Market</th>
</tr>
</thead>
<tbody></tbody>
@@ -1171,7 +1149,7 @@
<script src="/static/js/config.js?v=1.04"></script>
<!-- here begins the temporary local copy -->
<!-- <script type="text/javascript" src="https://unpkg.com/lightweight-charts/dist/lightweight-charts.standalone.production.js"></script> -->
<script type="text/javascript" src="/static/js/libs/lightweightcharts/lightweight-charts.standalone.production413.js"></script>
<script type="text/javascript" src="/static/js/libs/lightweightcharts/lightweight-charts.standalone.production410.js"></script>
<script src="/static/js/dynamicbuttons.js?v=1.05"></script>

View File

@@ -1,31 +0,0 @@
function get_system_info() {
console.log('Button get system status clicked')
$.ajax({
url: '/system-info',
type: 'GET',
beforeSend: function (xhr) {
xhr.setRequestHeader('X-API-Key',
API_KEY); },
success: function(response) {
$.each(response, function(index, item) {
if (index=="disk_space") {
$('#disk-gauge-bar').css('width', response.disk_space.used_percentage + '%');
$('#free-space').text('Free: ' + response.disk_space.free + ' GB');
$('#total-space').text('Total: ' + response.disk_space.total + ' GB');
$('#used-percent').text('Used: ' + response.disk_space.used_percentage + '%');
} else {
var formatted_item = JSON.stringify(item, null, 4)
$('#system-info-output').append('<p>' + index + ': ' + formatted_item + '</p>');
}
});
},
error: function(xhr, status, error) {
$('#disk-gauge-bar').html('An error occurred: ' + error + xhr.responseText + status);
}
});
}
$(document).ready(function(){
get_system_info()
});
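The deleted script consumed a /system-info endpoint; the server side is not part of this diff, but a minimal sketch of the disk_space payload the JavaScript above expects could look like this (FastAPI and the exact field names are assumptions inferred from the JS; the real endpoint is also protected by the X-API-Key header):

import shutil
from fastapi import FastAPI

app = FastAPI()

@app.get("/system-info")
def system_info():
    #hypothetical endpoint shape matching the fields read by the deleted JS
    usage = shutil.disk_usage("/")
    gb = 1024 ** 3
    return {"disk_space": {
        "free": round(usage.free / gb, 1),
        "total": round(usage.total / gb, 1),
        "used_percentage": round(100 * usage.used / usage.total, 1),
    }}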

View File

@@ -46,7 +46,6 @@ function initialize_runmanagerRecords() {
{data: 'testlist_id', visible: true},
{data: 'strat_running', visible: true},
{data: 'runner_id', visible: true},
{data: 'market', visible: true},
],
paging: true,
processing: true,

View File

@@ -371,10 +371,9 @@ function initialize_chart() {
}
chart = LightweightCharts.createChart(document.getElementById('chart'), chartOptions);
chart.applyOptions({ timeScale: { visible: true, timeVisible: true, secondsVisible: true, minBarSpacing: 0.003}, crosshair: {
chart.applyOptions({ timeScale: { visible: true, timeVisible: true, secondsVisible: true }, crosshair: {
mode: LightweightCharts.CrosshairMode.Normal, labelVisible: true
}})
console.log("chart intiialized")
}
//maybe set the last-value-visible attributes

View File

@@ -994,24 +994,3 @@ pre {
#datepicker:disabled {
background-color: #f2f2f2;
}
#disk-gauge-container {
text-align: center;
width: 400px;
}
#disk-gauge {
width: 100%;
height: 20px;
background-color: #ddd;
border-radius: 10px;
overflow: hidden;
}
#disk-gauge-bar {
height: 100%;
background-color: #4285F4;
width: 0%; /* Initial state */
border-radius: 10px;
}

View File

@@ -5,108 +5,99 @@ from v2realbot.utils.utils import slice_dict_lists,zoneUTC,safe_get, AttributeDi
#id = "b11c66d9-a9b6-475a-9ac1-28b11e1b4edf"
#state = AttributeDict(vars={})
from rich import print
from traceback import format_exc
def attach_previous_data(state):
"""""
Attaches data from previous runner of the same batch.
"""""
print("ATTACHING PREVIOUS DATA")
try:
runner : Runner
#get batch_id of the current runner
res, runner = cs.get_runner(state.runner_id)
if res < 0:
if runner.batch_id is None:
print(f"No batch_id found for runner {runner.id}")
else:
print(f"Couldnt get previous runner {state.runner_id} error: {runner}")
return None
batch_id = runner.batch_id
#batch_id = "6a6b0bcf"
res, runner_ids =cs.get_archived_runnerslist_byBatchID(batch_id, "desc")
if res < 0:
msg = f"error whne fetching runners of batch {batch_id} {runner_ids}"
print(msg)
return None
if runner_ids is None or len(runner_ids) == 0:
print(f"NO runners found for batch {batch_id} {runner_ids}")
return None
last_runner = runner_ids[0]
print("Previous runner identified:", last_runner)
#get archived header - to get transferables
runner_header : RunArchive = None
res, runner_header = cs.get_archived_runner_header_byID(last_runner)
if res < 0:
print(f"Error when fetching runner header {last_runner}")
return None
state.vars["transferables"] = runner_header.transferables
print("INITIALIZED transferables", state.vars["transferables"])
#get details from the runner
print(f"Fetching runner details of {last_runner}")
res, val = cs.get_archived_runner_details_byID(last_runner)
if res < 0:
print(f"no archived runner {last_runner}")
return None
detail = RunArchiveDetail(**val)
#print("toto jsme si dotahnuli", detail.bars)
if len(detail.bars["time"]) == 0:
print(f"no bars for runner {last_runner}")
return None
# from stratvars directives
attach_previous_bar_data = safe_get(state.vars, "attach_previous_bar_data", 50)
attach_previous_tick_data = safe_get(state.vars, "attach_previous_tick_data", None)
#indicators datetime utc
indicators = slice_dict_lists(d=detail.indicators[0],last_item=attach_previous_bar_data, time_to_datetime=True)
#time -datetime utc, updated - timestamp float
bars = slice_dict_lists(d=detail.bars, last_item=attach_previous_bar_data, time_to_datetime=True)
cbar_ids = {}
#align the tick data with the bar data
if attach_previous_tick_data is None:
oldest_timestamp = bars["updated"][0]
#returns only values older that oldest_timestamp
cbar_inds = filter_timeseries_by_timestamp(detail.indicators[1], oldest_timestamp)
runner : Runner
#get batch_id of the current runner
res, runner = cs.get_runner(state.runner_id)
if res < 0:
if runner.batch_id is None:
print(f"No batch_id found for runner {runner.id}")
else:
cbar_inds = slice_dict_lists(d=detail.indicators[1],last_item=attach_previous_tick_data)
#USE these as INITs - STOP HERE AGAIN AND COMPARE
#print("state.indicators", state.indicators, "NEW:", indicators)
state.indicators = AttributeDict(**indicators)
print("transferred indicators:", len(state.indicators["time"]))
#print("state.bars", state.bars, "NEW:", bars)
state.bars = AttributeDict(bars)
print("transferred bars:", len(state.bars["time"]))
#print("state.cbar_indicators", state.cbar_indicators, "NEW:", cbar_inds)
state.cbar_indicators = AttributeDict(cbar_inds)
print("transferred ticks:", len(state.cbar_indicators["time"]))
print("TRANSFERABLEs INITIALIZED")
#bars
#transferable_state_vars = ["martingale", "batch_profit"]
#1. at init, these keys from state.vars are mapped into ext_data: ext_data["transferables"]["martingale"] = state.vars["martingale"]
#2. on transfer, everything from ext_data["transferables"] is put into the same-named state.vars["martingale"]
#3. at the end of the day it is saved into the transferables column in RunArchive
#adding dailyBars from extData
# if hasattr(detail, "ext_data") and "dailyBars" in detail.ext_data:
# state.dailyBars = detail.ext_data["dailyBars"]
return
except Exception as e:
print(str(e)+format_exc())
print(f"Couldnt get previous runner {state.runner_id} error: {runner}")
return None
batch_id = runner.batch_id
#batch_id = "6a6b0bcf"
res, runner_ids =cs.get_archived_runnerslist_byBatchID(batch_id, "desc")
if res < 0:
msg = f"error whne fetching runners of batch {batch_id} {runner_ids}"
print(msg)
return None
if runner_ids is None or len(runner_ids) == 0:
print(f"NO runners found for batch {batch_id} {runner_ids}")
return None
last_runner = runner_ids[0]
print("Previous runner identified:", last_runner)
#get archived header - to get transferables
runner_header : RunArchive = None
res, runner_header = cs.get_archived_runner_header_byID(last_runner)
if res < 0:
print(f"Error when fetching runner header {last_runner}")
return None
state.vars["transferables"] = runner_header.transferables
print("INITIALIZED transferables", state.vars["transferables"])
#get details from the runner
print(f"Fetching runner details of {last_runner}")
res, val = cs.get_archived_runner_details_byID(last_runner)
if res < 0:
print(f"no archived runner {last_runner}")
return None
detail = RunArchiveDetail(**val)
#print("toto jsme si dotahnuli", detail.bars)
# from stratvars directives
attach_previous_bar_data = safe_get(state.vars, "attach_previous_bar_data", 50)
attach_previous_tick_data = safe_get(state.vars, "attach_previous_tick_data", None)
#indicators datetime utc
indicators = slice_dict_lists(d=detail.indicators[0],last_item=attach_previous_bar_data, time_to_datetime=True)
#time -datetime utc, updated - timestamp float
bars = slice_dict_lists(d=detail.bars, last_item=attach_previous_bar_data, time_to_datetime=True)
#align the tick data with the bar data
if attach_previous_tick_data is None:
oldest_timestamp = bars["updated"][0]
#returns only values older that oldest_timestamp
cbar_inds = filter_timeseries_by_timestamp(detail.indicators[1], oldest_timestamp)
else:
cbar_inds = slice_dict_lists(d=detail.indicators[1],last_item=attach_previous_tick_data)
#USE these as INITs - STOP HERE AGAIN AND COMPARE
#print("state.indicators", state.indicators, "NEW:", indicators)
state.indicators = AttributeDict(**indicators)
print("transferred indicators:", len(state.indicators["time"]))
#print("state.bars", state.bars, "NEW:", bars)
state.bars = AttributeDict(bars)
print("transferred bars:", len(state.bars["time"]))
#print("state.cbar_indicators", state.cbar_indicators, "NEW:", cbar_inds)
state.cbar_indicators = AttributeDict(cbar_inds)
print("transferred ticks:", len(state.cbar_indicators["time"]))
print("TRANSFERABLEs INITIALIZED")
#bars
#transferable_state_vars = ["martingale", "batch_profit"]
#1. at init, these keys from state.vars are mapped into ext_data: ext_data["transferables"]["martingale"] = state.vars["martingale"]
#2. on transfer, everything from ext_data["transferables"] is put into the same-named state.vars["martingale"]
#3. at the end of the day it is saved into the transferables column in RunArchive
#adding dailyBars from extData
# if hasattr(detail, "ext_data") and "dailyBars" in detail.ext_data:
# state.dailyBars = detail.ext_data["dailyBars"]
return
# if __name__ == "__main__":
# attach_previous_data(state)
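A sketch of the transferables life cycle described in the numbered comments above; ext_data, state.vars and the RunArchive transferables column come from those comments, while the helper functions are illustrative:

transferable_state_vars = ["martingale", "batch_profit"]

def init_transferables(state, ext_data):
    #1. at init, map the selected state.vars keys into ext_data
    ext_data["transferables"] = {k: state.vars[k] for k in transferable_state_vars if k in state.vars}

def transfer_transferables(state, ext_data):
    #2. on transfer, put everything from ext_data["transferables"] into the same-named state.vars
    for key, value in ext_data.get("transferables", {}).items():
        state.vars[key] = value

#3. at the end of the day the dict is persisted into the transferables column of runner_header/RunArchive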

View File

@@ -1,320 +0,0 @@
import matplotlib
import matplotlib.dates as mdates
#matplotlib.use('Agg') # Set the Matplotlib backend to 'Agg'
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from datetime import datetime
from typing import List
from enum import Enum
import numpy as np
import v2realbot.controller.services as cs
from rich import print as richprint
from v2realbot.common.model import AnalyzerInputs
from v2realbot.common.PrescribedTradeModel import TradeDirection, TradeStatus, Trade, TradeStoplossType
from v2realbot.utils.utils import isrising, isfalling,zoneNY, price2dec, safe_get#, print
from pathlib import Path
from v2realbot.config import WEB_API_KEY, DATA_DIR, MEDIA_DIRECTORY
from v2realbot.enums.enums import RecordType, StartBarAlign, Mode, Account, OrderSide
from io import BytesIO
from v2realbot.utils.historicals import get_historical_bars
from alpaca.data.timeframe import TimeFrame, TimeFrameUnit
from collections import defaultdict
from scipy.stats import zscore
from io import BytesIO
from typing import Tuple, Optional, List
from v2realbot.common.PrescribedTradeModel import TradeDirection, TradeStatus, Trade, TradeStoplossType
from collections import Counter
import vectorbtpro as vbt
# Function to add 'resolution' seconds to the last datetime (if it exists and is on the same day)
def adjust_datetime_iteratively(df, resolution):
adjusted_times = []
for i, current_time in enumerate(df.index):
if i == 0:
# The first entry is unchanged
adjusted_times.append(current_time)
continue
previous_time = adjusted_times[-1]
# Check if it's the same day
if previous_time.date() == current_time.date():
# Add resolution to the previous datetime
adjusted_time = previous_time + pd.Timedelta(seconds=resolution)
else:
# Different day, leave it as is
adjusted_time = current_time
adjusted_times.append(adjusted_time)
# Update DataFrame index
df.index = pd.DatetimeIndex(adjusted_times)
return df
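This file is removed by the commit, but since the helper above re-spaces only same-day timestamps, a small usage sketch for reference (the sample frame and the 23 s resolution are illustrative):

import pandas as pd

idx = pd.DatetimeIndex(["2024-03-12 09:30:00", "2024-03-12 09:30:05",
                        "2024-03-12 09:31:40", "2024-03-13 09:30:00"])
df = pd.DataFrame({"Close": [1.0, 2.0, 3.0, 4.0]}, index=idx)
df = adjust_datetime_iteratively(df, resolution=23)
#same-day rows are now spaced exactly 23 s apart; the first row of a new day keeps its original time
print(df.index)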
def convert_to_dataframe(ohlcv):
"""
Convert a dictionary containing OHLCV data into a pandas DataFrame.
Parameters:
ohlcv (dict): Dictionary containing OHLCV data.
It should have keys 'time', 'open', 'high', 'low', 'close', 'volume', 'updated'.
'time' should be a list of float timestamps.
'updated' should be a list of Python datetimes in UTC time zone.
Returns:
pd.DataFrame: DataFrame containing the OHLCV data with the index converted to East coast US time.
"""
#if the key 'index' exists, rename it to 'custom_index' so it does not cause trouble in pandas
try:
if ohlcv.get('index', False):
ohlcv['custom_index'] = ohlcv.pop('index')
except Exception as e:
pass
#keys whose first letter should not be uppercased
keys_not_to_upper = ["time", "updated"]
# Update keys not in the exclusion list
for key in list(ohlcv.keys()): # Iterate over a copy of the keys
if key not in keys_not_to_upper:
ohlcv[key.title()] = ohlcv.pop(key)
# Create DataFrame from the dictionary
df = pd.DataFrame(ohlcv)
# Convert 'time' to datetime and set as index
df['time'] = pd.to_datetime(df['time'], unit='s', utc=True)
df.set_index('time', inplace=True)
# Convert index to East coast US time zone
df.index = df.index.tz_convert('US/Eastern')
if 'updated' in df.columns:
df['updated'] = pd.to_datetime(df['updated'], unit='s', utc=True)
df['updated'] = df['updated'].dt.tz_convert('US/Eastern')
return df
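A usage sketch under the stated contract (values are illustrative; 'time' and 'updated' are float unix timestamps in seconds, which is what the pd.to_datetime(..., unit='s') calls above consume):

ohlcv = {
    "time": [1710253800.0, 1710253823.0],
    "open": [56.10, 56.20], "high": [56.30, 56.30],
    "low": [56.00, 56.10], "close": [56.20, 56.25],
    "volume": [1000, 1200],
    "updated": [1710253822.5, 1710253845.5],
}
df = convert_to_dataframe(ohlcv)
print(df.index.tz)        #US/Eastern
print(list(df.columns))   #['updated', 'Open', 'High', 'Low', 'Close', 'Volume']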
def print(v, *args, **kwargs):
if v:
richprint(*args, **kwargs)
def load_batch(runner_ids: List = None, batch_id: str = None, space_resolution_evenly = False, main_session_only = True, merge_ind2bars = True, bars_columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'Vwap'], indicators_columns = [], verbose = False) -> Tuple[int, dict]:
"""Load batches (all runners from single batch) into pandas dataframes
Args:
runner_ids (List, optional): A list of runner identifiers. Defaults to None.
batch_id (str, optional): The ID of a specific batch to retrieve. Defaults to None.
merge_ind2bars (bool, optional): merge indicator into bars dataframe. Defaults to True.
bars_columns (list, optional): List of columns to keep in bars df. Defaults to ['Open', 'High', 'Low', 'Close', 'Volume', 'Vwap'].
indicators_columns (list, optional): List of columns to keep in indicators df. Defaults to an empty list.
space_resolution_evenly (bool, optional): If True, alters the index so it is spaced evenly at the resolution given in ['resolution']. Defaults to False.
Returns:
Tuple[int, dict]: A tuple containing:
* An integer status code (0 on success, negative on error).
* A dictionary with keys bars, indicators and cbar_indicators - with pandas dataframe
"""
if runner_ids is None and batch_id is None:
return -2, f"runner_id or batch_id must be present", 0
if batch_id is not None:
res, runner_ids =cs.get_archived_runnerslist_byBatchID(batch_id)
if res != 0:
print(f"no batch {batch_id} found")
return -1, f"no batch {batch_id} found", 0
#DATA PREPARATION
bars = None
indicators = None
cnt = 0
dfs = dict(bars=[], indicators=[],cbar_indicators=[])
resolution = None
for id in runner_ids:
cnt += 1
#get runner detail
res, sada =cs.get_archived_runner_details_byID(id)
if res != 0:
print(f"no runner {id} found")
return -1, f"no runner {id} found", 0
if resolution is None:
resolution = sada["bars"]["resolution"][0]
print(verbose, f"Resolution : {resolution}")
#add daily bars limited to the required columns; we keep 'updated' as it is the mapping column to indicators
bars = convert_to_dataframe(sada["bars"])[bars_columns + ["updated"]]
#bars = bars.loc[:, bars_columns]
indicators = convert_to_dataframe(sada["indicators"][0])[indicators_columns]
#join indicators to bars dataframe
if merge_ind2bars:
#merge; 'time' in indicators corresponds to 'updated' in bars
bars = bars.reset_index()
bars = pd.merge(bars, indicators, left_on="updated", right_on="time", how="left")
bars = bars.set_index("time")
else:
dfs["indicators"].append(indicators)
#drop updated as mapping column
#bars = bars.drop("updated", axis=1)
dfs["bars"].append(bars)
#indicators = sada["indicators"][0]
#cbar_indicators = sada["indicators"][1]
#merge all days into single df
for key in dfs:
if len(dfs[key])>0:
concat_df = pd.concat(dfs[key], axis=0)
concat_df = concat_df.between_time('9:30', '16:00') if main_session_only else concat_df
# Count the number of duplicates (excluding the first occurrence)
num_duplicates = concat_df.index.duplicated().sum()
if num_duplicates > 0:
print(verbose, f"NOTE: DUPLICATES {num_duplicates}/{len(concat_df)} in {key}. REMOVING.")
concat_df = concat_df[~concat_df.index.duplicated()]
num_duplicates = concat_df.index.duplicated().sum()
print(verbose, f"Now there are {num_duplicates}/{len(concat_df)}")
if space_resolution_evenly and key != "cbar_indicators":
# Apply rounding to the datetime index according to resolution (in seconds)
concat_df = adjust_datetime_iteratively(concat_df, resolution)
dfs[key] = concat_df
return 0, dfs
if __name__ == "__main__":
res, df = load_batch(batch_id="e44a5075", space_resolution_evenly=True, indicators_columns=["Rsi14"], main_session_only=False)
if res < 0:
print("Error" + str(res) + str(df))
print(df)
df = df["bars"]
print(df.info(), df.head())
#filter columns
#columns_to_keep = ['Open', 'High', 'Low', 'Close', 'Volume', 'Vwap']
#df = df.loc[:, columns_to_keep]
#df = df.rename(columns={'index': 'custom_index'})
print(df.info(), df.head(), df.describe())
#filter times
#df = df.between_time('9:30', '16:00')
print(df.info())
# Set the frequency to 23 seconds
#df.index.freq = pd.tseries.offsets.Second(23)
# Check the frequency of the index
# Resample and aggregate the data
# resampled_df = df.resample('23S').agg({
# 'open': 'first',
# 'high': 'max',
# 'low': 'min',
# 'close': 'last',
# 'volume': 'sum'
# })
#df.index.freq = pd.infer_freq(df.index)
#print(df.index.freq)
# Set the frequency of the index explicitly - if a standard one like 1T exists; if it doesn't exist, custom_frequency will be used
#df.index.freq = pd.date_range(start=df.index[0], periods=len(df), freq='23S')
print(df.info())
vbt.settings.set_theme("dark")
vbt.settings['plotting']['layout']['width'] = 1280
vbt.settings.plotting.auto_rangebreaks = True
#load into vbt, symbol as column
bar_data = vbt.Data.from_data({"BAC": df}, tz_convert="US/Eastern")
print(bar_data)
print(bar_data.close)
print(bar_data.data["BAC"]["Rsi14"])
bar_data.data["BAC"]["Rsi14"].vbt.plot().show()
print(bar_data["Rsi14"])
#ohlcv plot (subplot 2x1)
bar_data.data["BAC"].vbt.ohlcv.plot().show()
#create two subplots 3x1 (ohlcv + RSI)
# fig = vbt.make_subplots(rows=3, cols=1)
# bar_data.data["BAC"].vbt.ohlcv.plot(add_trace_kwargs=dict(row=1, col=1),fig=fig)
# bar_data.data["BAC"]["Rsi14"].vbt.plot(add_trace_kwargs=dict(row=3, col=1),fig=fig)
# fig.show()
#create subplots with alternate Y axis - RSI overlay
fig1 = vbt.make_subplots(specs=[[{"secondary_y": True}]])
bar_data.data["BAC"]["Close"].vbt.plot(add_trace_kwargs=dict(secondary_y=False),fig=fig1)
bar_data.data["BAC"].vbt.plot(add_trace_kwargs=dict(secondary_y=True),fig=fig1)
fig1.show()
puv_df = bar_data.data["BAC"]
bar_data23s = bar_data[["Open", "High", "Low", "Close", "Volume"]]
print(bar_data23s)
#resample by vbt
bar_data46s = bar_data23s.get().resample("46s").agg({
"Open": "first",
"High": "max",
"Low": "min",
"Close": "last",
"Volume": "sum"
})
print(bar_data46s)
res_data = bar_data46s.data["BAC"]
#bar_data23s.data["BAC"].ptable()
#bar_data23s = bar_data.resample("23S")
print(bar_data46s)
print(bar_data46s.close)
vbt.settings.plotting.auto_rangebreaks = True
bar_data46s.data["BAC"].vbt.ohlcv.plot().show()
#TARGET DAYS - only one day or range
# Target Date
#target_date = pd.to_datetime('2023-10-12', tz='US/Eastern')
# Date Range
start_date = pd.to_datetime('2024-03-12')
#end_date = pd.to_datetime('2023-10-14')
new_data = bar_data.transform(lambda df: df[df.index.date == start_date.date()])
#range: filtered_data = data[(data.index >= start_date) & (data.index <= end_date)]
print(new_data)
new_data.data["BAC"].vbt.ohlcv.plot().show()
# Filtering RANGE or DAY
# filtered_data = data[(data.index >= start_date) & (data.index <= end_date)]
# filtered_data = data[data.index.date == target_date.date()]
#custom aggregation
# ohlcv_agg = pd.DataFrame({
# 'Open': df.resample('1T')['Open'].first(),
# 'High': df.resample('1T')['High'].max(),
# 'Low': df.resample('1T')['Low'].min(),
# 'Close': df.resample('1T')['Close'].last(),
# 'Volume': df.resample('1T')['Volume'].sum()
# })
#Define a custom frequency with a timedelta of 23 seconds
# custom_frequency = pd.tseries.offsets.DateOffset(seconds=23)
# # Create a new DataFrame with the desired frequency
# new_index = pd.date_range(start=df.index[0], end=df.index[-1], freq=custom_frequency)
# new_df = pd.DataFrame(index=new_index)
# # Reindex the DataFrame
# df = df.reindex(new_df.index)
# # Now you can check the frequency of the index
# print(df.index.freq)

View File

@@ -5,7 +5,6 @@ from alpaca.data.enums import DataFeed
import v2realbot.utils.config_defaults as config_defaults
from v2realbot.enums.enums import FillCondition
from rich import print
# from v2realbot.utils.utils import print
def aggregate_configurations(module):
return {key: getattr(module, key) for key in dir(module) if key.isupper()}
@@ -49,8 +48,8 @@ class ConfigHandler:
self.active_config = self.default_config.copy()
self.active_config.update(override_configuration)
self.active_profile = profile_name
#print(f"Profile {profile_name} loaded successfully.")
#print("Current values:", self.active_config)
print(f"Profile {profile_name} loaded successfully.")
print("Current values:", self.active_config)
else:
print(f"Profile {profile_name} does not exist in config item: {config_directive}")
except Exception as e:
@@ -94,9 +93,7 @@ class ConfigHandler:
return FillCondition(value)
case "BT_FILL_CONDITION_SELL_LIMIT":
return FillCondition(value)
case "AGG_EXCLUDED_TRADES":
return sorted(value) # Convert to a sorted list
# Add cases for other enumeration conversions or transformations as needed
# Add cases for other enumeration conversions as needed
case _:
return value
@@ -105,8 +102,8 @@
# Global configuration - it is imported by modules that need it. In the future this can be changed to Dependency Injection (each service will receive the config instance as an input parameter)
config_handler = ConfigHandler()
#print(f"{config_handler.active_profile=}")
#print("config handler initialized")
print(f"{config_handler.active_profile=}")
print("config handler initialized")
#this is how to get value
#config_handler.get_val('BT_FILL_PRICE_MARKET_ORDER_PREMIUM')
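For illustration, a hedged sketch of how a consuming module reads typed values through the global handler (directive names are taken from the match cases above):

import v2realbot.utils.config_handler as cfh

premium = cfh.config_handler.get_val('BT_FILL_PRICE_MARKET_ORDER_PREMIUM')   #plain value
fill_cond = cfh.config_handler.get_val('BT_FILL_CONDITION_SELL_LIMIT')       #converted to FillCondition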

View File

@@ -20,7 +20,7 @@ from uuid import UUID
from enum import Enum
#from v2realbot.enums.enums import Order
from v2realbot.common.model import Order as btOrder, TradeUpdate as btTradeUpdate
from alpaca.trading.models import Order, TradeUpdate, Calendar
from alpaca.trading.models import Order, TradeUpdate
import numpy as np
import pandas as pd
from collections import deque
@@ -35,7 +35,6 @@ import tempfile
import shutil
from filelock import FileLock
import v2realbot.utils.config_handler as cfh
import pandas_market_calendars as mcal
def validate_and_format_time(time_string):
"""
@@ -60,32 +59,8 @@ def validate_and_format_time(time_string):
else:
return None
def fetch_calendar_data(start: datetime, end: datetime) -> List[Calendar]:
"""
Fetches the trading schedule for the NYSE (New York Stock Exchange) between the specified start and end dates.
Args:
start (datetime): The start date for the trading schedule.
end (datetime): The end date for the trading schedule.
Returns:
List[Calendar]: A list of Calendar objects containing the trading dates and market open/close times.
Returns an empty list if no trading days are found within the specified range.
"""
nyse = mcal.get_calendar('NYSE')
schedule = nyse.schedule(start_date=start, end_date=end, tz='America/New_York')
if not schedule.empty:
schedule = (schedule.reset_index()
.rename(columns={"index": "date", "market_open": "open", "market_close": "close"})
.assign(date=lambda day: day['date'].dt.date.astype(str),
open=lambda day: day['open'].dt.strftime('%H:%M'),
close=lambda day: day['close'].dt.strftime('%H:%M'))
.to_dict(orient="records"))
cal_dates = [Calendar(**record) for record in schedule]
return cal_dates
else:
return []
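A short usage sketch of the new pandas_market_calendars based helper (the date is illustrative):

from datetime import date

cal_dates = fetch_calendar_data(date(2024, 3, 15), date(2024, 3, 15))
if cal_dates:
    print(cal_dates[0].date, cal_dates[0].open, cal_dates[0].close)
else:
    print("No trading day in the given range")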
#Alpaca Calendar wrapper with retry
def fetch_calendar_data_from_alpaca(start, end, max_retries=5, backoff_factor=1):
def fetch_calendar_data(start, end, max_retries=5, backoff_factor=1):
"""
Attempts to fetch calendar data with exponential backoff. Raises an exception if all retries fail.
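The body of the retry wrapper is truncated by the diff; a generic sketch of the exponential-backoff pattern the docstring describes (fetch_fn stands in for the actual Alpaca calendar call):

import time

def fetch_with_backoff(fetch_fn, max_retries=5, backoff_factor=1):
    #sleep backoff_factor * 2**attempt seconds between attempts; re-raise after the last one
    for attempt in range(max_retries):
        try:
            return fetch_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff_factor * (2 ** attempt))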

View File

@@ -1,97 +0,0 @@
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "test_list" (
"id" varchar(32) NOT NULL,
"name" varchar(255) NOT NULL,
"dates" json NOT NULL
);
CREATE TABLE IF NOT EXISTS "runner_detail" (
"runner_id" varchar(32) NOT NULL,
"data" json NOT NULL,
PRIMARY KEY("runner_id")
);
CREATE TABLE IF NOT EXISTS "runner_header" (
"runner_id" varchar(32) NOT NULL,
"strat_id" TEXT,
"batch_id" TEXT,
"symbol" TEXT,
"name" TEXT,
"note" TEXT,
"started" TEXT,
"stopped" TEXT,
"mode" TEXT,
"account" TEXT,
"bt_from" TEXT,
"bt_to" TEXT,
"strat_json" TEXT,
"settings" TEXT,
"ilog_save" INTEGER,
"profit" NUMERIC,
"trade_count" INTEGER,
"end_positions" INTEGER,
"end_positions_avgp" NUMERIC,
"metrics" TEXT,
"stratvars_toml" TEXT,
"transferables" TEXT,
PRIMARY KEY("runner_id")
);
CREATE TABLE IF NOT EXISTS "config_table" (
"id" INTEGER,
"item_name" TEXT NOT NULL,
"json_data" JSON NOT NULL,
"item_lang" TEXT,
PRIMARY KEY("id" AUTOINCREMENT)
);
CREATE TABLE IF NOT EXISTS "runner_logs" (
"runner_id" varchar(32) NOT NULL,
"time" real NOT NULL,
"data" json NOT NULL
);
CREATE TABLE "run_manager" (
"moddus" TEXT NOT NULL,
"id" varchar(32),
"strat_id" varchar(32) NOT NULL,
"symbol" TEXT,
"account" TEXT NOT NULL,
"mode" TEXT NOT NULL,
"note" TEXT,
"ilog_save" BOOLEAN,
"bt_from" TEXT,
"bt_to" TEXT,
"weekdays_filter" TEXT,
"batch_id" TEXT,
"start_time" TEXT NOT NULL,
"stop_time" TEXT NOT NULL,
"status" TEXT NOT NULL,
"last_processed" TEXT,
"history" TEXT,
"valid_from" TEXT,
"valid_to" TEXT,
"testlist_id" TEXT,
"runner_id" varchar2(32),
"market" TEXT,
PRIMARY KEY("id")
);
CREATE INDEX idx_moddus ON run_manager (moddus);
CREATE INDEX idx_status ON run_manager (status);
CREATE INDEX idx_status_moddus ON run_manager (status, moddus);
CREATE INDEX idx_valid_from_to ON run_manager (valid_from, valid_to);
CREATE INDEX idx_stopped_batch_id ON runner_header (stopped, batch_id);
CREATE INDEX idx_search_value ON runner_header (strat_id, batch_id);
CREATE INDEX IF NOT EXISTS "index_runner_header_pk" ON "runner_header" (
"runner_id"
);
CREATE INDEX IF NOT EXISTS "index_runner_header_strat" ON "runner_header" (
"strat_id"
);
CREATE INDEX IF NOT EXISTS "index_runner_header_batch" ON "runner_header" (
"batch_id"
);
CREATE UNIQUE INDEX IF NOT EXISTS "index_runner_detail_pk" ON "runner_detail" (
"runner_id"
);
CREATE INDEX IF NOT EXISTS "index_runner_logs" ON "runner_logs" (
"runner_id",
"time"
);
INSERT INTO config_table VALUES (1, 'test', '{}', 'json');
COMMIT;
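A hedged sketch of querying this schema from Python; the database file name and filter values are illustrative, and the (status, moddus) composite index above serves exactly this kind of lookup:

import sqlite3

conn = sqlite3.connect("v2trading.db")  #hypothetical file name
cur = conn.execute(
    "SELECT id, strat_id, start_time, stop_time FROM run_manager WHERE status = ? AND moddus = ?",
    ("active", "scheduler"),  #illustrative values
)
for row in cur.fetchall():
    print(row)
conn.close()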