Reinforcement Learning for Trading: Practical Examples and Lessons Learned, by Dr. Tom Starke

69,177 views

Quantopian

1 day ago

Comments: 55
@Otvazhnii 2 years ago
Yet another improvement: you cannot rely on a single training attempt, because the initial weights are random. You have to implement multiprocessing logic, run 10 attempts at a time, and print the 10 profit-and-loss curves at the end.
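The multi-run idea above can be sketched as follows. This is a minimal illustration, not code from the talk: `train_once` is a hypothetical stand-in for a full training run, returning a fake cumulative P&L curve seeded by the run's random initialization.

```python
import numpy as np

# Stand-in for one full training run (an assumption for illustration):
# different seeds mimic different random initial weights.
def train_once(seed, n_steps=100):
    rng = np.random.default_rng(seed)
    returns = rng.normal(loc=0.0, scale=1.0, size=n_steps)
    return returns.cumsum()            # cumulative profit-and-loss curve

def run_attempts(n_attempts=10):
    # A plain loop for clarity; swap it for multiprocessing.Pool().map
    # to run the attempts in parallel, as the comment suggests.
    return [train_once(seed) for seed in range(n_attempts)]

curves = run_attempts(10)              # 10 independent P&L curves to plot
```

Comparing the 10 curves side by side shows how much of the result is initialization luck rather than a learned strategy.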
@scalbylasusjim2780 3 years ago
I think neural networks start to truly outperform SVMs as the decision boundary becomes more and more nonlinear; the kernel tricks would have to become more and more complex.
@Otvazhnii 2 years ago
And yet another improvement. You standardize the state data by subtracting the mean and dividing by the standard deviation. Standardizing is a good thing, but the state is made of 141 values, including OHLC prices for M5, H1, and D1 bars and various indicators ranging from -10 to 100. I do not think you can merge quantities of products with prices of products and then standardize them, as they say, all together.
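The per-feature fix the comment is pointing at can be sketched like this; the two-column example values are illustrative, not from the talk's 141-value state.

```python
import numpy as np

# Per-feature standardization: each column (a price, an indicator) gets
# its own mean and standard deviation, instead of one pair of statistics
# shared across the whole heterogeneous state vector.
def standardize_columns(states):
    states = np.asarray(states, dtype=float)
    mean = states.mean(axis=0)                 # one mean per feature
    std = states.std(axis=0)                   # one std per feature
    std[std == 0.0] = 1.0                      # guard constant columns
    return (states - mean) / std

# Two wildly different scales: a price column and an oscillator column.
raw = np.array([[100.2,  -8.0],
                [101.5,  40.0],
                [ 99.8,  95.0]])
z = standardize_columns(raw)
# Each column now has mean ~0 and std ~1, so no feature dominates.
```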
@attilasarkany6123 1 year ago
Yep, you are right - that part of the code is wrong.
@michelletadmor8642 4 years ago
I wonder how he smooths the data - perhaps the "now" timestamp already included partial information from the next data point. If it were smoothed only backwards, then the next timestamp at exit might be completely off from the real exit price.
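The look-ahead concern raised here can be made concrete with a small sketch (window length and prices are illustrative): a centered (two-sided) smoother mixes future prices into the "now" value, while a trailing smoother uses only past data.

```python
import numpy as np

def trailing_mean(prices, w=3):
    # Uses only the current and past w-1 prices: no look-ahead.
    prices = np.asarray(prices, dtype=float)
    return np.array([prices[max(0, t - w + 1):t + 1].mean()
                     for t in range(len(prices))])

def centered_mean(prices, w=3):
    # Averages over a window centered on t, so it peeks at future bars.
    prices = np.asarray(prices, dtype=float)
    half = w // 2
    return np.array([prices[max(0, t - half):t + half + 1].mean()
                     for t in range(len(prices))])

prices = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# At t=2 the centered value averages prices[1:4] and already "knows"
# the t=3 price; the trailing value at t=2 uses only prices[0:3].
```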
@AtillaYurtseven 4 years ago
30:14 - you are updating the state and then applying the action. When we choose an action, we first need to apply it, then update the state and get the reward. Let's say the current price is 100.20. When the agent decides to buy, it has to buy at 100.20 (excluding spread/slippage and commission). In your example, it buys at the next price. Am I wrong?
@hanwantshekhawat4314 4 years ago
Executing at the next price sample (tick/bar) is a common way to model time delay. It is simplistic, but it adds some noise, which may be more realistic than assuming execution happens at exactly the price where the decision was made.
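The next-bar execution convention discussed in this exchange can be sketched in a toy backtest step (prices, the action encoding, and function names are illustrative assumptions):

```python
import numpy as np

def simulate(prices, actions):
    """actions[t] in {+1 buy, -1 sell, 0 hold}, decided on bar t.
    Each order is filled at bar t+1's price, modelling the delay
    between seeing a signal and actually getting executed."""
    prices = np.asarray(prices, dtype=float)
    fills = []
    for t, action in enumerate(actions):
        if action != 0 and t + 1 < len(prices):
            fills.append((t, action, prices[t + 1]))   # filled next bar
    return fills

prices = [100.20, 100.35, 100.10, 100.50]
actions = [1, 0, -1, 0]          # decide to buy on bar 0, sell on bar 2
fills = simulate(prices, actions)
# The buy decided at 100.20 fills at 100.35; the sell decided at 100.10
# fills at 100.50 - the slippage the two comments are debating.
```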
@Otvazhnii 2 years ago
I came up with a number of improvements to the code. Firstly, the epsilon calculation runs down to zero after trade 5, while 99% of the random numbers fall between 0.1 and 0.9, so there is no exploration after trade 5. Secondly, the H1 and D1 bars are made from M5 bars by choosing only the two left and right M5 bars. This is correct for the open and close prices, but not for the high and low prices, which move very noisily within every hour and even more so within the day. Thirdly, the way the code is built, it may take well over a month to run the 11,500 games (trades) indicated in your code. By converting the pandas data to numpy and building a numpy array of states before training, you can speed up the code literally 10,000 times. And finally, the Apple stock simply goes up at some point, so doesn't your strategy, which drops exploration after trade 5 and starts learning from the replay memory of the last trades, just fit itself to the growing trend of the stock?
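Two of these points can be sketched briefly. The decay rate and exploration floor below are illustrative assumptions, not values from the talk, and `build_states` is a hypothetical example of precomputing a numpy state array instead of slicing a DataFrame inside the training loop.

```python
import numpy as np

# A slower epsilon schedule: decay multiplicatively toward a fixed
# exploration floor, rather than collapsing to ~0 after a few trades.
def epsilon(t, eps0=1.0, decay=0.999, eps_min=0.05):
    return max(eps_min, eps0 * decay**t)

# Vectorized state construction: build all rolling-window states as one
# numpy array up front, so the training loop only does array indexing.
def build_states(prices, window=5):
    prices = np.asarray(prices, dtype=float)
    # index matrix: row t holds the window of `window` consecutive steps
    idx = np.arange(window)[None, :] + np.arange(len(prices) - window + 1)[:, None]
    return prices[idx]
```

With a floor of 0.05, one trade in twenty stays exploratory forever, which guards against the "no exploration after trade 5" failure mode described above.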
@zhibindeng8723 2 years ago
Why not share your code for us to backtest? Thank you.
@williamdad 1 year ago
Does this really work for stock trading? Is there any track record to check for the last 5 years?
@chrisminnoy3637 5 years ago
Just as with any other AI algorithm, you need to clean your data before you give it to your reinforcement learner. But you can make a neural net that cleans the data for you, with relative success. Noise is also an issue in other domains, not just finance. Of course, you are creating a feedback loop: when you buy/sell with success, your competitors will adapt, and so the problem shifts to a more difficult state, adding overall noise (randomness) to the system.
@niallmurray2915 5 years ago
How do your competitors know you are successful?
@blackprinze 4 years ago
SVMs have some kind of geometric element that responds well to any freely traded market.
@alrey72 3 years ago
But the technical indicators are derived from past prices as well. Isn't it better to let the NN/RL interpret the prices themselves?
@AlexeyMatushevsky 3 years ago
Thank you for the great presentation!
@AlexeyMatushevsky 3 years ago
At 19:53 you mention the right regime - 'does it end up choosing some training process'. It's easy to understand what a mean-reversion process is, but what does 'training process' mean?
@harendrasingh_22 5 years ago
3:32 - guys, please upload his talk too!
@andy.robinson 4 years ago
There are quite a few Tucker Balch videos on YT. You can probably pick up a lot from those 👍
@alute5532 4 years ago
I am doing deep learning, but now I'm thinking of integrating it with reinforcement learning as an ensemble on the outside (with a money-management system on the side). Is anyone in California interested in my project?
@Scrathzerz 3 years ago
How’s it going?
@suecheng3755 3 years ago
Hi there, my research area is reinforcement learning; maybe I can give you some ideas. May I have your contact information?
@polonezu9576 3 years ago
Can we use this on the MT4 platform?
@andresg297 2 years ago
MT5
@polonezu9576 2 years ago
@@andresg297 It is not possible.
@joysahoo7470 3 years ago
Thank you, sir, for the good explanation! Please help me solve this error: ImportError: cannot import name 'sgd' from 'keras.optimizers'. I am not able to fix it; if anyone can help, please do.
@eliastheis5265 3 years ago
from keras.optimizers import SGD, not sgd
@eliastheis5265 3 years ago
@@joysahoo7470 Try from tensorflow.keras.optimizers import SGD
@joysahoo7470 3 years ago
@@eliastheis5265 Thank you
@aricanto1764 1 year ago
NLP is the way forward 💪
@polonezu9576 3 years ago
I get errors on MT4.
@gogae22 3 years ago
Why can't we give a reward at every time step?
@randomdude79404 3 years ago
From my basic understanding, the reward only comes in when a decision is made, whether that be a buy or a sell. I may be wrong, though.
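The two reward schemes in this exchange can be contrasted with a toy sketch (the price series and position logic are illustrative assumptions): a per-trade reward arrives once when a position is closed, while a per-step reward is the mark-to-market P&L change at every time step.

```python
import numpy as np

def per_step_rewards(prices, position):
    """position[t] is +1 (long), -1 (short) or 0, held over step t -> t+1.
    Returns the mark-to-market P&L change at each step."""
    prices = np.asarray(prices, dtype=float)
    return position[:-1] * np.diff(prices)

prices = [100.0, 101.0, 100.5, 102.0]
position = np.array([1, 1, 1, 0])       # long throughout, flat at the end
r = per_step_rewards(prices, position)  # one reward per step, not per trade
total = r.sum()                         # equals the single per-trade reward
# Both schemes sum to the same total P&L; per-step rewards are denser,
# which can make credit assignment easier for the learner.
```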
@norabelrose198 3 years ago
"LSTMs, they're somewhat new" - they've been around since 1997, lol.
@block1086 3 years ago
attention is all you need
@a_machiniac 1 year ago
09:37
@henrifritsmaarseveen6260 3 years ago
95% of the trades are made by big money; they can hire and build the most advanced systems and programmers, and they fail. So it probably will not ever work.
@oliverli9630 5 years ago
Siri was triggered at 1:40, hahaha. Time to rethink ML?
@theappliedcoder9824 5 years ago
Hi, I work on reinforcement learning too - anyone hiring, reply!
@Tradinginthezen 5 years ago
How much salary are you expecting?
@theappliedcoder9824 5 years ago
@@Tradinginthezen Depends on the workflow, sir.
@redcabinstudios7248 4 years ago
If AI works, quant trading will go down, I guess - beware!
@alute5532 4 years ago
23:55 - when you said "random walk most of the time", I just realized you're not a real quant, my dear one.
@rshsrhserhserh1268 3 years ago
Why?
@williamqh 3 years ago
@@rshsrhserhserh1268 The market is usually moved by big institutions, so they make it appear to be random, but it isn't really.
@guardtank4877 4 years ago
I thought reinforcement learning was useless for trading.
@100xspaceai9 2 years ago
Don't think - do, test, and validate.