Yet another improvement: you cannot rely on a single training attempt, because the initial weights are random. You have to implement multiprocessing logic, run 10 attempts at a time, and print the 10 profit-and-loss curves at the end.
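The suggestion above can be sketched with the standard library's `multiprocessing` module. `train_once` below is a hypothetical stand-in for the talk's training loop (not its actual code); here it just simulates a P&L curve from a seed so the parallel structure is visible.

```python
# Sketch: run several independent training attempts in parallel, so results
# don't hinge on one random weight initialization. `train_once` is a
# placeholder for a real training run.
import random
from multiprocessing import Pool

def train_once(seed):
    """Stand-in for one full training attempt; returns a P&L curve."""
    rng = random.Random(seed)       # seed makes each attempt reproducible
    pnl, total = [], 0.0
    for _ in range(100):
        total += rng.gauss(0, 1)    # pretend per-trade profit/loss
        pnl.append(total)
    return pnl

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        curves = pool.map(train_once, range(10))   # 10 attempts, 10 curves
    # Each curve could now be plotted (e.g. with matplotlib) for comparison.
    print(len(curves), len(curves[0]))
```

Seeding each worker differently keeps the attempts independent while still reproducible.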
@scalbylasusjim2780 (3 years ago)
I think neural networks start to truly outperform SVM’s as the decision boundary becomes more and more nonlinear. Kernel tricks would have to become more and more complex.
@Bill0102 (9 months ago)
I'm immersed in this. I read a book with a similar theme, and I was completely immersed. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn
@Otvazhnii (2 years ago)
And yet another improvement. You normalize the state data by subtracting the mean and dividing by the standard deviation. Normalizing is a good thing, but the state is made of 141 values, including OHLC prices for M5, H1, and D1 bars and various indicators ranging from -10 to 100. I do not think you can merge quantities of products with prices of products and then normalize, as they say, altogether.
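The commenter's point, as I read it, is that mixed-scale features should each be standardized independently rather than pooled into one mean and one standard deviation. A minimal sketch of per-feature (column-wise) z-scoring, with a fabricated three-feature batch standing in for the 141-value state:

```python
# Sketch: standardize each state feature independently, instead of
# computing a single mean/std over prices and indicators together.
import numpy as np

def standardize_columns(states):
    """Z-score each column: (x - mean) / std, computed per feature."""
    mean = states.mean(axis=0)
    std = states.std(axis=0)
    std[std == 0] = 1.0            # guard against constant features
    return (states - mean) / std

# Illustrative rows: e.g. a price, an RSI-like value, an oscillator.
states = np.array([[100.2, 55.0, -3.0],
                   [101.0, 60.0,  5.0],
                   [ 99.8, 48.0, -8.0]])
z = standardize_columns(states)    # each column now has mean 0, std 1
```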
@attilasarkany6123 (1 year ago)
yep, you are right, that part of the code is wrong.
@michelletadmor8642 (4 years ago)
I wonder how he smooths the data; perhaps the "now" timestamp already included partial information from the next data point. If the smoothing only looked backwards, then the next timestamp at exit might be completely off from the real exit price.
@AtillaYurtseven (4 years ago)
30:14 You are updating the state and then applying the action. When we choose an action, first we need to apply it, then update the state and get the reward. Let's say the current price is 100.20. When the agent decides to buy, it has to buy at 100.20 (excluding spread/slippage and commission). In your example, it's buying at the next price. Am I wrong?
@hanwantshekhawat4314 (4 years ago)
Executing at the next price sample (tick/bar) is a common way to model time delay. It's simplistic, but it adds some noise, which may be more realistic than assuming execution happens at exactly the price where the decision was made.
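The two fill conventions being debated can be sketched side by side. The function and the `delay_fill` flag below are illustrative, not taken from the talk's code:

```python
# Sketch of the two fill conventions: execute at the decision bar's price,
# or at the next bar's price to model reaction/transmission delay.
def fill_price(prices, decision_index, delay_fill=True):
    """Return the execution price for an order decided at `decision_index`."""
    if delay_fill:
        # fill on the next sample; clamp so the last bar can still fill
        return prices[min(decision_index + 1, len(prices) - 1)]
    return prices[decision_index]

prices = [100.20, 100.35, 100.10]
fill_price(prices, 0)                    # -> 100.35 (next-bar fill)
fill_price(prices, 0, delay_fill=False)  # -> 100.20 (same-bar fill)
```

The next-bar fill is more pessimistic in trending markets, which is usually the safer assumption for a backtest.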
@Otvazhnii (2 years ago)
I came up with a number of improvements to the code. Firstly, the epsilon calculation decays to zero after trade 5, while 99% of random numbers fall between 0.1 and 0.9, so there is no exploration after trade 5. Secondly, the H1 and D1 bars are made from M5 bars by choosing only the two leftmost and rightmost M5 bars. This is correct for open and close prices, but not for high and low prices, which move very noisily during every hour and even more so during the day. Thirdly, the way the code is built, it may take well over a month to run the 11,500 games (trades) indicated in your code. By converting the pandas data to numpy, and then building a numpy array of states before training, you can speed up the code literally 10,000 times. And finally, the Apple stock goes bluntly up at some point, so your strategy, which drops exploration after trade 5 and starts learning from the replay memory of the last trades: does it not simply fit the growing trend of the stock?
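On the first point, a common remedy is an epsilon schedule that decays slowly and keeps a floor, so some exploration survives long past the first few trades. The constants below are illustrative, not the talk's values:

```python
# Sketch: exponentially decayed exploration rate with a lower bound,
# so the agent never stops exploring entirely.
def epsilon(step, eps_start=1.0, eps_min=0.1, decay=0.999):
    """Epsilon-greedy exploration rate at a given step."""
    return max(eps_min, eps_start * decay ** step)

epsilon(0)       # 1.0  (fully random at the start)
epsilon(5)       # ~0.995 (still exploring after trade 5)
epsilon(10_000)  # 0.1  (the floor keeps exploration alive)
```

With `decay=0.999` the rate halves only every ~700 steps, versus collapsing within a handful of trades as the commenter describes.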
@zhibindeng8723 (2 years ago)
why not share your code for us to backtest? Thank you.
@williamdad (1 year ago)
Does this really work for stock trading? Is there any track record to check for the last 5 years?
@chrisminnoy3637 (5 years ago)
Just as with any other AI algorithm, you need to clean your data before you give it to your reinforcement learner. But you can make a neural net that cleans that data for you, with relative success. Noise is also an issue in other domains, not just finance. Of course, you are creating a feedback loop: when you buy/sell with success, your competitors will adapt, and so the problem shifts to a more difficult state, adding overall noise (randomness) to the system.
@niallmurray2915 (5 years ago)
How do your competitors know you are successful?
@blackprinze (4 years ago)
SVM has some kind of geometric element which responds well to any freely traded market
@alrey72 (3 years ago)
But the technical indicators are computed from past prices too. Isn't it better for the NN/RL to interpret the prices themselves?
@AlexeyMatushevsky (3 years ago)
Thank you for the great presentation!
@AlexeyMatushevsky (3 years ago)
At 19:53 you mention the right regime: "does it end up choosing some training process". It's easy to understand what a mean-reversion process is, but what does "training process" mean?
@harendrasingh_22 (5 years ago)
3:32 Guys, please upload his talk too!
@andy.robinson (4 years ago)
There's quite a few Tucker Balch vids on YT. You can probably pick up a lot from those 👍
@alute5532 (4 years ago)
I am doing deep learning, but now I'm thinking of integrating it with reinforcement learning as an ensemble on the outside (with a money management system on the side). Is there anyone in California interested in my project?
@Scrathzerz (3 years ago)
How’s it going?
@suecheng3755 (3 years ago)
Hi there, my research area is reinforcement learning, maybe I can give you some ideas. May I have your contact information?
@polonezu9576 (3 years ago)
Can we use this on the MT4 platform?
@andresg297 (2 years ago)
MT5
@polonezu9576 (2 years ago)
@andresg297 It is not possible
@joysahoo7470 (3 years ago)
Thank you, sir, for the good explanation! Please help me solve this error: ImportError: cannot import name 'sgd' from 'keras.optimizers'. I am not able to fix it; if anyone can, please help me.
@eliastheis5265 (3 years ago)
It's SGD, not sgd: from keras.optimizers import SGD
@eliastheis5265 (3 years ago)
@joysahoo7470 Try: from tensorflow.keras.optimizers import SGD
@joysahoo7470 (3 years ago)
@eliastheis5265 Thank you
@aricanto1764 (1 year ago)
NLP is the way forward 💪
@polonezu9576 (3 years ago)
I get errors on MT4
@gogae22 (3 years ago)
Why can't we give a reward at every time step?
@randomdude79404 (3 years ago)
From my basic knowledge, the reward only comes in when a decision is made, whether that be a buy or a sell. I may be wrong, but this is just from my basic understanding.
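The two reward schemes being discussed can be contrasted in a few lines. Everything below is illustrative (names and prices are made up): a single reward granted when the trade closes, versus a per-step reward from marking the open position to market.

```python
# Sketch: terminal reward at trade close vs. per-step mark-to-market reward.
def terminal_reward(entry, exit_price):
    """Reward granted once, when the trade is closed."""
    return exit_price - entry

def stepwise_rewards(entry, path):
    """Per-step rewards from price changes while the position is open."""
    rewards, prev = [], entry
    for p in path:
        rewards.append(p - prev)   # gain/loss since the previous step
        prev = p
    return rewards

path = [100.5, 101.0, 100.8]
terminal_reward(100.0, path[-1])    # ~0.8, one lump at exit
sum(stepwise_rewards(100.0, path))  # also ~0.8: same total, denser signal
```

The totals telescope to the same number; the per-step version just gives the learner more frequent feedback, at the cost of noisier individual rewards.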
@norabelrose198 (3 years ago)
"LSTMs, they're somewhat new" they've been around since 1997 lol
@block1086 (3 years ago)
attention is all you need
@a_machiniac (1 year ago)
09:37
@monanica7331 (3 years ago)
BTC for $75K by the end of this year. Control of the currency is already decentralised, and now the China disruption would simply decentralise the mining setup for the better
@henrifritsmaarseveen6260 (3 years ago)
95% of trades are made by big money. They can hire and build the most advanced systems and programmers, and they fail. So it probably will not ever work.
@oliverli9630 (5 years ago)
Siri was triggered at 1:40, hahaha. Time to rethink ML?
@theappliedcoder9824 (5 years ago)
Hi, I work on reinforcement learning too. Anyone hiring, reply!
@Tradinginthezen (5 years ago)
how much salary are you expecting?
@theappliedcoder9824 (5 years ago)
@Tradinginthezen Depends on the workflow, sir
@redcabinstudios7248 (4 years ago)
If AI works, Quant Trading will go down I guess, beware!
@alute5532 (4 years ago)
23:55 When you said "random walk, most of the time", I realized you're not a real quant, my dear one.
@rshsrhserhserh1268 (3 years ago)
Why?
@williamqh (3 years ago)
@rshsrhserhserh1268 The market is usually moved by big institutions, so they will make it appear to be random, but it's not really.
@guardtank4877 (4 years ago)
I thought reinforcement learning is shit for trading