[Cantonese] Machine Learning Hands-On | Easily Build, Train and Use a Neural Network in Excel | Complete Tutorial

4,542 views

解密遊俠

11 months ago

Mandarin version: • [Mandarin] Machine Learning Hands-On | Easily Build...
* Join as a channel member (Tier 3) to receive the complete Excel file.
The world has entered a new era of artificial intelligence. If AI and machine learning still leave you completely lost, watch out, or the times will leave you behind! But don't worry: through a hands-on Excel demonstration, I will clear the fog and reveal how it all works in the plainest possible way. This tutorial is carefully designed, with everything explained clearly and nothing glossed over. Primary-school maths is all you need to follow it; no programming, calculus, or statistics background is required. And once you have learned it, you can absolutely apply it in daily life and at work.
03:29 Neural network model architecture (principle, data fitting, optimization)
09:00 Exercise 1: Predicting company profit (trainable parameters, training epoch, gradient descent)
23:54 Exercise 2: Predicting your crush's affection score (data standardization, learning rate)
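The recipe the two exercises walk through in Excel (a single neuron fitted to data by gradient descent over many training epochs) can also be sketched in a few lines of code. This is an illustrative sketch, not the video's spreadsheet; the variable names, toy data, and learning rate are assumptions:

```python
# Single neuron: predicted_y = w1*x1 + w2*x2 + ... + b,
# trained by gradient descent on the average squared error.
def train(xs, ys, lr=0.01, epochs=1000):
    n_features = len(xs[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        # One training epoch: accumulate gradients over the whole dataset.
        gw = [0.0] * n_features
        gb = 0.0
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            for i in range(n_features):
                gw[i] += 2 * err * x[i]   # d(err^2)/dw_i
            gb += 2 * err                 # d(err^2)/db
        m = len(xs)
        # Step each trainable parameter downhill, scaled by the learning rate.
        w = [wi - lr * gi / m for wi, gi in zip(w, gw)]
        b -= lr * gb / m
    return w, b

# Toy noiseless data generated from y = 2*x1 + 3*x2 + 1.
xs = [(1.0, 2.0), (2.0, 0.5), (0.0, 1.0), (3.0, 2.0), (1.5, 1.5)]
ys = [2 * a + 3 * c + 1 for a, c in xs]
w, b = train(xs, ys, lr=0.05, epochs=5000)
# w converges to about [2, 3] and b to about 1.
```

The same loop is what the Excel columns compute: one row of gradient cells per datum, averaged, then used to nudge the weights each epoch.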
------------------------------------------------
Related links:
📚 Online stores where related books can be found
KingStone 金石堂 tinyurl.com/yhv2xcxc
Yahoo 奇摩購物中心 tinyurl.com/yjnm8j5a
Rakuten 臺灣樂天市場 tinyurl.com/ygwkb8ul
💚 Donate to support 解密遊俠
paypal.me/wenlingchan

Comments: 50
@gregorykwong6018 · 1 month ago
You're great! Thank you for your selfless teaching!
@chenethan3702 · 11 months ago
Such a good AI tutorial; it would be a crime not to give it a big thumbs-up.
@decrypt-ranger · 11 months ago
Thanks! XDDD
@MrLam-lx7td · 4 months ago
Very useful to rewatch.
@arthurlee5558 · 2 months ago
Excellently explained 👏👏👏
@yatsiulung9452 · 8 months ago
Very well explained, really great, though it takes some time to digest. Thanks for teaching.
@decrypt-ranger · 8 months ago
Thanks for the appreciation. Feel free to point out anywhere that wasn't clear enough. Teaching and learning go hand in hand!
@raymond2z · 11 months ago
Learned a lot! Thanks!
@dohkowong · 11 months ago
Impressive. There's even a Cantonese version.
@tony_lee_99 · 7 months ago
Thanks for the tutorial, Ranger 🙏. Looking forward to more advanced Excel videos 😃
@chan80s · 11 months ago
Very concise and easy to understand, an excellent demonstration! Thank you!
@lamjimmy9494 · 11 months ago
Great explanation. Please make more videos!
@Tsang7570 · 10 months ago
Excellent video, especially how the content is taught in a simple way. It's very well organized with good pacing. The examples are both practical and interesting.
@thezorrinofromgemail6978 · 5 months ago
Excellent and very well organised video.
@vivianlam81 · 11 months ago
🎉🎉🎉 Thank you
@man-te2pu · 1 month ago
Thank you for your excellent presentation. Explaining a difficult concept with such a simple illustration that is easy to follow is a "Mission Impossible". You've done a great job!! Concerning the calculation of the w1 change around 15:26 in the video, I agree that the w1 change should be directly related to (y - predicted y), from the formula x1*w1 + ... = y. However, the w1 change should also be inversely related to the value of x1; i.e. for a given (y - predicted y), the greater the value of x1, the smaller the required w1 change. In other words, the w1 change should be (E3-J3)/B3 instead. Can you elaborate on why that is not the case? Thanks again.
@decrypt-ranger · 1 month ago
Good question. Consider an extreme case: for a certain datum, x1 = 0. If the w1 change were inversely proportional to x1, the w1 change would become infinite, which does not make sense. Remember, w1 is shared by all data. Some data have a small x1, some a big x1. Small-x1 data make w1 unimportant, while big-x1 data make w1 important. In this sense, x1 acts like the weight of w1, and as a weight it should enter proportionally. Hope this view helps!
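This "x1 acts as the weight of w1" intuition agrees with the calculus view: differentiating the squared error (y - (w1·x1 + b))² with respect to w1 gives a gradient proportional to x1, not divided by it. A small check with made-up numbers (not cells from the video's sheet):

```python
def grad_w1(x1, w1, b, y):
    """Gradient of the squared error (y - (w1*x1 + b))**2 with respect to w1."""
    pred = w1 * x1 + b
    return -2 * x1 * (y - pred)   # scales WITH x1, not with 1/x1

# Same prediction error (y - pred = 3) at two different x1 values:
g_small = grad_w1(x1=2.0, w1=1.0, b=0.0, y=5.0)   # pred = 2, error = 3 -> -12
g_big   = grad_w1(x1=4.0, w1=1.0, b=0.0, y=7.0)   # pred = 4, error = 3 -> -24
# And at x1 = 0 the gradient vanishes instead of blowing up:
g_zero  = grad_w1(x1=0.0, w1=1.0, b=0.0, y=3.0)   # 0: w1 had no effect on pred
```

Doubling x1 doubles the suggested w1 change, and x1 = 0 gives a zero change rather than the infinite one the (E3-J3)/B3 formula would produce.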
@gailefrankie9849 · 11 months ago
Brilliantly explained! May I ask whether you can recommend any KZbin channel that uses calculus for machine learning? Thanks!!!
@decrypt-ranger · 11 months ago
If you know calculus, I recommend the KZbin channel "3 Blue 1 Brown" and its Neural Network series, 4 videos in total. Playlist: kzbin.info/www/bejne/l5rVlHSoqtuhgc0
@kwNT4cy2GTnm · 11 months ago
Excellent tutorial, thanks! I have a few questions: 1. The video presents a 1-neuron AI model; how is more than one neuron implemented? 2. Can the learning rate be adjusted automatically by the AI model? 3. If the data has more than one dimension (e.g. stock analysis, which has a trend), how should those be applied?
@decrypt-ranger · 11 months ago
1. Covered at 36:55 in this video 😉 2. No. 3. If the dimension belongs to the input side, it just adds another x; the number of neurons need not change. If you mean the output side, you must add another y, and adding a y requires adding a neuron.
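Point 3 of the reply can be sketched in code: an extra input is just another x (with its own weight) feeding the same neuron, while an extra output means another neuron with its own weight row and bias. A minimal sketch, assuming no activation function, as in the video's model; the numbers are illustrative:

```python
# One "layer": each output y_j has its own neuron (one weight row + one bias).
def forward(W, b, x):
    """W: list of weight rows, one row per output neuron; b: one bias per neuron."""
    return [sum(wji * xi for wji, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

x = [1.0, 2.0, 3.0]              # three inputs: just three x's into each neuron
W = [[0.1, 0.2, 0.3],            # neuron producing y1
     [0.4, 0.5, 0.6],            # neuron producing y2
     [0.7, 0.8, 0.9]]            # neuron producing y3 (three y's -> three neurons)
b = [0.0, 1.0, 2.0]
ys = forward(W, b, x)            # approximately [1.4, 4.2, 7.0]
```

Adding a fourth input would only widen x and each weight row; adding a fourth output would add a fourth row to W and a fourth bias.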
@shakechen7944 · 6 months ago
How should I understand the activation function? Where does the model use an activation function?
@decrypt-ranger · 6 months ago
My model does not use an activation function.
@shakechen7944 · 6 months ago
@@decrypt-ranger Working from your Excel file, I actually built my own copy: I see what forward propagation does, how the loss function is computed, and what backward propagation does (updating the weights to prepare for the next pass). Why is the backward pass about twice the computation of the forward pass? The only thing I still haven't understood is the activation function and how it should be used.
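For reference on the activation-function question: the video's model has none, but if one were added it would wrap the neuron's weighted sum before the output, introducing a non-linearity. A hedged sketch using the sigmoid, one common choice (not something the video implements):

```python
import math

def sigmoid(z):
    """Squashes any real number into the interval (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron(ws, b, x, activation=None):
    z = sum(w * xi for w, xi in zip(ws, x)) + b   # weighted sum, as in the video
    return activation(z) if activation else z      # optional non-linearity on top

lin = neuron([2.0], 0.0, [1.0])            # linear output: 2.0
act = neuron([2.0], 0.0, [1.0], sigmoid)   # squashed output, about 0.88
```

Without the activation, stacking layers collapses into one big linear formula; the non-linearity is what lets multi-layer networks fit curves a single linear neuron cannot.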
@mokivan4447 · 11 months ago
What if y has 3 answers?
@decrypt-ranger · 11 months ago
Then you need at least 3 neurons, each outputting one y.
@user-vf3vg8mv6i · 1 month ago
I increased the number of x's to 6 and added more data, but when training the model the error just won't go down, and the W change gets smaller and smaller. Why is that? Changing the learning rate no longer helps.
@decrypt-ranger · 1 month ago
I looked at the Google Sheet you sent me. Your calculations are all correct, and it is normal that the error cannot drop any further. Here is why: your inputs are 30 students' scores on 6 quizzes and the output is their exam score. There is some correlation between inputs and output, but it is not high, so the model's predictions cannot be accurate; in other words, there is a fairly large error. The size of that error reflects how weak the correlation is.
@yatsiulung9452 · 8 months ago
I've just digested the first part. For example, at Epoch 40, on row 366, y = 0.14 but the predicted y comes out as 0.2109665, about a 50% error. Can the formula be improved further?
@decrypt-ranger · 8 months ago
It depends on whether this datum carries noise, in which case the error can be ignored. Otherwise, you can increase the number of neurons to give the model more degrees of freedom so the data is fitted better, but beware of overfitting. A multi-neuron model is hard to implement simply in Excel, though.
@wymanwong1361 · 2 months ago
Could you make a video on building a multi-layer neural network in Excel?
@decrypt-ranger · 2 months ago
For multiple layers, there's no avoiding calculus, and the moment calculus comes up, nobody watches ^_^"
@wymanwong1361 · 2 months ago
@@decrypt-ranger Do you have any reference material for me? Because when I follow your method, at some point the error square stops changing (subtracting two epochs' error squares gives 0) 😢
@decrypt-ranger · 2 months ago
@@wymanwong1361 An error of 0 has two possible causes: either you've found the perfect solution, or you've made a mistake somewhere 😆
@wymanwong1361 · 2 months ago
@@decrypt-ranger I mean the difference between two consecutive error squares becomes 0, i.e. the error stops shrinking, after I've run one or two hundred epochs.
@decrypt-ranger · 2 months ago
@@wymanwong1361 Now I'm a bit confused. Which step requires subtracting two consecutive error squares?
@xiaojijuchang · 8 months ago
OP, didn't you say the average there must be 0? It is indeed sometimes 0, but I tried entering a dataset in a different order and it went off: the average became a small decimal close to 0... May I ask why that happens?
@decrypt-ranger · 8 months ago
Tiny errors in computer floating-point arithmetic are perfectly normal. The theoretical value is zero, but the actual computation may not give exactly zero.
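The floating-point point is easy to demonstrate: after standardization the mean is zero in theory, but rounding during each division and addition can leave a tiny residue, and summing the same values in a different order can leave a different one. A small illustration with made-up numbers:

```python
data = [0.1, 0.2, 0.3, 0.7, 1.1]
mean = sum(data) / len(data)
std = (sum((v - mean) ** 2 for v in data) / len(data)) ** 0.5
z = [(v - mean) / std for v in data]   # standardized values

total = sum(z)                          # near 0, often a residue around 1e-16
reordered = sum(sorted(z, reverse=True))  # a different order may leave a different residue
```

Excel's AVERAGE of a standardized column behaves the same way: a displayed value like 3.1E-17 is "zero plus rounding", not a calculation mistake.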
@mokivan4447 · 11 months ago
How low does the average squared error have to get before the model is useful?
@decrypt-ranger · 11 months ago
At 19:25 I discuss how low the average squared error needs to go.
@user-vx6pk8sc1x · 9 months ago
A question: in the average and stdev part, why doesn't the average come out as 0, while the stdev is fine?
@decrypt-ranger · 9 months ago
The average doesn't come out as 0? I don't quite follow; could you say a bit more?
@user-vx6pk8sc1x · 9 months ago
@@decrypt-ranger Solved it, thanks.
@hehehaha1418 · 8 months ago
What if the predicted result is wrong, and how do we use the "wrong" predicted result to re-input or retrain? Can you use back propagation in gradient descent? How is that done? Thanks.
@decrypt-ranger · 8 months ago
If the predicted result for a certain case is wrong, first check whether that case is correct or just noise. If it is correct and important, you may duplicate it many times in the dataset and then retrain the model. I have already used back propagation in the gradient descent algorithm, but I avoided deriving the back-propagation equations through partial derivatives as textbooks normally do, because I wanted to explain things in layman's terms. The equation is explainable for this simple single-neuron model, but not for multi-neuron models.
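The duplication trick described above amounts to oversampling: repeating an important case so it contributes more to the average gradient at each epoch. A minimal sketch of preparing such a training set; the function name and toy data are illustrative, not from the video:

```python
def oversample(dataset, important_index, copies):
    """Duplicate one important (x, y) case so it weighs more during retraining."""
    boosted = list(dataset)                               # keep the original data
    boosted.extend([dataset[important_index]] * copies)   # append extra copies
    return boosted

data = [((1.0,), 2.0), ((2.0,), 4.1), ((3.0,), 5.0)]   # (x, y) pairs
boosted = oversample(data, important_index=2, copies=4)
# 7 rows: the original 3 cases plus 4 extra copies of case index 2
```

In the Excel version this is simply pasting the important row several more times before rerunning the training epochs.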
@hehehaha1418 · 8 months ago
@@decrypt-ranger Thank you so much for your valuable reply... I will try the duplication method... thanks a lot!
@decrypt-ranger · 8 months ago
@@hehehaha1418 You are welcome~ Hope that helps!