If you want to learn even more about TensorFlow, check out this 7-hour course: kzbin.info/www/bejne/qoG8m2ace696oM0
@stronggun20144 жыл бұрын
Which one is better? Should I watch this first or the other one?
@saurabhs47434 жыл бұрын
@@stronggun2014 The longer one covers machine learning, deep learning, and reinforcement learning; this one is deep learning only, with videos on image classification and text classification.
@pythonprogrammer89894 жыл бұрын
Am I the only one getting an error?
@pythonprogrammer89894 жыл бұрын
Is anyone else getting this error? "Failed to load the native TensorFlow runtime."
@ibrahimrefai1 Жыл бұрын
does the course demonstrate how to use gpus?
@TechWithTim5 жыл бұрын
Hope you guys liked it! If you want more machine learning and AI tutorials check out my channel 🔥
@TheAnswerworld-et6ze5 жыл бұрын
Thank you for this presentation Tim. God bless you.
@kamalpandey71775 жыл бұрын
Already subscribed. Could you please reply to my question. I am beginner in this.
@xerowon34905 жыл бұрын
Big fan of your videos and I was so excited to see you doing this course
@toluwaniamos76815 жыл бұрын
Thank you Tim!!!
@DJBremen5 жыл бұрын
Thank you for this Tim!
@gaddafim79595 жыл бұрын
⭐ Course contents ⭐
(0:00:00) What is a Neural Network?
(0:26:34) Loading & Looking at Data
(0:39:38) Creating a Model
(0:56:48) Using the Model to Make Predictions
(1:07:11) Text Classification P1
(1:28:37) What is an Embedding Layer? - Text Classification P2
(1:42:30) Training the Model - Text Classification P3
(1:52:35) Saving & Loading Models - Text Classification P4
(2:07:09) How to Install TensorFlow GPU on Linux
@aZaamBie1355 жыл бұрын
Thank you!!
@maoryatskan63465 жыл бұрын
That's helpful tnx
@calibr06365 жыл бұрын
as if it wasn't already in the description
@azchen65115 жыл бұрын
Well, ppl can click on the timestamps here, so I think it is kind of useful.
@jamiecybersecurity5 жыл бұрын
(0:00:00) What is a Neural Network?
(0:26:34) How to load & look at data
(0:39:38) How to create a model
(0:56:48) How to use the model to make predictions
(1:07:11) Text Classification (part 1)
(1:28:37) What is an Embedding Layer? Text Classification (part 2)
(1:42:30) How to train the model - Text Classification (part 3)
(1:52:35) How to save & load models - Text Classification (part 4)
(2:07:09) How to install TensorFlow GPU on Linux
@1-0.5i4 жыл бұрын
1:47:14 - The verbose parameter is a simple debugging tool that prints the status of the epochs while the model is being trained. In this case, verbose=1 displays the epoch number with a progress bar. Please feel free to correct this or add more info.
@dassad97773 жыл бұрын
You're right. You can use verbose to track the model's progress, or switch it off if you don't want to watch the learning process. verbose=0: no progress output at all, the program just continues once training is finished. verbose=1: the epoch number (for example Epoch 11/100), a progress bar in %, and some metrics (loss and accuracy). verbose=2: just one line per epoch showing which epoch has been trained.
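A minimal sketch of what the three settings print, assuming a compiled Keras model named model and small NumPy arrays x_train/y_train (all names here are illustrative, not from the video):

```python
import numpy as np
from tensorflow import keras

# Tiny throwaway model and data, only to demonstrate the verbose flag
model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, size=(100,))

model.fit(x_train, y_train, epochs=3, verbose=0)  # silent
model.fit(x_train, y_train, epochs=3, verbose=1)  # "Epoch 1/3", progress bar, loss/accuracy
model.fit(x_train, y_train, epochs=3, verbose=2)  # one summary line per epoch, no bar
```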
@shawnliu85635 жыл бұрын
Newbie in almost every aspect of what is said in this video. Took me a whole day to get to 35:10. Issues I hit: (1) downloading Python - the correct version is Python 3.6.1 for TensorFlow 2.0.0, to avoid "from google.protobuf.pyext import _message ImportError: DLL load failed: The specified procedure could not be found." errors; (2) TensorFlow 2.0.0, not 2.0.0alpha0, to avoid many, many "future warning" messages; (3) "Cache entry deserialization failed, entry ignored", solved by opening the command prompt as administrator; (4) many typos of my own. Almost gave up. Using Windows 10 Pro, CPU, Intel 64-bit. I know these problems relate to my particular setup, but they might happen to other new users. This is a great video for beginners, though.
@johnkeating71705 жыл бұрын
I spent 3 days watching and figuring out the basic theory behind this tutorial, thank you very much!
@neelanjanmanna62924 жыл бұрын
Great tutorial so far, just a quick correction: the sigmoid activation function ranges between 0 and 1. What you had drawn was actually the tanh activation function, which ranges between -1 and 1. Cheers!
@nixlq4 жыл бұрын
Correct, I just came to write that
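For reference, a quick sketch contrasting the two functions (plain NumPy, no TensorFlow needed):

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: output always lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Hyperbolic tangent: output lies in (-1, 1)
    return np.tanh(z)

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(z))  # ~[0.007, 0.269, 0.5, 0.731, 0.993]
print(tanh(z))     # ~[-1.000, -0.762, 0.0, 0.762, 1.000]
```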
@azizulhakim15344 жыл бұрын
He became my favorite by saying he doesn't know what Verbose is.
@cplusplusgoddess33594 жыл бұрын
that comment gave me less confidence in him
@maclacrosse15 жыл бұрын
You explain this better than most college professors do, and it doesn't cost me my future in student loans.
@ilegallive9995 жыл бұрын
1:19:30 He prints the integer codes from train_data[0], but shows the decoded string results from test_data[0], so there's a mismatch. Hopefully that's helpful.
@mli88474 жыл бұрын
I am 13 years old and don't speak or understand English very well, but I'm now 25 minutes into the video and I've understood how neural networks work and everything else except the activation function. Thanks!
@maoryatskan63465 жыл бұрын
at 00:37:00 255 represents white and 0 represents black. Great video! keep it up.
@cam415275 жыл бұрын
Somebody has probably already answered this, but "verbose" means descriptive. If you were to enable the verbose property on an object, it would normally give a lot more detail about something, whether that's debug information or just printed output. Also, great video Tim! I'm a big fan of the tutorials on cutting-edge technology, as they're difficult to find elsewhere :) Keep up the great work.
@BenjaSerra4 жыл бұрын
For those having problems with the predict method, replace it with predict_classes: model.predict_classes([test_review])
@ruvikperera88134 жыл бұрын
Bro you saved me....thanks
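Note that predict_classes was removed in later TensorFlow releases (2.6+). If it isn't available, a hedged sketch of the equivalent for this binary sigmoid model, assuming test_review is already encoded and padded to length 250 as in the video:

```python
import numpy as np

# Hypothetical: test_review is a single encoded, padded review (length 250)
batch = np.array([test_review])      # add a batch dimension -> shape (1, 250)
score = model.predict(batch)[0][0]   # sigmoid output in (0, 1)
label = int(score > 0.5)             # 1 = positive, 0 = negative
print(score, label)
```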
@benhudelson5 жыл бұрын
Hey Tim, great content so far. I would recommend in future videos to reduce the point size on your pen so that your handwriting is a little clearer. Thanks for putting this together. Very well constructed explanations.
@raiden1401884 жыл бұрын
By far (!) the best video I have found for beginners in neural networks, and I've viewed a lot. Love it! What killed me was the writing of numbers from the bottom to the top - I've never seen numbers written that way.
@TheDeadSource5 жыл бұрын
A lot of people are asking what versions of Python and Windows can be used to run TensorFlow 2.0. I've dug into this for you all. A lot of the info is from the official site, some is from GitHub issues and published articles regarding TF 2.0, so at the time of writing this should be accurate.
First, operating systems. TF 2.0 was tested and is officially supported on the following *64-bit* systems:
* Windows 7 or later.
* Ubuntu 16.04 or later.
* macOS 10.12.6 (Sierra) or later - note that these versions do not offer GPU support.
* Raspbian 9.0 or later.
Python versions that are currently supported:
* Python 3.6 (but NOT Python 3.7, despite its recent release.)
* Python 2.7.
@vinaypatil72934 жыл бұрын
At 1:05:25 you mentioned that to make predictions on a single image from the dataset we just need to pass [test_images[7]] instead of test_images.
Correction: we need to pass a NumPy array to the model: prediction = model.predict(np.array([test_images[7]])). Otherwise you'll run into this error:
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 784 but received input with shape [None, 28]
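A short sketch of the full fix, assuming the Fashion-MNIST model, test_images, and class_names list from the video:

```python
import numpy as np

# Hypothetical names: model, test_images, class_names come from the video's code
single = np.array([test_images[7]])           # shape (1, 28, 28): a batch of one image
prediction = model.predict(single)            # shape (1, 10): one probability per class
print(class_names[np.argmax(prediction[0])])  # pick the most likely class
```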
@samraeburn93414 жыл бұрын
Best tensorflow tutorial ive ever seen, thanks for this one!
@ayushyadav44124 жыл бұрын
Have you completed the course? How was it?
@alvarosanchezp5 жыл бұрын
19:25 This is not the sigmoid function as you say in the video - it's the hyperbolic tangent. The sigmoid function maps any value to something between 0 and 1, not between -1 and 1.
@ianpan01025 жыл бұрын
Yup you're correct, the sigmoid function's y coordinate is bounded by 0 and 1.
@nl15755 жыл бұрын
But if you take 1 - g(z)^2 (the derivative of tanh), the result is a value between 0 and 1.
@nl15755 жыл бұрын
These functions are known as non-linear activation functions. Common ones are the sigmoid function and the hyperbolic tangent, but modern networks usually use the Rectified Linear Unit (ReLU), Leaky ReLU, and ELU (Exponential Linear Unit).
@nl15755 жыл бұрын
We can also use a value between -1 and 1 to represent strongly positive and negative values during testing
@nl15755 жыл бұрын
Just for the people who may have been confused
@davincy094 жыл бұрын
I only have high school math and I understood what an activation function is, so it's very well explained!
@akshiwakoti78515 жыл бұрын
Lists and arrays are very different data structures in the way they work at runtime. Lists are flexible: you can add, replace, or remove elements, and a list can mix element types in any combination - a list inside a list, an array inside a nested list, a tuple, a dictionary, a string, an integer, a float, a timestamp, etc. A NumPy array, on the other hand, must have all elements of the same type and has a fixed size once created: you can modify elements in place, but you can't grow it without building a new array. There are of course many other differences.
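A small illustration of the practical difference, using NumPy arrays (which is what TensorFlow actually consumes here); everything in it is a made-up example:

```python
import numpy as np

py_list = [1, "two", 3.0, [4, 5]]      # a list can mix types and grow freely
py_list.append("six")

arr = np.array([1, 2, 3], dtype=np.int32)
arr[0] = 99                             # elements CAN be modified in place...
print(arr)                              # [99  2  3]
# ...but the size and dtype are fixed: "appending" builds a brand-new array
arr2 = np.append(arr, 4)
print(arr2, arr2 is arr)                # [99  2  3  4] False
```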
@doc20105 жыл бұрын
To answer your question at 1:47:25: verbose is an optional argument that makes an operation report more information about what it's doing in your program.
@jermaineken77725 жыл бұрын
Was using google collab to implement this tutorial. Thank you for the great content.
@raj-nq8ke3 жыл бұрын
Must watch for basics of TensorFlow. Good Tutorial
@raj-nq8ke3 жыл бұрын
37:53 They're not RGB values - Fashion-MNIST images are grayscale, so each pixel is a single intensity from 0 (black) to 255 (white). It's the conversion of an image of, say, a T-shirt into a matrix of numbers that the computer can understand and then compare.
@eddw1234 жыл бұрын
Yeah! I managed to run this tutorial with PyCharm + Anaconda (Python 3.7, TF 2.0.0) + offline data.
@ANILKHANDEI5 жыл бұрын
Very nice explanation of neural networks and of using them to predict on Fashion-MNIST. This makes a lot of sense to me, thanks.
@heisenberg47035 жыл бұрын
Took me hours to figure this out: if training the model takes really long, this might be the fix - don't use IDLE to run the code, just use CMD/the terminal.
@anynamecanbeuse4 жыл бұрын
56:27 Training accuracy marginally larger than the validation accuracy indicates a high-variance (overfitting) problem, and extending the training epochs doesn't sound like a good idea. Tricks such as regularization, dropout, or simply reducing the number of parameters should work (see the sketch below).
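If the gap between training and validation accuracy keeps growing, a hedged sketch of adding dropout to a text-classification model like the one in the video (layer sizes and the dropout rate are illustrative choices, and the vocabulary size should match whatever was used when loading the data):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Embedding(10000, 16),           # vocab size: match your data loading
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dropout(0.5),                   # randomly zero half the units during training
    keras.layers.Dense(1, activation="sigmoid")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```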
@colinrey45895 жыл бұрын
Best beginner tutorial on all of KZbin!! Thank you so much! Very well explained.
@jordanmoore41045 жыл бұрын
I'm excited to learn this with Tim! He taught me Java thoroughly!
@amitkehri4 жыл бұрын
Hats off to you dude. Everything is crystal clear.
@theegyptiancamel4 жыл бұрын
When you divide your data by the max value, you are essentially "normalizing" it. "Shrinking it", or more precisely quantizing the data, is a different process that involves reducing the number of significant bits and choosing a quantization step.
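In code, the normalization step is just this (a sketch using the Fashion-MNIST arrays from the video):

```python
# Pixel intensities arrive as integers 0-255; dividing rescales them to floats in [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0
```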
@NishantSingh-px3jm5 жыл бұрын
Thank you freeCodeCamp, you people know what I want. I was just about to pay ₹15,000 to learn this. Thanks, I love you ❤😄
@2rfg9495 жыл бұрын
Thanks for your video. I really appreciate the simplicity of your explanations and your humility is refreshing! Will watch more.
@17Koche5 жыл бұрын
This tutorial is well put together. I was looking to learn more about neural nets and TensorFlow; this is perfect for a beginner in the field.
@ChrisField135 жыл бұрын
This is incredible. I'm an hour in, and I feel like I've learned more practical application in this video than I have in all of the other ML research I've done combined.
@elliotfriesen68204 жыл бұрын
FOR ANYONE WHO CAN'T INSTALL TENSORFLOW, please do these steps - it took me hours to figure out: 1. If it doesn't work, uninstall ALL versions of Python and Anaconda, including ALL related files. 2. Reinstall Python and Anaconda, and tick "Add to PATH" when the installer shows up. 3. Then just open your normal command prompt (not the Anaconda one) and you're done.
@chavoyao5 жыл бұрын
When you are working with a single neuron, like at the beginning of your video, you have only one bias term. It makes no sense to have one per connection. Your excitation function should have been \sum_{i=1}^4 { w_i \cdot v_i } + b. This neuron requires five parameters instead of eight.
@LimitedWard5 жыл бұрын
Thank you for pointing that out. I was really confused by that math for a bit. So I'm assuming you'd have 1 bias per node in the output layer?
@chavoyao5 жыл бұрын
@@LimitedWard You have a bias term per node in the hidden and output layers.
@azizulhakim15344 жыл бұрын
I was looking for this comment
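A small sketch of the corrected arithmetic for one neuron with four inputs (one bias, five parameters total); the numbers are purely illustrative:

```python
import numpy as np

v = np.array([0.2, 0.5, 0.1, 0.9])    # four input values
w = np.array([0.4, -0.3, 0.8, 0.1])   # one weight per connection
b = 0.05                               # a single bias for this neuron

z = np.dot(w, v) + b                   # excitation: sum(w_i * v_i) + b
a = 1.0 / (1.0 + np.exp(-z))           # activation (logistic sigmoid)
print(z, a)
```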
@jonasstrabel3 жыл бұрын
For everyone who has problems installing the pip package: it's only available for Python versions up to 3.8. If you are running Python 3.9 or higher, install the alpha (pre-release) version of the package instead.
@hiltyMG5 жыл бұрын
Full Machine Learning For Finance - Quantitative Trading for Beginners (2019) PLEASE!!!
@AlexCell335 жыл бұрын
You don’t need machine learning, just basic regression and statistical models. Oh yeah, and tens of thousands of dollars a month in obscure and alpha generating data feeds. Or co locate a server at a stock exchange and front run the guys running the models.
@justicegugu97755 жыл бұрын
@@AlexCell33 What are the steps to learn machine learning for finance?
@sarves_boreddy4 жыл бұрын
Best tutorial on neural networks
@Aegilops4 жыл бұрын
Surprise power-move at 1:20:59... typing 'cmd' in the Windows Explorer address bar to launch CMD in that directory. I never knew that!
@erniea58434 жыл бұрын
Very well done tutorial. Nice intro to TF and Neural Networks with some quick and easy to follow examples. So much to learn!
@erictheawesomest5 жыл бұрын
If you want the longest review length for the text classifier tutorial, it's 2697, or use the code below:
longest_length = 0
joinedlist = list(train_data) + list(test_data)  # concatenate as Python lists
for review in joinedlist:
    if len(review) > longest_length:
        longest_length = len(review)
@rohithdsouza85 жыл бұрын
39:30 - Isn't a decimal representation of numbers harder for a computer to work with than an integer representation? (32-bit floating point vs. 8-bit integer)
@osxs333__74 жыл бұрын
scaling the features is common practice when creating models for machine learning, you can kind of think of it as having the data sit close together for the model to make more accurate decisions. In some cases you can work with models that take in a variety of features that all operate on different scales, so you will use Standardization to create more effective models.
@darcos75354 жыл бұрын
On the GPU (CUDA) there is no penalty for single precision vs. int. FP16 is also available.
@AB-cp1yy4 жыл бұрын
Great job! Is there like a Part 2 video for new example modeling? Or is this the only video?
@amaandurrani7855 жыл бұрын
You can save your model like this and reuse it to save time:
model.save('abc.h5')
model = keras.models.load_model('abc.h5')
@dongyuwu77604 жыл бұрын
1:05:36 Adding a value here will give an error message... I think a better way is to substitute the i value in the for loop with whichever specific index you want to test, instead of using the loop.
@vikramadityamathur24204 жыл бұрын
Great lecture. Thank you !
@РипсимеАрутюнян5 жыл бұрын
The tanh function maps to [-1, 1]; the sigmoid activation function maps to [0, 1].
@wiihackerkris35000vr5 жыл бұрын
hot
@SuperDonElio5 жыл бұрын
Flattening the input leads to a loss of information in the sense that geometric dependencies are no longer retrievable. Usually this is solved with a convolutional layer (which is commonly used in image recognition). There is nothing wrong with choosing a simpler example in a tutorial, but IMO you should not give the impression that this is the norm. In addition, lists and arrays are not interchangeable, since they are different data structures.
@kvelez4 ай бұрын
Thanks for the course, really good.
@ehsansaraee54733 жыл бұрын
Love every minute of this video! Great tutorial! Thank you so much!
@fg_arnold5 жыл бұрын
Fwiw: Sigmoid is not one specific function, but a class of functions that plot roughly as a flattened S. Not all map to a range of (-1,1). The most commonly used sigmoid is the 'logistic function' which maps to (0,1) - good for feeding into probability distributions. The activation function used in this video could be a hyperbolic tangent, an error function, etc. A visit to the Sigmoid function page on Wikipedia might be helpful.
@bastoscc5 жыл бұрын
thank you so much ! i was just tired of the indian tutorials. you just made my next few days
@openaidalle4 жыл бұрын
Very clear and engaging explanation. Also, people don't like 'daataa' - it's data.
@heritagehomes63974 жыл бұрын
Thanks for taking the time to explain these concepts with examples; it helps many beginners. I teach advanced statistics and these details will help many people understand the fundamentals. Nice job.
@Ftur-57-fetr3 жыл бұрын
Super clear, easy to follow explanations, THANKS!
@valfredodematteis-poet4 жыл бұрын
Thank you very much man, I'm a philosophy student trying to find a way into understanding AI. That's hard, but videos like this are a HUGE help. Thank you very much, keep it up.
@saikumarreddyatluri33325 жыл бұрын
Thanks for this series, we need more videos on TensorFlow 2.0.
@ritukumar69565 жыл бұрын
Great tutorial!
@notevoyadarminombre1564 жыл бұрын
Thank you for this tutorial, very nice. One note, on 37:50 255 would be white.
@maxajames4 жыл бұрын
If training the model takes longer than expected, just run cmd.exe, python.exe, Spyder, or whatever you use with the graphics processor. Just right-click the executable and click "Run with graphics processor" in Windows 10, choosing the non-integrated (dedicated) GPU.
@smitbarve72094 жыл бұрын
For those facing problems installing TensorFlow and other libraries, I recommend using Google Colab, as it already has all the required libraries pre-installed. 👍
@jamesh41294 жыл бұрын
Awesome video. Thank you. Now I feel ready to dive in with a book I picked up
@uelude4 жыл бұрын
Nicely done. Would be great to see new door handles.
@gleb29712 жыл бұрын
I'd hoped to see lower-level TensorFlow rather than just the Keras API. Still, a good tutorial for beginners, thanks!
@AlessioSangalli4 жыл бұрын
So cool to find a video on this subject where the teacher does not have a heavy accent
@Cormac_YT3 жыл бұрын
This line keeps throwing errors fitModel = model.fit(x_train, y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1)
@shangyunlv5 жыл бұрын
When shifting the word index by 3 and adding the special tokens ("<PAD>", "<START>", and so on) into the dictionary, won't the whole system have a mismatch of words, since no such shift is made to the data itself?
@mahdiheidarpoor94525 жыл бұрын
The best video for getting started; it would be great if you could make another video on more advanced NN work with TF 2...
@theoreticalphysics36444 жыл бұрын
If you don't feel like going through an installation process rn, just use google colab, it has numpy, pandas, matplotlib, and tensorflow already available to use.
@hafezmousavi904311 ай бұрын
Can somebody explain? There is a big point of confusion for me at 1:15:12, line 13. He shifts the values in the dictionary by 3, but then he doesn't change the values in train_data and test_data. That should completely corrupt the sentences, because all the word indices are now changed - but surprisingly it all works out fine... Why didn't he have to change the train_data and test_data indices as well?
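One possible answer, based on how keras.datasets.imdb is documented to behave: load_data() uses index_from=3 by default, so the integer IDs inside train_data/test_data are already shifted up by 3, with 0/1/2 reserved for padding, start-of-sequence, and unknown tokens. get_word_index() returns the unshifted mapping, so adding 3 on the dictionary side lines the two back up - the data itself never needs to change. A sketch of the decode step with that in mind (train_data is assumed to be loaded as in the video):

```python
from tensorflow import keras

word_index = keras.datasets.imdb.get_word_index()
word_index = {word: index + 3 for word, index in word_index.items()}  # align with load_data's index_from=3
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNUSED>"] = 3

reverse_word_index = {value: key for key, value in word_index.items()}

def decode_review(encoded):
    # 'encoded' is one review from train_data/test_data (already offset by 3 when loaded)
    return " ".join(reverse_word_index.get(i, "?") for i in encoded)

print(decode_review(train_data[0]))  # readable text, because data and dictionary use the same offset
```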
@theeagleseye49894 жыл бұрын
255 is white and 0 is black is the correct mapping for a greyscale image, not vice versa. For a colour image there are 3 numbers per pixel (height x width), each representing the intensity of one colour (RGB is the default order used in Matplotlib; OpenCV uses a different order, i.e. BGR). If doubt still exists, run this in a Python IDE after importing numpy as np and matplotlib.pyplot as plt, and try changing the red, green, and blue values:
red, green, blue = 255, 255, 0
rgb = (red, green, blue)
pixel = 1 * [1 * rgb]
breadth, height = 25, 25
image = np.array(height * [breadth * pixel])
plt.imshow(image)
plt.show()
@starship98745 жыл бұрын
For the length of the reviews I used:
total = 0
for i in range(len(test_data)):
    total = total + len(test_data[i])
print(total / len(test_data))
to simply calculate the average length. The average review is about 230 words long; I think that's a good limit.
@RecursiveTriforce5 жыл бұрын
Shorter and more pythonic:
s = sum(len(i) for i in test_data)
print(s / len(test_data))
@RecursiveTriforce5 жыл бұрын
Whenever you write something like
for index in range(len(something)):
    print(something[index])
...just use...
for element in something:
    print(element)
@pmostarac5 жыл бұрын
1:50:29 print("Prediction: " + str(predict[0])) - the predict variable is an array with 250 elements, so why did you use only the first one (index zero) to evaluate the NN's prediction?
@lynnlo5 жыл бұрын
It's an example normally you'll run a a for i in predict function to find the highest value.
@pmostarac5 жыл бұрын
@@lynnlo But predict[] then has a value for each of the 250 words - so if someone wrote a negative review with only one positive word, max(predict[]) would be ~1 and the review would be characterised as positive?
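If predict comes back with 250 entries, a likely cause is that a bare Python list of 250 integers was passed, so Keras treated each word ID as its own sample. A hedged fix is to add an explicit batch dimension, mirroring the image example earlier (test_review here is assumed to be one encoded, padded review):

```python
import numpy as np

# Hypothetical: test_review is one encoded, padded review of length 250
predict = model.predict(np.array([test_review]))  # input shape (1, 250)
print(predict.shape)   # (1, 1): one sigmoid score for the whole review
print(predict[0][0])   # close to 1 -> positive, close to 0 -> negative
```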
@arjunalwe76974 жыл бұрын
For some odd reason, my IMDb model predicts numbers that aren't exactly 1 or 0 (e.g. 0.9999, 0.23538295, etc.). This isn't a problem, but I noticed Tim's model always output either 1 or 0 and was wondering if something is wrong with my model.
@joseortiz_io5 жыл бұрын
Oooo awesome man! I literally just saw a video from your main channel I'm assuming! Awesome content my friend. Im trying to familiarize myself with the AI community on KZbin. Have a good one!😃👍
@AndrewTateTopG15 жыл бұрын
2:07:00 Maybe a chatbot next? Yes, please - consider this my +1 for an NN chatbot tutorial.
@ali.swatian5 жыл бұрын
love you man. really a good teacher.
@nhimong17994 жыл бұрын
Was using google collab to implement this tutorial. Thank you for the great content. I'm excited to learn this with Tim! He taught me Java thoroughly!
@kidsfree66154 жыл бұрын
Who is botting?
@elliotfriesen68204 жыл бұрын
Great videoooooo Ur so goodddd U should make daily vids
@jace10374 жыл бұрын
Great video! (The better replacement is using regex)
@shippy59523 жыл бұрын
Great tutorial! One question about the GlobalAveragePooling layer. After embedding, are we taking the average of the embedding features over all the word vectors, or the average of every individual vector? Say we have 2 words in a sentence whose sentiment we want to predict: "Very nice" -> [1,1,1,1], [2,2,2,2] - 2 words, 2 word vectors with 4 embedding features (contexts). The correct way would be to take the average over these vectors, so the lower-dimensional output is [1.5, 1.5, 1.5, 1.5], which we then pass to the dense layer. The incorrect way would be to output a 2-dimensional vector by averaging the 2 vectors individually -> output: [1, 2]? Just averaging every word vector individually and passing them all on in a new vector doesn't make sense to me and would just throw away the context.
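For what it's worth, GlobalAveragePooling1D averages over the word (time) axis, so the two-word example above does come out as [1.5, 1.5, 1.5, 1.5]. A tiny sketch that checks this directly:

```python
import numpy as np
import tensorflow as tf

# One sentence, two words, embedding dimension 4 (the toy example from the comment)
x = np.array([[[1., 1., 1., 1.],
               [2., 2., 2., 2.]]])             # shape (batch=1, words=2, features=4)
pooled = tf.keras.layers.GlobalAveragePooling1D()(x)
print(pooled.numpy())                           # [[1.5 1.5 1.5 1.5]] -- averaged over the words
```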
@benevans13775 жыл бұрын
From the top comments, I couldn't see but verbose means how much detail the program goes into, verbose is used a ton in linux commands so check them out!
@ProgrammingwithPeter5 жыл бұрын
Pretty neat, TensorFlow has grown so much.
@rajeshsomasundaram72995 жыл бұрын
It was a great first half (I watched up to that point). How do we tell our neural network's output neurons (output 0 to output 9) to predict specific labels? For example, that one output neuron should predict Trouser - where do we actually specify that?
@aiwithgaurav5 жыл бұрын
y labels represent the actual output.
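To spell that out a little: you never name the output neurons. With sparse_categorical_crossentropy, output neuron i is simply scored against label i during training, and the class_names list is only used afterwards to turn an index back into a readable name. A sketch, assuming the Fashion-MNIST model, test_images, and class_names from the video:

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Each label in the dataset is an integer 0-9; the loss compares it with the
# output neuron at that same index, which is how the mapping is learned.
predictions = model.predict(test_images)   # shape (num_images, 10)
idx = np.argmax(predictions[0])            # index of the most activated output neuron
print(idx, class_names[idx])               # e.g. 1 -> 'Trouser'
```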
@kyde83925 жыл бұрын
Followed your code line by line. Awesome Tutorial 🔥. Am getting an error though in the Text Classification : `ValueError: A target array with shape (15000, 250) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output.` It would also be great if you could provide your code online.
@yasmineguemouria90995 жыл бұрын
same problem here
@achmed204 жыл бұрын
Thanks a ton, I finally understood the basics :)
@toluwaniamos76815 жыл бұрын
Thank you so much Tim! As a beginner into tensorflow and machine learning, you were so excellent and helped me grasp the basic tips! Thank you again!! 😀
@prithivik29524 жыл бұрын
Great video! Batch size is for parallel training GPUs..
@PythonNC5 жыл бұрын
54:30 i'm stuck with this error TypeError: The added layer must be an instance of class Layer. Found: {, , }
@mot7955 жыл бұрын
me too
@wojtek43515 жыл бұрын
1:27:54 I am confused as to why you use the sigmoid function for the last layer? as I understand you want it to be mapped from 0 - 1 but doesn't sigmoid function map values from -1 - 1? I also can't really understand what is the point of giving "averaging" layer the "rectified linear unit" function if it just averages the vectors? does it average them and then confine them within the range of the function?
@dilanboskan22225 жыл бұрын
google what sigmoid is...
@arnobchowdhury18044 жыл бұрын
1:27:00 text classification
@shantanuagrawal57064 жыл бұрын
Nice content and explanation, thanks for the video
@shantanuagrawal57064 жыл бұрын
Can someone please clear out some of my doubts or explain these differences in my outputs? I am working in Jupyter Notebook I got these differences in my output as compared with yours, based on the same code as yours. I got around 2.2k datasets to work with, and the speed of the processing was around 2ms/step (Is it due to Jupyter environment?) Hence, for epoch=5: My accuracy ranged from 0.62 to 0.69 and testing accuracy was around 0.63, but with great difference in loss value, that was 92 for me. Similarly, when that model was again trained for epoch=10: accuracy ranged from 0.83 to 0.86 and final testing acc was 0.70 and loss value was around 88, but in the video, it was in 0.92 range.
@shantanuagrawal57064 жыл бұрын
Why is the length of the predict object in the text classification code 250? We have our last layer as a dense layer with 1 neuron using the sigmoid activation function.
@VISHESHBCY4 жыл бұрын
I'm getting this error, please help: "Input arrays should have the same number of samples as target arrays. Found 60000 input samples and 10000 target samples."
@hemanthkotagiri88655 жыл бұрын
Oh man, Thank you so much, Tim! I also follow your channel!