Hey all, at 14:26, are we missing the quantize_annotate_layer wrapper over the Conv2D layer (inside Sequential), like this: quantize_annotate_layer(tf.keras.layers.Conv2D(32, 5, input_shape=(28, 28, 1)))?
@gauravsingh-jm6dk 4 months ago
Yes, you are right. quantize_annotate_layer is for annotation purposes only, so that quantize_apply knows which layers to quantize. There is another way that gives you better granularity: you can use the QuantizeWrapper class directly. It gives you the freedom to quantize the model to your needs, since you can set the configuration parameters for quantization yourself.
@shubhammane6357 2 years ago
I tried QAT; as a result I got an .h5 model with quantize-wrapper layers. I want to remove them and get back my original model with the modified weights. How can I do that?
@athreyamurali1439 4 years ago
Hey, can you re-upload with better audio, please?
@alias15vapour 3 years ago
Sorry about that. I recorded the audio locally so it would be better, but forgot that AirPods audio compression over Bluetooth loses quality.
@athreyamurali1439 3 years ago
@@alias15vapour All good, it happens. The topic seems really interesting though, so I'd really appreciate it if you could re-upload or re-record it sometime. Thanks!
@alias15vapour 3 years ago
@@athreyamurali1439 Thanks. This takes a bunch of post-production work, so it's a bit unlikely tbh, but I (or someone else on the team) will definitely do this, and a better job, for the next version.
@lisali6120 3 years ago
Thanks for sharing! Does it support mixed precision?
@bryanlozano8905 3 years ago
It should; he mentioned custom quantization for specific layers.
@alias15vapour 3 years ago
QAT emulates model execution at certain precisions so that model accuracy is preserved. If that's your goal, you can totally do it like Bryan mentioned. But it's not the same as mixed precision for training.
@sunnyguha2 4 years ago
Get a better microphone.
@Lisa-hb3js 10 months ago
I got this error whatever I do (the same if the network only contains Dense layers...) : ValueError: Unable to clone model. This generally happens if you used custom Keras layers or objects in your model. Please specify them via `quantize_scope` for your calls to `quantize_model` and `quantize_apply`. [Layer supplied to wrapper is not a supported layer type. Please ensure wrapped layer is a valid Keras layer.].
@opydas4548 7 months ago
Have you found a solution?
@gauravsingh-jm6dk 4 months ago
You get this error when you define a custom layer. For example, say I define my custom layer as class CustomLayer(tf.keras.layers.Layer) and do some operations inside it. Then, whenever you call quantize_apply, you need to declare the custom objects in quantize_scope.
@sanjoetv5748 11 months ago
I'm having a problem when I convert my .h5 model to TFLite: when I test the TFLite model in my mobile app, the accuracy is much lower than when I run the .h5 in Jupyter. My question is: can quantization-aware training help me reduce the accuracy loss after converting to TFLite? Please, someone help!
@gauravsingh-jm6dk 4 months ago
Yes, if QAT is done properly. It will improve your accuracy for sure.
@Hav0c1000 3 years ago
Hey Pulkit, say I wanted to constrain quantization parameters to power-of-2 values. Would that be supported?
@yoloswaggins2161 4 years ago
Can this be used for tensor cores on Nvidia GPUs or is it only for embedded devices?
@alias15vapour 3 years ago
By default it supports the TFLite quantization spec. If you want to use it for Nvidia, you would have to write custom quantization configs specific to Nvidia. But it absolutely can be done.
@yoloswaggins2161 3 years ago
@@alias15vapour Thanks for the answer. Would that mean writing CUDA kernels, or could you wrap it with something higher-level like TensorRT?
@alias15vapour 3 years ago
@@yoloswaggins2161 You wouldn't need to write any kernels. You would just need to arrange the TF graph in a way that emulates the quantization on Nvidia chips; it would reuse their existing kernels. It's possible to use TensorRT, but you would need to know the deep internals of TensorRT to construct the graph correctly.
@yoloswaggins2161 3 years ago
@@alias15vapour I see, thank you.
@morekaccino 4 years ago
I can't hear anything
@andresfernandoaranda5498 4 years ago
same
@anishdeepak1826 2 years ago
I have trained an ssd_mobilenet_v2 model using the Object Detection API and saved it as a .pb file. How do I apply quantization to my model? I don't have an .h5 file.
@gauravsingh-jm6dk 4 months ago
For PTQ (post-training quantization): if you are running on a GPU, use TensorRT; if you are on an Intel CPU, use OpenVINO. If you want to do QAT (quantization-aware training), refer to the tensorflow_model_optimization library; if you have an Nvidia GPU, you can also use Nvidia's toolkit for quantization-aware training.
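For the .pb case specifically, the TFLite converter can read a SavedModel directory directly (the .pb lives inside it), so no .h5 is needed. A sketch of full-integer PTQ with calibration data — the stand-in model, directory path, and input shape below are placeholders for your Object Detection API export:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the exported detection model; in practice, point the converter
# at the export directory containing saved_model.pb.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
tf.saved_model.save(model, '/tmp/od_saved_model')

# Representative samples drive the int8 calibration; match your real
# preprocessing and input shape (e.g. (1, 300, 300, 3) for ssd_mobilenet_v2).
def representative_data():
    for _ in range(10):
        yield [np.random.rand(1, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/od_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_bytes = converter.convert()
```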
@rupeshmohanasundaram6718 10 months ago
Does QAT work for object detection? If so, how?
@sreeragm8366 4 years ago
Are there any scenarios in which quantization shouldn't be done? Like, in case I want to convert the model to other formats supporting optimization, such as TensorRT.
@alias15vapour 3 years ago
That depends on your needs. If you want to use TensorRT for optimization that works fine as well. Quantization is useful if performance is a concern for you.
@PremKumar-qi3cd 4 years ago
When I try to post-quantize (int8) a SimpleRNN model for time-series data, it throws an error saying only a single graph is supported. So do RNNs and LSTMs support quantization and conversion to TFLite models? If yes, how can I address the error? Thanks in advance. :)
@raisaalphonse4094 3 years ago
I'm using QAT on a functional model, but I'm getting: ValueError: `to_quantize` can only either be a tf.keras Sequential or Functional model. I'm not really sure why I'm getting this error. Could anyone please help me out with this?
@gauravsingh-jm6dk 4 months ago
If you do model.summary(), you probably have a layer containing sub-layers: a Keras model declared as a class acts as a single layer, and that's what this error is complaining about. Build a proper functional model; only then can you use QAT.
@nataliameira2283 4 years ago
Documentation → goo.gle/2WMUZze → ERROR (Sorry, we couldn't find that page.)
@sairamvarma6208 4 years ago
The Github link in the description doesn't work
@alias15vapour 3 years ago
Sorry about that, there's a typo. Just use the link below.
@ramamunireddyyanamala973 2 years ago
Very good, sir.
@travelsome 4 years ago
Waiting for a video on sequential modelling.
@rushikeshgandhmal 4 years ago
Hey, how should I start learning deep learning? Could you suggest something?
@gokulakrishnanm a year ago
@@rushikeshgandhmal How's your learning journey? 🎉
@bryanlozano8905 3 years ago
Bruh, is someone weed-whacking outside?
@alias15vapour 3 years ago
Unfortunately, yes. They started that the moment I started recording :(