Convolution in the time domain

28,053 views

Mike X Cohen


Comments: 37
@weilawei 4 years ago
Super clear explanation, very intuitive. Thank you.
@mikexcohen1 4 years ago
You're welcome!
@romanvereb7144 4 years ago
Mike X Cohen - the unsung hero of our age
@mikexcohen1 4 years ago
Aww, now you make me blush ;)
@IamGQ87 4 years ago
Really very pedagogical. Thank you.
@bokkieyeung504 3 years ago
I'm wondering why we don't align the center of the kernel with the edge of the signal (this still needs zero-padding, but fewer extra zeros), so that the result has exactly the same length as the original signal and there's no need to cut off the "wings"?
@mikexcohen1 3 years ago
If you are implementing convolution manually in the time domain using for-loops, then yes, that's convenient. But the formal procedure is done to match the implementation in the frequency domain, which is much faster.
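
A minimal sketch of that equivalence in Python/NumPy (made-up signal and kernel, not code from the video): a manual for-loop convolution in the time domain and the FFT-based frequency-domain route give the same full-length (N+M-1) result, and the wings are then trimmed to recover the original length.

import numpy as np

# hypothetical example signal and kernel
rng = np.random.default_rng(0)
signal = rng.standard_normal(100)        # N = 100
kernel = np.hanning(21)                  # M = 21

N, M = len(signal), len(kernel)
L = N + M - 1                            # length of the full convolution result

# time domain: flip the kernel and slide the dot product along the zero-padded signal
padded = np.concatenate([np.zeros(M - 1), signal, np.zeros(M - 1)])
flipped = kernel[::-1]
conv_time = np.array([np.dot(padded[i:i + M], flipped) for i in range(L)])

# frequency domain: zero-pad both spectra to length L and multiply
conv_freq = np.real(np.fft.ifft(np.fft.fft(signal, L) * np.fft.fft(kernel, L)))

print(np.allclose(conv_time, conv_freq))  # True; the FFT route is just much faster

# cut off the "wings" (half a kernel on each side) to recover length N
half = (M - 1) // 2
conv_same = conv_freq[half:half + N]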
@kaymengjialyu5086 3 years ago
You are such a good teacher :)
@mikexcohen1 3 years ago
aww, thanks!
@jesusdanielolivaresfiguero4752 3 years ago
Is there a way to buy your Analyzing Neural Time Series Data book on credit for monthly payments?
@mikexcohen1 3 years ago
Hi Jesus. Find my email address (it's on my CV) and send me an email about this.
@me-ou8rf 23 days ago
Doesn't using a convolution kernel that has nonzero values for negative n violate causality? In the first case the wavelet was mirrored, so h(-n), and the h(n)/wavelet kernel in that case was not causal. However, in the second type of convolution, where the convolution result is longer than the signal, I guess the causality condition on the kernel was satisfied?
@mikexcohen1 20 days ago
Yes, exactly: This is a non-causal filter, which is suitable for offline analyses. The main advantage is that it creates a zero-phase-shift filter. The "backwards" influence (that is, dynamics in the near-future influence the estimate of the current time point) can be a disadvantage, depending on the application.
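
A small Python/NumPy sketch of the zero-phase-shift point (illustrative, with a made-up impulse signal): a symmetric kernel centered on the current sample leaves a feature where it is, whereas a one-sided (causal) version of the same kernel smears it toward later time points.

import numpy as np

# impulse at sample 100 in an otherwise flat signal
signal = np.zeros(200)
signal[100] = 1.0

# symmetric Gaussian kernel -> non-causal, zero-phase filter
t = np.arange(-15, 16)
gauss = np.exp(-t**2 / (2 * 5**2))
gauss /= gauss.sum()

# one-sided (causal) version of the same kernel
causal = np.where(t >= 0, gauss, 0)
causal /= causal.sum()

smooth_sym = np.convolve(signal, gauss, mode='same')
smooth_causal = np.convolve(signal, causal, mode='same')

idx = np.arange(signal.size)
print(idx @ smooth_sym)     # 100.0: energy stays centered on the impulse (zero phase shift)
print(idx @ smooth_causal)  # > 100: the causal filter pushes energy toward later samples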
@me-ou8rf 11 days ago
@mikexcohen1 Thanks for answering. Can I see ERD/ERS from a single trial of a single channel, or do I need to combine the time-frequency plots of a specific channel across many trials?
@mikexcohen1 5 days ago
Technically, you can calculate and plot the time-frequency response of a single trial, but that's likely to be quite noisy (assuming you're working with M/EEG/LFP data). Therefore, it's common to calculate the time-frequency response from many trials in the same condition, then average those TF maps together to reduce noise and highlight signal.
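
A rough sketch of that trial-averaging idea in Python/NumPy (simulated data, with assumed parameters such as 5-cycle wavelets and a 500 Hz sampling rate): extract power per trial via complex Morlet wavelet convolution, then average the power maps over trials.

import numpy as np

srate = 500                                  # assumed sampling rate (Hz)
times = np.arange(-1, 2, 1 / srate)          # trial time axis
n_trials = 50
frex = np.linspace(2, 40, 20)                # frequencies to extract

# simulated trials: a 10 Hz burst after t=0 plus noise (stand-in for real M/EEG data)
rng = np.random.default_rng(1)
burst = np.sin(2 * np.pi * 10 * times) * (times > 0)
data = burst + rng.standard_normal((n_trials, times.size))

# complex Morlet wavelet convolution per frequency, power averaged over trials
wavtime = np.arange(-1, 1, 1 / srate)
tf_power = np.zeros((frex.size, times.size))
for fi, f in enumerate(frex):
    s = 5 / (2 * np.pi * f)                  # Gaussian width for ~5 cycles
    wavelet = np.exp(2j * np.pi * f * wavtime) * np.exp(-wavtime**2 / (2 * s**2))
    trial_power = np.zeros(times.size)
    for trial in data:
        conv = np.convolve(trial, wavelet, mode='same')
        trial_power += np.abs(conv) ** 2     # single-trial power (noisy on its own)
    tf_power[fi] = trial_power / n_trials    # trial average reduces the noise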
@MrPabloguida 2 years ago
Is it fair to say that the resulting signal, even after cutting off the wings, will still be "contaminated" by the zero-padding for at least another half kernel length, and only after that does it become a pure, clean signal/kernel convolution? Does that make sense?
@mikexcohen1 2 years ago
It is certainly the case that edges are always difficult to interpret from any kind of filtering. When possible, it's best to have extra time series before and after the period of interest, so that you can ignore those edges.
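
That half-kernel contamination is easy to check in a Python/NumPy sketch (made-up data): convolving a short segment by itself (implicit zero padding) versus convolving the full recording and extracting the same segment differ only within about half a kernel length of each edge.

import numpy as np

rng = np.random.default_rng(4)
long_recording = rng.standard_normal(300)    # hypothetical longer recording
segment = long_recording[100:200]            # period of interest, N = 100
kernel = np.hanning(21)
kernel /= kernel.sum()

short_result = np.convolve(segment, kernel, mode='same')                 # zero-padded edges
full_result = np.convolve(long_recording, kernel, mode='same')[100:200]  # real data at the edges

diff = np.abs(short_result - full_result)
half = len(kernel) // 2
print(diff[:half].max())       # clearly nonzero: edge samples feel the missing neighbors
print(diff[half:-half].max())  # ~0: interior samples are unaffected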
@brixomatic 1 year ago
Wouldn't the convolution be a better representation of the signal if you could wrap around the edges of the signal? I.e., you'd start the kernel's midpoint at the start of the signal and take the left half of the kernel from the right side of the signal, and if the kernel exceeds the right bound, take the data from the start of the signal. This way your convolution would have the same length as the signal, but operate only on the signal's data and not sneak in zeros that have no meaning and pollute the results.
@mikexcohen1 1 year ago
Yes, that's called "circular convolution"; what I explain here is "linear convolution." Both methods produce edge effects that should not be interpreted.
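
Here is a small Python/NumPy sketch of that distinction (illustrative signals, not from the video): zero-padding the FFTs to length N+M-1 reproduces linear convolution, whereas length-N FFTs wrap the wings around the edges, i.e. circular convolution.

import numpy as np

rng = np.random.default_rng(2)
signal = rng.standard_normal(100)   # N = 100
kernel = np.hanning(21)             # M = 21
N, M = len(signal), len(kernel)

# linear convolution: result has length N + M - 1 (the "wings")
linear = np.convolve(signal, kernel)

# circular convolution: length-N FFTs, so the wings wrap around the edges
circular = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, N)))

# away from the left edge the two agree; the first M-1 samples wrap around
print(np.allclose(circular[M - 1:], linear[M - 1:N]))   # True
print(np.allclose(circular[:M - 1], linear[:M - 1]))    # False: wrap-around edge effect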
@鐵匠史密斯 1 year ago
@mikexcohen1 Teacher, I want to make sure my thoughts are correct. The edge effect occurs when we use the convolution theorem to obtain the result of the convolution between two signals, because the convolution theorem uses the FFT. If the maximum frequency of the two signals exceeds the Nyquist frequency, aliasing will occur. Is that why it's called the "edge effect"? Sorry, I'm not a native English speaker; if something is confusing, please correct me.
@violincrafter 4 years ago
Wings of convolution: a good band name
@mikexcohen1 4 years ago
I'll be the back-up kazoo player.
@prempant6428 3 years ago
How do you decide what sort of kernel to use?
@mikexcohen1 3 years ago
That's application-specific. But the procedure of convolution doesn't depend on the shape or length of the kernel.
@hurstcycles 3 years ago
If the kernel is a Morlet wavelet (formed by combining a constant sine wave and a Gaussian) and symmetrical around the midpoint, flipping the kernel is not necessary; is that accurate? Thanks for the great video.
@mikexcohen1 3 years ago
Kindof, but be careful with the descriptions: The kernel always needs to be flipped, but if the kernel is symmetric, then flipping has no effect. (Also, sine is an odd function and thus is asymmetric; cosine is symmetric about zero.)
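
One way to see this in code (a Python/NumPy sketch, illustrative): np.convolve flips the kernel while np.correlate does not, so the two differ for an asymmetric kernel but coincide exactly for a symmetric one.

import numpy as np

rng = np.random.default_rng(3)
signal = rng.standard_normal(50)

asym = np.array([0.0, 0.25, 0.75, 1.0])     # asymmetric kernel
sym = np.array([0.25, 0.75, 0.75, 0.25])    # symmetric kernel

# convolution flips the kernel; correlation does not
print(np.allclose(np.convolve(signal, asym, 'full'),
                  np.correlate(signal, asym, 'full')))   # False
print(np.allclose(np.convolve(signal, sym, 'full'),
                  np.correlate(signal, sym, 'full')))    # True

# equivalently: convolving with the pre-flipped kernel equals correlation
print(np.allclose(np.convolve(signal, asym[::-1], 'full'),
                  np.correlate(signal, asym, 'full')))   # True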
@jaimelima2420 3 years ago
This is good stuff. Good Job!
@ormedanim 3 years ago
You lost me at God's perspective; now I'm flipping (out) instead of the kernel :D But I am very thankful for all the videos and the ANTS book.
@mikexcohen1 3 years ago
Nice.
@jaimelima2420 3 years ago
Richard Hamming's Digital Filters explains this God's-perspective idea in a different way; worth checking, IMHO.
@RenanAlvess 4 years ago
Congratulations on the explanation; it was very enlightening for me.
@mikexcohen1 4 years ago
Nice to hear. I made this video just for you, Renan :D
@williammartin4416 1 year ago
Excellent explanations
@mikexcohen1 1 year ago
Glad you liked it!
@sachindrad.a836 3 years ago
Very nice explanation
@helenzhou3530 3 years ago
This video is super helpful, thank you so much!
@tranez2205 4 years ago
Awesome video! Thank you so much!