Hi Mike, I have a question: why do you use power, and not amplitude directly? In our case, power is determined solely by amplitude.
@mikexcohen1 · 5 years ago
Hi Yiping. Good question. The short answer is that power is the convention in this field. Indeed, power is simply amplitude squared. There are some differences: for example, amplitude better highlights the subtle features of the signal, including the non-stationarities, while power better highlights the most prominent features of a signal. But mostly I recommend using whatever is most commonly used in your field. Mike
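For concreteness, here is a minimal sketch of the relationship Mike describes (not from the lecture; it assumes a NumPy/SciPy environment and an arbitrary test signal): power is just the squared amplitude envelope, so squaring compresses small values and exaggerates the prominent ones.

```python
# Minimal sketch (illustrative signal, not from the lecture): amplitude vs. power.
import numpy as np
from scipy.signal import hilbert

srate = 1000                                 # sampling rate in Hz (hypothetical)
t = np.arange(0, 2, 1/srate)                 # 2 seconds of data
signal = np.sin(2*np.pi*10*t) * np.exp(-t)   # a decaying 10 Hz oscillation

analytic  = hilbert(signal)                  # analytic signal via the Hilbert transform
amplitude = np.abs(analytic)                 # amplitude envelope
power     = amplitude**2                     # power is simply amplitude squared
```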
@yipingzhang3800 · 5 years ago
I have a feeling this comes from physics, where the power of electricity is more informative than the AC amplitude. I really enjoy your lectures, and I appreciate you sharing your precious experience and understanding with us; you are the man who makes this world better. Dank je wel!
@fahdyazin82 · 5 years ago
Hey Mike, if I have a video stimulus, with events happening during viewing, how should I place the baseline? Say the whole video is 60 s and my event happens during the 33rd second, with epochs of 5 s; can the baseline be 31 to 31.3 s? Thanks in advance!
@mikexcohen1 · 5 years ago
Tricky situation. But yes, it sounds like the baseline should be before the key events.
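A minimal sketch of that idea (placeholder data and a hypothetical -2 to -0.5 s pre-event window chosen only for illustration, not a recommendation from Mike): select a baseline window that ends shortly before the key event and normalize the single-trial power against it.

```python
# Minimal sketch (placeholder data): baseline window just before a key event at 33 s,
# followed by decibel normalization of a frequencies-by-time power matrix.
import numpy as np

srate = 250
times = np.arange(0, 60, 1/srate)                  # 60 s of "video" time
rng = np.random.default_rng(0)
tf_power = rng.random((30, times.size)) + 1e-6     # placeholder single-trial power (30 frequencies)

event_time = 33.0                                  # the key event, in seconds
bidx = (times >= event_time - 2.0) & (times < event_time - 0.5)   # baseline: -2 to -0.5 s pre-event

baseline = tf_power[:, bidx].mean(axis=1, keepdims=True)   # mean baseline power per frequency
tf_db = 10 * np.log10(tf_power / baseline)                 # dB change relative to baseline
```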
@matteldcar · 5 years ago
Hi Mike, thanks for the incredible service you are giving to the community! I'm struggling with pilot data from a dual-EEG recording: basically I have a long ~7-minute run of a joint tapping task in 4 different conditions, and there is a rationale behind the choice of such a long run instead of splitting it into shorter trials to average. Each subject in the pair has their own 30 s resting-state recording immediately before the task. I would pick my baseline from each subject's resting state, for each condition. I am trying several approaches and I see the task-related activity changing drastically depending on my choice. Have you ever dealt with similar scenarios? Would you take advantage of such a long resting recording to select the cleanest segment as a baseline (e.g., the one with the least variance), or would you average several segments (the cleanest ones or a random selection) to increase the signal-to-noise ratio? And in the latter case, I assume it would be better to average the time-frequency points rather than the signal in the time domain, right?
@mikexcohen1 · 5 years ago
Hi Mattia. I would avoid using resting-state as a baseline at all costs. Think about how different your mental activity is during resting-state (thinking about lots of things, boredom, sleepiness, etc.) vs. doing a cognitive task. The best baselines for normalization come from the task itself and close to the trial periods. It might even be better to use no baseline than a resting-state baseline.
@matteldcar · 5 years ago
@mikexcohen1 Then I will rearrange the task structure into trials. Thanks a lot, Mike!
@marioam2977 · 4 years ago
Hi Mike, thanks a lot for the explanations. Great job. One question: do you think that z-scoring the data, i.e., (data - avgBsl) / stdBsl, could also be an option, or are there some drawbacks? Thanks in advance!
@mikexcohen1 · 4 years ago
Thanks, Mario. Although that method sounds like it could work, in practice there is a real danger of having clean data with little pre-trial variance, which would make stdBsl tiny and thus the z-score really huge. I discuss this, with examples, in my ANTS book.
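A minimal numerical sketch of that danger (made-up numbers, not from the book): with a very clean baseline, stdBsl shrinks and the z-score explodes, whereas a dB change relative to the baseline mean stays on an interpretable scale.

```python
# Minimal sketch (made-up numbers): tiny baseline variance inflates z-scored power.
import numpy as np

rng = np.random.default_rng(1)
baseline_power = 1.0 + 0.001 * rng.standard_normal(100)   # very clean baseline -> tiny variance
task_power = 1.5                                           # a modest 50% power increase

avgBsl, stdBsl = baseline_power.mean(), baseline_power.std()

z_score   = (task_power - avgBsl) / stdBsl        # huge, because stdBsl is tiny
db_change = 10 * np.log10(task_power / avgBsl)    # ~1.76 dB: a more interpretable scale

print(f"z-score: {z_score:.1f}, dB change: {db_change:.2f}")
```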
@claudiakrogmeier557 · 3 years ago
Thanks for this video. I know of many labs that record a baseline for 5 minutes, before the experiment even begins, rather than recording a very brief baseline prior to every event. Of course the baseline specifics depend on the study, but could you explain why one might generally record a baseline for so long (several minutes), and separately from the trials (before or after the experimental conditions are presented)?
@mikexcohen1 · 3 years ago
I'm generally not a fan of this approach. People's mindset (and therefore also their brain state) is so different during a 5-minute resting-state compared to doing a cognitive task. The brain is never really at "baseline" so there's no perfect way to normalize the data, but my opinion is to try to have the baseline and task periods be as similar as possible (which also means as close in time as possible).