I'm afraid this talk gave some people the impression that I was flippant about careful scholarship, or that I wrote a misleading paper with no substance. I want to be clear that I do take these things seriously, and that the paper made substantial contributions which still stand. The message I intended was that *even though we tried hard to be careful*, we still made mistakes, which we eventually fixed.

Regarding novelty, I glossed over some details in the talk and may have made things sound worse than they were. The original version of the paper clearly cited previous uses of the adjoint sensitivities method. The claim that annoyed Joel was that we (mistakenly) thought we were the first to have implemented it entirely using general and efficient vector-Jacobian products with autodiff. To check this claim, we had closely examined (and cited) the implementations in Stan, FATODE, and DOLFIN. After we published, Joel Andersson pointed out that his package, CasADi, did include a general vector-Jacobian implementation, which we cited in the next version. I think I also gave the wrong impression when I said "the only thing we're doing is bringing this to PyTorch": I was referring only to the adjoint sensitivities method. There is plenty of other novelty in the paper.

I also want to update the story about the MIT Tech Review article. After this talk, Karen Hao explained to me what she had meant by the 'inventing ODEs' claim. She initially thought that we were calling our new method 'ODE Solvers', which is why the first version of the article said we were proposing a new method with that name. As for the line about how we "need to work on our branding", she said: "That line was meant to tease the fact that you simply named your new neural network very literally, after ODEs, instead of choosing a simpler, perhaps more figurative, name. (Similar to if I had invented a new apple cutting device and just called it “apple cutting device” if you catch my drift.) Of course, I see now why it made it sound like you were the first to ever string together the words “ordinary differential equations.” Hence, why I corrected it upon request." Until that conversation, I didn't realize that Karen Hao already knew about ODEs. The original version of the article did clearly give the impression that we had invented ODE solvers, but I'm sorry for having passed on, in this talk, my mistaken impression that Karen didn't understand ODEs at all.
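For readers unfamiliar with the term: a vector-Jacobian product is the primitive that reverse-mode autodiff exposes, and it lets the adjoint method use the gradient of the dynamics without ever materializing the full Jacobian. A minimal sketch of the idea (using JAX purely for illustration, not the paper's actual code; the dynamics function here is made up):

```python
import jax
import jax.numpy as jnp

# An illustrative dynamics function f(z, t) for an ODE dz/dt = f(z, t).
def f(z, t):
    return jnp.tanh(z) * jnp.cos(t)

z = jnp.array([0.5, -1.0])   # state
a = jnp.array([1.0, 2.0])    # adjoint (co-state) vector

# jax.vjp returns f(z) and a function computing a^T (df/dz)
# without ever forming the Jacobian matrix explicitly.
f_out, vjp_fn = jax.vjp(lambda z_: f(z_, 0.1), z)
(aJ,) = vjp_fn(a)

# Sanity check against the explicit Jacobian (feasible only for tiny systems).
J = jax.jacobian(lambda z_: f(z_, 0.1))(z)
assert jnp.allclose(aJ, a @ J)
```

The point of the "general and efficient" claim was exactly this: the adjoint dynamics need only this product, so the cost per step scales like one extra evaluation of `f` rather than like building an n-by-n Jacobian.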
@freddiekalaitzis5708 (5 years ago)
"you can't cross the streams with Neural ODEs" I see what you did there
@Erotemic (5 years ago)
This is how you science. Great job at owning your mistakes and valuing truth over profit.
@vicktorioalhakim3666 (5 years ago)
The fact that he got away with such bullshit, HIGHLIGHTS the problems with modern review process and *doing Science*, especially in ML. Neural ODEs... LOL There's no way you can convince me that this bullshit based on Euler's method is actually useful XDD
@linglingfan8138 (3 years ago)
It works, and people are using it. That explains everything. I like this work a lot.
@keixi512 (5 years ago)
Really appreciate his honesty.
@revoiceful (5 years ago)
Very valuable to see someone admitting to failures in science. Still, I think the idea is refreshing and we need more of that, even if there might be problems to begin with. Isn't that how science is supposed to work anyway?