The AI University
57:00
10 months ago
Panel: AI for Math Learning
44:08
1 year ago
A New Science of Learning with AI
58:42
Comments
@RamkumarL-o9z 2 months ago
Excellent discussion, thanks to the organizers and experts.
@jaelfaulcon 6 months ago
Your disclaimer touched on my expectation that there will soon be AI-managed education to reduce costs and make on-demand education accessible. Coursework will have a much broader knowledge base (not limited to instructors' resources), with case studies researched and drawn from current news and activities. Loved the idea of snippets of useful virtual content mixed in with traditional content delivery -- helps with retention.
@maheshpant4519 9 months ago
Thanks for a very informative and thought-provoking lecture.
@jennifersmith8044 9 months ago
Fascinating panel discussion; very thought-provoking dialogue among the four experts. Listening as a high school teacher, it was interesting to learn of a trust issue among some teachers in NC around using ITS (intelligent tutoring systems), and that ChatGPT has broken down some of those barriers. The topic of standardization at the start of the discussion was particularly intriguing, because standardization could minimize some of the costs, and therefore accessibility for lower-income districts could be increased. Thank you for covering so much territory in only one hour!
@SuperMeiMei 10 months ago
Thank you for sharing this talk! It really inspires me to think about how to do my research in a similar field.
@watsonhartsoe 10 months ago
Bender's critiques on LLMs certainly add spice to the academic stew, focusing heavily on labor and environmental woes. But is she too caught up in the gloom, missing the tech evolution train? It's like she's zeroing in on the shadows in a masterpiece, ignoring the bigger art movement. Sure, her points have weight, but are we risking an oversight of the transformative tech potential here? Seems like a sharp mind potentially boxed in by pessimism.
@alliesaizan3593 9 months ago
Ironic to compare a technology that steals art to an art movement
@Dr.KashyapiAwasthi 11 months ago
Using AI as a tutor for Socratic dialogue opens up new pedagogical insights. In the age of AI, banning it altogether does not work or help; better to come up with ways that not only enhance learning but also engage the learner actively. Thank you for the very informative session.
@sandymayer5486 11 months ago
Sound?
@we-learn-we-grow 1 year ago
Interesting talk. Thank you for sharing. For the question "why have we not seen these broad scale systems change" at 52:00: I think it is because education is one of the "most important" mechanisms for addressing the future, so the risk appetite for making mistakes is very low. The system has also become too big to change, and universities and technical institutes perhaps need to start using AI to code different experiences for entry, which leaves schools to customize education for the communities they serve.
@ingerlangseth 1 year ago
Thank you for an interesting session. Could you please provide the link to the Google document you shared?
@robinmiah228 1 year ago
❤❤❤
@michelefuller9990 1 year ago
The sound seems to disappear halfway through this very interesting panel?
@ericb4821 1 year ago
Helpful presentation. Thanks!
@FermentedOuroboros 1 year ago
this shit wrinkled my brain
@suzanneoleander3224 1 year ago
Very interesting!!
@wasimsalafi 1 year ago
39:27 Fascinating! In that scenario, it might be better to avoid watching news channels like Fox News when children are present, don't you think?
@sevimsoffice 1 year ago
This was really fun to watch and educational! Thanks for sharing, I will check what Khan Academy offers for first graders.
@royatpajarodunes 1 year ago
Dear Mike, thank you for your seasoned, insightful, and ever-learner-centered perspective on our hottest topic in edtech since the WWW!! Warm regards, Roy
@romeolupascu920 1 year ago
Excellent presentation, thank you.
@benprytherch9202 1 year ago
Thanks for sharing this publicly! Listening to Emily Bender is such a refreshing experience during this (hopefully temporary) moment of collective irrational AI exuberance. The comment during Q&A about universities apparently considering using LLMs in academic advising would make for a good (or awful?) example of the problem of accountability when institutions choose to hand communication duties to generative AI.

I work at a university; academic advising is part of my job. One major part of advising is helping students put course schedules together, which seems simple but requires the sort of planning that we know LLMs can't do (e.g. respecting prerequisite structures when planning sequences of courses to take over the next two years). Are universities prepared to clean up the messes that LLMs will inevitably create if tasked with this?

Registration errors are probably the most straightforward type of damage control universities would have to do when LLMs give bad advice. There are worse. Students also look to their advisors when dealing with personal crises, underconfidence, conflicts with instructors, the tough decision of whether to drop a class or change their major or leave college altogether, those kinds of things. Say an advisor insults or undermines or misleads a student. There will be (or ought to be!) recourse and accountability. But if an LLM does it... what then?

Even if a noble/foolish administrator promised to "take responsibility" for the bad actions of an LLM, this is a contractual kind of responsibility, like a co-signer on a loan being responsible to the bank when the other person stops paying. While it's still important for someone to take responsibility on the institution's behalf, I'm guessing an administrator's heartfelt apology for the behavior of a chatbot would ring hollow. There's an inherent moral responsibility underlying human communication that just ain't there when we delegate it to a human-sounding machine. Even in an extreme case where the LLM says something gratuitously offensive and someone from the institution is fired as a result... that kind of "accountability" is qualitatively different from the kind that we'd put on a person if they said what the machine said.
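To make the planning point concrete, here is a minimal sketch of why prerequisite-respecting scheduling is a constraint problem rather than a text-prediction problem; the course codes and prerequisites are invented for illustration. A deterministic algorithm guarantees a valid ordering, where a text predictor can only produce a plausible-looking one:

```python
# Minimal sketch: ordering courses so prerequisites always come first.
# Course codes and the prerequisite graph below are hypothetical examples.
from graphlib import TopologicalSorter  # Python 3.9+

prereqs = {
    "CALC1": set(),
    "CALC2": {"CALC1"},
    "STAT301": {"CALC2"},
    "STAT401": {"STAT301"},
}

# static_order() yields a sequence that never violates a prerequisite;
# an impossible loop of prerequisites raises CycleError instead of
# silently producing an invalid plan.
plan = list(TopologicalSorter(prereqs).static_order())
print(plan)  # ['CALC1', 'CALC2', 'STAT301', 'STAT401']
```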
@bozkurtkaraoglan7038 1 year ago
Thank you Prof. Bender ❤
@MartinLindnerDigital 1 year ago
Has Emily ever really used an LLM? I have the strong impression that she has never spent much time trying to use one as a "partner" for delving into complex subject fields. This has nothing to do with the way children learn language, or with understanding defined as being grounded in the physical world, but that is not a problem at all for many use cases. In fact, real language in use is for the most part not (directly) grounded; most human language relates to other language most of the time, and this is what LLMs are doing too. They are far from perfect, but they are stunningly good at this. The parrot metaphor is totally misleading because the "prediction", that is, the forming of statements, is obviously based on a complex model of semantics that the "learning machine" has built in numerous complex processes of training and fine-tuning. Of course, this is nothing like a human "mind" (and talking about AGI is bullshit), but it is a very capable agent or medium of "discursive intelligence", represented by the enormous amount of language data it has been trained with.
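A toy bigram predictor, with a made-up corpus, makes the "language relating to other language" point concrete: statements are formed purely from statistics over prior text. A real LLM replaces the raw counts with a learned model, but the principle of predicting the next token from preceding tokens is the same:

```python
# Toy bigram "language model": picks each next word purely from counts
# over prior text, never from any grounding in the world.
import random
from collections import Counter, defaultdict

corpus = "language relates to other language and language builds on language".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    candidates = bigrams[prev]
    # Sample proportionally to how often each continuation followed `prev`.
    return random.choices(list(candidates), weights=candidates.values())[0]

word, out = "language", ["language"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "language relates to other language and"
```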
@stephenwright3203 1 year ago
Dear Reply Guy, did you ask the Google Machine who Emily Bender is? Linguist. Co-author of Stochastic Parrots. Do a little work before making such an ass of yourself.
@MartinLindnerDigital 1 year ago
@stephenwright3203 Of course I know. That doesn't make this any better. (You could try reading some other theories of linguistics, perhaps.)
@benprytherch9202 1 year ago
I thought she explained what she meant by the parrot metaphor pretty clearly. She's not denying that the algorithms are complex. She's saying that the algorithms learn the structure of language but not the meaning of language. I know that there are components of machine learning algorithms that are sometimes described as "semantic"; this is an attempt at giving meaning to models/algorithms that are usually hard to interpret. We're talking about a field in which a lot of people have made the choice to describe machines using words that ordinarily describe human minds: "intelligence", "learning", "neural net", "semantic search", etc. Other terms were available; these were chosen (maybe in a parallel universe "loss" is called "regret" and "gradient descent" is called "introspective restitution" and "convergence" is called "self-actualization").

That complex model of semantics is still just picking up on statistical regularities. It can identify groups of words whose meanings are related, but it doesn't need to understand what they mean as part of the process. Words with related meanings tend to show up near each other; LLMs pick up on this.

She also acknowledges that there are some reasonable use cases for LLMs. Does yours not fall under any of the categories she listed? As for grounding, how often do you read or hear English and manage to not infer any meaning grounded in some aspect of the world? It happens (e.g. "hey, what's up"), but most of the time? I would agree, and I think Dr. Bender would agree, that there are things LLMs are stunningly good at. Her argument is that this class is narrower than we're inclined to believe, given how they talk to us like a person would.
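As a toy illustration of that distributional point (the sentences and the whole setup are invented for the example), plain co-occurrence counts already make related words look similar to each other, with no access to meaning at any step:

```python
# Toy distributional similarity: words appearing in similar contexts get
# similar count vectors, without any notion of what the words mean.
from collections import Counter
from math import sqrt

sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count each word's neighbors within a 1-word window.
contexts = {}
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        ctx = contexts.setdefault(w, Counter())
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                ctx[words[j]] += 1

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "cat" and "dog" come out similar purely from shared contexts.
print(cosine(contexts["cat"], contexts["dog"]))
```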
@stephenbyerly5887 1 year ago
"has emily ever really used a LLM?" I don't see the use of asking an expert in computational linguistics whether she has used an LLM, except as an aggressive and fallacious *gotcha* for talking down to a guest speaker.
@LarsJohnsen 11 months ago
Yes, the parrot metaphor is kind of misplaced or misleading. LLMs create connections between elements in the data, i.e. recognition of patterns, which go far beyond any notion of a parrot just repeating what is there. The models make connections that do not exist within the data as repeatable chunks, but employ structural similarities between the elements. It is these structural couplings that make ChatGPT a useful discovery partner, as @MartinLindnerDigital suggests.
@AutummCaines 1 year ago
So good! Thank you!
@LupinoArts 1 year ago
One comment on "Make LLMs 'safe'": if we were able to build a device that could fact-check and filter biased content from ChatGPT's output, we wouldn't need ChatGPT any more; the device itself would be the tool we are looking for in ChatGPT.
@cliffwords 1 year ago
Wonderful talk
@ifarmer 1 year ago
Really helpful, Mike; calmly explained and backed by real examples.
@thomasjones9394 1 year ago
Thank you.
@dougkirchmann1656 1 year ago
Thanks Mike, that was so well articulated. Brilliant slides.
@kdckeino 1 year ago
Is the PowerPoint presentation available?
@LianSu-cl5rt 1 year ago
I just listened to one of those professors' offline lectures and found Graile's YouTube account. Nice job.
@ProfJAdams 1 year ago
Note that when ChatGPT generates a sample course syllabus at 11:05, the course textbook it suggests ("Teaching with Technology: A Practical Guide" by K.J. Willis and M.D. Johnson) does not actually exist; it is a typical ChatGPT "hallucination."
@amalalshehry9854 1 year ago
Thank you
@elizapapajanis2414 2 years ago
Will you help me to grasp some constructs?