I am a student in AI, not philosophy. I disagree with a lot of what you said and have come to believe that you don't understand AI and have misrepresented it. I haven't read your book, admittedly, but the content of the talk is not convincing. My points:

- What is a "will", even? Can we not say that a "will" is just an urge to do something that ultimately optimizes one's joy of life (which should reflect evolutionarily advantageous behavior)? In that sense AI also has a will, because it too always optimizes some function towards a minimum. We as the AI engineers simply choose what it should optimize for. We can set the objective function such that minimizing it results in the same kind of behavior humans show, or in completely different behaviors, good or bad (see the first sketch after this list). The fear about AI is that bad or careless humans let their AI loose on the world with the wrong objective.

- When you explain what AI is, you only describe AI that solves supervised learning tasks (which is just function approximation). There is also reinforcement learning (which is not function approximation), and that is essentially what animals and humans do as well. In reinforcement learning we allow the AI to interact with the world and learn by trial and error. No data needs to be prepared for that; the data is created on the fly from the current interactions with the world (see the second sketch below). That is why RL methods can also be used for things like trading on the stock market, while supervised learning methods cannot. AI that interacts with the world online and learns while it interacts is going to be the real deal, not AI programmed explicitly through data collected by humans. AI already exceeds human performance in some tasks and games thanks to RL. It would not be possible to build an AI that outperforms StarCraft II players purely by supervised learning from game replays (unless you had infinite compute at every millisecond and did brute-force search).

- To deal with complex systems, we don't need an exact model, just a probabilistic approximation that handles uncertainty carefully. For example, we humans don't have a full model in our heads of what other drivers are thinking in traffic, and we can still estimate what the other person is going to do and act on it. I see no reason why AI can't do the same. AI can also work in an uncertain environment: again, chess, StarCraft, Dota 2 and so on are all environments where AI exceeds human performance even when facing human opponents, who cannot be modeled exactly.

- When you answered Yulia's question about AI outperforming humans, that is exactly what I mean. RL is outperforming humans, and it does not build on human knowledge in the way you explained; it learns completely from scratch. Hand-engineered features are a thing of the past and no longer necessary, because since the ImageNet success in 2012, neural nets have been much better at feature engineering than humans.

- From what I understand of AI research, all that is currently missing for AI to outperform humans in almost any domain is an effective model of the world. And you can already see that AI is capable of (seemingly) understanding the world, from ChatGPT and from the recent video generation methods. Again, there is no need to really understand the world (humans don't either); all that is needed is a good enough model.
- In general, as I understand you, a "model" for you is something that accurately describes some system. But I think even physical models are often not completely accurate, because they describe systems that sit inside our world and are therefore subject to random noise. Noise from the world always leaks into the system we are trying to model, so we can never have an exactly accurate model anyway. In every problem we deal with, we therefore have to work with an approximate model (see the last sketch below).
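
To make the objective-function point concrete, here is a tiny Python sketch of my own (nothing from your talk, all toy numbers): the optimization loop blindly minimizes whatever function we engineers wrote down, and swapping that function swaps the behavior.

import numpy as np

def objective(params, inputs, targets):
    # Our choice of what "good" means: mean squared error against given targets.
    predictions = inputs @ params
    return np.mean((predictions - targets) ** 2)

def gradient_step(params, inputs, targets, lr=0.1, eps=1e-5):
    # Numerical gradient descent on whatever objective we chose above.
    grad = np.zeros_like(params)
    for i in range(len(params)):
        bump = np.zeros_like(params)
        bump[i] = eps
        grad[i] = (objective(params + bump, inputs, targets)
                   - objective(params - bump, inputs, targets)) / (2 * eps)
    return params - lr * grad

# Toy data; replace the objective and the same loop produces different behavior.
inputs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
targets = np.array([2.0, -1.0, 1.0])
params = np.zeros(2)
for _ in range(200):
    params = gradient_step(params, inputs, targets)
print(params)  # approaches [2, -1]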
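
And this is what I mean by RL creating its data on the fly. It is a toy Q-learning loop on a 5-state chain environment that I made up for illustration; there is no prepared dataset, every transition the agent learns from comes out of its own interaction with the environment.

import random

N_STATES = 5
ACTIONS = [0, 1]                          # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    # Invented dynamics: reaching the rightmost state ends the episode with reward 1.
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.choice(ACTIONS)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)    # this transition IS the training data
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)  # values of states 0..3 grow towards the rewarding end; state 4 is terminal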
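
Finally, a small illustration of the approximate-model point, again with invented numbers: we fit a model to noisy observations and act on a prediction with an uncertainty band, never on exact knowledge of the system.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20.0)
true_speed = 2.0
observed = true_speed * t + rng.normal(0.0, 1.0, size=t.shape)  # noisy measurements

# Approximate model: a straight line fitted to the noisy data.
est_speed, intercept = np.polyfit(t, observed, 1)
residual_std = np.std(observed - (est_speed * t + intercept))

# Predict where the other "car" will be, with an uncertainty band rather
# than a claim of exact knowledge.
t_next = 25.0
prediction = est_speed * t_next + intercept
print(f"predicted position {prediction:.1f} +/- {2 * residual_std:.1f}")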