Point 3. How do models know how to do the bad things that safety teams are trying to nerf them from telling you? Because those things are in the model's training data. How did they get into the training data? Because the training data comes from the internet. So the "bad" stuff is already on the internet, and whether it's also in the model matters less, since the knowledge is already available. I'm not saying AI safety is unnecessary, but I think it's less important than all the people talking about it believe. And the current champions of it sucked at it.