Neural Net Analogy
Training Set
Consider a typical modern deep learning network, and the difference made by the choice of what it is trained on: what we call its training set. I will phrase things in terms of 'asking a large language model', though I mean this in a general sense where 'asking a model for an image' may involve pictorial as well as textual input. When we make a request to a model, and that request falls reasonably within the compass of its training set, then, provided the model is well trained, it will give a reasonable output. If, however, the input is far from what the network was trained on, the output will likely be garbage.
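To make the point concrete, here is a minimal sketch, assuming PyTorch; the model size, training range, and hyperparameters are illustrative choices of mine, not anything from the text. A small network is trained to fit sin(x) only on the interval [-π, π]. Queried inside that range it answers well; queried at x = 10 it still produces an output, but one unrelated to the true function.

```python
# Minimal sketch of in-distribution vs. out-of-distribution behaviour
# (assumes PyTorch; all choices here are illustrative).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training set: inputs drawn only from [-pi, pi].
x_train = torch.linspace(-torch.pi, torch.pi, 256).unsqueeze(1)
y_train = torch.sin(x_train)

# A small feed-forward network.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

with torch.no_grad():
    # In-distribution query: the prediction tracks sin(1.0) closely.
    print(model(torch.tensor([[1.0]])).item(),
          torch.sin(torch.tensor(1.0)).item())
    # Out-of-distribution query: the model still answers confidently,
    # but the answer bears no relation to sin(10.0) -- garbage output.
    print(model(torch.tensor([[10.0]])).item(),
          torch.sin(torch.tensor(10.0)).item())
```

Note that the model never refuses or flags the out-of-range query; it simply does its best with machinery tuned for a different regime, which is the behaviour the analogy below turns on.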
In the context of psychosis, the internal state of the brain, in terms of the signals flowing around its neurons and possibly the quantities of neurotransmitters present, is well outside what that brain is used to. It is as if the brain is being asked questions well outside the compass of its training set. In this situation, a person's brain does its best to interpret its state, but the result will likely appear very abnormal to an outsider. How the brain gets into such a state is a separate question from how it behaves in such a state, how to get it out of such a state, and how to prevent it getting back into such a state in future.