Meta “programmed it to simply not answer questions,” but it answered anyway.
Hallucinating is a fancy term for BEING WRONG.
An unreliable bullshit generator is still unreliable. Imagine that!
AI doesn’t know what’s right or wrong. It hallucinates every answer; it’s up to the supervisor to determine whether each one is correct.
Mathematically verifying the correctness of these algorithms is a hard problem. That’s intentional: it’s the trade-off for the incredible efficiency.
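To make that concrete, here’s a minimal sketch of the “supervisor decides” pattern being described, assuming a hypothetical generate() stand-in for whatever model API you actually call (it isn’t Meta’s assistant or any real library): the model proposes an answer, and a human accepts or rejects it.

```python
# Minimal sketch of a human-in-the-loop review step.
# `generate` is a hypothetical placeholder, not a real API.

def generate(prompt: str) -> str:
    """Pretend model call: returns a fluent, confident answer
    with no built-in notion of whether it is true."""
    return "some plausible-sounding, possibly wrong answer"

def answer_with_review(prompt: str) -> str:
    draft = generate(prompt)
    # The model cannot verify its own output, so a human reviews it.
    print(f"Q: {prompt}\nDraft answer: {draft}")
    verdict = input("Accept this answer? [y/N] ").strip().lower()
    return draft if verdict == "y" else "No verified answer available."

if __name__ == "__main__":
    print(answer_with_review("What happened at the rally on July 13?"))
```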
Besides, it can only “know” what it has been trained on. It shouldn’t be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn’t know how to use these models.
Um. Have you ever talked to a human being?
Human beings are not infallible either.