D&D General D&D AI Fail

Twitter thinks there's a new WotC president who will give you a baby dragon.

I guess I don’t have to worry about my job going away quite yet. This is what Twitter’s AI thingy thinks is currently happening in the industry I work in.

[Attached screenshot: Screenshot 2024-04-19 at 17.21.30.png]


EzekielRaiden

Follower of the Way
That is the problem in a nutshell.

Currently, AI can't tell the difference between "trending" and "true."
Because it can't. It is physically incapable of doing so, and unless a radical new development occurs, this will not and cannot change.

Whether or not something is factually true is part of semantic content: the meaning of the statement. LLMs and (to the best of my knowledge) all other current "AIs" have no ability whatsoever to interact with or process semantic content. They can address syntax, which can be very powerful and do some very interesting things,* but they cannot even in principle address purely semantic content like truth-value. As some researchers have put it, AIs are "confidently incorrect."

The only way to teach an AI how to avoid this would be to train it to only truly listen to trusted sources, and then it would only be as reliable as the sources it drew upon—and would have some issues if those sources are too few, as it might not have enough training data to spit out meaningful results. In theory though, you could make one designed to collate and summarize existing news reports.

*E.g. I recently learned that in the high-dimensional vector space of the tokens for GPT, if you take the token vector for "king" and add to it the vector that points from the token "male" to the token "female," you actually get relatively close to the token for "queen." That means the vector "female - male" in some sense encodes the syntactic function of gender in the English language, which is pretty cool.
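You can sketch that vector arithmetic in a few lines. The 2-D vectors below are made up purely for illustration (real GPT token embeddings have thousands of dimensions, and the axis labels here are my invention), but they show the mechanics: subtract "male" from "female," add the result to "king," and the nearest token is "queen."

```python
import math

# Toy embeddings on hypothetical (royalty, gender) axes -- NOT real GPT values.
emb = {
    "king":   (1.0,  1.0),
    "queen":  (1.0, -1.0),
    "male":   (0.0,  1.0),
    "female": (0.0, -1.0),
}

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# king + (female - male): shift "king" along the gender direction.
result = add(emb["king"], sub(emb["female"], emb["male"]))

# Pick the vocabulary token closest to the resulting vector.
nearest = max(emb, key=lambda w: cosine(result, emb[w]))
print(nearest)  # queen
```

With real embeddings the result only lands *near* "queen" rather than exactly on it, which is why libraries that do this (e.g. word2vec-style tooling) return a ranked list of nearest neighbors instead of a single exact match.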
 


