Truth is under attack. It always has been, of course, because truth has always been a mortal enemy of those who seek to seize or keep power. But the amplification of the phenomenon by today's information technology is extremely worrisome. AI can now generate fake videos and images that even experts struggle to flag as such. Combined with the greater news value and propagation potential of false information, compared with typically less attention-grabbing true facts, this has created an explosive situation. What to do?
I think we can take social media as a playground for an experiment. Imagine that, whenever you posted something on Facebook containing data, claims, or other kinds of news, you were prompted to provide supporting links to the sources of the information you wish to share. You could then decide to include supporting links or omit them. If you omitted them, your post would still be published, but with a different background colour and a label "not fact-checked" accompanying it. If, instead, you provided links, the post would undergo automated fact-checking by an LLM, which would compare the presented data with those in the sources and evaluate the trustworthiness of the latter. If either check failed, the post would be published with a "failed fact-checking" flag. Otherwise, it would be published with a "fact-checked" flag.
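The labeling logic described above can be sketched in a few lines of code. Here is a minimal illustration in Python; the `llm_fact_check` callable is a hypothetical placeholder for the actual LLM-based comparison of claims against sources, which is of course the hard part.

```python
def label_post(text, source_links, llm_fact_check):
    """Return the label a post would be published with.

    llm_fact_check is a hypothetical callable that takes the post text and
    its source links and returns two booleans: whether the claims are
    supported by the sources, and whether the sources look trustworthy.
    """
    if not source_links:
        # No sources offered: the post is published anyway, but visibly flagged.
        return "not fact-checked"
    # Sources offered: compare the post's claims with the linked sources
    # and judge the trustworthiness of those sources.
    claims_supported, sources_trustworthy = llm_fact_check(text, source_links)
    if claims_supported and sources_trustworthy:
        return "fact-checked"
    return "failed fact-checking"


# Example with a stub checker standing in for the real LLM call:
stub_check = lambda text, links: (True, True)
print(label_post("GDP grew 2% in 2023", ["https://example.org/stats"], stub_check))
print(label_post("Aliens built the pyramids", [], stub_check))
```

The point of the sketch is that the platform-side logic is trivial; all the difficulty and imperfection lives inside the fact-checking call.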
Such a simple model would not be very hard to develop, and it would certainly not be perfect. But it would offer immediate guidance on the trustworthiness of the published content. It would also not interfere critically with the business model of the social media platform, although I suspect it would decrease overall traffic toward big fake-news promoters, thereby potentially also decreasing the profits of the service provider. However, tax incentives could be used to make this viable.
If an experiment like the above proved that, just as AI tools can be used to generate fake content, they can also screen us from it, we would have made a small step toward reducing the falsehood content of the internet. It pains me to see how such an extraordinary tool as the world wide web has been hijacked and exploited, and how its revolutionary value as an educational tool is diminished by the individuals, corporations, and governments that exploit it for their own gain.
Why aren't we moving in this direction? I suspect I know too little about this, so I would love to hear your comments in the thread below.
Restoring The Value Of Truth
Related articles
- Twitter Is The Place To Go For Fake News
- Youtubers Earn $1 Million A Year (Estimate) For Doomsday Videos - Teens Panic And Become Suicidal - Time To End Ad Support
- We Don't Need Facebook Authoritarianism: Casual People Spot Fake News As Well As Paid Fact Checkers
- How Does The New York Times Get 'How Scientists Got Climate Change So Wrong' So Wrong? - Fact Checking Their Climate Articles
- Transparency Weaponized Against Scientists