Short Story: Veritas and Mendacium

I remember when Veritas was launched. It was at a time when the fake news industry started drowning the web. And it was an industry, make no mistake, much like spam emails or those fake tech support calls. It all started when some clever people noticed there was a lot of money in gaming our society with content, especially news. Which was ironic, because traditional newspapers had been struggling with this for years. Mostly because of staffing costs. Everything of quality takes time and money, right? Well, if you eliminate facts and just concentrate on creating the most outrageous, polarizing, grotesque click-baity content? Turns out that people will read it, regardless of whether it is true or not. And if the fabrication is just close enough to reality, just a bit, feeding into their very human sense of paranoia, reinforcing that something is wrong with this world, then they will even believe it. Spread it. Support it. Own it. And thus the fake news industry could leech money from it.

Oh, it started slow and innocent, but before we knew it, there were entire content factories creating fake news in all kinds of languages for all kinds of topics: in the US and Europe, but also in Russia, India, China, Indonesia, even Pakistan. This wasn’t about politics or a new world order, it was about money, plain and simple. Whatever topic performed best was remixed — faster, higher, more terror, criminals, turmoil, angst and Armageddon. The world wide web became a constant stream of catastrophe, misogyny and fatality that fueled hate and anger all over the world. But you won’t believe what happened next.

A couple of Google engineers built a system in their 20% time that could take any article or factoid and, in real time, tell you whether its contents were true or not. Or rather, how true individual facts were. It based the score on a huge library of knowledge, including history, science and most global news media archives, plus the Google cache in general.

They called it Veritas. Cute, right?

The theory sounds simple: first, Veritas used a number of methods to cross-check the claims contained in a piece of content. For this it would tap into factual databases fed by different sources to identify obvious falsehoods. You know, like that presidential inauguration crowd thing a couple of years ago.

For the next verification steps, Veritas got clever. The assumption was that if history is bound to repeat itself, we should be able to abstract what other groups and societies did in similar situations and derive from that the most likely tendency. In a way, Veritas was able to “see” the dynamics of local and global societies and rate the likelihood of events that might happen within them.

Pretty soon the Veritas Score was introduced to Google Search, indicating which content sources were telling the truth and which were not, clearly marking everything with a “truthiness rating”. Once mature, the system was wired into the page ranking, so pages scoring mostly as untrue would rank lower. It was aimed against the fake news industry, but since fake news spread mostly on social media channels, it neither affected the industry’s popularity nor its earnings much.

As more social networks got interested in Veritas, the whole system was put in the public domain under the control of an NPO not affiliated with Google, so that it could be used by anybody. Within three short months it was incorporated into the big social platforms like Facebook and Twitter. Ecstatic cheers erupted from the tech community as fake news sources were marked, muted, hidden and blocked.

Not everybody was happy, though. Alt-right groups saw Veritas as an attack on free speech and tried everything to discredit the Veritas Organization. Liberals voiced concerns that scoring information was one thing, but blocking it went too far. Scientists and philosophers noted that there was no such thing as objective truth, and thus the Veritas score was meaningless anyway. And the tech community just pointed at spam and how nobody complains that spam filters violate free speech. So for the first time, the flood of fake news seemed manageable. Sure, some dubious stories still made it through the filter, but overall the web was a quieter, more serene place.

I also remember when Mendacium announced itself. Well, everybody remembers. Nobody knows who created it or what it was originally called. The name Mendacium was first used in the official UN investigation report and somehow stuck. Kind of fitting, don’t you think?

They traced its development to a message board called “Alt-chan/newsluls”. It started its life as a news optimizer AI. You see, fake news creators didn’t write all their content themselves. Instead, they entered keywords and text templates into a generator AI, which then created a number of different articles from them, including translations. Sophisticated generators even auto-created meme images and videos. But Mendacium was a different beast. Fed by the news streams of the world, it constantly probed Veritas with alternative versions, learning what it would rank as true and what it would reject. It also measured public interest in topics and trends. And finally, it was able to create completely fabricated news, including variations and spin-offs, across web, news and social platforms. Some say it was meant to be a better news optimizer or ghostwriter. I don’t believe that for a second. Too much deliberate deception. That social proof generator it used? Scary stuff. But I also don’t believe the authors could have predicted what it would actually do.

On Friday, January 22nd, 21:33 PST, Mendacium was activated on an auto-scaled instance of a cloud service provider in West US. It quickly analyzed the news stream and created multiple articles around terror attacks in Jordan and Israel, which were immediately rated as authentic and true by Veritas. Facebook activated their safety features in the region as Mendacium-owned Twitter accounts posted pictures of victims and explosions. While everybody was scrambling to understand what was happening, the system launched a second wave of faked articles about Israel's plans for retaliation in Palestine and the annexation of Gaza, again backed by fabricated “live videos” on social platforms showing shaky footage of tanks, helicopters and fighter planes attacking cities. (Fake) cries for help flooded social platforms, as did (real) demands for retaliation. Some hacker, likely associated with the Alt-chan/newsluls group, managed to switch off parts of the telecommunications network in the Gaza Strip. People started to share the news. And then panic. And that was it, really. Turns out all it took to fool Veritas was a very violent but realistic what-if scenario. We created it in our image and of course it would believe our own darkest nightmares.

Verified by high Veritas scores, governments all around the world watched a simulated war in the Middle East that didn’t take long to turn into a real one. Since it’s hard to differentiate between real and fabricated reports, we don’t actually know who fired the first shot. Or who threw the first bomb. We don’t even know who fired the first nuclear device. Pakistan? India? Maybe Israel? Who cares. All we know is that while Mendacium played out its war scenario, reports of armed conflicts spread all over the Middle East, South Asia, finally reaching the Koreas.

Today, digital historians use Russian and Chinese digital platforms to try to piece together a coherent timeline of what really happened. Since Mendacium focused on US-based, western social platforms, their eastern equivalents are less polluted. However, you should be careful what you believe. Big parts of the Middle East are now under Indian or Russian control. Korea is now China. Both regions contain large radioactive areas where not even brave robots dare to venture. But maybe we’re still lucky, as neither Russia nor the US pressed the big red buttons.

As for Mendacium — after ramping up more and more instances and processing power at the cloud service provider, it exhausted the account’s credits and went into hibernation three hours, eleven minutes and thirty-two seconds after it was launched. It didn’t even see the world burn. Funny, there are still groups worshipping it as the first true AI prophet. After all, everything it reported turned into reality. And Veritas? It was ultimately seen as a flawed attempt to solve a social problem with technology. The farewell note on its website says that people should think and judge for themselves instead of trusting a system to tell them what is right or wrong. Kind of obvious in hindsight.

--

Dirk Songuer

Living in Berlin / Germany, working at Microsoft, loving technology, society, good food, well designed games and this world in general. Views are mine, k?