AI disclosure is the wrong thing to worry about

Dirk Songuer
8 min read · Jun 24, 2024


The Cannes Lions Awards are pretty much the gold standard of the global branded communications industry. This year, they added a new rule about “Artificial Intelligence disclosure”:

We’ve introduced a compulsory question asking you to disclose whether you used AI in your work, and if so, how. This is to help the Jury judge the work fairly, with the full picture.
/ A Guide to the Cannes Lions Awards, Introducing an AI Disclaimer

Well, this is just meaningless. If you didn’t immediately check “Yes,” you are pretty much a liar.

What do you mean, “I used AI in my work”?

Whenever you take a photo with any digital camera, it’s optimized by some form of Artificial Intelligence. Machine learning algorithms remove bad pixels, automated adaptive procedures handle colour and lens correction, pixel binning happens based on algorithms, and much more. Have you ever wondered what software image stabilization is?
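To make “the camera already edits your photo” concrete, here is a toy sketch of one such correction: replacing a stuck sensor pixel with the median of its neighbours. This is not any vendor’s actual pipeline, just an illustration of the kind of silent, algorithmic fix-up that happens before you ever see the file.

```python
import numpy as np

def correct_hot_pixels(raw, threshold=4.0):
    """Replace pixels that deviate strongly from their neighbourhood
    with the local median -- a crude stand-in for the defect
    correction that camera firmware applies automatically."""
    padded = np.pad(raw.astype(float), 1, mode="edge")
    out = raw.astype(float).copy()
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            neighbours = np.delete(window.flatten(), 4)  # drop center pixel
            med = np.median(neighbours)
            mad = np.median(np.abs(neighbours - med)) + 1e-6
            if abs(out[y, x] - med) > threshold * mad:
                out[y, x] = med  # "stuck" pixel: quietly replaced
    return out

# A flat grey frame with one stuck-bright pixel:
frame = np.full((5, 5), 100.0)
frame[2, 2] = 255.0
fixed = correct_hot_pixels(frame)
print(fixed[2, 2])  # the outlier is gone before you ever see the image
```

The sensor recorded 255; the “photo” says 100. That edit happened with no human in the loop, which is exactly the point.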

Then there are tools like HDR, and other means of stitching together multiple images to create a “better” one. Or the ability to select your favorite faces in a group shot. Or a Magic Eraser. Or taking this to the max where Samsung automatically replaces entire objects with optimized versions.

This has been done for literal decades in digital photography, with camera manufacturers (in phones and DSLRs) directly baking some of this into the camera firmware. This is why we say that the iPhone or Pixel or Canon or Sony cameras have a certain recognizable “style”. It’s not like these have different film chemicals or anything.

There is no such thing as a digital photo that is an objective representation of a moment in time. Maybe there never was.

With video and film, it’s the same thing. Video cameras do all of that plus movement correction, tracking, and so on. Audio recorders, and sometimes even microphones, likewise apply filters, corrections, and more automatically. Bottom line: When you are done with your photo shoot or film or audio recording, the files on your SD card already “use AI”. Whenever you use a digital media capture device, it’s already too late. The call is coming from inside the house.

But even when you shoot in RAW to avoid “AI image manipulation” in hardware, you introduce it as soon as you open the files in software. People have started wondering why Photoshop was labelling their images as “Made with AI”. Well, that’s because most features in Photoshop use AI, and have for decades. Versions of generative fill in Photoshop are so old that people don’t associate them with AI, but that’s exactly what they are. Filters are typically just applied machine learning algorithms. The magic wand and smart select: same thing. Pretty much everything in Photoshop that includes the words “Auto-” or “Smart-” uses some form of AI. So, no, this label is not “wrong”; people have just not realized how much digital tools depend on some form of AI.
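For a sense of how unmagical the “magic wand” is, here is a minimal flood-fill selection in plain Python: grow a region from a seed pixel as long as neighbouring values stay within a tolerance. Real implementations are far more sophisticated, but this is the core idea behind the tool.

```python
from collections import deque

def magic_wand(img, seed, tolerance):
    """Select the contiguous region whose values are within
    `tolerance` of the seed pixel -- the essence of a magic wand."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    target = img[sy][sx]
    selected = set()
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in selected or not (0 <= y < h and 0 <= x < w):
            continue
        if abs(img[y][x] - target) > tolerance:
            continue  # too different from the seed: outside the selection
        selected.add((y, x))
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return selected

# A dark region on the left, a bright column on the right:
img = [
    [10, 10, 200],
    [10, 12, 200],
    [90, 90, 200],
]
print(sorted(magic_wand(img, (0, 0), tolerance=5)))
# → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Click a pixel, get a selection. It feels smart, but it’s a breadth-first search with a threshold.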

The situation is pretty much the same with film and audio software. Movies lean heavily on motion tracking, smoothing, and filters, especially once you get into VFX, 3D work, and simulations.

Even mundane software like Word and other word processors has had rewrite suggestions, autocorrect, predictions, and stylistic editing for years. And while we are at it, this text has been served to you through AI. I would bet you reached this page through some form of feed, whether from Medium itself, LinkedIn, or other social media. So, at every step along the way, I “used AI in my work”. Honestly, it’s pretty much impossible for me not to.
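Predictive text is a good example of how old and simple some of this “AI” is. A toy bigram model, counting which word tends to follow which, already produces next-word suggestions; the illustrative corpus below is made up for this sketch.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which -- the statistical core behind
    the predictive text that word processors have shipped for years."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the jury judges the work and the jury awards the work"
model = train_bigrams(corpus)
print(predict(model, "judges"))  # → "the"
```

Nobody called this “AI” when it shipped in phone keyboards; it only acquired the label once the models got bigger.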

So, dear Cannes Lions team, that disclosure is meaningless and will not “help the Jury judge the work fairly, with the full picture” at all.

Did you mean “I used a digital tool”?

The industry is pretty much in a state of panic when it comes to AI. It doesn’t even matter which industry it is; they all are. As the Financial Times wrote about this year’s Cannes Lions event:

AI has already started to replace some jobs, say insiders, such as in helping to quickly source images and mock up potential campaigns — work that would have once taken days now can be done in hours.
/ Financial Times, “Cannes Lions looks to laughter as ad industry feels threat of AI”

Ok, so one aspect of this disclaimer is: “Have you used really efficient tools to do this work that others might not have used or didn’t have access to?”

But the statement also hints at another issue: When people say “Artificial Intelligence” these days, they really mean “Generative AI” specifically. In January last year I wrote an article about how generative AI tools are essentially clipart: they raise the bar of creativity and expression for everybody, but only to a point. They make creating digital documents and outputs a little easier, more fun, and more engaging. This echoes the comments we see in the media and on social media.

Just like with clipart, people without any background in design, art, writing, or music can now create something better than they could before. Sometimes this leads to overconfidence: they now think they are on the level of a professional designer, artist, writer, or musician, without the capability to properly judge the output. This leads to claims that generative AI will replace everybody in the design, art, writing, and music industries.

“Generative AI will totally disrupt Industry X!”

/ Person not working in Industry X

People working in design, art, writing, and music understand that the direct output of generative AI tools is just not up to professional standards (and based on how it currently works, it’s unknowable whether it ever will be). But then again, it can be used in the process: to inspire, to challenge, to make very specific aspects just a little more efficient, fun, or maybe exciting. And of course you want to use that.

Computers automate. That’s literally why they were invented and what they are really, really good at. It’s not surprising that a computer suddenly does a repetitive, formulaic, procedural task way faster than a human can. Google image search had the same catastrophic effect on people sourcing images: before, you had to manually go through paper catalogues of images. It took hours or days to look for images and photos, select a few, present them, order them via mail, and so on. I’d argue generative AI is the same thing as an image search engine, only the images didn’t exist before the query.

But then, the disclaimer is still meaningless. Even if we ignore all other forms of AI and just focus on LLMs / Transformers / generative AI, what you are saying is: “I used a tool!”

What should we rather talk about?

By now you are probably screaming at your screen: “THIS IS NOT WHAT THIS DISCLAIMER IS ABOUT!!!”

I know.

My point is that such a disclaimer is meaningless as a way to address the actual problem we have with generative AI. Let’s look at what my statement above about “raising the bar of creativity for everybody” actually means:

Generative AI allows people with little or no creative skill to create “good enough” output that might be equal to or better than that of people entering the specific field. With little effort, this can be pushed to the point where untrained people or casual observers will perceive the output as “believably real”.

Generative AI has a hard ceiling, though. At some point, a professional in the field will be able to create better outputs. Again, there is debate about whether this ceiling is still moving upwards, but the math doesn’t check out.

As for professionals and experts, they can also use generative AI to accelerate a lot of work. Their curve now resembles a combination of the two: a fast ramp-up into a project, an eventual slow-down, then manually pushing the quality of the project further at their previous speed.

This curve has three implications.

One: It’s annoying that “normal” people can now do a little bit of what professionals spent years working hard for. Especially since these “normies” now claim that they are as good as you, that they don’t need you anymore, that you are obsolete.

Two: People lose their jobs. Either the job was below the output curve of generative AI and somebody ran the numbers and realized they didn’t need this many people doing “low skill” work. Or maybe they didn’t run the numbers, the job was above the output curve of generative AI, and there will be a rude awakening for the manager who believed the AI hype.

Three: Changing perspective, the curve means that people with little to no skill can create somewhat believable output at scale.

The emotional implications are bad. Even worse, replacing junior people with AI ignores the reality that you need to train people to become professionals. If you don’t invest in people doing the “simple repetitive stuff”, you quickly won’t have any skilled people left. For an industry, this further worsens the skilled labour shortage, which is really bad.

But it’s really the third one that is the issue here. Because vaguely believable content, created at scale, is currently used for disinformation and propaganda. And for that, you don’t need quality, you need quantity and an efficient content distribution system.

But if that’s the issue, then you need to have a different discussion. This is not about “whether you used AI in your work, and if so, how.” It’s about the realization that we might not be able to trust any kind of media online anymore. It’s about asking if we really need to have platforms that process and segment people at scale to serve them highly targeted content. It’s about media literacy, authenticity and artificiality in media, and how the message within the content needs to be contextualized.

It’s also a philosophical discussion around “What is a photo?” (Not sponsored, but seriously, listen to the Vergecast, where they have been discussing these questions for years now.)

None of these questions are technical. None of these questions are answered with a metatag that something was “Made with AI”. These are deeply human questions, discussed at society scale.

So, bringing this back to the Cannes Lions, here is what I’d rather want to see next year:

We’ve introduced two new compulsory questions instead of the previous AI disclaimer. This year, we are asking you to disclose whether you authentically represented (brand) reality to your audience, or whether you tried to manipulate sentiment by presenting knowably false claims or information as fact.

Please also state your ratio (n:n) of basic work delivered via generative AI and other tools like outsourcing versus basic work done by apprentices and new talent.

This is to help the Jury judge your brand and work ethics fairly, with the full picture.

--

Dirk Songuer

Living in Berlin / Germany, loving technology, society, good food, well designed games and this world in general. Views are mine, k?