RAID - Responsible AI Disclosure
With the advent of easily accessible and usable “artificial
intelligence” models such as ChatGPT and Midjourney, more and more
work is being produced almost entirely by so-called “AI”.
Given the wide social and commercial interest in these models and their
outputs, the profound ethical implications of AI use, and the potential
impacts on employment, human agency, and society at large, there is an
urgent need for labelling of work that has been created with AI.
Experts have warned that — thanks to the proliferation of
software making use of AI APIs and open-source AI models — the internet
is being swamped with AI-generated content that readers (and search
engines!) find hard or impossible to distinguish from human-created
content.
What’s more, there is an urgent need for creators to be able to
watermark or otherwise disclose the fact that AI was not used in their
work’s creation. It is becoming increasingly popular for creators to
announce that their work was not made by or with the help of AI: they
know their audiences and customers see more value in content created by
humans. A watermark or disclosure that can be applied widely to work
made without AI will aid this effort.
For a transparent solution that helps viewers, users and purchasers of
digital works understand how AI was (or wasn’t) used in the creative
process, we propose the following digital disclosures or AI “licences”.
We call this Responsible AI Disclosure, or RAID.
RAID Disclosure types
RAID - AI was used to make this work
Use this disclosure when: AI was used extensively and primarily to
write or rewrite content (for example, to change tone or authorial
voice), to create art, or to substantially alter art.
Examples include but are not limited to: human-entered prompts with
light editing as the main human input, so-called deepfakes, or AI
voice-printing or cloning.
RAID-I - AI was used for the ideation of this
content.
Use this disclosure when: AI was used to generate ideas, brainstorm,
come up with title ideas, or similar activities around ideation, but
not for creating content directly.
RAID-E - AI was used in editing this content.
Use this disclosure when: AI was used to edit, summarise, or suggest
changes to otherwise human-created content.
Examples could include using ChatGPT to summarise a document, or
using AI-powered software to select the best-framed photos or
suggest cuts or edits to a piece of video.
RAID-P - AI was used to proof this work
Use this disclosure when: AI was used to proof-read, fact-check, or
suggest alterations to otherwise human-created content.
RAID-F - This content was created by a human user,
then substantially altered by AI as part of an automatic or software
process.
Use this disclosure when: A human creates content which is
subsequently altered by AI processes, e.g. photo or video filters.
Examples might include but are not limited to: TikTok or Instagram
“youth” filters, Photoshop “sky replacement” features, and the like.
NO-AI Disclosure types
NO-AI - No AI was used in the creation of this
content.
Use this disclosure when: The creator of the work can attest that
no AI at all was used in the creation of a given piece of work.
NO-AI-M - Non-digital creations
This work was created entirely by non-digital processes before
images, video, or other digitised versions of the content were
uploaded to a digital device. No AI was used in either the creation
or the digitisation of the work.
Use this disclosure when: A work was created entirely by hand or
manual processes, e.g. a drawing, a typewritten manuscript, a clay
sculpture, wood turned on a lathe, or a cabinet built with hand and
power tools.
NO-AI-C - No AI was used in the creation of this
work - with caveats.
Use this disclosure when: A given piece of content was written on a
system that used algorithms or processes not typically thought of as
AI, but still fitting its technical description, for extremely minor
changes such as spell-checking or predictive text input (examples
include Grammarly, Google Docs, and Gmail), or when the creation of a
piece of art used line-smoothing assistance powered by an algorithm
(examples might include Adobe Photoshop).
How to use the RAID and NO-AI licences / disclosures
These disclosures can be downloaded from this website and used to
identify to users or viewers of a given piece of content or artwork when
or how AI was used in the work’s creation.
We urge the creators of AI applications to include these disclosures as
watermarks or downloadable assets on directly AI-generated content, for
example when copying and pasting from ChatGPT.
Licensing
This system was inspired by
Creative Commons
and is licensed under CC BY-SA 4.0 (Attribution-ShareAlike 4.0
International). We encourage others to freely use and build upon this
work, even for commercial purposes, so long as credit is given to the
creators.
Potentially Asked Questions (PAQ)
Disclosure or licence?
We’ve used the term “licence” in a cheeky riff on the fact that, in the
commercial-creative world, when a commercial entity wants to use work
made by another creative — for example, music or a piece of art — they
are required to license it. This typically means an exchange of money or
services. The creative is compensated for their work, and the
corporation may use the work within agreed-upon bounds.
AI, currently, is different. AI has been trained on the work of humans
who, in almost all cases, did not give permission or intend for their
work to be used in this way. The artefacts “created” by AI are merely a
kind of regurgitation, often (correctly) referred to in jest as “spicy
autocomplete”. There is an argument that this method of training and
output represents mass plagiarism or theft. Getty Images certainly seems
to think so.
Be aware that RAID “licences” are not real licences and have no legal
effect in the current, almost regulation-free, AI environment. However,
they are real disclosures. We encourage those who have (and haven’t)
used AI in the creation of work to use them. We also like to think that
one day — hopefully soon — there will be a legal framework in place that
requires disclosure of the extent to which AI was used in the creation
of a work.
This is a cool project. Can I alter it, or build on what you’ve started?
Please do. The RAID/NO-AI concept is licensed under a Creative Commons
CC BY-SA 4.0 licence and we encourage development and remixing.
Iterations we would love to see include:
- Automatic inclusion of RAID disclosures in content created by LLMs
like ChatGPT, and automatic watermarking in AI image generators like
Midjourney.
- Digital disclosures in the form of identifying metadata that search
engines can use to classify content created by (or heavily assisted
by) AI; a minimal sketch of what this might look like follows this
list. Work (created after a certain date) that does not include this
metadata could be tagged as such in search engine results, or
downranked.
- Disclosure on TikTok / Instagram filters that were created with AI.
- Disclosure on social media posts — looking at you, LinkedIn — where
users have used AI to create posts.
- Remixing or creating new icons for the different disclosure “licence”
categories.
- We’re sure you can think of many, many more.
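To make the metadata idea above concrete, here is a minimal sketch, in Python, of what a machine-readable disclosure might look like when embedded in a web page. The meta-tag name raid:disclosure is a hypothetical convention invented purely for this illustration; it is not an existing standard, and a real scheme would need buy-in from search engines and platforms.

```python
# A minimal sketch of machine-readable RAID metadata for a web page.
# The meta-tag name "raid:disclosure" is a hypothetical convention used
# for illustration only; it is not part of any existing standard.
from html import escape

RAID_DISCLOSURES = {
    "RAID", "RAID-I", "RAID-E", "RAID-P", "RAID-F",
    "NO-AI", "NO-AI-M", "NO-AI-C",
}

def raid_meta_tag(disclosure: str) -> str:
    """Return an HTML <meta> tag declaring a page's RAID disclosure."""
    if disclosure not in RAID_DISCLOSURES:
        raise ValueError(f"unknown RAID disclosure: {disclosure}")
    return f'<meta name="raid:disclosure" content="{escape(disclosure)}">'

# Example: a blog post that was edited with AI assistance could declare RAID-E.
print(raid_meta_tag("RAID-E"))
# -> <meta name="raid:disclosure" content="RAID-E">
```

A crawler that recognised such a tag could classify, label, or rank the page accordingly, and the absence of the tag on newer content could itself be treated as a signal.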
Are you anti-AI?
No. There is much in AI technology that promises to be helpful or even transformative, and the creators of this site have both used AI. We are advocates for transparency in the use of AI, for the responsible use and development of AI, and for thoughtful regulation of the technology.
We encourage the creators of AI tools, such as OpenAI, Google, Meta, Stability AI, and others, to incorporate or otherwise use this framework (or a derivative of it, under the terms of the Creative Commons CC BY-SA 4.0 licence) so their users can transparently disclose when work was created by their AI-powered products.
Why disclose AI involvement in a given piece of work?
Labelling, disclosures, and licences have a long history, and their use is critical for enabling not only informed consumer choice but also supply-chain tracking and safety. For instance, many jurisdictions require clothing to be labelled with its country of origin. Other activities that have substantial downsides but are still considered vital for societal functioning and economic activity also require licences.
For instance, many countries require drivers to be licensed. If, as seems likely, content is to be produced by AI at scale, why should that activity not be identified?
How is this useful for artists, writers, and other creators?
It is rapidly becoming important and useful for artists and writers to disclose when their creations were not made by AI. An example is the No AI Art movement that took place across artist social media like Instagram and Tumblr. We believe that specifically identifying work as not being assisted by AI will make that work more valuable and noteworthy in a world where an increasing volume of content is AI-generated.
We envisage use of these disclosures / licences in, for example, the front matter of a book where copyright information is currently displayed, or on websites or social media apps where creative work is hosted.
I don’t want people knowing something was made with AI!
That’s not a question, but why not? Does that devalue it somehow? 🤔
Did you use AI to make this site or content?
No. If we had, it would be like rain on our wedding day. The draft copy was written in Google Docs, and the draft designs were done in Figma. As we wish to be the first users of the licence/disclosure system we’ve invented, let it be known that all content on this site is created under NO-AI-C.
Who made this?
This project was originated and developed by Joshua Drummond and Walter Lim, two New Zealand-based creatives. Josh is a writer, artist, and startup co-founder. Walter is a designer and UX engineer who makes and designs stuff when he feels like it sometimes.