Obfuscation and blockchain contracts: Artists try to stop AI ripping them off

As brands rush to exploit image-generating AI models such as Dall-E 2 and Midjourney, a growing number of critics are speaking out against their plagiarism of the works of (human) artists. We take a look at the debate as part of The Drum’s latest Deep Dive, AI to Web3: the Tech Takeover.

In a very short time, artwork generated by artificial intelligence has exploded into popular culture.

Generative AI – a term that refers to a class of artificial intelligence models designed to produce images, text and other forms of content – has become the buzz phrase du jour across much of the marketing world in recent months (you can read our full guide to essential AI terms). Inspired by the capabilities and rapid rise of models like ChatGPT and Midjourney, a bevy of big brands – from Coke to Instacart to Snapchat – have rushed to stake their claims in the booming generative AI sector.

But in the glare of the current generative AI craze, it can be easy to overlook a dark spot: the negative effect the technology is having on some artists.

All AI and machine learning models are trained on massive amounts of data. In the case of image-based generative AI models such as Midjourney and Dall-E 2, much of that data consists of images scraped from the internet. That means that when a user asks one of these models to create, say, a “hyper-realistic, sci-fi, digital image of a robotic bird flying over a dystopian cityscape,” the model will generate an “original” work of art that is potentially derived, in large part, from the work of human artists.

Moreover, many generative AI models can create such an image in seconds, while a similar piece may require months of painstaking work by a human artist. And many of these AI models are completely free to use.

How can artists hope to compete with this new reality? And perhaps even more pressingly, is there anything artists can do right now to protect their intellectual property—and their revenue—in the age of generative AI?

“I thought, what’s the point of trying?”

Deb JJ Lee, a freelance illustrator currently living in Brooklyn, New York, remembers the moment they first discovered their work had been stolen by a machine. “A friend of mine reached out and said, ‘Hey, I noticed someone feeding your work into a [generative AI] model, and they didn’t tell anyone who the artist was, but it was clearly yours,’” says Lee.

“I looked at it and my stomach dropped … I can literally point out which piece was copied from which piece.”

For Lee, the experience of having their artwork poached from the internet and recycled by an AI model made a mockery of the passion and craft they had devoted much of their lives to. “It’s really sad. When you’re an illustrator or artist, shaping your craft and your voice is a lifetime achievement, a lifelong work in progress. You never stop … When you illustrate, you use everything that your background gives you … Your inspirations for the way you draw can be your childhood, books, your past traumas. So when an AI just takes it and spits it out, it’s just like, ‘Are you serious, dude?’ Like, AI doesn’t have trauma.”

Lee’s artwork is striking in its use of vibrant colors. In many of their pieces, DayGlo tones pop off the page, filling the viewer with a sudden, energetic buzz. Seeing their work digested by a generative AI model temporarily took the color out of the vision Lee had for their career. “I was just very sad. For a while I thought, ‘What’s the point of trying?’”

Lee is not alone in their critical stance on generative AI (“I think for humanity it’s a bad thing,” they say). Earlier this year, three artists filed a lawsuit against Midjourney, Stability AI and DeviantArt – three companies that have launched image-generating AI models.

Potential solutions

It often takes the law a while to catch up with technological innovation. In some cases, it is not until long after a technology has become highly developed and intertwined with many people’s daily lives that lawmakers and regulators have a chance to understand the changes that have taken place and respond accordingly.

This has certainly been the case with social media and big tech, as illustrated most recently by TikTok CEO Shou Zi Chew’s congressional hearing. And at this point, it seems likely that a similar pattern will play out around generative AI.

“Artists are generally protected from simple copying by the existing legal framework, whether it’s done by a machine or a human,” says Matthijs Branderhorst, a lawyer who specializes in technology patents, among other things. “The unknown territory we [currently] find ourselves in with generative AI is that the original work from artists is used to train AI algorithms.”

Some believe a potential solution can be found in blockchains – immutable digital ledgers that ensure full transparency for all parties involved in a particular transaction. Through the use of blockchains, the thinking goes, artists can set up smart contracts – legally binding, blockchain-based contracts that automatically take effect as soon as certain predetermined conditions are met – to ensure that they are compensated, or at least notified, whenever another artist (human or machine) borrows their work.

“The way blockchain works is you literally can’t post or use an image if it has a smart contract and you haven’t complied [with the terms of that smart contract],” says Mark Long, author and CEO of the gaming company Neon. “The contract goes wherever the picture goes. I think it gives the power back to the creators … if we could get these licensing agreements in place, then the revenue goes directly to the creator.”

The specifications of smart contracts, Long adds, can be tailored by individual artists: “Maybe the artist never wants anyone to be able to use it, or they want them to be able to look at it but nothing else – they can specify that in their smart contract.”
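To make the idea concrete, here is a rough sketch of the licensing logic Long describes, written in plain Python rather than an actual on-chain contract language. Everything in it – the LicenseTerms fields, the request_reuse function, the wallet address – is a hypothetical illustration, not a real blockchain API; on an actual chain, the fee transfer would happen atomically as part of the contract call.

```python
# Hypothetical sketch of smart-contract licensing logic, in plain Python.
# Names and fields are illustrative assumptions, not a real contract standard.
from dataclasses import dataclass


@dataclass(frozen=True)
class LicenseTerms:
    artist_wallet: str          # where any payment should be routed
    allow_viewing: bool = True  # e.g. "they can look at it, but nothing else"
    allow_reuse: bool = False   # artist may disallow any reuse outright
    reuse_fee: float = 0.0      # fee in some token, if reuse is permitted


def request_reuse(terms: LicenseTerms, payment: float) -> str:
    """Mimics a contract call: reuse succeeds only if the artist allows it
    and the attached payment covers the fee they specified."""
    if not terms.allow_reuse:
        raise PermissionError("Artist has disallowed reuse of this work.")
    if payment < terms.reuse_fee:
        raise ValueError(f"Payment {payment} is below the required fee {terms.reuse_fee}.")
    # On a real blockchain, the fee would be transferred to the artist here,
    # automatically, as part of the same transaction.
    return f"License granted; {payment} routed to {terms.artist_wallet}"


# Example: an artist who permits reuse for a 0.05-token fee.
terms = LicenseTerms(artist_wallet="0xARTIST", allow_reuse=True, reuse_fee=0.05)
print(request_reuse(terms, payment=0.05))
```

The point of the sketch is the shape of the mechanism: the terms travel with the work, and the conditions are checked and enforced automatically rather than through after-the-fact litigation.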

Sounds simple enough. But how can the use of smart contracts be integrated into broader copyright laws, which vary from region to region? Put another way: if more and more artists were to start using smart contracts to protect their work from generative AI models, could this practice be incorporated into existing legal frameworks?

Branderhorst, the patent attorney, is skeptical: “The decentralized nature of the blockchain and the distribution of nodes across many different jurisdictions would [make it] difficult to accommodate the territorial nature of copyright law.”

Another solution – one that Lee, the Brooklyn-based illustrator, is currently using – is a technology called Glaze, developed by a team of researchers at the University of Chicago. As the name suggests, Glaze adds a subtle layer of distortion to artwork, making it difficult for generative AI models to identify and mimic the distinctive elements of a particular artist’s style. It’s like putting on aviator sunglasses in front of a retinal-scanning device: the layer of obfuscation is thin, but it’s enough to throw the device off.
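For intuition only, here is a toy sketch of the general idea of a bounded, near-imperceptible pixel perturbation. This is not the Glaze algorithm – Glaze computes its perturbation by optimizing against a model’s feature extractor so that the cloaked image reads as a different style to a machine – but it shows how small the pixel changes involved can be. The cloak function, the epsilon bound and the file names are illustrative assumptions.

```python
# Toy illustration of a bounded image perturbation (NOT the Glaze algorithm).
# Glaze optimizes its perturbation against a feature extractor; here we add
# small random noise purely to show how little the pixels need to change.
import numpy as np
from PIL import Image


def cloak(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Perturb each RGB channel value by at most +/- epsilon (out of 255)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed=0)
    noise = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    # Clip back to the valid pixel range. At epsilon=4 the change is invisible
    # to a human viewer, even though nearly every pixel value has shifted.
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)


cloak("artwork.png", "artwork_cloaked.png")  # hypothetical file names
```

The human-visible image is essentially unchanged; the protection comes from the fact that a model sees the image very differently than a person does – which is why Glaze’s carefully optimized version of this idea, rather than random noise, is what actually disrupts style mimicry.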

Why should all this matter to marketers?

Generative AI poses a new dilemma for marketers: on one hand, it is a radically transformative technology, potentially saving enormous amounts of time, energy and resources; on the other, it poses an existential risk to the livelihoods of many artists.

How, then, should brands enter this brave new (AI-generated) world?

First, it seems clear that any brand that works with human artists – an illustrator like Lee, for example – should engage those artists in conversation before launching a campaign that might lean on AI-generated art. As this technology spreads, so will artists’ awareness of its potentially harmful effects. A carelessly launched generative AI campaign risks alienating a brand’s (human) artist collaborators.

“As a guy who uses art [in his work], I’m sympathetic to artists and I want to make sure they get compensated, so I’m going to choose [generative AI platforms] where I know the artists are protected,” says Neon’s Long.

Additionally, as the lawsuit mentioned above should make clear, generative AI raises some legal—as well as ethical—concerns. Any marketer considering integrating generative AI into a campaign would do well to pay close attention to the laws that are likely to grow and develop around this new technology – including perhaps laws intended to protect artists’ intellectual property.

“It will take some time for the legal landscape to settle [and] more lawsuits will probably follow,” says Branderhorst. “But hopefully a balance will be found where both sides [the companies building generative AI models and human artists] win – and not just the lawyers.”

For more on the latest happenings in AI, web3 and other cutting-edge technologies, check out The Drum’s latest Deep Dive – AI to Web3: the Tech Takeover. And don’t forget to sign up for The Emerging Tech Briefing newsletter.
