
Policy Fellow to speak on FTC roundtable about “Creative Economy and Generative AI”

October 4, 2023

Screenshot of video recording: “Creative Economy and Generative AI”

Click the thumbnail for a link to the recording, which includes proprietary JavaScript.

Software Freedom Conservancy's Policy Fellow Bradley M. Kuhn participated today in the FTC's roundtable discussion about the “Creative Economy and Generative AI”. Bradley represented the FOSS and independent software authorship communities on this panel. Bradley joined the voices of artists, union activists, and other policy makers to discuss the pressing issue of how machine learning impacts the rights and livelihoods of artists, technologists, and others. We thank the FTC for putting the issues of software freedom and rights front and center in this important mainstream issue.

Given the increasing prevalence of machine learning technologies, SFC applauds the FTC's efforts to convene creatives, technologists, and forward-thinking policy makers concerned by the lack of regulation and oversight around the deployment of machine learning platforms. There have been significant conversations and coverage representing the large corporate interests surrounding AI technologies, but we hope this panel highlights the needs and concerns of the labor force and the general public. This panel lifts up the voices of those affected by the overreach of corporations seeking to profit off of the labor behind existing works.

SFC has written and spoken previously about concerns around AI: we created a committee to examine AI-assisted software creation, Executive Director Karen Sandler keynoted a conference on AI Law and Ethics, we hosted a track at the first annual FOSSY conference, and Policy Fellow Bradley M. Kuhn has written about the licensing and ethical concerns around GitHub's Copilot.

You can watch the recording of the discussion and find more information about the panel on the FTC's events page.


Below, we include in its entirety Bradley's opening statement at the event:


First, I'd like to thank the FTC for organizing this panel. It's humbling to be here among these key individuals from such a broad range of important creative endeavors.

Folks will notice that I'm not appearing by video today, and I again thank the FTC for providing a method for me to join you today without requiring that I agree to Zoom's proprietary terms and conditions. As a matter of principle, I avoid using any proprietary software, but in this case, it is not merely esoteric principle. Zoom is among the many Big Tech companies that have sought to cajole users into allowing their own user data to be used as training input for machine learning systems. If consumers take away anything from my comments today, I hope they remember to carefully read the terms and conditions of all software platforms they use, as they may have already agreed for their own creative works to be part of the company's machine learning data sets. It may take you a week to read all those terms, but it's sadly the only way you'll know what rights you've given away to Big Tech.

The creative work that I focus on, however, is the source code of software itself. Software is unique among creative endeavors because it is so easy to separate the work that's created by humans (which is the source code), from the form of the work that's enjoyed day-to-day by consumers (which is the compiled binary). I'm an activist in the area of software freedom and rights specifically because I believe every consumer deserves the right to examine how their software works, to modify, improve and change it — be it altruistically or commercially. Free and Open Source software (abbreviated FOSS) aims to create, through licensing and other means, an equal field for all software professionals and hobbyists alike, and to grant rights to consumers so they have true control of their own tools.

For 30 years, our community has created FOSS and made it publicly available. Big Tech, for its part, continues to refuse to share most of its own software in the same way. So, as it turns out, nearly all the publicly available source code in the world today is FOSS, and most of it is licensed under terms that are what we call copyleft: a requirement that anyone who further improves or modifies the work must give similar permissions to its downstream users.

This situation led FOSS to become a canary in the coal mine of Big Tech's push for machine learning. Hypocritically, we've seen Big Tech gladly train their machine learning models with our publicly available FOSS, but not with their own proprietary source code. Big Tech happily exploits FOSS, but they believe they've found a new way to ignore the key principles and requirements that FOSS licenses dictate. It's clear Big Tech ignores any rules that stand in the way of its profits.

Meanwhile, Big Tech has launched a campaign to manufacture consent about these systems. Big Tech claims that the rules, licensing, and legislation that have applied to creative works since the 1800s in the United States are suddenly moot simply because machine learning is, in their view, too important to be bogged down by the licensing choices of human creators of works. In the FOSS community, we see this policy coup happening on every level: from propaganda to consumers, to policy papers, to even law journal articles.

I realize that I sound rather pessimistic about the outcomes here. I'm nevertheless hopeful sitting here in this panel today, because I see that so many of my colleagues in other fields are similarly skeptical about Big Tech's self-serving rhetoric in this regard, and I hope we can work together to counter that rhetoric fully.

The FTC asked Bradley this question:

What kind of insight do you feel like you have now into how your work or likeness is being used by generative AI systems, and what kind of transparency do you feel is needed?

to which Bradley responded:

First of all, there is now no question that the body of copylefted FOSS makes up a huge part of the training data for machine learning systems that assist software development. Big Tech is also playing cat and mouse, simply excluding on the back end the most egregious examples of copyright infringement that are found.

We now know Big Tech has disturbingly found a way to take a transparent body of freely shared information on the Internet and exploit it in secret. We simply shouldn't accept that as legitimate, and there is no reason that Big Tech shouldn't be regulated to make these systems transparent — end to end.

In my view, the public should have access to the input set, have access to the source code of the software that does the training and generation, and most importantly, access to the source code that does these forms of back-end exclusion, which will hopefully expose the duplicity of Big Tech's policies here.

Finally, I expect that once we have real transparency, it will bear out what many of the other speakers today also noted: that the issues with these machine learning systems can't be solved merely with a financial compensation model for creators. FOSS shows this explicitly: most FOSS is written altruistically, and the compensation its authors seek is the requirement that future improvements return to the commons, not financial compensation. We really need full transparency in these systems to assure that essential non-monetary license terms and consumers' rights are upheld.
