<p><span class="p-body">Under the guidance of Clinical Associate Professor Mary Beth Altier, grad students in the NYU SPS Center for Global Affairs' MS in Global Security, Conflict, and Cybercrime program collaborated on a project for the US State Department's Global Engagement Center to propose guidelines and best practices for the authentication, detection, and labeling of AI-generated content.</span></p>
<p><span class="p-body">Students Marina Harmon, Dylan Labita, Nikkie Lyubarksy, Ann Mathew, Liam McLeod, and Hugo Neuille presented their findings at the NYU SPS May 2024 Capstone Fair. The group generated two briefs focusing on the US, UK, EU, and China. The first brief presented countries' varying strategies for authenticating, detecting, and labeling synthetic content. The second analyzed current practices of authentication, detection, and labeling in the private sector, suggesting guidelines, implementation requirements, and policies aimed toward increasing the transparency of synthetic content—both within the tech sector and for the general public.</span></p>
<p><span class="p-body">Prompted by President Biden's Executive Order 14110, which seeks to address potential risks posed by synthetic content, develop guidelines for federal agencies to authenticate content, and subsequently label AI-generated content via watermarking, the students conducted a systematic review of the literature on the effects of misinformation and the effectiveness of warning labels, along with an analysis of the possible advantages, limitations, and consequences—both intended and unintended—of such labeling. This study provides the US government with a valuable point of reference for developing and implementing guidelines and potential regulations on tech companies.</span></p>