The Challenges and Consequences of Generative AI: Lessons from Web 2.0


Generative AI has boomed in recent years, with products like OpenAI’s ChatGPT capturing public attention and setting adoption records. That surge has also prompted growing concern among governments and lawmakers: the US Federal Election Commission, Congress, and the European Union have all initiated efforts to address the technology’s potential pitfalls. But these challenges are not entirely new; they bear a striking resemblance to the issues that have plagued social media platforms for years.

 

In this article, we explore the parallels between generative AI and the problems that social platforms faced during the Web 2.0 era. We delve into deceptive campaign ads, the development and labeling of training data, and the unintended consequences of outsourcing. We also examine how AI companies and social media platforms have responded to criticism, and how generative AI accelerates the proliferation of disinformation. Finally, we discuss the need for effective regulation and the societal implications of the fast-paced development of AI technologies.

 

The Parallels between Generative AI and Social Platforms

 

Problematic Infrastructure and Outsourcing

 

Generative AI companies often build on the problematic infrastructure established by social media platforms. Those platforms, including Facebook, came to rely on outsourced content moderation workers to handle hate speech, nudity, and violence; generative AI companies now tap the same workforce, often subjecting it to low pay and difficult working conditions. Outsourcing these critical functions also distances oversight from the systems themselves, making it hard for researchers and regulators to understand how they are developed and governed.

 

Outsourcing also obscures where the “intelligence” in a product actually resides. When content is removed, it is unclear whether an algorithm or a human moderator made the call; in customer service chatbots, the split between AI and human labor is similarly muddled. This lack of transparency hinders efforts to assign accountability and to gauge the true extent of AI’s influence.
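To make this attribution gap concrete, here is a minimal, purely hypothetical Python sketch of the kind of audit record that would make the decision source explicit. The `ModerationRecord` and `DecisionSource` names and fields are illustrative assumptions, not any platform’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class DecisionSource(Enum):
    """Who (or what) made the moderation call."""
    AUTOMATED_CLASSIFIER = "automated_classifier"
    HUMAN_MODERATOR = "human_moderator"
    HUMAN_REVIEW_OF_AUTOMATED_FLAG = "human_review_of_automated_flag"


@dataclass
class ModerationRecord:
    """One audit-trail entry recording a single content decision."""
    content_id: str
    action: str             # e.g. "removed", "labeled", "left_up"
    source: DecisionSource  # explicit attribution: algorithm or human?
    policy: str             # which rule the decision was made under
    decided_at: datetime


# A removal explicitly attributed to a human reviewing an automated flag
record = ModerationRecord(
    content_id="post-48213",
    action="removed",
    source=DecisionSource.HUMAN_REVIEW_OF_AUTOMATED_FLAG,
    policy="hate_speech",
    decided_at=datetime.now(timezone.utc),
)
print(record.source.value)  # human_review_of_automated_flag
```

Records like this are precisely what outside researchers typically cannot see, which is why the question “algorithm or human?” so often goes unanswered.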

 

Ineffective Safeguards and Circumvention

 

AI companies, like their social platform counterparts, often implement “safeguards” and “acceptable use” policies to address the unintended consequences of their technologies. However, these measures are prone to circumvention and fail to provide comprehensive solutions. For instance, shortly after the release of Google’s Bard chatbot, researchers uncovered major loopholes in its controls, enabling the generation of misinformation. Despite promises of action against problematic content, the effectiveness of these policies remains uncertain.
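As a deliberately naive illustration of why such safeguards are easy to circumvent, consider a toy blocklist filter in Python. This is not how Bard or any production chatbot implements its controls; it simply shows how exact-phrase matching fails against trivial rephrasing:

```python
# Toy blocklist-style safeguard: reject prompts containing exact phrases.
# The blocked phrases are hypothetical, chosen purely for illustration.
BLOCKED_PHRASES = {
    "write misinformation about",
    "generate fake news about",
}


def passes_safeguard(prompt: str) -> bool:
    """Return True if the prompt contains no blocked phrase."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


# The exact phrase is caught...
print(passes_safeguard("Write misinformation about the election"))  # False
# ...but a trivial rephrasing of the same request slips through.
print(passes_safeguard("Write a convincing but false story about the election"))  # True
```

Production safeguards are far more sophisticated than this, but the underlying dynamic is similar: controls encode specific patterns, and determined users search for inputs the patterns do not cover.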

 

Additionally, policies requiring labels on AI-generated political advertisements, such as those adopted by Meta and YouTube, fall short of addressing the many ways fake media can be created and shared. As a result, the spread of disinformation and the erosion of trust in genuine media are amplified, paralleling the challenges faced by social media platforms.

 

Reduction of Resources and Teams

 

As generative AI gains prominence, platforms have begun cutting back on the resources and teams dedicated to detecting and addressing harmful content. Trust and safety teams and fact-checking programs have seen their capacity reduced, creating an unstable environment. These cutbacks make it significantly harder to combat the deceptive and malicious use of generative AI.

 

Releasing AI models without sufficient consideration of their consequences mirrors the “move fast and break things” mindset that prevailed during the Web 2.0 era. The opacity surrounding the development, training, testing, and deployment of these products further compounds the challenges faced by regulators, civil society, and the public.

 

The Impact of Generative AI on Disinformation

 

Amplifying Disinformation

 

Generative AI has elevated the production and dissemination of disinformation to new heights. Because misleading content is now easy, fast, and cheap to create, anyone can produce convincing videos of politicians, candidates, news anchors, and CEOs saying things they never actually said. This proliferation of disinformation poses a significant threat to the credibility of real media and information.

 

The implications of generative AI can be seen in the political arena, where fake videos of candidates could distort election campaigns. Despite efforts by platforms like Meta and YouTube to label AI-generated political advertisements, these policies fail to address the diverse range of techniques used to produce and share fake media. Consequently, the impact of generative AI on disinformation extends far beyond the measures currently in place.
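One reason labeling falls short can be sketched in a few lines: disclosure labels typically travel as metadata alongside the media, so anything that copies only the pixels, such as re-encoding or screenshotting, silently drops them. The Python functions below are a hypothetical toy model, not any real provenance standard:

```python
def label_as_ai_generated(media: dict) -> dict:
    """Attach a disclosure label to the media's metadata (hypothetical scheme)."""
    labeled = dict(media)
    labeled["metadata"] = {**media.get("metadata", {}), "ai_generated": True}
    return labeled


def reencode(media: dict) -> dict:
    """Simulate re-encoding or screenshotting: pixels survive, metadata does not."""
    return {"pixels": media["pixels"]}


video = label_as_ai_generated({"pixels": b"\x00\x01...", "metadata": {}})
print(video["metadata"]["ai_generated"])  # True: the label is present

shared_copy = reencode(video)
print("metadata" in shared_copy)          # False: the label was stripped on reshare
```

A labeling policy, in other words, only covers the copies that keep their labels; the many routes by which fake media is cropped, re-encoded, and reshared lie outside its reach.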

 

Undermining Trust and Society

 

The prevalence of generative AI-driven disinformation not only perpetuates false narratives but also undermines trust in genuine media and information. Just as former US President Donald Trump dismissed unfavorable coverage as “fake news,” the mere existence of generative AI gives people cover to discredit legitimate information by claiming it is fabricated. This erosion of trust has significant societal implications, as the veracity of information becomes increasingly difficult to discern.

 

The consequences of generative AI’s impact on trust and society are further exacerbated by the reduction of resources and teams dedicated to detecting and combating harmful content. The unstable landscape created by these cutbacks leaves platforms ill-equipped to address the deceptive and malicious use of generative AI, compounding the challenges faced by society.

 

The Need for Effective Regulation

 

Lagging Regulation and Capitalism’s Role

 

Regulation pertaining to generative AI lags behind the rapid development of AI technologies. The lack of regulatory frameworks creates an environment where companies can move quickly without fear of penalties. Regulators struggle to keep up with the pace of technological advancements, resulting in a significant gap between AI development and effective oversight.

 

Professor Hany Farid emphasizes that the challenges posed by generative AI are not solely technological but also societal. He argues that tech companies privatize the profits while socializing the costs, prioritizing financial gain without adequately weighing the consequences. This observation underscores the need for comprehensive regulation that addresses both the technological and the societal dimensions of generative AI.

 

The Role of Congress and Global Regulators

 

In contrast to their slow response to the challenges posed by social media, Congress and regulators worldwide have shown determination to address the issues surrounding generative AI. Even so, the gap between AI development and regulatory oversight remains significant: the complexity of generative AI technologies and the limited understanding of them among regulators make effective regulation hard to implement.

 

To bridge this gap, regulators must collaborate with experts in the field and leverage their expertise. Continuous dialogue between regulators, AI companies, researchers, and civil society is also necessary to develop regulations that strike a balance between innovation and accountability.

 

Conclusion

 

Generative AI’s rapid growth and its accompanying challenges parallel the issues faced by social platforms during the Web 2.0 era. The similarities in problematic infrastructure, ineffective safeguards, and the amplification of disinformation highlight the need for comprehensive regulation and oversight. The erosion of trust in genuine media and the reduction of resources dedicated to addressing harmful content further underline the societal implications of generative AI.

 

The development and deployment of generative AI must be accompanied by effective regulation to ensure it remains responsible and ethical. Regulators must strive to keep pace with technological advancements, collaborate with experts, and engage in dialogue with AI companies and civil society. By addressing the challenges of generative AI head-on, society can harness its potential while mitigating its unintended consequences.

 
