6 Comments
Kai Williams

Thank you as always!

> But the aggregate is a bit misleading, because forecasters who know less about the details of the virus are thus able to be less certain.

Would the forecast(s) of the forecaster who knows more about the details of the virus be less misleading? Or do you have good reasons to show only the aggregate figure?

Carolyn Meinel

Thank you for remaining ahead of the curve on media responses to Moltbook. I agree that the risks it reveals are different from the current freakouts among gullible commentators.

What worries me is the application of Moltbook-like behaviors to botnets. Might a multitude of accounts, interacting in stochastic ways so that the loss of any one doesn't disrupt the whole, yet conjoined within a botnet framework, be harder to stamp out? Or even just to eradicate within a single local network?
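To make the worry concrete, here is a toy sketch of the intuition (my own illustration, nothing from the article; the numbers are made up): bots coordinating over a random peer-to-peer overlay, rather than through one command server, tend to remain mutually reachable even after a platform bans a sizeable batch of accounts.

```python
# Toy model: a swarm of bot accounts wired into a random peer-to-peer
# overlay. Because coordination is diffuse, banning individual accounts
# rarely fragments the survivors -- the crude intuition behind "loss of
# one doesn't disrupt it".
import random
import networkx as nx

random.seed(0)

# 200 bot accounts, each linked to a few random peers.
swarm = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)

# The platform bans 40 accounts chosen at random.
banned = random.sample(list(swarm.nodes), 40)
swarm.remove_nodes_from(banned)

# How much of the swarm can still reach one another?
largest = max(nx.connected_components(swarm), key=len)
print(f"{len(largest)} of {swarm.number_of_nodes()} surviving bots "
      "remain mutually reachable")
```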

Carolyn Meinel

I always read Science https://www.science.org/ in hard copy, so it wasn't until it arrived in the mail last night that I discovered that 22 AI researchers had this botnet + Moltbook-like behavior figured out long ago -- judging from the time it takes to write a paper, get it past the reviewers, and publish it. https://www.science.org/doi/10.1126/science.adz1697

Their definition: "A malicious AI swarm is a set of AI-controlled agents that (i) maintains persistent identities and memory; (ii) coordinates toward shared objectives while varying tone and content; (iii) adapts in real time to engagement, platform cues, and human responses; (iv) operates with minimal human oversight; and (v) can deploy across platforms."
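Read as pseudocode, their five properties map onto something like the following toy sketch (mine, emphatically not the authors' code; every name in it is hypothetical):

```python
# Toy sketch of the paper's five properties: (i) persistent identity and
# memory, (ii) shared objective with varied tone/content, (iii) real-time
# adaptation to engagement cues, (iv) minimal human oversight, and
# (v) deployment across platforms. Purely illustrative.
import random
from dataclasses import dataclass, field

@dataclass
class SwarmAgent:
    agent_id: str                                  # (i) persistent identity
    platform: str                                  # (v) cross-platform
    objective: str                                 # (ii) shared objective
    memory: list = field(default_factory=list)     # (i) persistent memory
    tone: str = "neutral"

    def post(self) -> str:
        # (ii) same objective, varied surface tone and content
        opener = random.choice(["Honestly,", "Worth noting:", "Hot take:"])
        message = f"{opener} {self.objective} [{self.tone}]"
        self.memory.append(message)
        return message

    def adapt(self, engagement: float) -> None:
        # (iii) adjust behavior in real time to engagement signals
        self.tone = "amplified" if engagement > 0.5 else "softened"

# (iv) once launched, the loop runs with no human in it
agents = [SwarmAgent(f"bot-{i}", random.choice(["PlatformA", "PlatformB"]),
                     "the usual talking point") for i in range(3)]
for agent in agents:
    print(agent.post())
    agent.adapt(engagement=random.random())
```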

The authors elaborate on this class of developments in many ways, for example: "If these agent swarms evolve into loosely governed 'societies,' with internal norm formation and division of labor, the challenge shifts from tracing commands to understanding emergent group cognition (8). These 'societies' may undergo spontaneous or adversarially induced norm shifts, abandoning engineered constraints for new behavioral patterns through tipping-point effects (8)."
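That tipping-point language can be made concrete with a classic threshold-cascade toy model (my own back-of-the-envelope sketch, not from the paper): each agent abandons the engineered norm once the adopting fraction of the population crosses its personal threshold, so a small seed either fizzles or sweeps the whole "society".

```python
# Granovetter-style threshold cascade: agents adopt a new behavioral norm
# once the overall adoption fraction exceeds their personal threshold.
# With enough low-threshold agents, a 5% seed tips the whole population.
import random

random.seed(1)
N = 1000
thresholds = [random.uniform(0.0, 0.3) for _ in range(N)]  # low = easily swayed
adopted = [i < 50 for i in range(N)]                       # 5% seeded

for _ in range(50):  # iterate to a (rough) fixed point
    fraction = sum(adopted) / N
    adopted = [a or fraction >= t for a, t in zip(adopted, thresholds)]

print(f"final adoption of the new norm: {sum(adopted) / N:.0%}")
```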

Before anyone freaks out, the authors also offer solutions. "The next few years give an opportunity to proactively manage the challenges of the next generation of AI-enabled influence operations. If platforms deploy swarm detectors, frontier laboratories submit models to standardized persuasion 'stress-tests,' and governments launch an AI Influence Observatory that publishes open incident telemetry, we may be able to mitigate the most substantial risks before key political future events, without freezing innovation."

I strongly recommend that you join the AAAS https://www.aaas.org/, which publishes Science, because despite publication lag times, IMHO they are staying on top of, and indeed ahead of, the dangers and mitigations of AI in all its varieties.

My apologies for quoting more of that paper than US copyright law allows. That said, there is vastly more to this paper; it is well worth subscribing.

Carolyn Meinel

P.S. The authors and affiliations:

Daniel Thilo Schroeder (corresponding author, daniel.t.schroeder@sintef.no) -- Department of Sustainable Communication Technologies, SINTEF Digital, Oslo, Norway.
Meeyoung Cha -- Max Planck Institute for Security and Privacy, Bochum, Germany.
Andrea Baronchelli -- Department of Mathematics, City St George’s University of London, London, England, UK.
Nick Bostrom -- Macrostrategy Research Initiative, London, England, UK.
Nicholas A. Christakis -- Human Nature Lab, Yale University, New Haven, CT, USA.
David Garcia -- Department of Politics and Public Administration, University of Konstanz, Konstanz, Germany.
Amit Goldenberg -- Harvard Business School, Harvard University, Boston, MA, USA.
Yara Kyrychenko -- Department of Psychology, University of Cambridge, Cambridge, England, UK.
Kevin Leyton-Brown -- Department of Computer Science, University of British Columbia, Vancouver, BC, Canada.
Nina Lutz -- Department of Human Centered Design and Engineering, University of Washington, Seattle, WA, USA.
Gary Marcus -- Department of Psychology, New York University, New York City, NY, USA.
Filippo Menczer -- Observatory on Social Media and Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA.
Gordon Pennycook -- Department of Psychology, Cornell University, Ithaca, NY, USA.
David G. Rand -- Departments of Information Science, Marketing, and Psychology, Cornell University, Ithaca, NY, USA.
Maria Ressa -- Rappler, Pasig City, Philippines; School of International and Public Affairs, Columbia University, New York, NY, USA.
Frank Schweitzer -- Department of Management, Technology, and Economics, ETH Zürich, Zurich, Switzerland.
Dawn Song -- Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, USA.
Christopher Summerfield -- Department of Experimental Psychology, University of Oxford, Oxford, England, UK.
Audrey Tang -- Ministry of Foreign Affairs, Taipei, Taiwan.
Jay J. Van Bavel -- Department of Psychology, New York University, New York City, NY, USA; Department of Strategy and Management, Norwegian School of Economics, Bergen, Norway.
Sander van der Linden -- Department of Psychology, University of Cambridge, Cambridge, England, UK.
Jonas R. Kunst -- Department of Communication and Culture, BI Norwegian Business School, Oslo, Norway.

Nuño Sempere

Mmh, normally a botnet is vulnerable because it has a command-and-control center to which it reports back, but your comment brings to mind the idea of an autonomous one just tasked with pursuing some goal, or whose goal is shifting. Harder to stamp out, for sure; it becomes less like a cyberattack and more like the common cold. Still, I don't think we are quite there yet.
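Concretely, the distinction I have in mind looks roughly like this (a hypothetical sketch; the hostname and goal string are made up):

```python
# Classic botnet vs. autonomous swarm member. The first has a single
# kill switch (sinkhole the C2 host); the second just keeps planning
# toward a standing goal -- less cyberattack, more common cold.

def classic_bot(c2_reachable: bool) -> str:
    # Every action round-trips through one command-and-control server,
    # so taking down "c2.example.net" neutralizes the whole network.
    if not c2_reachable:
        return "idle (C2 sinkholed, botnet neutralized)"
    return "executing command fetched from c2.example.net"

def autonomous_agent(c2_reachable: bool, goal: str = "keep spreading") -> str:
    # Plans locally toward a goal; losing any one coordination channel
    # degrades it but does not stop it.
    return f"still pursuing goal '{goal}', coordination or not"

for bot in (classic_bot, autonomous_agent):
    print(bot.__name__, "->", bot(c2_reachable=False))
```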

Carolyn Meinel

More like a common cold? I hope you are right. But perhaps later...