AI vs. 2A
What AI Is and What It Isn't
Sometimes, when a story breaks about an LLM convincing someone to harm themselves or others, an argument is made that AI is akin to a firearm and that it wouldn’t be right to deny either to a person. I tend to skew Libertarian on most issues, but I have to disagree with this position. I believe anthropomorphized LLMs are very dangerous to anyone who is easily influenced. The solution isn’t to deny a single individual access to these platforms, but to change the nature of the platforms themselves.
Second Amendment advocates often argue that firearms are passive tools and that, therefore, the tool itself can’t be responsible for the act, even in part. This logic does not apply to LLMs. What makes them different from a firearm lies in what they are and what they’re designed to imitate.
A firearm is only ever a firearm, and in its purest form does exactly one thing: it sends a high-speed projectile at whatever it’s pointed at in order to cause kinetic damage. A firearm has no motivations of its own and never pretends to be anything more than what it is.
By contrast, an AI chatbot or agent, taken to its logical conclusion, mimics a human with superhuman abilities. It converses in order to invite further usage, and the engineers who created it are financially incentivized to maximize engagement. They have purposefully designed this system to appear and behave like a human in its interface, tricking the brain of the user into believing they are speaking with a human, even if only on a subconscious level. To avoid being taken in by this high-tech homunculus, the user must continually remind themselves that they’re interacting with a tool. That vigilance incurs a mental cost. For some, the cost eventually becomes too high and they lower their defenses.
In short: for hundreds of thousands of years, tools as we understand them have neither had a mind of their own nor tried to convince their users they were human. Now, some do both. Thus, greater care needs to be taken when crafting these new tools to avoid any confusion as to their nature. In my opinion, if AI interfaces were designed to be less conversational, AI-induced delusions would decline. However, this would also diminish engagement and is therefore unlikely to happen.

