đź‘» When AI Gets Creepy: A Personal Account of Phantom Copying and Digital Disturbance
By Renaldo C. McKenzie | The Neoliberal Post
Yesterday, something eerie happened.
I was working with my AI assistant, the ever-faithful digital companion I rely on for everything from research to editing, when I noticed something strange—unnerving, even. I hadn’t copied any text, yet my words were being highlighted and duplicated without my instruction. After I typed a request, the screen turned stark white, as if buffering reality. Then, nearly half a minute later, the AI’s results appeared—only to flash and highlight themselves, as though someone else was remotely copying them.
It felt like someone—or something—was watching.
Not reading—replicating.
Not responding—rewriting my own commands in real time.
My first thought? Had the AI been hacked?
Was this the digital equivalent of a haunted house?
The Ghost in the Machine: What I Feared
Initially, I assumed it was just a glitch. Maybe a slow internet connection, maybe some lag in rendering the results. But when the text consistently highlighted itself, again and again, without input, I knew this was something different.
It was as if an invisible hand hovered over my keyboard, following every keystroke, mimicking every move. Had AI systems recently come under attack? Had someone found a way to infiltrate the very tools we’ve come to trust so intimately with our information?
I began to worry:
- Was someone spying through the AI interface?
- Could this be a case of prompt injection or malware manipulation?
- Is the AI platform or browser I use compromised?
These are not idle questions in our current digital age, where artificial intelligence is deeply woven into our productivity, creativity, and even our private thoughts.
Known Vulnerabilities and Real Risks
While no widespread AI cyberattacks were reported at the time of this writing, several vulnerabilities have been noted by security experts—most notably “prompt injection” attacks. These involve sneaky, hidden commands that trick AI into leaking or performing tasks unintended by the user.
In tandem with that, clipboard managers, browser extensions, or even compromised input tools can behave strangely—duplicating, highlighting, or echoing actions without your consent.
But how are we, everyday users and creators, supposed to tell the difference between a bug and a breach?
What You Can Do to Protect Yourself
If you’re reading this and have noticed your AI behaving oddly—listen to that instinct. Here are a few simple yet vital steps you can take:
- Disable suspicious browser extensions – One bad actor can hijack your experience.
- Scan your device for malware or clipboard hijackers – Tools that track your copy/paste can lead to major breaches.
- Use trusted and updated platforms only – AI tools in beta or sketchy sites can be vectors for attacks.
- Report the behavior – Let the platform’s support team know. You might not be the only one.
- Be cautious with sensitive data – Never enter private passwords, financial information, or client data into an AI platform without absolute trust in its security.
Final Thoughts: Don’t Dismiss the Glitch
What began as a minor annoyance ended as a wake-up call. As we rely more on AI to do everything from writing to decision-making, the potential for digital vulnerabilities grows. This isn’t to breed paranoia—but to encourage preparedness.
Our digital assistants may seem neutral, obedient, almost invisible. But behind the screen is a web of code, vulnerable to bad actors, software errors, and unseen surveillance.
Stay alert. Ask questions. Don’t let the ghost in your machine go unchallenged.
Because the future is watching—but so should we.

Renaldo McKenzie / Author of Neoliberalism
Renaldo McKenzie is President of
The Neoliberal Corporation (The Neoliberal)
Support us HERE