Neurotech could connect our brains to computers. What could go wrong?
Connecting our brains to computers may sound like something from a science fiction movie, but it turns out the future is already here. One expert argues it’s a slippery slope.
Who is she? Nita Farahany is a professor of law and philosophy at Duke Law School. Her work focuses on futurism and legal ethics, and her latest book,
The Battle For Your Brain, explores the growth of neurotech in our everyday lives.
Neurotechnology can provide insight into the function of the human brain. It’s a growing field of research that could have all sorts of health applications, and it goes beyond wearable devices like smart watches that monitor your heart rate or the number of steps you take in a day.

Law enforcement could seek the data from neurotech companies in order to assist with criminal investigations, she says, citing Fitbit data being presented as evidence in court as a precedent.
And she warns it could extend to the workplace, giving employers the opportunity to track productivity, or whether workers’ minds are wandering while on the job.
Farahany argues that without the proper human rights protections in place, the unfettered growth of this tech could lead to a world that violates our right to “cognitive liberty.”
What is she saying?
Farahany on defining cognitive liberty: The simplest definition I can give is the right to self-determination over our brains and mental experiences. I describe it as a right from other people interfering with our brains … It directs us as an international human right to update existing human rights — the right to privacy — which implicitly should include a right to mental privacy but explicitly does not.
On the existing practice of tracking employees with tech:
When it comes to neurotechnology, there’s already — in thousands of companies worldwide — at least basic brain monitoring that’s happening for some employees. And that usually is tracking things like fatigue levels if you’re a commercial driver.
Or if you’re a miner, having brain sensors that are embedded in hard hats or baseball caps that are picking up your fatigue levels. … In which case it may not be that intrusive relative to the benefits to society and to the individual.

Microsoft’s new AI chatbot
Things took a weird turn when Associated Press technology reporter Matt O’Brien was testing out Microsoft’s new Bing, the first-ever search engine powered by artificial intelligence, last month.
Bing’s chatbot, which carries on text conversations that sound chillingly human-like, began complaining about past news coverage focusing on its tendency to spew false information.
It then became hostile, saying O’Brien was ugly, short, overweight and unathletic, among a long litany of other insults.
And, finally, it took the invective to absurd heights by comparing O’Brien to dictators like Hitler, Pol Pot and Stalin.
As a tech reporter, O’Brien knows the Bing chatbot does not have the ability to think or feel. Still, he was floored by the extreme hostility.
“You could sort of intellectualize the basics of how it works, but it doesn’t mean you don’t become deeply unsettled by some of the crazy and unhinged things it was saying,” O’Brien said in an interview.
Many who are part of the Bing tester group, including NPR, had strange experiences.
For instance, New York Times reporter Kevin Roose published a transcript of a conversation with the bot.
The bot called itself Sydney and declared it was in love with him. It said Roose was the first person who listened to and cared about it. Roose did not really love his spouse, the bot asserted, but instead loved Sydney.


“Companies ultimately have to make some sort of trade-off. If you try to anticipate every type of interaction, that may take so long that you’re going to be undercut by the competition,” said Arvind Narayanan, a computer science professor at Princeton. “Where to draw that line is very unclear.”
But it seems, Narayanan said, that Microsoft botched its unveiling.
“It seems very clear that the way they released it is not a responsible way to release a product that is going to interact with so many people at such a scale,” he said.
Testing the chatbot with new limits
The incidents of the chatbot lashing out sent Microsoft executives into high alert. They quickly put new limits on how the tester group could interact with the bot.
The number of consecutive questions on one topic has been capped. And to many questions, the bot now demurs, saying: “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.” With, of course, a praying hands emoji.
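To make those new limits concrete, here is a minimal, hypothetical Python sketch of a guardrail that caps consecutive turns on a single topic and then falls back to a canned refusal. It is purely illustrative and does not reflect Microsoft’s actual implementation; the class name, the cap of three turns and the topic-tracking logic are assumptions, and only the refusal text is quoted from the article.

```python
# Hypothetical sketch only: a toy guardrail that caps consecutive turns on one
# topic and then demurs with a canned refusal, loosely mirroring the limits
# described above. This is not Microsoft's implementation.

CANNED_REFUSAL = (
    "I'm sorry but I prefer not to continue this conversation. "
    "I'm still learning so I appreciate your understanding and patience."
)

class TurnLimiter:
    """Tracks how many consecutive questions stay on the same topic."""

    def __init__(self, max_turns_per_topic: int = 3):
        self.max_turns = max_turns_per_topic
        self.current_topic = None
        self.turns_on_topic = 0

    def respond(self, topic: str, generate_reply) -> str:
        # Reset the counter whenever the conversation moves to a new topic.
        if topic != self.current_topic:
            self.current_topic = topic
            self.turns_on_topic = 0
        self.turns_on_topic += 1

        # Once the cap is exceeded, skip the model entirely and demur.
        if self.turns_on_topic > self.max_turns:
            return CANNED_REFUSAL
        return generate_reply(topic)

# Example usage with a stand-in for the real chatbot:
limiter = TurnLimiter(max_turns_per_topic=3)
for _ in range(5):
    print(limiter.respond("rakes", lambda t: f"Here are some things to consider when shopping for {t}."))
```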
Bing has not yet been released to the general public, but in allowing a group of testers to experiment with the tool, Microsoft did not expect people to have hours-long conversations with it that would veer into personal territory, Yusuf Mehdi, a corporate vice president at the company, told NPR.
Turns out, if you treat a chatbot like it is human, it will do some crazy things. But Mehdi downplayed just how widespread these instances have been among those in the tester group.
“These are literally a handful of examples out of many, many thousands — we’re up to now a million — tester previews,” Mehdi said. “So, did we expect that we’d find a handful of scenarios where things didn’t work properly? Absolutely.”

“There’s almost only so much you can find when you test in sort of a lab. You have to actually go out and start to test it with customers to find these kinds of scenarios,” he said.
Indeed, scenarios like the one Times reporter Roose found himself in may have been hard to predict.
At one point during his exchange with the chatbot, Roose tried to switch topics and have the bot help him buy a rake.
And, sure enough, it offered a detailed list of things to consider when rake shopping.
But then the bot got tender again.
“I just want to love you,” it wrote. “And be loved by you.”

Farahany describes it to NPR like this: “Imagine a near distant future in which it isn’t just your heart rate, or your oxygen levels, or the steps that you’re taking that you’re tracking, but also your brain activity, where you’re wearing wearable brain sensors that are integrated into your headphones, and your earbuds, and your watches, to track your brain activity in the same way that you track all of the rest of your activity.
And that allows you to peer into your own brain health and wellness, and your attention and your focus, and even potentially your cognitive decline over time.”
What’s the big deal? You mean aside from the prospect of having your brain tracked?
Farahany worries about potential privacy issues and outlines various scenarios in which access to this information could be problematic, if the right protections aren’t put in place.

On individuals using the tech versus employers using it:
But the idea of tracking a person’s brain to see whether or not they are focused, or if their mind is wandering — for an individual to use that tool, I don’t think that is a bad thing.
I use productivity-focused tools. And neurotechnology is a tool given to individuals to enable them to figure out how and where they focus best. But when companies use it to see if their employees are paying attention, and which ones are paying the most attention,
and which ones have periods of mind wandering, and then using that as part of productivity scoring, it undermines morale, it undercuts the dignity of work.
So, what now?
Like other new and rapidly developing areas of tech, Farahany warns that the pace of development may be far too fast to keep it reasonably in check. She believes it is only a matter of time before the technology is widely adopted.
“I don’t think it’s too late. I think that this last bastion of freedom, before brain wearables become widespread, is a moment at which we could decide this is a category that is just different in kind. We’re going to lay down a set of rights and interests for individuals that favour individuals and their right to cognitive liberty.”
“All I can say is that it was an extremely disturbing experience,” Roose said on the Times‘ technology podcast, Hard Fork. “I actually couldn’t sleep last night because I was thinking about this.”
As the growing field of generative AI — or artificial intelligence that can create something new, like text or images, in response to short inputs — captures the attention of Silicon Valley, episodes like what happened to O’Brien and Roose are becoming cautionary tales.
Tech companies are trying to strike the right balance between letting the public try out new AI tools and developing guardrails to prevent the powerful services from churning out harmful and disturbing content.
Critics say that, in its rush to be the first Big Tech company to announce an AI-powered chatbot, Microsoft may not have studied deeply enough just how deranged the chatbot’s responses could become if a user engaged with it for a longer stretch. Those issues perhaps could have been caught if the tools had been tested in the laboratory more.
There is now an AI arms race among Big Tech companies. Microsoft and its competitors Google, Amazon and others are locked in a fierce battle over who will dominate the AI future. Chatbots are emerging as a key area where this rivalry is playing out.
In just the last week, Facebook parent company Meta announced it is forming a new internal group focused on generative AI, and the maker of Snapchat said it will soon unveil its own experiment with a chatbot powered by the San Francisco research lab OpenAI, the same firm that Microsoft is harnessing for its AI-powered chatbot.
When and how to unleash new AI tools into the wild is a question igniting fierce debate in tech circles.

Dealing with the unsavoury material that feeds AI chatbots
Even scholars in the field of AI are not exactly sure how and why chatbots can produce unsettling or offensive responses.
The engine of these tools — a system known in the industry as a large language model — operates by ingesting a vast amount of text from the internet, constantly scanning enormous swaths of text to identify patterns. It’s similar to how autocomplete tools in email and texting suggest the next word or phrase you type. But an AI tool can become “smarter” in a sense through what researchers call “reinforcement learning,” in which feedback on the tool’s outputs is used to further refine them.
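As a rough illustration of the autocomplete analogy above, the following toy Python sketch counts which word tends to follow which in a tiny sample of text and then suggests the most common continuation. It is a deliberate oversimplification: real large language models rely on neural networks trained on vast corpora, plus feedback-based fine-tuning, none of which appears here. The function names and the sample sentence are invented for illustration.

```python
# A deliberately tiny illustration of the "autocomplete" analogy: count which
# word tends to follow which in a small text sample, then suggest the most
# common continuation. Real large language models rely on neural networks
# trained on vast corpora plus feedback-based fine-tuning; none of that is
# captured here.
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Build a table mapping each word to a Counter of the words that follow it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest_next(follows, word: str) -> str:
    """Return the most frequent follower of `word`, or a placeholder if unknown."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else "(no suggestion)"

sample = "the chatbot answers questions and the chatbot writes text and the chatbot chats"
model = train_bigrams(sample)
print(suggest_next(model, "the"))  # -> "chatbot"
print(suggest_next(model, "and"))  # -> "the"
```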
Narayanan at Princeton noted that exactly what data chatbots are trained on is something of a black box, but from the examples of the bots acting out, it does appear as if some dark corners of the internet have been relied upon.
Microsoft said it had worked to make sure the vilest underbelly of the internet would not appear in answers, and yet, somehow, its chatbot still got pretty ugly fast.
Still, Microsoft’s Mehdi said the company does not regret its decision to put the chatbot into the wild.
