At a get-together in Paris, OECD ministers said a new ‘anticipatory’ approach is needed. Consumer neurotechnology, increasingly able to read minds, is seen as one of the biggest novel risks
The OECD Science and Technology Policy Ministerial in Paris, France, April 23, 2024. Photo credit: Maud Bernos / OECD
Science ministers from across the world have endorsed a new approach to regulating emerging technologies, warning that governments need to foresee and steer breakthroughs, rather than being caught on the back foot and reacting to problems after they emerge.
Spooked by sudden advances in artificial intelligence (AI), they endorsed a new set of ideas drawn up by the Organisation for Economic Co-operation and Development (OECD) at a get-together this week in Paris.
While forecasting and planning for new technologies might sound like common sense, the OECD’s new framework underscores a shift away from an arguably more laissez-faire approach that prevailed in the 1990s and 2000s, when technological advance was left more to the market or the curiosity of individual scientists.
The idea is to move from “management of technological risks” towards “‘getting ahead’ of technology developments,” the Framework for Anticipatory Governance of Emerging Technologies says.
One of the key ideas – and most relevant to universities and researchers – is the need for much better “strategic intelligence” about how emerging technologies might play out, and the new social and security risks they might create. The Netherlands and Canada, for example, have set up specific horizon scanning exercises on where quantum technologies might lead.
This kind of future-gazing of course already happens, but the OECD wants to give it a boost. With a better sense of what might be coming, governments are better placed to draw up “strategic visions, plans, and roadmaps for emerging technologies.”
Hanging over science ministers in Paris, who represented OECD countries plus a handful of others, including Argentina, the Philippines and Thailand, is the explosive emergence of generative AI – which few if any governments saw coming.
“The release of generative AI and its sweeping functionality took many by surprise, underscoring the challenges of governing powerful new technology and highlighting the need for anticipation,” the framework admits.
It also stresses that governing fraught new technologies requires international cooperation, but that “deepening competition” between countries risks putting “downward pressure” on any controls that might be needed.
Chinese representatives did participate in the Paris conference, but not at ministerial level, and China – not an OECD member – is not among the countries to endorse the new framework.
Reading minds
The OECD’s new framework is – perhaps inevitably – rather high-level and abstract. But the Paris conference also got into specifics, in particular about the coming risks of a new wave of neurotechnology which can read – and even alter – citizens’ minds.
This technology is quietly moving out of the realm of science fiction and into the hands of consumers and authorities, Duke University researcher Nita Farahany argued in a book published last year.
For example, last year, researchers at the University of Texas at Austin managed to roughly decipher a subject’s internal monologue just from scanning their brain activity.
This could be revolutionary for stroke patients who are unable to talk, for example – but risks an enormous invasion of mental privacy if done for commercial or security reasons.
Cheap, commercially available brainwave sensing headbands are not yet able to decode specific thoughts, said Paweł Świeboda, neurotechnology lead at the International Center for Future Generations, a Brussels-based technology think tank, who attended the conference.
But they are already able to detect a wearer's mental state or mood, he said, and are advancing rapidly. "This technology is fast developing," he said.
The worry is that employers could demand employees wear headbands to monitor their mood or concentration levels, or tech companies could use brain data to work out what advertisements or content resonates with consumers.
Users could be lured into wearing headbands by the promise of controlling their computers or smartphones using brain signals, for example.
At the Paris conference, the OECD put out a new toolkit on neurotechnology regulation, designed to make it easier for policymakers to step up regulation in this area, featuring a laundry list of things governments have done so far to rein in risky uses of neurotechnology.
The toolkit builds on pioneering OECD guidelines, issued in 2019, on how to roll out the technology responsibly.
But there are precious few examples of actual laws regulating neurotechnology – since 2019, governments have instead issued a stream of guidelines, declarations and memoranda.
Chile has arguably taken the lead in anticipating the risks of neurotechnology. In 2021 the country enshrined “neurorights” into its constitution, and in a case last year, its supreme court ordered the deletion of brain data by a neurotech company.
The country is now trying to work out how to actually enforce this constitutional right, Aisén Etcheverry Escudero, Chile's science minister, told the conference. Her ministry is grappling with how to "implement and take these constitutional rights, that are now there, and turn it into regulation that can be enforceable," she said.
Within the EU, Spain has been active too, using its council presidency last year to push through the so-called León Declaration on neurotechnology and human rights.
And earlier this month, Colorado passed a law protecting data found in users’ brainwaves.
With the EU's AI Act all but finalised, Świeboda wants the next Commission and Parliament to turn their attention to other technologies, including neurotechnology.
“It’s not too late to address its challenges,” he said. “That moment was arguably missed with respect to AI.”
It's possible that existing EU regulations, such as the General Data Protection Regulation, the General Product Safety Regulation and the Medical Devices Regulation, could cover problematic uses of neurotech, Świeboda said.
But these rules are "generic", meaning there might be a need for a dedicated neurotechnology law. Currently, "there's no distinction as to whether it's your knee or your brain", he said.