15 Comments

I'm cool with this as long as the government doesn't use it!

author

I have a feeling much of this work has government funding. I will confirm, but whether that be through the NIH or DoD, there is certainly government subsidization.


This was a very challenging read, especially the description of the methodologies used to extract actual images. The ethical implications are huge, and I bet security agencies of all kinds are salivating in expectation of a breakthrough. And I would not be surprised if the DoD also supported this research with a hefty grant. Might this be true already?

author

I appreciate your feedback on the complexity of the article. I agree that it might have benefited from additional editorial effort to make it more digestible for the audience. It's always a challenge to balance the technical details with readability, and I'll keep this in mind for future articles.

You're absolutely right about the interest of security agencies in this technology. When reviewing the acknowledgment sections of the key publications I cited, I didn't find any mention of DoD funding—most funding seemed to be NIH-related, which aligns with the biomedical nature of these labs. However, it's not far-fetched to imagine a scenario where the government allows the private sector to advance these technologies and then seeks backdoor access. This has previously occurred with the NSA and telecommunication companies in mass surveillance programs involving phone calls, texts, and emails.


DARPA has a program on this, I think (most governments probably do).

author

I go back and forth on whether or not the government is ahead of the private sector on technology. The government optimist would say that nearly unlimited resources should yield huge amounts of progress! The government pessimist would remind us that government programs are notoriously slow and poorly run.


Regardless of how they are run, many are run behind closed doors, not all countries have the same ethical standards, and everyone wants/needs to be first, for obvious reasons. It's a technology I extrapolated for my novel to enable the characters to communicate "telepathically" over a peer-to-peer encrypted protocol via a BMI interface; in true Orwellian fashion, all thoughts are monitored and controlled even there... so Sphereans usually avoid it. 😅

Dec 13, 2023 · Liked by David Kingsley, PhD

I’m in no position to evaluate the technology and science behind this idea. I am in a position to evaluate the moral implications, and so should everyone be. My evaluation is based on this: “Civilization is the progress toward a society of privacy. The savage’s whole existence is public, ruled by the laws of his tribe. Civilization is the process of setting man free from men.”

In a civilization as uncivilized as this one is becoming, as it actively slouches toward collectivizing every aspect of our lives, the only certain refuge for the individual is his own mind: his own thoughts, his own emotions. His own, not to be vetted for RightThink by the tribe or a bureaucrat or a social media platform. Making the technology described here operational will end badly. I’m not talking about the sci-fi cliché of machines developing sentience and enslaving humanity. That’s not necessary when people work so assiduously to enslave each other.

We already have the capacity to communicate and connect beyond anything else nature has ever devised. Language, oral and written, is the breakthrough that obviates all this heavy lifting in laboratories. That’s the gift, unique in its depth, breadth, and content, of our species. We don’t need to read and decode neural patterns associated with speech because we have speech. We already decode language from brain activity—by writing, reading, speaking, and listening. We already decode and reconstruct intricate mental images and scenes—by recounting what we see, drawing it, painting it, photographing it, or filming it. It’s not necessary to invade my brain to know what I’m thinking right now because I’m telling you. You know the problem with language, though? People can decide whether and what to communicate with it. If there’s something I’m not telling someone, it’s because he hasn’t earned my consent to cross the threshold that safeguards my ultimate privacy: my thoughts. Certain kinds of people find such boundaries in others unbearable.

People with severe speech and motor limitations represent such a tiny minority of the population that I don’t believe for one second that all this effort is being made to connect with them—and most have those limitations because of irreparable damage to their brains. Their speech is impaired because their brains are impaired in the centers where speech is generated. How can AI be a patch for that? I reject out of hand the idea that recording dreams will ever be a practical application. Tartini excepted, who has dreams that are worth writing down? It’s all weird crap like an armadillo walking on the sweet potatoes at the grocery store. No: Whatever gloss about “helping people” proponents apply, there’s no benevolent motive behind this research. There’s just the malicious premise that others have the right to invade our thoughts.

The intersection of artificial intelligence and neuroscience should be a three-way one, and the third and most important road leading to any human progress is morality. Covering that is beyond the scope of the article. I get that. But consider what became public knowledge over the last few years with respect to the validity of The Science™. Consider what we discovered about the near-total subjectivity of researchers whose funding depends on their findings. Are these AI researchers any less interested in advancing the agendas of their sponsors?

Using AI is voluntary? It won’t be for long. Humans’ millennia-long track record of longing to rule and be ruled makes that as close to a guarantee as me losing at Powerball. Three years ago, billions tamely complied with the demand that they disclose their personal health information as a condition of going to a movie. Would they choose differently if that criminal invasion of their privacy were cast in a different form? If it were a demand that they wear a device during the movie, so their thoughts about it could be monitored? What if it were a demand that they think loving thoughts about Dear Leader before the grocery store doors will open for them? Or a demand that they receive this week’s propaganda download as a condition of accessing their bank accounts? Maoist struggle sessions forced the victim to speak the party line, but they couldn’t make him believe the words. This AI application will make possible the dream of every tyrant in history: the forcing of a human mind.

Civilization is the progress toward a society of privacy. This technology isn’t just incidentally going to invade our rights and liberties. It’s designed for it. Only for brief periods in human history have the gropings toward a rational moral code and technology advanced in tandem, and then imperfectly. This is not one of those times. Almost everyone on earth failed the moral pop quiz sprung on them four years ago. Humanity is no closer to passing the moral test presented by AI than a first grader is to writing a PhD dissertation on analytical chemistry. Morally, the men whose minds have conceived of and financed this technology are still knuckle-dragging through the jungle, scratching for roots with their bare hands, unable to think of any function for a stick beyond using it to kill their neighbors.

author

Fager, thank you for your deeply thought-provoking comment. Your insights are so compelling that I encourage you to consider writing a full essay on the subject—you certainly have a strong foundation for it.

You've raised many important points, and I'll attempt to address them succinctly. I agree that the primary impetus behind the development of this technology is not necessarily to aid those with medical challenges. It often emerges from a combination of scientific curiosity, available technology, and funding interests, including those from government sources.

Your critique that my post perhaps doesn't sufficiently question whether this technology should be developed at all is a valid one. On reflection, I tend to agree with your assertion that humanity might be better off without it. However, its development seems almost inevitable. The foundational technology for brain scanning already exists for medical imaging and diagnostics. The challenge remains in interpreting these signals, a task that is becoming progressively more feasible.

The potential misuse of this technology in surveillance and control, as you and Alexis have pointed out, is indeed alarming. The Orwellian scenarios, such as the TSA using mind-reading technologies at airports, are not far-fetched in the context of current technological trajectories.

All of this said, I'm not sure I agree with the premise that civilization is the progress towards a society of privacy. I think privacy and civilization could be orthogonal concepts. For example, my understanding of civilization is that it is the organizational and cultural development of a society. We could easily live in an increasingly advanced civilization that is utterly domineering, invasive, and oppressive. I'll think more on this; it's getting late for me (=

Dec 16, 2023 · Liked by David Kingsley, PhD

Thank you, David. I meant that windy comment solely to vent my own reservations, not to complain that your article didn’t address the negatives. That wasn’t within the scope of what you were reporting and explaining. You’re describing AI technologies with an enthusiasm and a deep understanding of the subject that make at least the gist of them accessible to laymen like me. If it hadn’t been for your article I wouldn’t even be aware that the research exists. Now I have another reason to wake up with anxiety at 3 a.m.

Privacy and human societies have been almost exclusively orthogonal concepts historically—and that’s the problem. It’s what makes them increasingly uncivil over time. Civilization is the organization and cultural development of a society, but what kind of society? On what premises about human nature, human rights, and human requirements for life is the society based? The Aztecs answered differently than America’s founding fathers, who answered differently than Maoist China. Only one of those civilizations considered a man’s privacy—and his life—to be a fundamental value.

“Civilization” as used in the quote I cited means in a sense appropriate to humans as rational beings: a social organization of voluntarism in which force in any form can be used only in self-defense, never initiated, and one in which people agree that they aren’t the means to each other’s ends. Such a civilization (and its prerequisites) doesn’t guarantee that people will act rationally, but would protect and reward those who do, to the extent that they do. My intention isn’t to introduce a debate about political philosophy, but to say that that definition was the context for the idea that a civilization with the proper respect for human life would necessarily demand that privacy be upheld as both a value and a virtue.

We absolutely could live in an advanced civilization that is utterly domineering, invasive, and oppressive. We live in one now. But I wouldn’t call such a life civilized.


(I know: TL/DR. Apologies for the length of that comment.)

Dec 9, 2023 · Liked by David Kingsley, PhD

I saw a brief snippet on the Takagi paper recently but forgot to go look at it properly, so thanks for covering that one.

Superb and comprehensive as always, David. The ethical implications for this one are pretty strong. I don't know how far off/feasible any kind of from-a-distance probing would be, but with the interest in the area surely some ethical frameworks are being thought about. It's scary to imagine a world where your own private thoughts aren't actually private.

Dec 9, 2023 · Liked by David Kingsley, PhD

Oh and yes, would love to watch a movie of my dreams 😂😆

author

I think this should technically already be feasible!

author

Hi Nathan, I'm glad I could help you revisit some of this interesting work! The Takagi paper was followed up by even more impressive work from Meta, but most of that is unpublished. I get the feeling we will rapidly find our data from wearable electronics being mined. I'm sure there are many proxies for EEG and fMRI that will be capable of behavior prediction. The name of the game is probably 'resolution' and 'monetization'. However, in a world where that level of EEG resolution is accessible from a distance, we would truly no longer have any privacy.
