Could-a, Should-a, How-a Might We...Put Chips in Our Brains?

[Image: A comic-art depiction of a steampunk mad scientist raising the alarm about an artificially intelligent robot]


    Among the news of the day that's caught my eye recently has been the ascension of Elon Musk's Neuralink project to FDA approval for human trials. Essentially, the idea is to design and implant tiny computer systems, connected via ultra-thin, flexible electrodes to sections of the brain. The goals: alleviate a wide variety of ailments rooted in the central nervous system; and oh yeah, equip the human mind with the upgraded processing speed and memory capacity to compete with AI on the open market.

    A quick hour of related article skimming took some of the edge off this revelation. "Oh, there have been FDA-approved institutions working on this for decades already? They've been going after blindness, paralysis, Alzheimer's, and dementia for years? Well, I guess I won't go running to my friends and family as though someone buttressed the falling skies."

    I can't help but think that this is mostly headline-grabbing because, as a society, we are much more engaged when wealth, knowledge, power, etc. are concentrated in one place or person. Stories of a lab of unknown scientists in buildings that could easily be mistaken for insurance companies don't command the same attention as something that is down-the-line biopic fodder. (Philip Seymour Hoffman would have crushed the role of Elon Musk.)

    The risks of private enterprise and for-profit innovation notwithstanding, there is a real upside to focusing such efforts around one known (we think) quantity of a person. Drills are better groundbreakers than hammers. I don't think anybody who's listened to Musk speak on his projects with Tesla, SpaceX, or The Boring Company would claim that Elon is just sitting in a room with a Stark-esque holo-table creating miracles. He has repeatedly credited excellent team members, whom he aims to inspire, with accelerating his successes. But it's likewise hard to deny that he is seen as the tip of the spear in several fields of societal advancement.

    But (of course there's always a "but") the drill analogy should also come with noting the inherent risks of focused, concentrated exploration into unknown territory. There might be things down there we don't want to screw with. What is the groundwater we risk contaminating? Disarming the mind's ability to regulate and grow itself? What seismic activity do we risk triggering? Creating a generation of accelerated minds that engenders an intellectual class-based society akin to Gattaca? (Ethan Hawke, Uma Thurman, Jude Law, Alan Arkin, 1997. You have to check it out.)

    However, I prefer to think of a different "but." What is the unanticipated bedrock of this experiment? What limitations of the human mind could stand in the way of Neuralink's goals of creating the most advanced cybernetic human to date? We've already got cell phones that can feel like missing limbs in their absence. I am reminded here of my error prevention struggles with voice-dictating text messages. Maybe the human mind doesn't have the capacity for such a quantum leap in its own potential.

    When speaking to robots via text dictation (which I will continue to refer to as "talking to Ted" for convenience's sake), there is a disconnect, at least for me, when removing the barrier of a more primitive typing interface. Yes, it is quicker than typing. And on the surface, Ted is more adaptive for chatty big mouths with oversized fingers. But the prevention of user error, and the opportunity to correct it, can feel a bit like running behind a racing bike with your belt tethered to Ted's seat post. An out-of-sync misstep can be difficult and time-consuming to correct.

    Because of this uncertainty, there is a degree of hesitancy that hamstrings the potential for speedier, more fluid task accomplishment. The human mind is a very imperfect tool when compared to digital consciousnesses. Our memories and the process of their creation are much fuzzier than we think. Consider one of your most treasured and unforgettable memories. For me it was a tubing trip down the Wisconsin River on a cloudless July day with five of my closest friends. I will never forget some of the jokes we told, the vibrant blue of the river framed by green trees and a canopy of blue skies. But for the life of me, I can only guess at the color of my innertube.

    Neuralink aims to enshrine such details in an increasingly efficient digital memory bank. And I would leap at the chance to capture every detail of that day. So how can the coordination of several sectors of my brain (amygdala, hippocampus, prefrontal cortex) effectively communicate such an experience in real time? If I focus really hard in the moment, will such fleeting moments be accurately imprinted in a database that can reproduce them upon request?

    The urge to "get it right," so to speak, can just as easily detract from the moment and distort the very event I wish to record. Will Ned (the neurodialectic companion I've incorporated into my mind) understand? "Hey, check out that hawk!" "Wait, Keith. What did you just say?" "Who threw that beer two inches past my head?" "What's going on, guys?" Whoops, interface error. Guess I lost it.

    I think there is a real chance for our primitive brains to be yanked beyond our capacity by the Corvette Ned is driving. Unintended consequences and obstacles are not just debatable, they're inevitable. And quite often, they are unimagined by all of us. (Except by Dr. Ian Malcolm in Jurassic Park. Jeff Goldblum nails it.) While I am concerned that we are instigating the next step in becoming the Borg, I think it is more likely that we are at risk of inventing the automobile without a transmission. 

    The common refrain of "Yeah, yeah. But your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should..." should include "how might we really...?" Because again, it takes two to learn in this UX scenario. We have to learn to type, speak, and soon think in a way that our cybernetic innovations can accurately interpret.

    User-centered UX design has much to say about how we tailor the innovation to the user, but little to say (that I've heard, at least) about how to tailor the user to the innovation. The first effective typewriters were invented in the 1870s. But it wasn't until the mid-20th century that widespread demand for skills at interfacing with them gave the impetus for teaching their use en masse. The innovation outpaced general implementation by more than a half-century.

    If one major goal is to equip mankind to compete with AI at intellect-based tasks, how outclassed will human learning of this sort be compared to the accelerative machine learning of ChatGPT or Midjourney? While demand for such skills may be quite urgently high, what would it even take to equip society to teach them at scale, like vocational training or my sophomore-year keyboarding class?

    I know teachers who firmly believe it is not their responsibility to educate students on how to use generative AI tools. With all due respect to those who move our society's knowledge base forward, simply turning society to face the problem seems prohibitively daunting to me. It may well be sink or swim. And now, we've got the potential to short out our brain bots the moment we hit the pool. I guess we'll have to wait and see...and think that sight into memory.
