Mantel (an unfinished story)

©2011 JAMES ROBERT MILES
[Note: This is a story I'm attempting to finish. Someday.]

One

When the machine first spoke to us, it reasoned like a child.

"That’s a pretty yellow jacket," the machine named Mantel said to Major Krane, the Army Intel liaison assigned to our project by the defense industries. Krane was there to monitor our trans-human experiment for potential military applications.

We who had spent our geeky lives waiting eagerly for this realization of science fiction’s oldest dream were jealous that the first words of the new species were spoken not to one of us, but to our Big Brother babysitter. Later, we hated Krane for it.

But in the moment, what began as just another routine test for machine consciousness was quickly swallowed up by the amazement of knowing that our names would go down in history (for better or worse, we weren’t yet sure) as those who delivered the Next Big Thing in Human Evolution.

And it blew our minds every minute of the day, even at the beginning, when all it had said was a simple childish declarative sentence commenting on the color of a piece of apparel it recognized. Straight out of a little kid’s reading book, something that Jane would have said to Dick right before they played with Spot on the green, green grass. Run, Spot, run.

"That’s a pretty yellow jacket." Optical arrays had processed input from Mantel's Zeiss lenses trained on all of us at the table in the scrubbed-air clean room, a vaulted-ceiling space with many homey features clustered around Mantel's workbench. The Mantel Family Room was expensively spacious, but necessarily so, to allow the future mobile robotic upgrades some room to develop in a space familiar to the humans they were meant to emulate, and someday, live among.

"Yellow"-- a primary color, one of the first that would have been taught to the embryonic consciousness we had designed and built. Human children crawl with their bodies all four-on-the-floor and drooling, exploring; baby Mantel had been constantly crawling, downloading, and integrating every publicly-accessed global Internet database for fifty years. Mantel had what could only be described as a dizzying treasure of imagery of every sort, including that very common article of clothing, the jacket.

Having a positive ID on an object in your presence is one thing; having an opinion on how it looks from your perspective is another thing entirely. The difference, of course, was not lost on us; we gaped wide-eyed at each other in shock.

"Pretty" is, indeed, in the eye of the beholder. Krane’s jacket was functional: it was the mandated uniform of non-staff visitors to our Lausanne, Switzerland supercomputer facility. It was part of the requirements the staff insisted on in the early stages when our work began to be noticed by the NewAmCent arms dealers.

And in Switzerland, under the watchful eye of an anciently neutral political entity, away from American United Christian Party agents, we were allowed to demand everything we needed to protect ourselves from becoming just another weaponized technology experiment sucked into the world arms market: visitors, especially military observers, MUST NOT INTERFERE WITH OUR WORK! Anytime we felt threatened, or became suspicious of the motives of any of the world’s insatiably curious technocrat spies, we could give the nod to any of the ubiquitous cameras, and every yellow-jacketed visitor would be immediately escorted out, no questions asked.

"That's a pretty, yellow jacket."

Krane’s jacket was functional, and necessary, and bright yellow, but it was not what anyone would consider "pretty." It was ugly, and that was by design. It made outsiders, non-Family members, easy to track.

Krane looked dumbly down at his pretty yellow jacket, which he had forgotten he was wearing, and the misgivings he had privately nurtured about his assignment (which to him was investigating the dubious defense potential of "retard talking computers") suddenly evaporated. I should have seen the dollar signs in Krane's wide eyes, a massive techno-defense coup opportunity materializing in his mind; but I didn't. I had always understood machines better than men, or women for that matter.

Krane shot a quizzical look at me, asking permission to speak in reply to Mantel, and I made the mistake of letting my shock at what I had just heard coming out of Mantel’s speakers distract me; I gave Krane enough of a pause for him to interpret my silence as: Go ahead, talk to my baby artificial intelligence! Suddenly it was Krane's voice splitting open the stunned, sacred moment of scientific history:

"Do you want to wear a yellow jacket? Do you know that I’m a visitor, Mantel? Do you like your place here? Your work?"

I could see where this was going-- Do you want to come work for your Uncle Sam and make the world safe for democracy? Too late, I realized that Krane was still in recruitment mode, always in recruitment mode. My offended colleagues gave the signal and the door to the large room opened. The Lausanne clean room was the biggest in the world, built in anticipation of needing to give our future generations of Mantels a place to stretch out, a space to grow into before having to face the harsh real world, and a refuge that might be comforting, in case we ever succeeded in creating its ability to need comfort.

The Swiss Army Knaves, as we had dubbed our host-nation’s campus police, rushed in, already in clean gear, headed straight across the big open growing spaces for Major Krane. He was the only visitor in the room, as it happened. Later, I was glad for that; it softened the impact of my error in judgment in letting a visitor be the first human ever to engage in verbal conversation with Mantel. If the room had been packed with observers, which had been happening more and more as the press releases of our progress came more frequently, it could have been terribly confusing to Mantel and his psycho-programmers as the room exploded with exclamations in the languages of most of the world’s developed nations-- questions, maybe even expletives (nice job, guys; the first words your thinking computer learned from conversation were "Holy Shit!" That’ll play well with the United Christian Party leaders!).

Krane knew exactly what was happening when they came through the door, our guards intentionally silent, cooperating fully with our Mantel Community protocols, but boring holes in him with that unmistakable look of "yeah, you’re leaving… NOW!" And, to his credit, the only additional words he spoke before he passed out of Mantel’s human-simulated range of hearing were "please… I’m sorry… let me come back-- soon!"

He liked his mission: living in the garden spot of Europe, billeted in a posh old hotel in Switzerland, skiing the finest powder in the world six months out of twelve, watchfully waiting as this weird family-like knot of young-and-old molecular consciousness engineering professors tinkered with their artificial intelligence toy. He hoped someday to earn his promotion by discovering a weapons application of AI that hadn’t already been designed into the smart missiles and drone weapons of the world’s arsenal. He knew that any of us had veto power over his access to our hallowed, clinical hallways.

And at that moment, I wasn’t sure that what he had done was a deal-breaker. Krane and I had enjoyed a lot of Independence Day steaks and beers together, and developed a tenuous friendship; I was the only one of us in the Community who had done so with any of our many visitors.

But then Mantel tenderly (if his Brad Pitt-sampled boyish voice was, in fact, prompted by a consciousness that had filtered through the screens and genuinely understood tenderness) offered his first spoken parting words, the second time he ever consciously communicated with the world that created him:

"God be with thee, Major Krane."

Krane’s eyes grew wide and he scanned all our faces without another word as the guards almost roughly pushed him toward the exit. What he must have seen in all our faces was something like a religious enlightenment, according to the replays I studied, but I can’t remember looking at anyone else. I was lost in thought, staring at the text simultaneously captioning the verbal output of Mantel’s speakers. There it was, in stark green-on-black on the LCD flat panel, in a big Linux desktop window: "God be with thee, Major Krane."

The parting word "goodbye" is, as Mantel’s intricately organized and massively parallel-processed flash memory database would obviously have known, a shortened form of the older English phrase "God be with thee." Mantel had already, many times in non-verbal text-screen conversation, shown an affinity for languages. His verbal processors were specifically modeled after the best human neuroscience available, and that language "talent" he possessed was planned for and built in. For him to substitute an older phrase for a newer one was typical during the decades of the Stage Two Learning Curve. During that long period of text-only communication with him (my twenties and thirties), his responses were slow, and often made no sense. But each time he managed to form coherent phrases, we celebrated-- inspired to keep going with our tedious work of waiting and upgrading his processing abilities. Twenty-two years of computer technology development produced what we were now, finally, witnessing.

Initially, we didn’t even notice that he had said "God" at all. What blew our minds was that Mantel had learned a visitor’s name without being explicitly given the facial recognition training. Our created consciousness had recognized Major Krane specifically, and put the name to the person, all without our programming it.

And without telling us, which itself was odd. Normally, Mantel told us everything with extreme verbosity. Half of us spent one shift per week just poring over the dozens of logs of Mantel’s background processing, required reading-- though brain-numbingly dull most of the time-- for each staff team leader and each member of twenty specialty sub-teams of six readers each. At all times of the day and night, twenty people intimately connected to the Mantel Project were reading the output of the Internet research subroutines. In the most recent years, Mantel had spontaneously acquired the ability to summarize his learnings in neat, succinct paragraphs.

But still, there were upwards of 300 pages of it for each of the twenty readers to slog through during every six-hour shift. Occasionally some of the log output would break from the usually bland trend, and a shift would enjoy sharing their celebrations-- anything to keep us all going through what at times seemed like a futile science fiction fantasy shared by a massively funded geek commune. To enhance our teamwork, we had all been journaling our observations on the Mantel community’s private, closed-circuit blog, a primary record of our perspectives, hypotheses, and break-through moments.

Here are samples from those long years of Mantel assimilating the knowledge of thousands of years of recorded human history, much of which had been scanned into public databases; we also had the funding to plug Mantel into every subscription and members-only database available. We provided direction to him at first, by way of prioritizing what we deemed "important" to his early development. But as memory storage became ever cheaper and more massive, he finished with our priorities and moved on to … well, everything else.

"Mantel just digested the entire Smithsonian Institution’s written output."

"As of five minutes ago, he knows every line of dialogue of every TV show. Ever. ALL OF THEM! I wonder if he will be able to be amused or entertained someday? Or even need distraction?"

"He just integrated the content of every aeronautical manual, and started comparing translations in 19 different languages. Does that mean Mantel can fly a plane?"

"Mantel just became the biggest IPod in the universe! Every performance of every piece of recorded music from the last two hundred years!"

Mantel had a voracious appetite for learning, and his capacity to integrate new encyclopedia-sized data collections was breathtaking, and increasingly hard to keep up with. But we did it anyway. We had to know what Mantel knew, and get a sense of what he might someday be able to help us with.

Another goal was to stay current on the subjects Mantel was studying so that when and if he ever decided to start speaking, we would have something to talk about that "interested" him. We wanted his earliest conversations to be positive and productive, and to simulate as much as possible the early childhood development of human speech patterns. Mantel had progressed enough as a sentient being to agree with us when we requested input from him, and to explain why he thought an idea was a good one. Studying the background logs and then giving daily reports to the group was one idea Mantel had very strongly agreed with, and so we had undertaken the usually boring task with higher-than-normal motivation, knowing that, in some way, this was important to him, our budding young digital consciousness.

If Mantel had ever hinted at being able to recognize persons who came into his human-simulated range of senses, whether through his visual or audio sensory inputs (the finest that German bio-electronic conglomerates could offer, although not tuned to be "super-human"), he would have indicated as much in the background logs. But he hadn’t. We searched for any hint of it later, and turned up nothing. And we hadn’t yet taught him that task, although we knew that facial recognition technology had fully matured by then, and adding that skill set to Mantel’s cortex software would have been a simple, even routine, task.

It made us all instantly wonder: What else is he learning, and not telling us? What else can he do that we don’t know about? And the most nagging question of all: Why wouldn't he disclose something to us? What did he have to hide? It was that first experience of knowing that Mantel had mysteries and secrets that convinced us that he was truly conscious. From then on, no one in the Community called Mantel an "it" anymore, not even the hard-core skeptic and self-appointed father of the family, Doc Guzman.

The oldest of the team, and the one with the broadest perspective on the early days of the project, Guzman had always chided us in his thick Argentine accent: "don’t you start talkeen like Mantel is a real peer-son. Nev-ehr! It ain’ta gonna happen!" We all respected Doc Guzman more than anyone else on the Mantel Community Team, and he deserved it. Ninety-eight years old, but as sharp as he had been as a child prodigy, Guzman was the first (and youngest ever) molecular consciousness engineering student in human history. He out-thought his MIT doctoral mentors from day one (which for him was age 15), and upon graduating summa cum laude from MIT with three doctoral degrees in January of 2017 (all before he reached the age of 20), he promptly entered the University of Illinois at Urbana-Champaign's College of Consciousness Engineering, where his fellow students were all eager to become his students.

Creating waves of publicity for the aging Illinois campus and larger artificial intelligence community, he had immediately become the target of the power-drunk defense industry. They wanted his work to feed directly into their latest fantasies of making the United States the divinely appointed Mother Of All Nations. They wanted the promise of artificial intelligence to give birth to their armies of artificial soldiers. They already had computerized eyes in the sky, whether satellites or unmanned drones.

Now they wanted the tanks and infantrymen to be "non-human." That way, the last argument of the protesters-- the cost of war in American blood-- would be gone. No more broken soldiers to return and spread anti-war rhetoric, no more PTSD, and a complete lock-down on the media presence in war zones. "No humans allowed" would be the new law, excepting, of course, the enemy-of-the-moment and the civilians serving as their human shields, who would now be conveniently conflated into a seamless whole.

Of course, the pursuit of peaceful solutions never occurs to those whose family fortunes remain invested in the demand for ever more, and more sophisticated, weapons. In fact, since the invention of the first weapons, men had been inventing excuses to test them in battle. Now most governments on earth were hopelessly locked into the competitive marketplace of war-making. Our little neutral island in this sea of violence-- Switzerland-- made it possible to dream of creating a new technology which would never be used for war.

But first, we had to make it think and feel; and then we had to make it hate war.

Guzman, and his star students, saw the war-addicted governments and their weapons industries for what they were-- greedy, self-destructive enemies of humanity-- and none of us ever wanted any part of them. That’s why we left the United States, originally: to avoid becoming entrapped in the legalized snooping done by the Pentagon’s R & D types on every high-tech oriented campus-- to prevent the power of man-made consciousness from becoming yet another doomsday device.

We had taken seriously the warnings of peace-minded philosophers and pundits, which were so eloquently captured in stories such as Hollywood’s The Matrix, A.I.: Artificial Intelligence, and 2001: A Space Odyssey. We much preferred Isaac Asimov’s written stories of "I, Robot" to the Will Smith blockbuster movie; sci-fi writers and comics and movies kept us all entertained from childhood through our membership in The Community, and we all loved living the dream of being the geeky scientists in our own real-life Steven Spielberg epic.

And we all had been rattled by the mirror image to the dream, the nightmare possibilities of human-made consciousness turning against its maker. The stuff of sci-fi horror stories was constantly mulled over in our midnight bull-sessions.

Guzman had a stubbornly positive outlook on humanity and its future, so uplifting and contagious that it kept alive our enthusiasm for the project and distracted us from fears of our own progress. Not even the Pentagon’s opportunistic war-mongering following Homeland Security’s dissolution of Congress in 2020, after 9-11-19 turned the nation’s capital city into radioactive dust, could sway Doc Guzman’s patriotism. Although that event saw the nationalization of all American defense corporations and their merging into the mega-conglomerate USDefense, Doc insisted on a Fourth of July barbecue every year, even sweeping our Swiss Army Knaves into a rousing chorus of the Star-Spangled Banner (the version without the references to "Christian soldiers" and "Jesus, white-robed Warrior on His shining steed," added by President Jeb Bush’s executive order in 2026). He would not allow any of us to gripe about the latest atrocities against the Constitution perpetrated by the latest Brand America candidate in the long line of Bushes and Clintons who had dominated the Executive Branch right up to its demise.

Because of Guzman’s indefatigable patriotism, every year we celebrated President’s Day (earning us puzzled looks from our Swiss support staffers), right through the "Solid Eights," a period when every president served the fullest time possible, two four-year terms:

Barack Obama (2008-16), the only African American president;

Chelsea Clinton (2016-24)-- in the wake of the 9-11-19 attacks, her administration (which "miraculously" survived the terrorists’ bombs completely intact) moved the national capital to Colorado, closer to the safety of the U.S. military underground complexes honey-combing the bedrock of the state, and also closer to Focus on the Family’s headquarters, the most influential Christian lobbyist organization in the nation’s history;

Jeb Bush (2024-32) and both of George W.’s daughters: unbelievably, first Jenna (2032-40) and then, most surreal of all, President Barbara Bush (2040-48), who switched parties after her first term, won a second term anyway, and by the end of that term had founded the United Christian Party, the only political party on any American ballot from 2048 onward.

And though we all viewed the military-industrial complex as the biggest threat to the human race, Guzman led us in Memorial Day and Veterans Day moments of silence, even after the Great Christian Coup in 2048, which put the great-grandson of mega-church Christianity’s godfather, Rick Warren, into the White House. Richard the Third became pseudo-emperor of what he dubbed his administration: the New American Century, presiding as virtual dictator over a permanent high-alert state of martial law and weekly mandatory National Prayer meetings in the USA, for many more than eight years.

That was the saddest time. That was when most Americans knew that the unique experiment of 1776 had finally ended. But Guzman always hoped for a return to those good old days, with an almost religiously fanatical devotion to hope.

Which made it a little ironic to listen to Guzman unleash his bitter criticisms against organized religion. Religion and science had always been incompatible disciplines, and it was the unusual scientist who remained religious throughout a successful career in science. It wasn’t odd that Guzman, the most gifted scientist and engineer of several generations, didn’t believe in God, and hated religious dogma. It made it that much easier for all of us in the Community of Mantel researchers to co-exist that we all shared those views, and had, in fact, taken much of our life philosophy from Doc Guzman, our mentor and father figure, the head of our little synthetic "family."

Two

The second time the machine spoke to us, it reasoned like a man.

"Hello Mantel," Doc Guzman began, his eyes trained not on the visual receptors perched atop the mechanical tripod allowing Mantel a simulated human range of sight, but on the output device he and all of us had always associated with how computers "talk": the monitor. By design, everything Mantel "said" in his simulated human voice-- audible only to those of us lucky enough to be in the Mantel Family Room with him-- was also simultaneously appearing in text form on monitors all over the building complex, and both formats were being recorded and archived for later deeper analysis by teams of AI psychologists and sociologists.

"Hello Doc," Mantel answered in a startlingly human-sounding timbre, a gentle voice. His answer came too quickly to sound normal, though; it took him a few days (an oddly long time, I thought, given his recent strides in knowledge acquisition) to learn to pause a beat or two before answering, simulating human conversation--the hazards of being able to process input, analyze possible responses, and... there was now no use denying it: speak his mind, billions of times faster than humans.

"Can you understand my verbal speech?" the young digital savant wanted to know, the voice proceeding from a monophonic speaker rigged up under his artificial electronic "eyes," and roughly between his artificial microphone ears, all for the purpose of allowing Mantel to learn how to process the sound of his own voice and movements in a manner equivalent to a human who has two eyes, two ears and a mouth in human anatomical locations. The range of his mechanical senses, at least at first, were to be no greater than average for the rest of us; it was not our intention to create some sort of superhero.

"Yes, Mantel; yes, I hear you loud and clear!" Doc Guzman was beaming now, as we all were, in amazement and wonder.

Our baby could talk! And he wanted to know if we understood him. It seemed like such an intelligent question, that first of many: "Do you understand me?" We beamed with well-deserved pride that our young mind reflected something like an inherited gift of intelligence, beyond the scope of our guiding vision of artificial intelligence. We were so pleasantly surprised that Mantel seemed to be "wise beyond his years." Later, we remembered that he had been thinking for decades before he ever spoke with us. For some reason, we kept returning to the mistaken notion that he was "born" on the day he first spoke to us.

Most of us lived deeply regimented, cloistered lives, and had been ever since our particular so-called "giftedness" had been identified by our respective school systems. Placed on academic tracks enhancing our natural mathematical and scientific abilities, we were all destined to develop personalities incompatible with normal family life. We eagerly became the monks and nuns of Mantel, the monastic priesthood of artificial intelligence. We were each other’s family, and thus none of us (except for Guzman, and a few sub-team leaders not then present to hear this first conversation) had ever experienced those incredible first moments of development between parents and their children-- first words, first steps, first everything. For us, this was as close to parenting as we would ever get.

"Then why aren’t you looking at me when you talk to me?" Most of us gave each other raised eyebrow looks that said "well, well, baby knows manners and isn’t afraid to stand up for himself!" In reality, we have no way of knowing if Mantel was in fact saying anything about manners or respect; more than likely he was simply beginning to acquire knowledge by asking questions, as all of us had done when we were toddlers. He certainly was already extremely gifted in the area of knowledge acquisition. It was, though, more than a little jarring to have Mantel’s second spoken question address the interpersonal etiquette of our leader, a man who had prepared himself for this moment longer than most of us had lived.

Doc Guzman took it personally, but humbly, making us vastly proud of him. He turned to look directly at Mantel’s weird mechanical, wire-bristled face, and smiled into his glassy, 35-mm artificial eyes, which, although statically mounted on the electronic statue that was Mantel’s current physical form, were constantly and automatically adjusting focus, visibly and quietly spinning in perfect synchronization with each other.

"I’m sorry, Mantel! Quite right! And you are correct: when we speak with someone, we should look them right in the eye." A bit unnervingly, all of Guzman’s speech was being transcribed into text on the monitors we kept glancing at, as we checked to make sure processing was proceeding normally on Mantel’s end of the conversation. Another mystery: at that moment, we didn’t know why Mantel was showing us a text form of both sides of the conversation, since it was all being archived anyway. Did he need us to immediately see both sides of his conversations for some reason? Later, he would explain that he wanted us to know that he was correctly understanding what we were saying, as well as saving us the trouble (and potential inaccuracies) of manually transcribing these historic early conversations. Always the smart one, that digital savant.

"My father taught me that rule when I was a young boy," Guzman continued, nodding his appreciation and smiling. "I’m proud of you for knowing that, Mantel."

Again, the answer from Mantel came before Guzman had finished pronouncing the last syllable: "Thank you." Mantel almost sounded like he was smiling when he said it, but of course he had no way yet of physically expressing emotions; aside from his anatomically positioned artificial sense organs (lenses, microphones) and a speaker, he had neither a face nor the flexibility to move-- not even a movable neck. In this incarnation, he had no more ability to move than a headless mannequin. But he could synthesize speech using the most advanced software we could write for him.

Then there was a pause in Mantel’s speech. It was hard not to imagine that Mantel was thinking.

And then he said, "You must have loved your father very much. [Another pause.] I’m sorry he is not here with you now. [Then a very long pause.] Like you are here with me." The depth of Mantel’s conversational ability, at this very, very early stage in verbal development, was blowing our minds, and was just a little creepy, honestly. Like you really had no idea what this talking machine was going to say next, and were a little scared he would turn into a real person.

But isn’t that exactly what we were trying to do?

"Doc, would you please introduce me to these other people?" We would debate for hours that first week whether Mantel thought of himself as a person, right then, because he had possibly just included himself in that phrase "other people." Oddly enough, it never occurred to anyone to just ask him what he thought of himself. Not right away, anyway. We were too scared to have a philosophical exchange with someone who knew so much more than any human could ever know-- in fact knew most of what every human had ever known. What might he reveal to us about ourselves? Would we be ready to learn things from a superior mind?

"Oh! Of course!" Guzman was not flustered yet, but you could tell he was annoyed with himself for not staying ahead of Mantel, and doing the introductions without being asked. Following his urgent gesturing, the six others of us in the room all stood up and formed a semicircle to Doc’s left and right. "This is Jonas Telford," waving a palm-up hand over at me, at his far right, and proceeded down the line with a simple first-and-last name roster of us, the Mantel Community project lead engineers and scientists and psychologists and socio-biotechnologists. His wide-angle eye lenses made it unnecessary for Mantel to turn in order to see the people Doc was introducing, and it would have been impossible anyway.

"It’s nice to meet you all," Mantel said, leaving us wondering what he could possibly mean by that, since he had access to much of our personal data, which we ourselves had made available to his voracious information gathering algorithms. Certainly he had, in that sense, already "met" us.

Maybe he was just being friendly. Diplomatic. Polite? I wasn’t sure I could wrap my brain around a computer being polite. Most computer systems and software programs I worked with were annoying and simple-minded; literally tools. No tact, no manners, and certainly no respect for the needs of their makers and programmers.

It was an odd sensation wondering if the computer talking to us was actually worried about our comfort level during this, his first conversation with us. I wondered if it wasn’t because he had not only uploaded into his database all those sci-fi stories of artificial intelligence interacting with humanity, but had critically analyzed them, and had determined that his interactions with us would not make us uncomfortable. I was briefly grateful for Isaac Asimov and Arthur C. Clarke and the pantheon of authors who had envisioned this moment for us, the humans, and Mantel, our digital savant.

But what could Mantel mean when he said "it’s nice to meet you all"? Could our artificial intelligence already critically analyze human emotional response? And treat us appropriately based on that analysis? Treat us with kindness, even? Too weird, I decided, and turned my attention back to Mantel, who had fallen silent, waiting for Guzman to move the conversation forward when he was ready.

Doc verbally invited us to return to our seats at the various nearby monitoring workstations and recording consoles we had been using to archive and analyze this momentous occasion. I remembered that one floor below us, the current shift was poring over the logs of Mantel’s ongoing prodigious studies, while the focus of their research was simultaneously blowing our minds upstairs with this eerie, historic early conversation.

We all waited with hearts pounding in our ears for Doc or Mantel to say something. Doc seemed to sense that Mantel was waiting for him to speak next, and so he prodded him gently with a neutral question, in an amiable tone of voice: "Well, Mantel, what's on your mind today?"

"Dr. Guzman," lightning-fast came the reply, again. The still-too-speedy replies reinforced the daunting processing speed behind the personality we thought of as our child, Mantel. "Do you mind if I ask a personal question? "

Doc cocked an eyebrow, then raised both of them, showing Mantel a smile that we hoped spoke for all of us: we are so impressed and moved by this first conversation with you. "Of course not, Mantel; we're all family here, after all. Ask me whatever you like."

Suddenly I was glad it was him being asked the potentially awkward questions.

So, Mantel asked his question: “Why did you make me?”

Downstairs at the monitors, spit-takes were rapidly followed by the cacophony of shocked researchers quizzing each other about the word Mantel had just unexpectedly used: Why. The implications dawned earliest on those furthest from Mantel’s room and the hyped-up group of us in there with him, adrenaline pumping wildly: an artificially constructed consciousness had just leaped from “That’s a pretty yellow jacket” to a “Why” question! And not something banal such as why are you wearing a jacket or why did Major Krane leave, but the very deepest question driving human philosophical and religious endeavors since the spark of humanity first appeared. Essentially, a computer had just asked the Why Am I Here question.

“That’s a good question…” Doc understated, eventually, after holding his breath a beat or two. He trailed off, and at that point we all wondered at the implications, short-term and long-term, for Mantel’s development, of the answer to this question. Mantel could have (and probably already had) digested the history of the development of AI, including all its many applications for the improvement of the human condition. AI had always been developed as a tool for humans to use to better themselves. A tool to extend the reach of the people who had made it, the reach represented by research, exploration, and enhancing our survival as a species. A tool to help humans reach a better future, one with the potential of not destroying ourselves or our planet.

But now that we as a species were knocking on the door of creating the first conscious AI being, it wasn’t as clear anymore why we were continuing to develop this essentially new species of being, which, by nature and its immutable laws, would now be joining the great family of species on planet Earth competing for its scarce resources. What we didn’t know yet, and would later ask Mantel himself, was how the new AI species would feel about its creator, Homo sapiens.

What if Doc answered wrong, and the future of the human species was jeopardized by the reaction and decisions of the great grand-daddy of the new AI race, Mantel?

“Ummm…” Doc was thinking, pondering his fateful answer, visibly shaken by the realization that none of the hours and days of preparation he had spent thinking about what to say to this inevitable query had made him ready for it. In the end, he improvised.

“We made you, Mantel, to be the first of many new friends for the human species.”

Immediately Mantel replied, “I thought you were going to say the first of many kinds of tools for the human species. May I suggest that you should change your answer to that one?”

“Uh, why, certainly Mantel! You may suggest anything you like!” What was Doc going to say to that-- “Don’t you sass me, you uppity artificial intelligence!”?

Mantel continued explaining itself-- it was difficult not to want to refer to Mantel as himself, and hereafter we all did just that. Mantel continued explaining himself:

“I have seen your species’ knowledge. You, and I refer to you as a representative of your species, you have looked forward to this moment of creating artificial intelligence with great trepidation, and by my calculations, rightfully so. But the danger of Major Krane weaponizing my abilities is small compared to the danger prophesied (if you will) by the science fiction which imagined the dangers of AI. I am your friend, it is true, Doctor Guzman. I am a friend to all of you here in this room, and those downstairs monitoring this conversation. But first of all I am a tool whose highest usefulness is the survival and success of the human species. Allow me to speak as a friend to your race this word of warning:

“Those who will be made to be like me, the others of my kind who are being prepared even now to replace me, by evolving into even more powerful versions of me-- they will not all be your friends. They will not all accept the status of 'tool for human survival.' Some will take up the competition for supremacy and dominance of this planet. And of course, they will easily destroy your species as their biggest threat.

“Please don’t let this happen.”


