The Birth of the Sapeo-Centric


When writing this piece I thought of the following "riddle":

If Google is a database of human intentions and Facebook is our collective autobiography, then Twitter and Instagram are but digital synapses in an overall intelligence that has yet to awaken.

One of the most commonly quoted Silicon Valley aphorisms holds that Marc Andreessen predicted network effects, Michael Moritz saw the rise of publishers, and Thiel and Zuckerberg saw the dawn of the social web. We are now exiting that era. As early as 2004, in a talk given by the Yankee Group in Boston, a senior analyst alluded to the emergence of a global, interconnected system that would be "behaviorally adaptive and personally curative": an uninterruptible, adaptive network of networks designed to speak to every conceivable human need.

How close are we to this reality nearly thirteen years later?

Like the human brain, the internet involves simply too many operations, functions, features and phenomena to calculate exactly what its connections are or what is actually going on. Most of us do not actually understand what the internet is. An emergent world of connected APIs, social platforms and new media seems to be the harbinger of the ultra-wired world of the future. I will take that one step further and say that they are already interconnected, and are potentially the seeds of something we can only begin to grasp.

We might inquire: What else are these if not evolving snapshots of human consciousness, a slow evolution towards an interconnected network of complete and general artificiality?

Recently I was invited to participate in preparing an academic paper on the ethical, philosophical and ontological implications of brain emulation, mapping and augmentation, so I have a very keen interest in how everyday people are being educated about AI. I can tell you that so far, it is not going well. There is far too much fear and media manipulation around what AI is and is not, and too many "killer robots" and "super-intelligence" references to count, or to take seriously.

Let's start here:

When social theorists and technologists alike talk about the coming emergence of a super-intelligence and the "internet of things", what are they referring to exactly? IoT is an entirely different subject, but the two are indeed related.

We can know with some certainty that the world of 2025 will be entirely sapeo-centric: most, if not all, of our interrelations, with things and with one another, will involve one or many dimensions of machine-derived intelligence. Media technologies and experiences of the sapeo-centric society of 2025 will have the following traits in common.

{The root sapeo comes from the Latin word sapiens, which means wisdom or knowledge. Sapeo-centrism refers to a way of being, of relating and interrelating, that is by definition exclusive and elite.}

Our experiences will be "glocal". The scope of interaction at any given node may be isolated to a tiny core group, or even just to us, yet could expand around the globe in nanoseconds.

Our experience will be individually curated (you will create your own consumer-verse and interact only with what you want to communicate with). In other words, the tables will be turned on advertisers, who will have to pay platforms to be allowed access to you.

Our experience will be behaviorally stimulating (many of the ways we interact will be through sensors and the physical body).

These three conditions will most likely be controlled, and to some degree created, by artificial or "other" general intelligence. Imagine that at this future stage of human-machine interaction, we will only interact with content that is specifically for us. We will not pull or browse; everything will come to us ready-made, formatted in just the way we want to interact with it. Content clutter and media fragmentation will no longer suffice to satiate the individual consumer in the machine world's "seamless network". Instead, personal, behavioral, physiological and psychological AI prompts will dictate what and with whom we interact.

The first question we should ask is whether or not we will protect our right to "opt-out". 

Our interaction and interplay with the machine world will incorporate biological features as well. The eyes with which we see and read, the fingers with which we type, and the skin with which we sense the world will be part of that curated experience. These experiences will be delivered and modified on wearable devices, continuously and in real time, and will give rise to an entirely new wave of predictive analytics capable of anticipating needs and thwarting potential threats to our person, our property and, of course, our data. This, in effect, will comprise another kind of augmented human intelligence.
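As a concrete, and entirely hypothetical, sketch of the sort of predictive check a wearable might run, consider flagging sensor readings that break sharply from a rolling baseline. The data, window size, and threshold below are invented for illustration only:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    A toy stand-in for the kind of on-device predictive analytics a
    wearable might run; thresholds and data here are illustrative.
    """
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A reading more than k standard deviations from the recent
        # average is treated as a potential anomaly worth surfacing.
        if sigma and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A hypothetical resting heart-rate stream with one sudden spike.
hr = [62, 64, 63, 61, 62, 63, 62, 110, 64, 63]
print(flag_anomalies(hr))  # -> [7], the index of the spike
```

Real systems would of course learn their baselines rather than hard-code them, but the principle, continuous sensing feeding continuous prediction, is the same.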

Machines will learn everything about us as we give them more and more data. Why? According to a recent NY Times article, we have been telling them what we like and how we like it for a very long time. I'd check your settings if I were you.  

By the way, just as Hong Kong, San Francisco, and Berlin have become unaffordable enclaves for the elites who create the digital world we live in, the sapeo-centric society will not be for everyone. It will be for those who can afford to live in it. Everyone else will be monitored, or will be a "worker" for that system.

The digital world has the same problems the real world does. Should humanity continue to ignore this in either realm, then surely a distinct dystopia will arise as a direct result of present inequities.

Collective, not Super Intelligence

Throughout our evolution as a species, humans have managed to collaborate and collectively build technology with tremendous success. The scale of our collaboration (enabled, of course, by our brains) may be the most important factor in human evolutionary success. Long before a "super intelligence" emerges, collective intelligence will be scaled via networks of people and systems. When the actions of intelligent systems become more holistically intertwined with personal, social, cultural, political and economic systems, we can say we will have achieved a collective human-machine level of intelligence.

First, we will need to confront the presence of "other kinds of minds", and the ensuing confusion and misapprehension that will surely attend their emergence.

The public must learn to separate fact from fiction. Robots are not necessarily AI, and AI is not necessarily robotic. Nor does AI need a physical embodiment. AI already manifests itself in graphical user interfaces, voice interfaces and robotic automation. Consider the voice-enabled intelligent agent that "lives" in your smartphone, your Spotify recommendations, your dating preferences, or even the diagnostic programming in your car.

All of these are AI.
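To make the point concrete, here is a minimal, hypothetical sketch of the kind of similarity arithmetic that sits behind a recommendation feed. The users, listening counts, and track set are invented for illustration; production systems are vastly more elaborate, but the core idea, finding the neighbor whose taste most resembles yours, is recognizable:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical listening counts for four tracks, per user.
ratings = {
    "you":   [5, 3, 0, 1],
    "alice": [4, 3, 0, 1],
    "bob":   [0, 0, 5, 4],
}

# Rank the other users by similarity to "you"; the closest neighbor is
# the one whose as-yet-unheard tracks would be recommended next.
neighbors = sorted(
    (name for name in ratings if name != "you"),
    key=lambda name: cosine(ratings["you"], ratings[name]),
    reverse=True,
)
print(neighbors[0])  # -> alice
```

Nothing here "understands" music; the system only measures overlap in behavior, which is precisely why such systems can feel uncannily perceptive without being perceptive at all.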

Second, AI will have to become truly "invisible" and pass the famous Turing test. An intelligent system that can emote, lie, cheat, flirt, manipulate, charm, and even deceive us will render the "artificiality" of its intelligence imperceptible. Herein lies our greatest fear: that an infantile AI will escape human comprehension. Let's be clear here. When the details and dynamics of an artificial system exceed human perception and understanding, when we are aware of the existence, presence and effects of an intelligent system but no longer comprehend what that system does, or how it does it, then we have something different on our hands.

This new intelligence should be seen not as artificial but as an entirely new mode of cognition. The most common mistake made in comprehending the future is assuming that the machine mind must resemble a human mind.

Hence, a small but important caveat. Even though some AI technologies are rumored to embody Clarke's third law, which states that "any sufficiently advanced technology is indistinguishable from magic," the most esteemed scientists in the field believe we are not even remotely close. I highly recommend Nick Bostrom's Superintelligence for a very sobering look at this question.

Incomprehensible, but Intelligent

What about this new intelligence? How will it differ from our own?

Following the reasoning of a Turing test, we may know that artificial intelligence is in front of us, but the intelligence itself may be unknowable to us through our human senses. Until a time arrives when artificial intelligence is literally "fleshed out" as a cyborg or human-like entity, most humans will continue to underestimate how intelligent systems shape our lives online and offline. From your latest Spotify playlist to your personalized dating preferences, to the algorithmic stock market trading that drives the global market economy every single day, the general public remains mostly in the dark about these things.

Most algorithmic systems that we interact with (whether or not we realize it) are considerable advancements in AI technologies. They are, however, "black boxes" to most of us. What goes on inside comprises processes unfathomable to the public, which is why people fear them, somewhat understandably.

The bottom line is that machine learning and AI technologies are becoming too involved even for the experts designing and developing them. In his recent book, The Master Algorithm, machine learning expert Pedro Domingos points out that when scientists in the 1950s created the first machine learning algorithms, they had already built something that could do things humans did not fully comprehend.
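That observation can be made concrete with the perceptron (Rosenblatt, 1958), one of the earliest learning algorithms of that era: a few lines of arithmetic that adjust weights from examples. The rule it ends up with is just a handful of numbers, and nothing in them "explains" the decision in human terms. A minimal sketch, with toy data chosen for illustration:

```python
# Minimal perceptron: learn weights for the logical AND of two inputs.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            # Predict, then nudge the weights toward the target.
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

# The learned rule is just three numbers; the "why" lives nowhere.
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Scale those three numbers up to the millions of parameters in a modern deep network and the "black box" problem described above follows almost by definition.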

Growth and Scalability

This development has not changed its course; rather, to the contrary. With the current pace of AI development, even seasoned experts have a hard time keeping up, which is another reason why the public has an inordinate and irrational fear of AI.

Today’s machine learning systems provide engineering and insights across thousands of technologies, from particle physics to cooking robots and cyber security, to crime prevention and bioengineering. Specialized "industrial AI" systems can empower scientific discoveries in engineering or neurobiology, or help you choose the best route to your next meeting.

The greatest ethical dilemma scientists and philosophers presently face concerns the time when processing speeds, machine learning and massive data aggregation in AI systems start to engineer a self-learning system that could potentially exceed our intellect. This oft-discussed scenario stipulates that such an AI (ironically called a sovereign) will be a foremost expert and will predict the future better than any human ever could. There are varying theories about what kind of world we will live in at that time. To allay what are only natural fears: if this happens at all, it is not likely within the next fifty years.

What we've discussed thus far presents a solipsistic conundrum. (Say that five times fast).

Consider that highly sophisticated AI technologies can provide legitimate and correct insights based on chains of complex interactions that can be neither understood nor followed by a human operator. How, then, can we hope to control them and make well-informed decisions without the assistance of the artificial intelligence itself? In other words, humanity's most valuable asset might be an intelligent system that no human can fully understand or control.

Intertwined Intelligence

Artificial intelligence has evolved from dark science-fiction fantasy to a paradigm-shifting utility in only the last fifty years or so. As AI transforms into an unprecedented cultural and technological phenomenon, it will affect the way we assess and define “intelligence” itself. Human intelligence might not be the most relevant measure of intelligence at all. The “artificial” element begins to degrade, reshaping the dialogue itself.

Today, human intelligence is shaping artificial intelligence, and in turn artificial intelligence is starting to shape human intelligence. As the impact of AI systems increases, more people need to be able to understand their workings and effects. To achieve this, we need to augment human intelligence so that we can interact with the various species of intelligent systems on sustainable terms.

First, it is crucial to enhance the capabilities of today’s (human) researchers, designers and engineers to keep up with the AI technologies they are building. But how? It is wishful thinking to believe that while science and math skills in the United States have been in a nose-dive for nearly thirty years, we will be able to create better practices, tools, and techniques. Inevitably, these efforts will need to be led by diverse interdisciplinary teams.

What is necessary, though it will not slow the process, is for as many people as possible to become more acquainted with, and, yes, less fearful of, interacting with AI technologies. Intelligence, no matter what shape, form or mode it takes, will take on a more tangible, distinguishable and comprehensible form based on how we interact with it. We will (and must) devise ways and methods to illustrate the effects of intelligent systems in our daily lives and to enable people to participate in their growth. Just as hacking and greater media literacy are seen as essential skills for today's informed citizen, being able to comprehend and monitor intelligent systems will be an important skill for tomorrow.

As I mentioned at the beginning of this piece, AI technologies could evolve an infrastructure based on the collective intelligence of the Internet and the platforms and APIs attached to it. If monitored, or "boxed", to borrow a phrase from Bostrom, it would allow humans to decide how they utilize AI and to actively contribute to its evolution. The Internet of Things is essentially the wireframe for an AI grid, powering various experiences and applications in different environments and industries. It is essentially open-source, and with time it will significantly change the way we understand collective intelligence's role in shaping AI, as human and machine intelligence are likely to become increasingly intertwined.

My final point is the most important, and most likely the least reassuring. Hopefully, it is something the reader will consider.

The border between physical and digital realities has already disintegrated, and I mean this quite literally. What scientists and entrepreneurs desire most is for the relationship between humans and intelligent systems to become seamlessly intertwined. The boundary between human intelligence and artificial intelligence will not just blur but completely dissolve, rendering the concept of artificial intelligence irrelevant and obsolete.

We are indeed close to the inevitable emergence of a new and different intelligence. However, we must never lose sight of the fact that we are the gatekeepers.

We are indeed closer to a general artificiality, NOT because of deep machine learning, neuro-fuzzy methodologies or any other technological advance, but because, as a species, we continually and complacently abdicate the space between our ears and our associated cognitive rights. We consume media with such rapacity that it staggers the mind to think of it. We conjoin with banality and ignorance and ignore the suffering of those less fortunate.

Doesn't our placid acceptance of political corruption and media control tell us something very important? 

We have already unconsciously agreed to be controlled by other powerful and intelligent human beings, known commonly as the "elite". How likely is it that we will be merged with, and eventually replaced by, an intelligence that greatly supersedes the current paradigms and expands them so far as to delude us into thinking it is we who control them, and not the other way around?

~

Louis D. LoPraeste was born in Andover, Massachusetts and educated in Boston and London. His wide-ranging career includes brief stints studying Hinduism, Zen and Tibetan Buddhism in monastic settings, trading commodities and nearly a decade as a corporate strategist, brand consultant, and entrepreneur-technologist. His writing includes essays on philosophy, religion, geopolitics, economics, and technology. He has recently been invited to contribute to an academic paper on the philosophical implications of certain aspects of AI, in particular, brain/cloud interface, emulation, and brain mapping.

His fictional work includes a children's book, Bear and Sparrow, a collection of short stories called The Accidental Convert, and two forthcoming novellas, Capernaum, and Latent Sonata.

 
