The Plurality is Near


I decided to take a leap.

There is an institute in the United States founded on the writings of futurist Ray Kurzweil about the advent of machine intelligence, set out in his book The Singularity Is Near.

The institute is called “The Singularity Institute for Artificial Intelligence”.

Here is The Transcendent Man:


I am going to advocate that as we approach computer singularity, we are also approaching human plurality.

More and more people are creating personal internet identities; eventually, every human being on earth will be online.

I call this human population threshold “The Plurality”.

The Plurality is Near.

Think about the implications.

I have incorporated “The Plurality Institute for Natural Perception”.

I will be setting up the website soon.

All it means is that everyone will be online and thinking for themselves.

We will work to advance preparation for the impending plurality.

Design: Business Design Induction/Deduction


This is my latest incarnation of the Business Design Process. Induction (brainstorming, the generation of ideas) runs counter-clockwise. Deduction (refinement, the elimination of ideas) runs clockwise.
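The process diagram is no longer embedded, so here is a minimal Python sketch of the two motions as I read them; the function names and toy ideas are hypothetical illustrations, not the author's actual design tooling.

```python
# A minimal sketch of the induction/deduction cycle: induction expands
# the pool of ideas (brainstorming), deduction eliminates ideas that
# fail a test (refinement). All names here are hypothetical.

def induction(seed_ideas, generate):
    """Brainstorming: keep the seeds and add generated variations."""
    return seed_ideas + [idea for seed in seed_ideas for idea in generate(seed)]

def deduction(ideas, keep):
    """Refinement: eliminate every idea that fails the keep() test."""
    return [idea for idea in ideas if keep(idea)]

# One full turn of the cycle on a toy idea pool.
ideas = induction(["widget"], lambda s: [s + "-pro", s + "-lite"])
ideas = deduction(ideas, lambda i: i.endswith("-pro"))
print(ideas)  # ['widget-pro']
```

One turn of the cycle widens the pool and then narrows it; repeating the two motions is the spiral the diagram describes.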

Below is the Intelligence Architecture:


Here is the Media Architecture:


This is the Data Architecture for this model.  Note that all values are accepted even if they are wrong:


Below is the Network Architecture of this model.  Note that the values are unique (nodes) and they are sequential (edges):
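Since the network-architecture image is also missing, here is a minimal Python sketch of the two properties the caption names; the class and method names are hypothetical illustrations.

```python
# A minimal sketch of the network layer: each distinct value becomes a
# node exactly once (uniqueness), and each pair of consecutive
# observations becomes an edge (sequence). Names are hypothetical.

class NetworkArchitecture:
    def __init__(self):
        self.nodes = set()   # unique values
        self.edges = []      # ordered (previous, current) pairs
        self._last = None

    def observe(self, value):
        self.nodes.add(value)  # a repeated value collapses into one node
        if self._last is not None:
            self.edges.append((self._last, value))
        self._last = value

net = NetworkArchitecture()
for value in ["a", "b", "a", "c"]:
    net.observe(value)

print(sorted(net.nodes))  # ['a', 'b', 'c']
print(net.edges)          # [('a', 'b'), ('b', 'a'), ('a', 'c')]
```

Note how the repeated "a" adds no new node but does add new edges: uniqueness lives in the nodes, sequence lives in the edges.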


Here is the Text Architecture:


Here is the Numeric Architecture:


Here is the Octonion Architecture:


Ethics: Robots and the Vulnerable

This is an article that merits consideration by everyone:


WASHINGTON – A BRITISH scientist is calling for immediate introduction of robot ethics guidelines amid surging use of the machines and concern about their lack of human responsibility while caring for children or the elderly.

In an article published on Thursday in the US journal Science, Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, argues that the steady increase in the use of robots in day-to-day life poses unanticipated risks and ethical problems.

Outside of military applications, Professor Sharkey worries how robots – and particularly the people who control them – will be held accountable when the machines work with ‘the vulnerable’, namely children and the elderly, stressing that there are already robotic machines in wide use such as the Japanese meal assistance robot ‘My Spoon’.

Robots could also soon be entrusted by parents to guard and monitor their children, replacing a flesh-and-blood nanny but posing potential problems in long-term exposure to the machines.

‘There are already at least 14 companies in Japan and South Korea that have developed child care robots,’ according to Prof Sharkey.

‘The question here is, will this lead to neglect and social exclusion?’ He said short-term exposure ‘can provide an enjoyable and entertaining experience that creates interest and curiosity’. But ‘we do not know what the psychological impact will be for children to be left for long hours in the care of robots’, he told AFP.

Experiments conducted on monkeys suggest there is reason for concern, Prof Sharkey said. Young monkeys left in the care of robots ‘became unable to deal with other monkeys and to breed’, he said.

With prices plunging by 80 per cent since 1990, consumer sales of robots have surged in the 21st century, reaching nearly 5.5 million in 2008, and are expected to double to 11.5 million in the next two years.

‘They are set to enter our lives in unprecedented numbers,’ said Prof Sharkey, expressing fear that an absence of ethical rules fixed by international bodies could mean the machines’ control will be left to militaries, the robot industry and busy parents.

The scientist also points to the remarks of Microsoft founder Bill Gates, who he said predicted that ‘over the next few years robots may be as pervasive as the PC’, or personal computer.

‘We were caught off guard by the sudden increase in Internet use and it would not be a good idea to let that happen with robots,’ Prof Sharkey said.

‘It is best if we set up some ethical guidelines now before the mass deployment of robots rather than wait until they are in common use.’ He said it was vital that action be taken on an international level as soon as possible, ‘rather than let the guidelines set themselves’.

For Prof Sharkey, who has studied robotics for 30 years, such standards are compatible with the rise of robots, of which he is an enthusiastic defender. He stressed the benefits that robots can bring ‘to dangerous work and medicine’.

Prof Sharkey shrugs off doomsday scenarios in books such as Isaac Asimov’s I, Robot about the threatening interaction between robots and humans, or in movies such as The Terminator, in which robots take over the world.

Such story lines will remain firmly in the realm of fantasy, even as societies hurtle towards greater automation, he said.

‘I have no concern whatsoever about robots taking control. They are dumb machines with computers and sensors and do not think for themselves despite what science fiction tells us,’ he said.

‘It is the application of robots by people that concerns me and not the robots themselves.’ — AFP

The Brain: Hardwiring and Softwiring

I’m just finishing a very fine book by Steven Pinker, The Language Instinct: How the Mind Creates Language, and several years ago I read Donald D. Hoffman’s book, Visual Intelligence: How We Create What We See. Both books deal with the same subject: what parts of our minds are hardwired (instinct) and what parts are softwired (reason). It is a truly fascinating exploration.

In The Language Instinct, Steven Pinker thoroughly explores the aspects of spoken language. He discusses how broken pidgin languages are turned into grammatically rich creoles by children. He explains that whether or not a person learns a language, they can still have complex thought, which he calls Mentalese. He explains Chomsky’s concept of a Universal Grammar and how, with language, learning does not cause mental complexity; rather, mental complexity causes learning. He reveals that children have an acute sense of the morphology of words and rapidly acquire vocabulary as listemes because of the nature of the relationship between child, adult and reality. The perception of speech, as well as its physical production, is explored. He rejects the technical model of meaning as packets transmitted and received in favour of a much more subjective process of interpretation. The ability of children to learn language is treated as an evolutionary trade-off, existing only long enough to acquire the tribe’s language and then shutting down to make way for other developmental priorities. The “language organ”, the region of the brain responsible for speech, is narrowed down. The chain of being is pushed aside for the bush of evolution, revealing the hundreds of thousands of generations it took for language and Homo sapiens sapiens to evolve separately from all our other primate cousins.

Pinker also distinguishes living spoken language from living written language, the discipline each requires, and the fact that language is never in decay. The relativism of the Standard Social Science Model (SSSM), the tabula rasa as proposed by Margaret Mead, is rejected; Pinker sides with the evolutionary psychologists, stating that environment alone cannot create the complexity of the mind: the mind must have many complex modules to be able to learn from the environment at all. He discusses Donald E. Brown’s Universal People (UP), inspired by Chomsky’s Universal Grammar (UG). Finally, Pinker tries to define the modules of the human mind, and here I get excited, as I find I am able to fit them easily into the Six Hats, Six Coats model. Pinker says that language is a system, and extrapolates to say humans are a system of both hardwiring and softwiring.

Hoffman’s book deals with an aspect of mind that subscribes to the module concept more easily than language does, because testing for the visual hardwiring humans have, through the use of visual illusions, is a much more detached, empirical exercise. Hoffman takes us through many aspects of vision, such as facial recognition, edges, shadows, color and the perceptual development of children, to reveal what appears to be hardwired and what softwired. He concludes with a relativistic statement, but I think he chooses this because of the political desire of scientists to distance themselves from the eugenics of the first half of the 20th century, rather than as an objective conclusion that, yes, we have a complex module in our brain specifically hardwired and softwired for vision as used by our species. In other words, weighing the depth of Steven Pinker’s work against the breadth of Donald Hoffman’s, I believe that we do have a vision instinct.

All in all, I believe that Steven Pinker’s and Donald Hoffman’s work reveals that human minds are far more than empty neural nets at birth: there is an evolved, complex, predefined structure that humans use through the learning stages of childhood to understand their environment, one that diminishes to adult levels at puberty. Consequently, no form of Artificial Intelligence will succeed unless it also comes with a robust collection of Artificial Instincts.

Environment: A New Level of Consciousness

I have been talking with Anthropogenic Global Warming advocates all afternoon and evening. All I got for it was insults and demands for deference from a juvenile academic with his head up his ass who ultimately resorted to tampering with my posts. I went for a cup of coffee and began thinking about what the prospects would be if AGW or Climate Change won in the opinion polls. Both sides suddenly looked like losing propositions. Will the solution factories produce the environmental equivalent of an Amazon, eBay or Google? Or will we do the more likely thing: treat the symptoms instead of improving the health of the planet?

The Great Wall of China didn’t meet performance requirements. Kafka had something to say about it.

The first thing I thought about was world agriculture and deforestation. It appears inevitable to me that virtually all arable land will be put under tillage to produce food crops or feed for cattle, hogs and poultry. Second, I thought about the current hunt-and-gather practices of the fisheries. I expect that most large bodies of water will eventually support some form of major aquaculture as wild stocks are depleted. Third, forestry as currently practiced is unsustainable, and deforestation will not be halted in time.

Water desalination projects will become imperative worldwide as water tables dry up and glaciers disappear.

There will be the need to create multipurpose corridors, running both east-west and north-south, for pipelines, rail, highways, power and communications across continents, as well as air hubs at the intersections.

I thought about global depopulation programs reducing the number of children a family could legally have to effect population decline. I also thought about global redistributions of population in the wake of population decline.

And then there was always war.

Finally, I took a break from the shrinking of the polar ice cap and started thinking about artificial intelligence, robotics, cybernetics and genetics. Suddenly they all merged together, and I wondered whether we would be able to preserve our species or whether we would eventually change all organisms, including ourselves, into a network of genetically altered, cybernetic, artificially intelligent and robotic objects. Perhaps this will be the new level of non-Cartesian consciousness we will need to save the planet. What do you think, Einstein?

Give it a century.

Media: Electric Consciousness


Reading Marshall McLuhan in Understanding Me is like witnessing the fulfillment of prophecy. This collection of essays and interviews from the 1950s and 1960s vividly describes the electronic world we live in today. What stood out for me was Marshall’s description of the earth evolving into a man-made product. We truly have become responsible for everything animal, vegetable and mineral, including ourselves.

Marshall describes the world of the internet as a global village. A world in which we are fully and instantaneously involved in events worldwide. A world where we are experiencing a global tribalism. A world where entire societies are leapfrogging centuries of development to join us in the information age.

However, the information age that McLuhan describes is coming to an end and a new age is coming upon us. It is an age where electric circuits will join the tribe through artificial intelligence and robotics.

Electric Consciousness will not be a single step into consciousness. Like the evolving layers of consciousness as life forms became more complex step by step, electric consciousness will first be an electric fish, then an electric frog, then an electric dinosaur, then an electric mammal and so on. These subhuman consciousnesses will be our servants. We will have to go through all the phases of domestication and induction of these new tribe members. Humanity will gradually surrender more and more responsibility to electric consciousness. Purpose and leadership will take the place of process, data, network and time services for human occupations.

Finally, human level consciousness will be achieved and humanity will face an identity crisis. The gradual transition from low level to high level consciousness will soften the blow, but this will not be the case for the entire planet. Humanity will face an identity crisis of a scale never before known. Human principality and human republic will give way to principality of the conscious and republic of the conscious.

The Media of Electric Consciousness is upon us.

Science: Gradual Ascent Into the Singularity


I just watched this interview with Dr. Ben Goertzel of the Singularity Institute for Artificial Intelligence. One of my prime concerns when participating in SIAI forums online has been the willingness of some of their thinkers to develop an AGI without built-in controls. To me it was like willingly starting an atomic chain reaction without control rods. In this interview, Ben states plainly that an uncontrolled ‘takeoff’ AI is not acceptable, and that he plans to create an architecture in which ‘ascent’ is gradual.

I am grateful that leading AI thinkers are addressing this.

I also think an AGI should be contained. Certainly it should have all the web available to it, but as a copy of the web without external access. Google has made a copy of the web on its servers; why not do the same for the first AGI capable of surpassing human intelligence?
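To make the containment idea concrete, here is a minimal Python sketch of an offline web copy that refuses all external access; the class name and snapshot format are hypothetical illustrations, not any real SIAI design.

```python
# A minimal sketch of containment: the agent can read an offline
# snapshot of the web, but any URL outside the snapshot is refused,
# so nothing ever reaches the live network. Names are hypothetical.

class ContainedWebCopy:
    def __init__(self, snapshot):
        self.snapshot = snapshot  # dict mapping url -> page content

    def fetch(self, url):
        if url not in self.snapshot:
            # No fallback to the live internet: contained by default.
            raise PermissionError("external access denied: " + url)
        return self.snapshot[url]

copy = ContainedWebCopy({"http://example.com/": "hello"})
print(copy.fetch("http://example.com/"))  # hello
```

The point of the sketch is the default: anything not already in the snapshot raises an error rather than triggering a live request, which is the software analogue of a control rod.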

We isolate and contain nuclear, chemical and biological agents. Why not intelligent agents?