DemocraNet: Scale-free CPUs, HFGW Networks, Associational DBMSs, Iconic Languages, AlwaysOns and Laypeople

[Image: fractal]

I came across this article http://tinyurl.com/58envr on Infosthetics.com regarding a medical iconic language. This led me to think about iconic languages in general.

What would happen if we developed non-text languages where icons were not just “terms” but were used as “definitions” as well?

Consider having:

Iconic vocabularies.
Iconic grammars.
Iconic syntax.
Iconic linguistics.
Iconic dictionaries.
Iconic thesauri.
Iconic wikis.
Iconic semiotics.
Iconic animation.
Iconic context.
Iconic databases.
Iconic functions.
Iconic organization.
Iconic networks.
Iconic events.
Iconic fonts.
Iconic classics.
Iconic metrics.
Iconic audio.
Iconic video.
Iconic mechanio (pressure).
Iconic olfio (smell).
Iconic gustio (taste).
Iconic thermio (heat).
Iconic nocio (pain).
Iconic equilibrio (balance and acceleration).
Iconic proprio (body position).

Such languages already exist: Chinese Hanyu, for example. But what if a new global iconic language were developed?

In my reading I am discovering that even words are treated by our minds iconically, as symbolic clusters. If the first and last letters of a word are correct, the remaining letters can be in any order. In fact, we do the same thing with words themselves: we create word clusters and shuffle them around to create sentences. I do not think language follows the formula Chomsky came up with, random sets of words arranged syntactically. Words are symbols, and sentence fragments are symbols that we connect together. We do the same thing with lists, which are basically paragraph fragments. All these fragments are arranged according to the rules of a scale-free network, not a hard-wired linguistic structure. I think that would shake Steven Pinker up.
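
The letter-shuffling effect is easy to demonstrate. Here is a toy Python sketch, an illustration of the phenomenon rather than a model of the psychology, that shuffles only the interior letters of each word:

import random

def scramble(word):
    # keep the first and last letters fixed; shuffle the interior
    if len(word) <= 3:
        return word
    inner = list(word[1:-1])
    random.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

sentence = "reading scrambled words remains surprisingly easy"
print(" ".join(scramble(w) for w in sentence.split()))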

The thing that is necessary to point out is that literacy and numeracy do not make us any more or less intelligent. Each is a symbolic system like any other that trains us to think in certain ways to process language and quantities. Whatever we do, we are simply learning another, perhaps more efficient, way of processing symbols representative of reality. Plato thought that literacy was dumbing down his students because they did not memorize and meditate on what they learned, choosing instead to write it down and put it on the shelf. Are our children any different if they choose to let computers deal with the mechanical aspects of literacy and numeracy so they can concentrate on higher-order operations? Do we agonize over our children being unable to weave cloth and tailor clothing?

If Marshall McLuhan is right, we are not past the point where we are pumping old media through the new internet media pipe. Text will always be with us, I think, because it is just too darned useful. But we will use it differently as we become able to record, replay, produce, publish, communicate and collaborate using non-textual, non-numeric media and move beyond linear and tabular networks into netular, scale-free networks.

Something that occurred to me about phonetic languages like English and syllabic languages like Arabic, versus an iconic language like Hanyu Chinese, is that a phonetic or syllabic language enables you to encode or decode words according to their sound, and to store and retrieve them with a simple index. Hanyu, on the other hand, provides no association between code and sound. You are dependent on the person you hear the word from to provide the association, which makes coding and decoding author-dependent. Iconic storage and retrieval indexes are not always obvious either, although they do exist, based on the subordinate symbols from which words are composed. The internet offers a remedy by enabling the automation of the association between sound, icon and definition.
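
As a minimal sketch of that automated association, consider an index that ties each icon to its sound and definition. The entries and names here are invented for illustration:

icon_index = {
    "水": {"sound": "shuǐ", "definition": "water"},
    "火": {"sound": "huǒ", "definition": "fire"},
}

def lookup(icon):
    # retrieval no longer depends on hearing the word from an author
    entry = icon_index[icon]
    return icon + ": pronounced " + entry["sound"] + ", meaning " + entry["definition"]

print(lookup("水"))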

It seems to me that iconic languages as a technology are undergoing a major evolutionary change that could not be achieved without the internet.

Computing is going through an interesting process:

Note: PL means programming language

Nodular Computer: Mainframe: Priesthoods operate
Nodular Network: ARPANET: Priesthoods connect
Nodular Data: Variable: Noun: Priesthoods Query
Nodular Language: Variable PL: Assembler: Priesthoods Manipulate
Nodular Communication: Variable Packet: TCP/IP: Priesthoods Communicate
Nodular Schedule: Sequential Batch

Linear Computer: Minicomputer: Scribes operate
Linear Network: Ethernet: Scribes connect
Linear Data: String dbms: Verb: Scribes Query
Linear Language: String PL: 3GL: Scribes Manipulate
Linear Communication: String Packet: HTML: Scribes Communicate
Linear Schedule: Multi-Tasking

Tabular Computer: Microcomputer: Educated operate
Tabular Network: Internet: Educated communicate
Tabular Data: Relational dbms: Noun Set: Educated Query
Tabular Language: Relational PL: SQL: Educated Manipulate
Tabular Communication: Relation Packet: XML: Educated Communicate
Tabular Schedule: Multi-Threading

What is over the horizon and will accompany Iconic Languages is what I call "DemocraNet":

Netular Computer: Scale-free CPUs: Laypeople operate
Netular Network: High Frequency Gravity Wave Network: Laypeople communicate
Netular Data: Associational DBMS: Verb Set: Laypeople Query
Netular Language: Associational PL: Iconic Language: Laypeople Manipulate
Netular Communication: Association Packet: XMPEGML: Laypeople Communicate
Netular Schedule: AlwaysOn

Scale-free CPUs will be entirely solid-state computers. There will be no moving parts at all: solid-state storage, no fans, no boards, and a network processor.

High Frequency Gravity Wave Networks will make available bandwidth several factors larger than what we have today.

Associational DBMSs will allow us to modify databases on the fly without concerns regarding referential integrity or normalization.

Iconic Language will Internationalize visual communication.

XMPEGML, a new form of markup language for the standardization of iconic language exchange, awaits development.

AlwaysOn would mean that you are always connected to DemocraNet and always processing data.

Everything is in the mix to varying degrees, but each successive community is larger.


Databases: Structured Associative Model

[Image: oraclesentences]

For years now I have been struggling with Relational DBMS technology and Associative DBMS technology, attempting to get them to do what I want.  In my first efforts, Relational models were structurally restrictive, Dimensional models were unable to grow organically, and EAV models were incompatible with relational architecture.  I came upon Simon Williams' Associative Model of Data and, although enthralled with its potential, I found it too had limitations.  It was semi-structured and allowed for too much flexibility.  Twenty-five years in Information Technology had taught me that there was a single standard classification system for setting up databases, not a plethora of ontologies.  I was determined to find the theoretical structure and was not concerned with hardware limitations, database architecture, the abilities of current query languages or any other constraints.

The Associative Model of Data made the difference in liberating me from Relational and Dimensional thinking.  I at first thought a traditional ERD of the Associative Model of Data would look like the following:

[Figure: amdschema, a traditional ERD of the Associative Model of Data]

Basically what you have is a Schema composed of Nodes, with Node-to-Node Associations through Verbs and Association-to-Node Attributions through Verbs. The range of Node Entities, Verb Entities, Association Entities and Attribution Entities is endless.  As well, the population of the Schema has an unlimited dataset of natural key values.  I have been challenged by Relational database specialists and SQL experts regarding the viability of this model within current limitations, but their arguments are irrelevant.  What is important is the logical validity of the model, not the physical validity.
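
Here is a minimal Python sketch of that schema, with nodes associated to nodes through verbs, and associations attributed through verbs to further nodes. The function names and sample data are mine, not Simon Williams':

nodes = {}          # natural key -> node
associations = []   # (source node, verb node, target node)
attributions = []   # (association, verb node, target node)

def node(key):
    # nodes are identified by natural key values
    return nodes.setdefault(key, {"key": key})

def associate(source, verb, target):
    a = (node(source), node(verb), node(target))
    associations.append(a)
    return a

def attribute(association, verb, target):
    t = (association, node(verb), node(target))
    attributions.append(t)
    return t

arrival = associate("Flight 76", "arrives at", "Gate 12")
attribute(arrival, "on", "Monday")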

After receiving the criticism I decided to revisit the model in order to simplify it.  I went over Simon Williams' explanations of his model and its application and found I could reduce it to the following:

[Figure: amdschema02, the simplified model]

This was profoundly simpler and better reflected the Associative Model of Data's architecture.  But even with this simpler architecture I was not satisfied.  I felt that the Associative Model, although it gives the benefit of explicitly defining the associations, was a tabula rasa.  Research has shown that a tabula rasa is contrary to the behavior of the finite physical universe.  There is an intermediate level between nature and nurture.  And this is what I sought to model.

[Figure: zachman, the Zachman Framework]

When I first encountered the Zachman Framework, something about it struck me in a very profound way.  I could see there was something fundamental in its description of systems, however I felt that the metaphors that John Zachman used were wrong because they themselves lacked a fundamental simplicity.  The consequences of this were that those who studied under Zachman ultimately could not agree on what he was talking about.  Also the “disciplines” that Zachman’s Framework generated were continually reinventing the wheel.  Zachman had created a world of vertical and horizontal stovepipes.  To further the confusion Zachman refused to conceive of a methodology based upon his framework.  Consequently, there was no way to determine what the priorities were in creating a system.  I call this the Zachman Clusterfuck.

Zachman’s work spawned years of work for me.  I could see that systems had a fundamental structure, but I could not agree with Zachman.  Focuses and Perspectives were useless terms.  The construction metaphor was useless.  I read anything I could get my hands on dealing with systems, methodologies, modeling, networks and a broad range of other literature across the disciplines.  Out of this came a set of conclusions:

  1. There was a fundamental set of Noun Entities
  2. There was a fundamental set of Verb Entities
  3. There was a fundamental set of Association Entities
  4. There was a clear order in which the Nouns were addressed
  5. There was a clear order in which the Verbs were executed
  6. The structure was fractal
  7. The content was a scale-free network

I made some attempts at creating the vocabulary and experimented with this new Structured Thinking Language.  However, the real break came when I worked with John Boyd’s OODA Loop:

[Figure: theboydpyramid, John Boyd's OODA Loop]

The OODA Loop revealed a governing structure for the methodology and guided my way into the following hybrid relational/dimensional/associational model I call the Structured Associative Model of Data:

[Figure: samd, the Structured Associative Model of Data]

One of the key things this model demonstrates is the sequence followed by the OODA Loop.  Starting from the top, each dimension set spawns the next.  Choices are created from the dimensions.  There is no centrism to this model; centrism is an inherent flaw in Service Oriented Architecture (SOA), event-based architecture, data-centric architecture, Goal-Directed Design and rule-based systems, among others.  The stovepipes of Focuses and Perspectives disappear by reasserting a clear order of priorities and dependencies for achieving success.  The model also supports bottom-up inductive as well as top-down deductive sequencing.  This makes the system able to reconfigure itself to handle exceptions.

Some of the things I have learned in designing this model include the realization that unit defines datatype and that all measures are variable-length character strings.  This is because any displayed value is only a symbolic representation of the actual quantity.  If operations are to be performed on measures, they are converted to the correct type as part of the operation.  I also recognized that Unit was necessary to define the scale and scalability of the system.  Further, it became apparent that analog calculations should not be practiced: every value should be treated as discrete and aggregated.
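
A minimal Python sketch of measures stored as text, with unit driving the conversion. The class and the unit table are illustrative assumptions, not part of the model itself:

UNIT_TYPES = {"USD": float, "each": int}   # unit defines datatype

class Measure:
    def __init__(self, value, unit):
        self.value = str(value)   # every measure is stored as character text
        self.unit = unit

    def typed(self):
        # conversion to the correct type happens as part of the operation
        return UNIT_TYPES[self.unit](self.value)

    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError("units must match")
        return Measure(self.typed() + other.typed(), self.unit)

total = Measure("19.95", "USD") + Measure("5.05", "USD")
print(total.value, total.unit)   # 25.0 USD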

Another aspect of this system is the inclusion of currency and amount.  I have been critical of Zachman and academics for their hypocrisy regarding the economics of systems.  All systems have a cost and a benefit and they are measurable in currency.  Contrary to the reasoning of the majority, every decision is ultimately economic.

Tim Brown of IDEO has coined the term "Design Thinking" and has been toying with the concept for some time.  Many designers dwell on the two-dimensional concept of divergence and convergence as modes of thought.  If we look at my model, divergence is the creation of choice while convergence is the selection of choice.  There is no alteration or deletion of choice in my model, as history is preserved.
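
A minimal Python sketch of that divergence/convergence reading, assuming an append-only record so that no choice is ever altered or deleted; the names are mine:

class ChoiceSet:
    def __init__(self):
        self.history = []   # append-only; history is preserved

    def diverge(self, *options):
        # divergence: the creation of choice
        for option in options:
            self.history.append(("created", option))

    def converge(self, option):
        # convergence: the selection of choice
        self.history.append(("selected", option))

route = ChoiceSet()
route.diverge("rail", "road", "air")
route.converge("rail")
print(route.history)   # every creation and selection remains on record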

Now what you have is an unencumbered framework with a clear methodological sequence.

[Figure: czerepakcognitary]

Welcome to the Cognitary Universe.

Universe: Hexahedron Theory

Hexahedron Schema:

  1. 4 Axes are Dimension Particle Sets
  2. 8 Vertexes are Space Particle Sets
  3. 12 Edges are Force Particle Sets

Additional Schema Components:

  1. 4 Axial Plane Sets
  2. 6 Edge Plane Sets
  3. 16 Axial Plane Triangulation Sets
  4. 24 Edge Plane Triangulation Sets

Look at the vertexes of the hexahedron as entities.

Entities are Sequence->Value->Type

Look at the edges and axes of the hexahedron as associations.

Associations are: SourceEntity->VerbEntity->TargetEntity

or: SourceAssociation->VerbEntity->TargetEntity

The instances for the entities and associations are the sets we are working with.
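
The counts above can be checked with a short Python sketch that treats vertexes as entities and edges and axes as associations; the coordinates are my own illustration:

from itertools import combinations

vertexes = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# an edge joins two vertexes differing in exactly one coordinate
edges = [(a, b) for a, b in combinations(vertexes, 2)
         if sum(i != j for i, j in zip(a, b)) == 1]

# an axis joins two antipodal vertexes (all three coordinates differ)
axes = [(a, b) for a, b in combinations(vertexes, 2)
        if sum(i != j for i, j in zip(a, b)) == 3]

print(len(vertexes), len(edges), len(axes))   # 8 12 4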

The key is that the universe is composed of particles of a broad variety.  But every particle is simply an association in the form of a set.  The lowest-order particles are event and point.  They are one-dimensional particles.  All subsequent higher-dimension particles can be reduced to a subset of these particles.

I have revised my theory to include the observer in the system.  I am of the opinion that the observer is not unary but binary, having two hemispheres to the brain.  Position and Velocity are composed of sets, not points, and are observed separately by the ordinal and cardinal hemispheres of the observer.  Consequently, the universe is not probabilistic, but wholly deterministic.

Where – When : Space – Time

Sequa is an ordinal point set while frequa is a cardinal event set.

What – How : Mass – Light

Quala is an ordinal sequency set while Quanta is a cardinal frequency set.

Why – How Much :  Gravity – Energy

Grava is an ordinal quality set while Erga is a cardinal quantity set.

Who – Whom : Ordinality – Cardinality

Orda is an ordinal gravity set while Carda is a cardinal energy set.

I think there are even higher order entities and associations, but I have still to work them out.

STL: Structured Thinking Language R0.3

I had a bit of an epiphany today. What I realized is that by structuring Structured Thinking Language as I have, everything can evolve as lists. Each VERB is simply the addition of another list to the NOUN you are working with.

Six Verbs: CREATE, RELATE, REPORT, RECORD, AFFORD, ENGAGE

Six Nouns: MOTIVE, LOCALE, OBJECT, METHOD, PERSON, MOMENT

Four Adjectives: INDUCED|DEDUCED and IMPLICIT|EXPLICIT

CREATE INDUCED|DEDUCED IMPLICIT|EXPLICIT
     NOUN
        (   nounname_1,
            ...,
            nounname_n
        );       

RELATE INDUCED|DEDUCED IMPLICIT|EXPLICIT
     NOUN.nounname TO
                (    NOUN_1.nounname_1,
                     ...,
                     NOUN_n.nounname_n
                );         

REPORT INDUCED|DEDUCED IMPLICIT|EXPLICIT
    NOUN.nounname
                (    attributename_1,
                     ...,
                     attributename_n
                );       

RECORD INDUCED|DEDUCED IMPLICIT|EXPLICIT
    NOUN.nounname.attributename
                (    constraintname_1,
                     ...,
                     constraintname_n
                );         

AFFORD INDUCED|DEDUCED IMPLICIT|EXPLICIT
    NOUN.nounname
                (    SELECT,
                     INSERT,
                     UPDATE,
                     DELETE
                )
                ON
                (   NOUN_1.nounname_1,
                    ...,
                    NOUN_n.nounname_n
                );         

ENGAGE INDUCED|DEDUCED IMPLICIT|EXPLICIT
SELECT|INSERT|UPDATE|DELETE

Obviously, it still needs work, but we can see where the Structured Thinking Language adds value to the design process. SQL does have its place in data manipulation. However, STL has a place in data definition. See the related posts for background information on this syntax.
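
To make the lists idea concrete, here is a minimal Python sketch, my own reading rather than part of the STL specification, in which each verb simply appends another list to the noun being worked on:

nouns = {n: {} for n in
         ("MOTIVE", "LOCALE", "OBJECT", "METHOD", "PERSON", "MOMENT")}

def create(noun, *names):
    # CREATE adds a list of names to the noun
    nouns[noun].setdefault("created", []).append(list(names))

def relate(noun, name, *targets):
    # RELATE adds a list of target nouns to a named noun
    nouns[noun].setdefault("related", []).append((name, list(targets)))

create("PERSON", "customer", "clerk")
relate("PERSON", "customer", "OBJECT.invoice", "MOMENT.purchase")
print(nouns["PERSON"])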

Related Posts:

Structured Thinking Language R0.3