Design: Business Design

[Image: HexSheet]

In an earlier post I reflected on the concept of a curriculum shared by Tim Brown of IDEO on his Design Thinking Blog.  At that time I shared a list of areas I felt would compose such a curriculum.  I have continued to reflect on this, and I have come up with the following table:

[Table: market segments and market segment actions]

The rows in the table above are the market segments; the columns are the market segment actions.


Data Deluge? Upgrade Your Data Filters? No, Upgrade Your Data Analysis!

Clay Shirky says welcome the Data Deluge, but improve your Data Filters



Clay Shirky thinks the problem is filters. That has been the pat answer for as long as there has been data. However, that is not the answer. The answer is to accept all the data and analyze it through better visualization. It is better to change the granularity of the data and see “the big picture”: see the trends, make a hypothesis, make predictions. We have to learn induction instead of continually resorting to deduction.
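
To make that concrete, here is a minimal sketch of what changing granularity buys you (my own illustration, using the pandas library; the data is invented): keep every point, aggregate to a coarser level, and the trend buried in the noise surfaces.

    import numpy as np
    import pandas as pd

    # A year of noisy daily measurements with a slow upward drift buried inside.
    days = pd.date_range("2009-01-01", periods=365, freq="D")
    rng = np.random.default_rng(seed=42)
    daily = pd.Series(np.linspace(0, 5, 365) + rng.normal(0, 10, 365), index=days)

    # Filtering would throw points away; changing granularity keeps them all.
    monthly = daily.resample("MS").mean()    # aggregate every observation by month

    print(monthly.round(2))                  # the upward trend is now visible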

There is no bad data. There is a lot of bad analysis.

Databases: Hyperbolic Schema Example

[Image: hyperbolic crochet; links to the TED talk]

In a TED.com talk, Margaret Wertheim proudly relates how a craft predominantly practiced by women solved the problem of physically representing Hyperbolic Geometry.  Click on the image to view a video of the talk.

This got me to thinking about the representation of Hyperbolic Schemas and Hyperbolic Data.

An example of a Hyperbolic Schema is as follows:

Take the name “John”.  “John” can be seen as a word element or as a composite of “J”, “o”, “h”, “n”.

The left side of the brain sees the element; the right side sees the composite. This applies level after level: a phrase is an element and a composite of words, a sentence is an element and a composite of phrases, a paragraph is an element and a composite of sentences, and so on.  The associative database can support both representations.  This gives you very powerful editing capabilities at many levels of granularity, as well as powerful searches and applications.
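
A minimal sketch of this element/composite duality (my own toy code, not Lazy Software’s API): every item is simultaneously a single element at its own level and a composite of the items one level down.

    class Item:
        """Every item is an element at its own level and a composite of the level below."""
        def __init__(self, level, parts=()):
            self.level = level            # "word", "phrase", "sentence", ...
            self.parts = list(parts)      # composite view: the items one level down

        def as_element(self):
            # Element view: the item rendered as a single undivided symbol.
            return "".join(p.as_element() for p in self.parts)

    class Letter(Item):
        def __init__(self, ch):
            super().__init__("letter")
            self.ch = ch
        def as_element(self):
            return self.ch

    # "John" as an element ...
    john = Item("word", [Letter(c) for c in "John"])
    print(john.as_element())               # -> John
    # ... and as a composite, addressable letter by letter.
    print([l.ch for l in john.parts])      # -> ['J', 'o', 'h', 'n']
    # The same duality repeats level after level:
    phrase = Item("phrase", [john, Item("word", [Letter(c) for c in " Doe"])])
    print(phrase.as_element())             # -> John Doe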

Our product does this using the Sentences associative database from Lazy Software.  A relational database cannot do this effectively or efficiently.

COA: Change Oriented Architecture

[Image: Oracle and Sentences]

The problem of this decade is the information technology platform. We have to switch to scale-free networks and abandon tabular lattice networks. It isn’t SOA that we need, it’s Change Oriented Architecture (COA).
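
To make the contrast concrete, here is a small sketch using the networkx library (my own illustration, not part of any COA specification): a tabular lattice gives every node the same handful of neighbors, while a scale-free network grows a few heavily connected hubs.

    import networkx as nx

    # A tabular lattice: every interior node has exactly four neighbors.
    lattice = nx.grid_2d_graph(20, 20)

    # A scale-free network: preferential attachment grows heavily connected hubs.
    scale_free = nx.barabasi_albert_graph(400, 2)

    print("lattice max degree:   ", max(d for _, d in lattice.degree()))     # 4
    print("scale-free max degree:", max(d for _, d in scale_free.degree()))  # far larger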

I highly recommend rejecting Larry Ellison’s Oracle relational database and Sun’s MySQL and adopting Simon Williams’ Sentences associative database from lazysoft.com, which provides a schema and interface you can completely change on the fly.  No data loss, no null values, no normalization.  The lazy developer is the efficient developer.
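
Here is a toy associative store that gestures at why this matters (a sketch of my own, not the Sentences product itself): when facts are triples, a brand-new kind of fact is just another row, with no ALTER TABLE and no null columns for the entities that lack it.

    # Facts are (subject, verb, object) triples; the "schema" is just the verbs in use.
    facts = set()

    facts.add(("Acme Ltd", "is a", "Customer"))
    facts.add(("Acme Ltd", "has phone", "555-0100"))

    # A new requirement arrives: track Twitter handles. No migration, no nulls --
    # entities without a handle simply have no such triple.
    facts.add(("Acme Ltd", "has twitter", "@acme"))

    def values(subject, verb):
        return [o for s, v, o in facts if s == subject and v == verb]

    print(values("Acme Ltd", "has twitter"))   # ['@acme']
    print(values("Bravo Inc", "has twitter"))  # [] -- absent, not NULL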

When the Board of Directors says “Change!” the CIO can say “Immediately!”

And have you ever heard of “Hyperbolic Data”?  Sentences can do it easily and dynamically.  Click on the image to learn more:

[Image: Margaret Wertheim; links to her TED talk]

Databases: 50 years of stupidity


Database conventions are not best practices.  Database naming conventions are based on random ontological concepts.  Ideas about what constitutes an entity are misdirected.  Programmers know nothing about what a class or an object is or how to name them.  Hierarchical, Relational and Network databases have maintained a persistent and ignorant set of practices that the information technology intelligentsia have followed mindlessly.  What we have after 50 years is a brute-force patchwork of bad design practices and mediocre engineering that continues to work within the same set of assumptions.  It’s a product of the inertia of intellectual lethargy that dominates not just the technological world, but the world that uses technology in general.  Workers are too busy being inefficient and ineffective to improve their business practices.  They jump at silver-bullet solutions that promise results without change.

Database people have never understood data.  Programmers have never understood data.  They have instead tried to please everybody’s ontological misconceptions with grotesque architecture and then shoehorn it all into a physical processor that is about as progressive and efficient as the internal combustion engine.  Eco-nut technologists like to use buzzwords like “organic” to describe the chaotic crap they are producing on the web.  It isn’t organic, it’s a massive slum composed of any piece of detritus the occupants can build with, surrounding a district of monolithic towers of gross excess and shameless waste.  Google’s motto is “Don’t be evil.”  Has any company ever considered having the motto, “Be good”?  The more I work with corporations the more I recognize that goodness is discouraged and evil is whatever the corporation says it is.  If you work for anyone you are part of a Milgram experiment, and you are delivering those electric shocks every day under the direction of psychopaths.  The merit you get promoted for is based on your willingness to flip those switches more than anyone else.  Having a conscience is deemed unprofessional and grounds for termination.

This is the environment within which real innovation has to work.

Hungarian Backwords Notation, a naming convention by Charles Simonyi, has been abused and bastardized by programmers and database administrators with no understanding of semantics, which is most of them.  Consequently, it has been rejected by a large portion of the IT community.  Not even Microsoft knew what it had.  I fought with Simonyi’s concept for years and applied it successfully in several working applications against massive resistance.  The more I worked with it, the more I realized that Object Oriented Programming was based on a completely false ontology.  The metaphors were completely wrong.  And the Unified Modeling Language entrenched the misconceptions even further.  Information technology is spawning increasing complexity without any appreciation for underlying order.  The order was datatypes.  There are only a handful of Classes, and they are datatypes.  The English are backwards, not the Hungarians.

If the world were looked at as a collection of datatype classes, the entire philosophy of data, programming and systems would have to change.  Objects do not have properties; properties have objects.  And there are only a handful of properties.  Realizing this has changed my perspective on data design forever.  Throw away your OOP and your Data Model textbooks.  They’re crap.  Google, Apple and Microsoft are not the future.  Einstein had a better grip on reality than Turing ever did.  The typical mind, including the IT mind, still thinks elephants are bigger than the moon.
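
As a sketch of that inversion (my own toy code, not any published model): instead of hanging properties off objects, let each property, backed by a datatype class, own the objects that exhibit it.

    # A handful of properties, each a datatype class owning the objects that exhibit it.
    class Property:
        def __init__(self, name, datatype):
            self.name, self.datatype = name, datatype
            self.objects = {}                      # object -> typed value of this property

        def of(self, obj, value):
            self.objects[obj] = self.datatype(value)

    mass = Property("mass_kg", float)
    mass.of("elephant", 6_000)
    mass.of("moon", 7.3e22)

    # Queries run property-first: the property has the objects, not the other way round.
    print(max(mass.objects, key=mass.objects.get))   # -> moon (bigger than any elephant)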


Databases: Structured Associative Model

[Image: Oracle and Sentences]

For years now I have been struggling with Relational DBMS technology and Associative DBMS technology, attempting to get them to do what I want.  In my first efforts, Relational models were structurally restrictive, Dimensional models were unable to grow organically, and EAV models were incompatible with relational architecture.  I came upon Simon Williams’ Associative Model of Data and, although enthralled with its potential, I found it too had limitations.  It was semi-structured and allowed for too much flexibility.  Twenty-five years in Information Technology had taught me that there was a single standard classification system for setting up databases, not a plethora of ontologies.  I was determined to find the theoretical structure, unconcerned with hardware limitations, database architecture, the abilities of current query languages or any other constraints.

The Associative Model of Data made the difference in liberating me from Relational and Dimensional thinking.  At first I thought a traditional ERD of the Associative Model of Data would look like the following:

[Figure: first ERD of the Associative Model of Data]

Basically, what you have is a Schema composed of Nodes, where Nodes form Associations with other Nodes through Verbs, and those Associations take Attributions of Nodes through Verbs. The range of Node Entities, Verb Entities, Association Entities and Attribution Entities is endless.  As well, the population of the Schema has an unlimited dataset of natural key values.  I have been challenged by Relational database specialists and SQL experts regarding the viability of this model within current limitations; however, their arguments are irrelevant.  What is important is the logical validity of the model, not the physical validity.
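
A rough sketch of that shape in toy code (my reading of the diagram, not Williams’ implementation): Associations are entities in their own right, so an Association can itself take further Attributions through Verbs.

    class Node:
        def __init__(self, name):
            self.name = name

    class Association:
        """source -verb-> target; itself addressable, so it can carry attributions."""
        def __init__(self, source, verb, target):
            self.source, self.verb, self.target = source, verb, target
            self.attributions = []        # further (verb, node) pairs about this link

        def attribute(self, verb, node):
            self.attributions.append((verb, node))

    flight = Association(Node("London"), "flies to", Node("Paris"))
    flight.attribute("on", Node("2009-06-01"))    # the association itself is described
    flight.attribute("costs", Node("£89"))

    print(flight.source.name, flight.verb, flight.target.name,
          *[f"{v} {n.name}" for v, n in flight.attributions])
    # -> London flies to Paris on 2009-06-01 costs £89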

After receiving the criticism I decided to revisit the model in order to simplify it.  I went over Simon Williams’ explanations of his model and its application and found I could reduce it to the following:

[Figure: simplified ERD of the Associative Model of Data]

This was profoundly simpler and better reflected the Associative Model of Data’s architecture.  But even with this simpler architecture I was not satisfied.  I felt that the Associative Model, although it gives the benefit of explicitly defining the associations, was a tabula rasa.  Research has shown that a tabula rasa is contrary to the behavior of the finite physical universe.  There is an intermediate level between nature and nurture, and this is what I sought to model.

[Figure: the Zachman Framework]

When I first encountered the Zachman Framework, something about it struck me in a very profound way.  I could see there was something fundamental in its description of systems, however I felt that the metaphors that John Zachman used were wrong because they themselves lacked a fundamental simplicity.  The consequences of this were that those who studied under Zachman ultimately could not agree on what he was talking about.  Also the “disciplines” that Zachman’s Framework generated were continually reinventing the wheel.  Zachman had created a world of vertical and horizontal stovepipes.  To further the confusion Zachman refused to conceive of a methodology based upon his framework.  Consequently, there was no way to determine what the priorities were in creating a system.  I call this the Zachman Clusterfuck.

Zachman’s work spawned years of work for me.  I could see that systems had a fundamental structure, but I could not agree with Zachman.  Focuses and Perspectives were useless terms.  The construction metaphor was useless.  I read anything I could get my hands on dealing with systems, methodologies, modeling, networks and a broad range of other literature across the disciplines.  Out of this came a set of conclusions:

  1. There was a fundamental set of Noun Entities
  2. There was a fundamental set of Verb Entities
  3. There was a fundamental set of Association Entities
  4. There was a clear order in which the Nouns were addressed
  5. There was a clear order in which the Verbs were executed
  6. The structure was fractal
  7. The content was a scale-free network

I made some attempts at creating the vocabulary and experimented with this new Structured Thinking Language.  However, the real breakthrough came when I worked with John Boyd’s OODA Loop:

[Figure: the Boyd Pyramid, John Boyd’s OODA Loop]

The OODA Loop revealed a governing structure for the methodology and guided my way into the following hybrid relational/dimensional/associational model I call the Structured Associative Model of Data:

[Figure: the Structured Associative Model of Data]

One of the key things this model demonstrates is the sequence followed by the OODA Loop.  Starting from the top, each dimension set spawns the next, and choices are created from the dimensions.  There is no centrism to this model; centrism is an inherent flaw in Service Oriented Architecture (SOA), event-based architecture, data-centric architecture, Goal-Directed Design and rule-based systems, among others.  The stovepipes of Focuses and Perspectives disappear by reasserting a clear order of priorities and dependencies for achieving success.  The model also supports bottom-up inductive as well as top-down deductive sequencing, which makes the system able to reconfigure itself to handle exceptions.
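
A skeletal sketch of that sequencing (mine, not Boyd’s formalism; the stage functions are invented for illustration): each stage consumes what the previous stage spawned, divergence creates choices, convergence selects one, and nothing is deleted.

    # Each stage consumes what the previous one spawned: Observe -> Orient -> Decide -> Act.
    def observe(world):
        return {"observations": sorted(world)}

    def orient(ctx):
        # Divergence: the dimensions spawn choices.
        return {**ctx, "choices": ["handle " + o for o in ctx["observations"]]}

    def decide(ctx):
        # Convergence: select a choice; nothing is altered or deleted.
        return {**ctx, "selected": ctx["choices"][0]}

    def act(ctx):
        return {**ctx, "history": ctx.get("history", []) + [ctx["selected"]]}

    # Top-down, deductive pass through the loop.
    ctx = act(decide(orient(observe({"late shipment", "price change"}))))
    print(ctx["selected"])    # -> handle late shipment
    print(ctx["choices"])     # all choices kept; history is preserved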

Some of the things I have learned in designing this model include the realization that unit defines datatype and that all measures are stored as variable-length character strings.  This is because any displayed value is only a symbolic representation of the actual quantity.  If operations are to be performed on measures, they are converted to the correct type as part of the operation.  I also recognized that Unit was necessary to define the scale and scalability of the system.  Further, it became apparent that analog calculations should not be practiced: every value should be treated as discrete and aggregated.
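
Here is a minimal sketch of that storage discipline (my own illustration): every measure is kept as character text plus a unit, and only an operation converts it to a computable, discrete type.

    from decimal import Decimal

    # Unit defines the datatype; until an operation needs it, a measure stays as text.
    UNIT_TYPES = {"kg": Decimal, "USD": Decimal, "name": str}

    class Measure:
        def __init__(self, text, unit):
            self.text, self.unit = text, unit     # stored value is always a string

        def value(self):
            # Conversion to the correct type happens as part of the operation.
            return UNIT_TYPES[self.unit](self.text)

        def __add__(self, other):
            assert self.unit == other.unit, "unit defines scale; don't mix units"
            return Measure(str(self.value() + other.value()), self.unit)

    total = Measure("19.99", "USD") + Measure("0.01", "USD")
    print(total.text, total.unit)   # -> 20.00 USD (exact, discrete arithmetic)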

Another aspect of this system is the inclusion of currency and amount.  I have been critical of Zachman and academics for their hypocrisy regarding the economics of systems.  All systems have a cost and a benefit and they are measurable in currency.  Contrary to the reasoning of the majority, every decision is ultimately economic.

Tim Brown of IDEO coined the term “Design Thinking” and has been toying with the concept for some time.  Many designers dwell on the two-dimensional concept of divergence and convergence as modes of thought.  In my model, divergence is the creation of choice while convergence is the selection of choice.  There is no alteration or deletion of choice in my model, as history is preserved.

Now what you have is an unencumbered framework with a clear methodological sequence.

[Image: Cognitary]

Welcome to the Cognitary Universe.

Fern Halper: Data Makes the World Go ’round

[Photo: Fern Halper]

I was going through my blog statistics today when I came across an auto link that led from a blog by Fern Halper.

Dr. Fern Halper is a partner at Hurwitz & Associates, a consulting, research and analyst firm that focuses on the customer benefits derived when advanced and emerging software technologies are used to solve business problems. Fern has over twenty years of experience in data analysis, business analysis, and strategy development. Fern served as Senior Vice President for enterprise applications and services for Hurwitz Group and has held key positions at AT&T Bell Laboratories and Lucent Technologies. Fern spent eight years at Bell Laboratories leading the development of innovative approaches and systems to analyze marketing and operational data. Fern has published numerous articles on data mining and information technology and she is an adjunct professor at Bentley College, where she teaches courses in Information Systems and Business. Fern received her BA from Colgate University and her Ph.D. from Texas A&M University.

I have been reading Fern’s posts with much interest as she not only discusses the industry concepts but gives an example of a product relevant to the concept.  This makes for a much richer explanation as I find myself experimenting with the free trials for hours afterwards.

Link:  Fern Halper’s Data Makes the World Go ’round