Entering Data Space
We have just entered the new millennium with all computing engines full steam ahead after overcoming their biggest hurdle ever, the dreadful Y2K problem. The millennium computer bug that wasn't fizzled away in the early hours of the newly minted year 2000. Now our vistas are opening into the next couple of millennia. Will there be a Y10K problem? Probably not, even though it is easy to predict that there will be several orders of magnitude more machines, each one significantly more powerful than any of today's.
It took computing a mere half century from its inception in Britain, Germany, and the US to potentially cause a major disruption in all sectors of world society - military, energy, economy, transportation, etc. In only a single generation of 25 years the Personal Computer penetrated more than half of all US households and pretty much every single business in most countries in the world. This tremendous proliferation of digital machines received an additional decisive boost at the very close of the 20th century, "five years before twelve".
With unparalleled speed the World Wide Web broke through the closet doors of academia and the military, who had considered the Internet their private playpen. The new and decidedly commercial Internet is equipped with URLs, hyperlinks, and a foolproof multimedia browser acting as the unified window into a vast and cancerously growing data space of millions of web servers, all globally interconnected and able to talk to each other.
The term data space more accurately depicts the realities of today’s computer environments than the euphemism "Cyber Space". It’s data which is at the center, it’s data which is being created, controlled, mined, transferred, and sold. And the more data there is the more meta data is needed to sort and search and surf successfully without getting lost in this data universe.
Online profiling and one-to-one marketing, personalized web sites and web performance statistics, opt-in and opt-out mailing lists, news servers and chat rooms, trading patterns and tracking records, cookies and passwords, real-time audio and video streaming, digital photos and 3D graphics: it’s data, data, data; and meta data on top to catalog and navigate any data space.
The meaning of all this data is facilitated by codes which interpret a multitude of new kinds of languages in addition to the known old ones. Number systems and human languages had been encoded digitally right from the start, but now there are page description languages, networking protocols, hypertext protocols, audio and video formatting codes, run-length encoding, and JPEG, and MPEG, and MP3, and VRML, and so on and so forth.
The Universal Symbol Processor
A brilliant insight by the British mathematician Alan Turing foreshadowed most of these developments (if not the computer network itself) when he posed the question "Can machines think?" He resolved this question by unveiling the symbol-processing capabilities of the computer and naming it a "Universal Symbol Processor". Until then such qualities were the prerogative of a very special symbol processor, the human brain.
Turing's reasoning brought to an abrupt end the times when human thinking and communication between humans were Alpha and Omega in symbol processing. In the new era of networked computing novel concepts are rapidly introduced with acronyms to boot: the newest rage is B2B (business to business) and B2C (business to consumer). It remains to be seen what full-fledged C2C (computer to computer) communication will bring about in the future.
Since computers are symbol processors, they can process and deliver meaning in the real world and in data space. Data particles are the building blocks on which symbols can reside and from which, in turn, meaning is ignited in a cognitive act. An analytic description of the key features of data particle systems should help us to better understand the complexities of contemporary distributed and networked computing environments.
Principles of Data Particles
A data particle is a discrete element. In computing it is the smallest single unit which is not divisible.
Its material substance is completely reduced to pure logic. Unlike any other material, digital matter does not possess any physicality: no weight, no shape, no form. All there is is logic; indeed, the data particle is suspended in very thin air. It freely floats in the realm of logic, a space of a different kind.
The particle is binary, i.e. it simultaneously contains its identity and, latently, its inverse. It's not simply on or off, black or white; the data particle is always both: one state is activated while it is already poised to turn into the other, waiting to be switched to its opposite at any given point in time.
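This duality of state and latent inverse can be sketched in a few lines of Python, with a single bit standing in for the data particle (the names are illustrative, not part of the original text):

```python
# A minimal sketch of a binary "data particle": one bit that always
# carries its latent inverse, ready to be switched at any time.
particle = 0                 # the activated state
latent_inverse = particle ^ 1  # the inverse, obtained by flipping via XOR

# switching the particle to its opposite is a single operation
particle ^= 1
print(particle)              # 1
print(particle ^ 1)          # 0, the inverse is latently present again
```

The XOR with 1 captures the point that the opposite state is never absent; it is always one switch away.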
Data is instant: the switch to the inverse or the copy is produced instantly. The data particle exists in a zen-like time, a void. Any action performed on it or with it occurs without any delay, or within a zero interval. When thinking conceptually of the data particle, all time collapses into one and the same moment. Data particles enjoy the fastest speed of any type of element. Since they do not have any corporeal extensions they do not have to overcome any distance. The same data can be here and, simultaneously, there.
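A small Python sketch can illustrate the idea of the same data being "here and there" at once, using aliasing versus copying (the variable names are hypothetical, chosen for illustration):

```python
# The same data in two places at once: in the logical realm,
# two names can point at one and the same data set (aliasing),
# while a copy is produced instantly and lives independently.
original = [1, 0, 1, 1]
alias = original          # no duplication, both names are the same data
copy = original.copy()    # an instant, independent duplicate

alias[0] = 0              # changing the alias changes the original
print(original)           # [0, 0, 1, 1]
print(copy)               # [1, 0, 1, 1], the copy is untouched
```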
Aggregation by Code
Any number of particles can be aggregated into complex data sets. There are no limits to how two or more data particles relate to each other since there is no up or down, no left and right. Proximity is not required to form a set; all that is necessary is a definition describing the organization of the data. There also are no limits as to how much data can be organized into single sets; their size is potentially infinite.
The formatting of data sets is done with a rule known as the code. The code can be invented at will and any code can be applied to any data set, though producing different results. The same code needs to be used to reproduce the same interpretation of a given data set. Symbolic reference to meaning (content) is established via other symbol systems (language, icons, numbers, etc). If encoded properly any symbol system can be embedded in data sets. After the transformation into the digital domain the original symbol system participates in all the features that data particles enjoy.
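The dependence of interpretation on the code can be shown with a tiny Python sketch: the same raw data set yields entirely different results under two different codes (the data values are arbitrary examples):

```python
# One data set, two codes: the bytes themselves carry no meaning
# until a code is applied, and different codes produce different results.
raw = bytes([72, 105])                   # a tiny data set of two bytes

as_text = raw.decode("ascii")            # interpreted under a character code
as_number = int.from_bytes(raw, "big")   # interpreted under a numeric code

print(as_text)    # Hi
print(as_number)  # 18537
```

Only by reapplying the same code, here ASCII, can the same interpretation of the data set be reproduced.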
Meta Data Everywhere
A data space comprises numerous data sets, which necessitates additional super-structures. Meta data is created to enable navigation within or between more complex data sets. To make sense of any database, to search and compare and use the data stored, pointers are needed which address and identify the actual content data. To be able to traverse the global Internet a multitude of protocol and addressing data is necessary. This auxiliary data can be called meta data; its existence is subservient to the content data. For any single unit of content there is, today, a growing number of units of meta data.
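The relation between content data and its subservient meta data can be sketched in Python as a small index of pointers (the documents and keys are invented for illustration):

```python
# Content data, and meta data built on top of it: an index that
# points back into the content so it can be searched and navigated.
content = {"doc1": "data particles", "doc2": "meta data everywhere"}

index = {}  # the meta data: each word points at the content it identifies
for key, text in content.items():
    for word in text.split():
        index.setdefault(word, []).append(key)

print(index["data"])  # ['doc1', 'doc2']
```

The index holds no content of its own; it exists only to address and identify the actual content data.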
Meta data can also serve to define transformation rules applied to data sets. Symbol processing is not limited to simply translating data sets from one code into another. Computing has generative powers derived from, again, encoded algorithms which are applied to one or more data sets. Simple or complex algebra (including Boolean logic) can be executed as easily as rule-based computation, data amplification, recursive data generation, etc.
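Recursive data generation of this kind can be sketched in a few lines of Python: a rewriting rule (itself just encoded meta data) amplifies a small seed into a large data set. The particular rule below is hypothetical, chosen for illustration:

```python
# Rule-based, recursive data generation: a small seed and an encoded
# rewriting rule together generate ever larger data sets.
def generate(seed, rule, steps):
    data = seed
    for _ in range(steps):
        # rewrite every particle according to the rule
        data = "".join(rule.get(ch, ch) for ch in data)
    return data

rule = {"0": "01", "1": "10"}     # each particle rewrites into two
result = generate("0", rule, 4)
print(result)  # 0110100110010110, doubling in length at every step
```

A one-character seed and a two-entry rule suffice to generate a data set of any desired size, which is the data amplification the text describes.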
(A detailed analysis of how these features are utilized in Generative Aesthetics can be found in: "Digital media: Bridges between data particles and artifacts", Frank Dietrich, The Visual Computer (1986) 2:135-151.)
Now that the very basic components of data have been discussed, the real work begins. First, the right questions about the massive changes that networked computing causes need to be asked. This seems to be the prerequisite to finding good answers to better understand the complexities of the new Turing Galaxy in which we live.