The Siren Song of Artificial ( Human ? ) Intelligence

Artificial intelligence mongering/bashing is a popular ( and profitable ) activity for a small sub-industry of academic proponents/opponents. The debaters either challenge preconceptions about the universality of human intelligence or else carve out new territories for the uniqueness of human intelligence in the universe – take your pick. Sometimes it is fun to pick up parts of both sides at the same time, combine them and see how it works out. The possibilities are unlimited.

But it all boils down to one simple question. What is intelligence ? Artificial, natural or whatever. If one were faced with an example of intelligence, how would one know whether it were artificial or not ? ( See the Turing Test ). What are the distinctive characteristics of intelligence such that I can distinguish between the artificial and the natural ? If intelligence were an artificial additive to canned soup, would it taste like the natural ingredient ?

In the beginning of rule-based systems, the term ‘intelligence’ was used rather like Central Intelligence or Business Intelligence, that is, collecting and analyzing information about the world in order to make better decisions. A more accurate term might have been Synthetic Information rather than Artificial Intelligence.

But, at some point in the late 1980s, waves of AI gurus descended on the world of intelligent business applications and the siren song of artificial human intelligence became the order of the day. The debate was on, and it was a huge struggle to undo the damage done. Another favorite from expert system days was the inevitable humorist commenting, “Well, now that we have the expert system, what do we need the expert for? ( ha ha ha )”. Always a crowd pleaser in an important meeting …

So, when rule-based systems became Business Rule Systems in the 1990s, it seemed that the siren song of artificial human intelligence was to be silenced forever. But was it ? Unfortunately, the AI siren song still crops up in business rule applications even today, even with people who should know better. And, of course, since most people know that AI is impossible, the natural conclusion is that business rules technology is also impossible, almost by definition ( except that there isn’t any definition, of course ).

Having said that, it must also be said that the prospects for artificial human intelligence have never looked better. The ability of computers to beat human opponents at chess was the measure of all things AI in the mid-1980s, and today chess-playing programs routinely stomp human chess masters. When we call a customer service center, computers routinely interpret what we say and respond fairly intelligently to our voice commands.

So, are these examples of Real Artificial Human Intelligence or what ? I think the ‘or what’ provides the answer. As an immediate example, do you think any existing program for text understanding could parse the previous sentences and understand what I just said ? I doubt it. But most human beings can, maybe after a second reading.

But the AI sirens keep singing nonetheless, even in modern business rule applications. At times the subject of rule engines and rule-based systems seems to drift upon the rocks of an AI Sirenum Scopuli.  Just stuff your ears and keep rowing.

Here’s a free and open content icon to show in all your business rules applications. :-)

NoAI

Qualitative Modeling at CSIRO

From the CSIRO web site:

CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

They use some interesting qualitative modeling techniques to predict the impact of ecological changes on biological systems.

In Qualitative Modelling and Bayesian Belief Networks ( a big PDF, over 3 MB ), the author gives two practical examples of ecological risk assessment. The first is a simple model of Vegetation–Hare–Lynx interaction that correctly predicts non-linear behavior in the hare population under an environmental perturbation of increased vegetation. The second is a more complex example of interactions between shrimp, detritus, zooplankton, fish and benthic invertebrates, and it generally produces realistic results.
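To give a flavour of how such qualitative predictions work, here is a minimal Python/NumPy sketch of a sign-only press-perturbation calculation in the style of loop analysis. The community matrix below is an assumed sign structure of my own, not the model from the CSIRO paper; the standard result that the equilibrium response to a sustained input is proportional to the negative inverse of the community matrix does the rest.

```python
# Minimal sketch of a qualitative (loop-analysis style) press perturbation.
# The community matrix is an assumed sign structure for illustration only,
# not the matrix used in the CSIRO paper.
import numpy as np

# Rows/columns: Vegetation, Hare, Lynx.
# A[i, j] = assumed sign of the effect of species j on the growth of species i.
A = np.array([
    [-1, -1,  0],   # vegetation: self-limited, eaten by hares
    [ 1,  0, -1],   # hares: eat vegetation, eaten by lynx
    [ 0,  1,  0],   # lynx: eat hares
], dtype=float)

# For a sustained (press) perturbation, the equilibrium response is proportional
# to -A^-1: element (i, j) gives the response of species i to a positive input
# to species j. In qualitative modelling only the signs are trusted.
response = -np.linalg.inv(A)

species = ["Vegetation", "Hare", "Lynx"]
print("Predicted direction of change for a sustained increase in vegetation input:")
for name, value in zip(species, np.round(response[:, 0], 9)):
    direction = {1: "increase", 0: "no change", -1: "decrease"}[int(np.sign(value))]
    print(f"  {name}: {direction}")
```

With these assumed signs, a sustained increase in vegetation is predicted to benefit the lynx while leaving the hare population essentially unchanged – exactly the kind of counter-intuitive, indirect effect that qualitative modeling is designed to expose ( and, of course, the prediction depends entirely on the assumed sign structure ).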

The reader can find an introduction to the modeling framework and mathematical methods in Qualitative Modeling of Complex Biological and Social Systems.

The document Flying Insect Agents in Complex Chemical Plumes takes a different approach to qualitative techniques, using agents as the fundamental organizing principle. The purpose is to build a realistic model of luring insects with pheromone attractors in order to control their numbers. The difference from the previous models is that simple rules for software agents are used to model the behavior of insects tracking the pheromone “plume”.

From the document:

We use a model agent (an insect ‘in the mood’) that obeys the following rules:

i) Fly at a constant speed;
ii) Fly upwind if it detects pheromone above a threshold;
iii) Fly in random crosswind directions when not detecting pheromone;
iv) Fly at a constant height (known source height).

This set of rules is perhaps the simplest set of instructions operating within the capability and processing power of an insect. The capability is not trivial and reflects utilisation of both optical signals (for sensing spatial location and determining wind direction) and chemical signals for decision making.
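Those four rules translate almost line for line into code. The Python sketch below flies one such agent over a horizontal plane ( rule iv: constant height ); the plume used for the pheromone concentration, the detection threshold and the source location are all stand-ins of mine, not the turbulent plume model from the paper.

```python
# Minimal sketch of the four-rule agent, flying in a horizontal plane
# (rule iv: constant height). The pheromone field, threshold and source
# below are illustrative stand-ins, not the CSIRO plume model.
import math
import random

SPEED = 1.0          # rule i: constant speed
THRESHOLD = 0.05     # detection threshold for rule ii
WIND = (-1.0, 0.0)   # wind blows toward -x, so upwind is +x
SOURCE = (50.0, 0.0)

def pheromone(x, y):
    """Stand-in concentration: a plume spreading downwind (-x) of the source."""
    dx, dy = x - SOURCE[0], y - SOURCE[1]
    if dx > 0:                        # upwind of the source: no plume there
        return 0.0
    spread = 1.0 + 0.3 * abs(dx)      # the plume widens with downwind distance
    return math.exp(-(dy * dy) / (2.0 * spread * spread)) / spread

def step(x, y):
    """Advance the agent one time step according to rules i-iii."""
    if pheromone(x, y) > THRESHOLD:
        dx, dy = -WIND[0], -WIND[1]                 # rule ii: fly upwind
    else:
        dx, dy = 0.0, random.choice([-1.0, 1.0])    # rule iii: random crosswind cast
    norm = math.hypot(dx, dy)
    return x + SPEED * dx / norm, y + SPEED * dy / norm   # rule i: constant speed

x, y = 0.0, 5.0      # start well downwind, slightly off the plume centreline
for t in range(400):
    if math.hypot(x - SOURCE[0], y - SOURCE[1]) < 3.0:
        print(f"reached the source region after {t} steps")
        break
    x, y = step(x, y)
else:
    print(f"still searching at ({x:.1f}, {y:.1f})")
```

Even with this crude plume, the agent shows the characteristic surge-upwind / cast-crosswind behavior and usually homes in on the source.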

The result is a significant variation from the drunkard’s walk used by classical modeling techniques. The author notes that:

This process is a novel version of the drunkard’s walk – a particularly apt example because insect pheromone is often dominated by alcohol.

Hmmm. Not that different from human ecological systems, at least in urban environments. In the conclusion, they state:

It is a complex system, coupling the properties of a scalar field in turbulence (itself quantitatively complex) with an intelligent biological agent …

The archives of CSIRO contain a little classic of complex systems in Evaluating Team Performance at the Edge of Chaos. It makes a very lucid case for self-organization in complex systems, one of the clearest explanations of a counter-intuitive phenomenon.

As pointed out in the literature [8, 5], emergent self-organisation or extropy may seem to contradict the second law of thermodynamics that captures the tendency of systems to disorder. The ‘paradox’ has been gracefully explained in terms of multiple coupled levels of dynamic activity (the Kugler-Turvey model [5]): self-organisation and the loss of entropy occurs at the macro level, while the system dynamics on the micro level generates increasing disorder.

One convincing example is described by Parunak and Brueckner [8] in context of pheromone-based coordination. Their work defines a way to measure entropy at the macro level (agents’ behaviours lead to orderly spatiotemporal patterns) and micro level (chaotic diffusion of pheromone molecules). In other words, the micro level serves as an entropy ‘sink’ – it permits the overall system entropy to increase, while allowing self-organisation to emerge and manifest itself as coordinated multi-agent activity on the macro level.
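The measurement itself is easy to sketch. As I recall, Parunak and Brueckner grid the space into cells and compute a location-based Shannon entropy at each level; the toy random-walk ‘molecules’ and goal-seeking ‘agents’ below are my own uncoupled stand-ins ( in the real system the agents follow the pheromone field ), but they show how the same formula reports rising entropy at the micro level and falling entropy at the macro level.

```python
# Rough sketch of a location-based entropy measurement in the spirit of
# Parunak and Brueckner: grid the space into cells and take the Shannon
# entropy of the occupancy distribution. The "molecules" and "agents" are
# toy stand-ins, not their actual model.
import math
import random

def location_entropy(positions, cell=1.0):
    """Shannon entropy (bits) of the distribution of positions over grid cells."""
    counts = {}
    for x, y in positions:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    total = len(positions)
    return max(0.0, -sum((n / total) * math.log2(n / total) for n in counts.values()))

random.seed(1)
molecules = [(0.0, 0.0)] * 200          # micro level: all mass starts at a point source
agents = [(random.uniform(-20, 20), random.uniform(-20, 20)) for _ in range(200)]
target = (0.0, 0.0)                     # macro level: agents converge on a common target

for t in range(31):
    if t % 10 == 0:
        print(f"t={t:2d}  micro (molecules): {location_entropy(molecules):5.2f} bits"
              f"   macro (agents): {location_entropy(agents):5.2f} bits")
    # molecules diffuse randomly (disorder rises); agents contract toward the target
    molecules = [(x + random.gauss(0, 1), y + random.gauss(0, 1)) for x, y in molecules]
    agents = [(x + 0.1 * (target[0] - x), y + 0.1 * (target[1] - y)) for x, y in agents]
```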

For more about the Kugler-Turvey model, see Entropy and Self-Organization in Multi-Agent Systems, which may link into Stuart Kauffman’s ideas about catalytic hypercycles and pre-biotic life.

Complexity & Artificial Life Research Concept

From the site:

CALResCo was set up in 1996 to fulfil a perceived need on the Internet to integrate the information about Complex Systems, in all its various guises, and present it in a way useful to both beginners and those already familiar with one or more of the fields.

There is a very complete set of definitions on their Self-Organizing Systems FAQ page; a small illustrative sketch follows the excerpt below.

What is self-organization ?

a) The evolution of a system into an organized form in the absence of external pressures.

b) A move from a large region of state space to a persistent smaller one, under the control of the system itself. This smaller region of state space is called an attractor.

c) The introduction of correlations (pattern) over time or space for previously independent variables operating under local rules.

  Typical features include (in rough order of generality):
  • Absence of external control (autonomy)
  • Dynamic operation (time evolution)
  • Fluctuations (noise/searches through options)
  • Symmetry breaking (loss of freedom/heterogeneity)
  • Global order (emergence from local interactions)
  • Dissipation (energy usage/far-from-equilibrium)
  • Instability (self-reinforcing choices/nonlinearity)
  • Multiple equilibria (many possible attractors)
  • Criticality (threshold effects/phase changes)
  • Redundancy (insensitivity to damage)
  • Self-maintenance (repair/reproduction metabolisms)
  • Adaptation (functionality/tracking of external variations)
  • Complexity (multiple concurrent values or objectives)
  • Hierarchies (multiple nested self-organized levels)
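Definition (c) above – correlations appearing among previously independent variables operating under local rules – is easy to see in a toy model. The one-dimensional majority-vote cellular automaton below is my own minimal illustration, not something from the FAQ: cells start out as independent random bits, each repeatedly copies the majority of its immediate neighbourhood, and blocks of local order emerge with no external control, settling toward a fixed pattern ( an attractor in the sense of definition (b) ).

```python
# Toy illustration of self-organization (my example, not from the CALResCo FAQ):
# a one-dimensional majority-vote cellular automaton on a ring. Cells start as
# independent random variables; a purely local rule introduces spatial
# correlations (pattern) and the system contracts toward an attractor.
import random

random.seed(42)
N = 60
cells = [random.choice("01") for _ in range(N)]   # independent random initial state

def majority_step(cells):
    """Each cell adopts the majority value of itself and its two neighbours."""
    new = []
    for i in range(len(cells)):
        neighbourhood = [cells[i - 1], cells[i], cells[(i + 1) % len(cells)]]
        new.append("1" if neighbourhood.count("1") >= 2 else "0")
    return new

for generation in range(8):
    print("".join(cells))
    cells = majority_step(cells)
# The random initial line quickly coarsens into solid blocks of 0s and 1s:
# global pattern from local rules, with no external controller in sight.
```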

A Seriously Eclectic Bookmark Page

Jeff Thompson’s bookmark page shows a taste for the bizarre. AI isn’t the half of it – he’s even got Charles Peirce in there.

He is also the author of Yield Prolog, which has a JavaScript implementation.
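The core idea behind Yield Prolog, as I understand it, is to use the host language’s generators to get Prolog-style backtracking almost for free: a goal is a generator that yields once per solution, and a binding is undone when the generator is resumed. The little Python sketch below illustrates that idea in spirit only; it is not Yield Prolog’s actual API.

```python
# Toy illustration of the "yield for backtracking" idea behind Yield Prolog.
# This is not Yield Prolog's API, just the core trick in miniature.

class Variable:
    """A Prolog-style logic variable."""
    def __init__(self):
        self.bound = False
        self.value = None

    def unify(self, value):
        """Generator: bind, yield once if the binding succeeds, undo on resume."""
        if not self.bound:
            self.bound, self.value = True, value
            yield                                   # succeed with the new binding
            self.bound, self.value = False, None    # undo when backtracking resumes us
        elif self.value == value:
            yield                                   # already bound to the same value

def person(x):
    """Fact base: person(alice). person(bob). person(carol)."""
    for name in ("alice", "bob", "carol"):
        yield from x.unify(name)

def likes(x, y):
    """Fact base: likes(alice, bob). likes(carol, alice)."""
    for a, b in (("alice", "bob"), ("carol", "alice")):
        for _ in x.unify(a):
            yield from y.unify(b)

# Query: person(X), likes(X, Y) -- nested loops give conjunction and backtracking.
X, Y = Variable(), Variable()
for _ in person(X):
    for _ in likes(X, Y):
        print(X.value, "likes", Y.value)
```

Run it and the two solutions fall out in order, with all the unbinding handled by the generators themselves – which is more or less the trick that makes a compact Prolog engine possible in any language with generators.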