Women, Fire, and Dangerous Things
May 1st, 2004
That’s the title of one of the most fascinating books I have ever attempted to read (attempted because I’m only at chapter 2 and it’s already blowing me away!). George Lakoff (professor of linguistics at UCB) describes a new model of cognition centered on categorization.
Are you familiar with that foggy feeling of knowing there is something wrong with a model or description of something, but it’s too vague or fuzzy to crystallize into a consistent and describable thought? And then, when you learn something else, everything becomes so clear and evident that you wonder how it was ever possible for it to be fuzzy? Well, that’s exactly how I’m feeling right now.
Lakoff suggests a new model that he names experiential realism or experientialism. In really short terms (and I apologize if I don’t do it justice with my words), categorizing is not discovery but invention.
Since Plato’s idealism, the classical theory has held that categories exist independently and that the human brain is just a tool able to break the barrier of the physical world and reach this abstract ontological level.
Lakoff shows several examples where this model simply fails to describe what happens in the real world. Fascinating is the description of the categorization skills of children as they grow older: two-year-olds are capable of categorizing two cats as part of the category kitty, but they are not able to categorize a cat and a dog as part of the category animal. A year later, half of them can. Two years later, all of them can.
This alone doesn’t show that the classic model is wrong, since the brain might just be learning its way through the metaphysical level. But there are even more fascinating studies (for example, about color categorization in New Guinea languages that have only two or three terms for color!) showing that the classical model is indeed wrong in stating that concepts exist independently of the human brain.
The heart of the new theory is that thought is embodied. There seems to be no way to separate body and mind without failing to describe what’s really happening in the act of categorization (which is the core of almost anything we describe as ‘intelligent’ thinking and what machines are normally terribly poor at).
Now, this resonates widely with my own perceptions, especially after having experienced very closely what bodily damage (physical or chemical) does to your ability to think and process. This is not just an inability to connect to the metaphysical level; it’s a true and deep change in perception, but one that only outsiders perceive: the people undergoing these shifts don’t see themselves acting any differently than normal.
The question that keeps resonating in my head these days is: can the semantic web happen or not? There are severe limitations (both technological and economic) on how the semantic web is being built, but ultimately, if it doesn’t fit with human dynamics (as the first web did), it simply won’t happen.
The book suggests that the ability to categorize is critical for the existence of any form of communication and conceptual elaboration, but that, to put it in AI terms, this ability appears not to be symbolic, contrary to what the classic theory assumes.
This is where the semantic web draws the hardest criticism: many people believe that the semweb is just symbolic AI repainted in a different color. I find myself oscillating back and forth on that.
Lakoff suggests that there are universal categories, and that the way we categorize things is universal as well. While all human individuals are different, their motor capabilities are mostly equivalent (when healthy and young), and evidence suggests the same is true for mental abilities such as categorization.
I can’t stop thinking about how stupid the entire concept of I.Q. becomes in this light: true, while everybody can run, some people will run faster… but some run faster because they are trained to, and some simply because they naturally do. And some ethnic groups have specific behavioral traits. But how much of it is behavior and how much is genetics?
Anyway, I firmly believe that if we get there technologically, the problem of the semantic web will be ontology harmonization. Lakoff’s model suggests that while there are concepts like category prototypes (individuals that seem to represent their classes better than others), these don’t seem to directly represent the way we reason, but are just an external projection of something more complex happening inside.
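To make the prototype idea concrete, here is a toy sketch of graded category membership: an exemplar belongs to a category more or less strongly depending on its similarity to a prototype. All the feature vectors and names below are invented for illustration, and Lakoff himself argues that real cognition is not reducible to a model this simple; prototypes are a surface effect, not the mechanism.

```python
# Toy sketch of graded category membership via prototype similarity.
# Features, prototypes, and exemplars are all hypothetical; this is an
# illustration of prototype effects, not a model of actual cognition.

import math

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vector: (has_feathers, flies, sings, body_size)
prototype_bird = [1.0, 1.0, 1.0, 0.2]

robin   = [1.0, 1.0, 1.0, 0.1]   # close to the prototype: a "good" bird
penguin = [1.0, 0.0, 0.0, 0.6]   # still a bird, but a poor exemplar

print(similarity(robin, prototype_bird))    # high: near-prototypical member
print(similarity(penguin, prototype_bird))  # lower: graded, not all-or-nothing
```

The point of the sketch is that membership comes out graded rather than the classical all-or-nothing set membership.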
Now, if this is true, harmonization has to happen at the sub-symbolic level in order to work (as other evidence seems to suggest; see my previous ramblings about this). Otherwise, people will just keep fighting and will require an external force for convergence to happen, which won’t scale at a global level, or will fall apart as soon as those external forces (funding, for example) disappear.
The semantic web does not follow any cognition theory: it does not force URIs to describe symbols or conceptual prototypes (despite what some naive evangelists go around saying), and it does not state what kind of mechanical reasoning should be done with these identified things.
It does show, however, how relationships are critical for an information processing model that goes beyond the one we are currently using for the web. This categorization by functional properties is exactly what the modern theories of cognition suggest. The semweb simply aims at making it more explicit.
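The “making relationships explicit” point can be sketched with the semantic web’s own data model, the RDF triple (subject, predicate, object). Below is a minimal stand-in using plain Python tuples instead of a real RDF library; the `ex:` URIs and the one-step inference rule are hypothetical, chosen to echo the cat/dog/animal example above.

```python
# Minimal sketch of the RDF triple model: information as explicit
# (subject, predicate, object) relationships. URIs are hypothetical.

triples = {
    ("ex:Felix", "rdf:type",        "ex:Cat"),
    ("ex:Cat",   "rdfs:subClassOf", "ex:Animal"),
    ("ex:Rex",   "rdf:type",        "ex:Dog"),
    ("ex:Dog",   "rdfs:subClassOf", "ex:Animal"),
}

def instances_of(cls):
    """Naive one-step inference: members of cls, directly or via a subclass."""
    subclasses = {s for (s, p, o) in triples
                  if p == "rdfs:subClassOf" and o == cls} | {cls}
    return {s for (s, p, o) in triples
            if p == "rdf:type" and o in subclasses}

print(sorted(instances_of("ex:Animal")))  # ['ex:Felix', 'ex:Rex']
```

Because the relationships are explicit data rather than implicit in prose or page layout, even this trivial machine can conclude that a cat and a dog are both animals, which is exactly the categorization step that takes human children an extra year or two.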
But this explicitness is often confused with a “projection on a standardized conceptual model”, and this is what is hurting the most. It’s like going around saying that democracy is good but there must be only one political party. Like those who advocate that the delete key on your keyboard is useless because you should learn not to make mistakes.
It feels wrong. It feels patronizing. It feels silly and artificial.
But the more familiar I get with it (and with the people behind it), the more I realize how the model behind it is general enough to describe a sub-symbolic view of the world. The obvious drawbacks of such a system will be incredibly high computational needs, non-monotonic reasoning, and lack of predictability. Different systems might tell you different stories and yield different results.
In the classic model of cognition, computer scientists believed they could invent a machine with enough reasoning ability to overcome human limitations and grasp the metaphysical level, reaching the truth. Deus ex machina. eheh
I’ve always thought there is no such thing as “the truth”, because everything is projected, everything is filtered by our mental models (that’s when I stopped being interested in what physicists call “the theory of everything”). There is what a single individual believes to be true. This changes with time, as memory shifts and new information is acquired.
We are measuring the world with a meter that changes, but our intrinsic egocentrism makes us believe that the changing results come from the world changing, not us!
The semantic web is not trying to reach the truth because there is no underlying assumption that one truth exists.
Still, it cannot cope with provenance of statements, or mistakes, or disagreement.
TimBL answered my question about this on Friday by saying that “trust” is the thing that is missing in the semweb architecture to address those issues.
I personally think that either his notion of the semweb is much more limited in scope than mine (and I would be very surprised to find that out), or he is seriously underestimating the socio-economic problems of this “trust” approach, or I don’t get it.
Probably the last one; that’s why I’m reading so much these days.
No matter what, this is fun. Real fun.