Mike Bergman has a nice post about the lack of appropriate means for representing ambiguity on the Web that is worth a look.
It reminded me again of Shakey the robot - its promise and its ultimate failure. Shakey was one of the first 'autonomous' robots, capable of making its own decisions to guide its actions based on data from its sensors. In its heyday in the 60's and 70's it stirred up Jetsonian imaginations of robot vacuum cleaners and so forth, but in the end it failed to deliver anything of the kind. Shakey's mind was trapped in a strict, orderly world of logical rules. Much like the character Brooks Hatlen from The Shawshank Redemption, when faced with the uncertain reality of our world he simply couldn't cope. (Watch him shake in trepidation!)
The robot minds that did end up making it into our vacuum cleaners are of a very different kind. In a movement initiated by Rod Brooks in the late 80's - now personified at home by the iRobot vacuum cleaners, in the academy by the field of 'probabilistic robotics' and on the podium by Stanley - the rules and central planning centers of good old fashioned AI are gone. In their place, we find statistical models that - like the world they were built to deal with and perhaps a bit more like human minds - constantly change to reflect the uncertainty of reality.
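To make the contrast concrete, here is a minimal sketch (my own illustration, not from any of the systems named above) of the kind of update at the heart of probabilistic robotics: a discrete Bayes filter. Instead of a rule deciding where the robot is, a belief distribution gets reweighted by every noisy sensor reading. The positions, door layout, and sensor accuracy are all hypothetical numbers chosen for the example.

```python
# Illustrative one-dimensional discrete Bayes filter. The robot's
# "mind" is just a probability distribution over positions that is
# reweighted and renormalized on each noisy observation.

def bayes_update(belief, likelihoods):
    """Multiply the prior belief by the sensor likelihoods, then normalize."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Four possible positions; the robot starts maximally uncertain.
belief = [0.25, 0.25, 0.25, 0.25]

# A noisy sensor reports "door". Suppose doors sit at positions 0 and 2,
# and the sensor is right 80% of the time (hypothetical numbers).
likelihoods = [0.8, 0.2, 0.8, 0.2]

belief = bayes_update(belief, likelihoods)
print(belief)  # -> [0.4, 0.1, 0.4, 0.1]
```

The point of the toy example is that the answer never collapses into a crisp logical fact: uncertainty shrinks with evidence, but the model keeps representing it - exactly what Shakey's rule-based world could not do.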
I fear that our current approach to "reasoning" on the semantic web has much more in common with old Shakey than it does with Stanley.