I'm trying to understand the relationship between the tags researchers use to describe academic citations and the descriptors professional indexers use for the same citations. To do so, I've been trying to get a grasp of the differences at a level above raw strings, using tools from the Unified Medical Language System (UMLS). Right now I'm having my doubts about whether this is really going anywhere. There are differences, to be sure, but I'm not convinced they are really meaningful. If the data you see below says something to you, I'd be obliged if you could tell me what it says. You should be able to access it in raw form in my newly generated area of the Many Eyes website.
The following tag clouds were generated with Many Eyes from the terms used to describe about 87,000 PubMed citations. Each tag in the cloud corresponds to a UMLS semantic type. For the first cloud, the types are derived from the MeSH descriptors associated with the citations. For the second cloud, the types are derived from the tags added by Connotea users to the same citations. There is some bias in the Connotea cloud because I only considered tags for which I could rapidly and computationally identify a matching UMLS concept with high precision, so many of the tags aren't actually represented. 'Functional concept' seems to be a standout for Connotea. It represents "a concept which is of interest because it pertains to the carrying out of a process or activity". A wordle of the Connotea tags that linked to functional concepts appears at the bottom.
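For what it's worth, the tallying step behind the cloud data can be sketched roughly like this. The tag-to-semantic-type table below is entirely made up for illustration (in the real pipeline the mapping comes from a UMLS Metathesaurus lookup), and the function name is mine, not part of any actual tool:

```python
from collections import Counter

# Hypothetical tag -> UMLS semantic type mapping; illustrative only.
# A real run would resolve each tag to a concept (CUI) and read the
# semantic types off that concept.
TAG_TO_SEMANTIC_TYPES = {
    "apoptosis": ["Cell Function"],
    "p53": ["Gene or Genome"],
    "review": ["Functional Concept"],
}

def semantic_type_counts(tags):
    """Tally semantic types for tags that map to a UMLS concept.
    Unmapped tags (e.g. 'to_read') are simply skipped, which is the
    source of the bias mentioned above."""
    counts = Counter()
    for tag in tags:
        for st in TAG_TO_SEMANTIC_TYPES.get(tag.lower(), []):
            counts[st] += 1
    return counts

print(semantic_type_counts(["p53", "apoptosis", "to_read", "review"]))
```

The resulting counts are what the cloud renderer sizes each tag by.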
Saturday, December 6, 2008
UMLS semantic type clouds
Connotea tags below, MeSH terms above
Connotea tags for functional concepts below.
4 comments:
Initially, I would have expected the manual Connotea taggers to be more specific than the MeSH indexers. On the other hand, if I recall correctly, the PubMed MeSH annotators are professionals, whereas Connotea users are not. Is there some "semantic diff" you can calculate from the taxonomies of terms to find out whether there is a significant difference?
The third tag cloud is a lot more on topic than the other two. For instance, who would care to search PubMed on 'quantitative', 'spatial', or 'temporal concept', or on something less generic, but still generic, like 'disease or syndrome'? Put differently, the third one contains terms that are more specific (finer-grained) than the top two.
In addition, because the 'wordle' one is more to the point than the first two, giving the details of what is behind "functional concept", maybe one has to approach tag cloud generation as layers of detail: the first two aggregate too much (so as not to produce a tremendously huge tag cloud) and thereby become largely meaningless. Would there be a way to make tag cloud generation a step-wise procedure? E.g., first the near-useless MeSH/Connotea one, then, say, the tag cloud/wordle for "pharmacological substance", and then from that cloud select one term and go a level deeper. That way you could also make a more fine-grained comparison between what's behind "pharmacological substance" in the first cloud and in the Connotea cloud, and maybe then a clearer difference between the two tagging systems would show up.
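The step-wise drill-down suggested here could be sketched as a two-level index, semantic type first, then the terms behind it. The concept lists below are made-up examples, not real Connotea data:

```python
from collections import Counter

# Illustrative two-level index: semantic type -> concept terms.
CONCEPTS_BY_TYPE = {
    "Functional Concept": ["structure", "binding", "regulation"],
    "Pharmacologic Substance": ["aspirin", "imatinib"],
}

# Level 1: cloud of semantic types, weighted by how many concepts each holds.
top_cloud = Counter({t: len(terms) for t, terms in CONCEPTS_BY_TYPE.items()})

# Level 2: the user picks one type; render the cloud of the terms behind it.
detail_cloud = Counter(CONCEPTS_BY_TYPE["Functional Concept"])

print(top_cloud)
print(detail_cloud)
```

The same level-2 step, run against both the MeSH and the Connotea index for one shared type, would give the per-type comparison described above.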
According to simple difference-of-proportions tests, there are many statistically significant differences between the proportions of semantic types present in the two sets. It's not hard to achieve "significance" with sample sizes this large... I'm still not sure whether I really think it's a meaningful result.
The strongest difference at the level of semantic types is for the 'functional concept' type, which covers about 7% of the concepts in the Connotea tags but only 0.2% of the concepts linked to the MeSH terms. That's why I expanded it for the last cloud. So, to your suggestion about a stepwise exploration of the concept hierarchy via the cloud representation: I agree, and that's what I was starting to do here. I think I should do another post that clarifies the process a little bit.
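For concreteness, a difference-of-proportions (two-proportion z) test on counts matching those percentages might look like the sketch below. The sample sizes are illustrative assumptions, not the real totals, which aren't given here:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts only: ~7% of 20,000 mapped Connotea concepts vs
# ~0.2% of 100,000 MeSH-linked concepts.
z = two_proportion_z(1400, 20000, 200, 100000)
print(z)
```

With counts on this scale the z statistic comes out enormous, which is exactly the "easy significance at large n" caveat above.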
The first major concern I have is really to do with the data lost in the automatic tag-to-UMLS-concept mapping step. Based on some rounds of validation, I think most of the concepts I identified are correct, but when I'm trying to determine the proportions of concept representation for the whole set, the concepts that couldn't be mapped (due to spelling errors, absence from the UMLS Metathesaurus, or whatever) loom with the potential of rendering the observed proportions meaningless. The other major concern is that I'm not really sure what to do with such a result should it be proven. What action can or should be taken if I discover that Connotea users (and CiteULike users, it seems) really do consistently employ functional concepts to describe things more often than MeSH indexers do? The two things are different; so what?
>> I think I should do another post that clarifies the process a little bit.
yes, that would be interesting (to me at least, but then, I like granularity)
>> What action can or should be taken if I discover that Connotea users (and CiteULike users, it seems) really do consistently employ functional concepts to describe things more often than MeSH indexers do? The two things are different; so what?
would an action be necessary? One aspect is of course the psycho-cognitive aspect (about which I can say little); another is to look at which one is better at retrieving the desired PubMed articles. Perhaps the researchers search more often by functional concept and actually not in the MeSH way? Why would volunteers start re-tagging an already annotated corpus anyway? Who knows, maybe the MeSH way can be improved upon to meet its users' needs thanks to bottom-up feedback, or a new branch with new kinds of keywords, such as the "functional concept" one (or one with less philosophical baggage), could be added.
> would an action be necessary?
Necessary, no, but desirable, yes. My current favorite definition of knowledge is that it is "information with guidance for action based upon insight and experience". Though I'm not really against basic information gathering and dispersal, for writing purposes I'd much rather write about knowledge gained. So, if I 'discover' something, I'd like to be able to make a strong statement about how that information can be used.
> one aspect is of course the psycho-cognitive aspect (about which I can say little),
Sure. Though this is much harder than it looks because of the huge variety of ways that tags can get entered into the system. The API and the upload feature in particular make things complicated.
> another one is to look at which one is better at retrieving the desired PubMed articles.
Maybe, but at the moment the coverage of PubMed by social taggers is so thin that I doubt the experiment would really be worth doing right now. Maybe in about a year...
> Perhaps the researchers search more often by functional concept and actually not in the MeSH way?
Maybe again. I'd love to get my hands on the query logs for PubMed; the answer is in there.
> Why would volunteers start re-tagging an already annotated corpus anyway?
That one I have answers for: A) they want personalized reference collections; B) they have personal organizational needs (to_read...); but most importantly, I think, C) the MeSH and other annotations are totally hidden from their view! If Connotea etc. sucked them in automatically and displayed them as tags, I bet people would be quite happy to use them. I've been meaning to write about that and to add it as a feature to ED...
> Who knows, maybe the MeSH way can be improved upon to meet its users' needs thanks to bottom-up feedback, or a new branch with new kinds of keywords, such as the "functional concept" one (or one with less philosophical baggage), could be added.
That would be very cool. There's lots of work to do to justify that, and more to make it happen, but it's a possibility.