The Scientific Activist (Archives)
Feb 17, 2006

Science Gets Googly

How do you know how important a website is? You probably already have a good idea without looking at any numbers, but if you want to get all quantitative about it, one way would be to look at how many hits it gets per day. The more commonly used measure, though, is its Google PageRank, which is based not only on how many other webpages link to a page, but also on how important those linking pages are themselves (based on who links to them, and so on).
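Just to make that concrete, here's a minimal sketch of the idea in Python. The four-page "web" and the 0.85 damping factor are standard textbook choices for illustration, not anything Google has published about its actual setup:

```python
# A minimal PageRank sketch: repeatedly pass each page's score along its
# outgoing links, with the textbook 0.85 damping factor. The four-page
# "web" below is entirely made up for illustration.

links = {
    "A": ["B", "C"],   # page A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D links to C, but nothing links back to D
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start everyone equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share        # pass importance downstream
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")
```

Run it and C comes out on top because everything funnels into it, while D, which nobody links to, stays at the bottom no matter how much it links out.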

Editorial note: I hope that we don’t put too much weight on PageRanks, because for some reason The Scientific Activist doesn’t have one yet, despite boasting over 100,000 visitors in its first month! What’s wrong, Google? Was it something I said? I can make it up to you. I promise. Pleeeeeaaaaaase….

Anyways, where was I?

Oh, yes, so today the journal Nature reported that some researchers are pushing to use this same PageRank technology to rate scientific journals. Currently, the importance of a journal is quantitatively measured by a number called the ISI Impact Factor: the number of citations a journal's articles received in a given year, counting only articles published in the previous two years, divided by the number of articles the journal published in those two years (in short, citations per article). Not everyone is satisfied with this system, and in another entry in the eternal debate on quality versus quantity, Nature explains:
Now Johan Bollen and his colleagues at the Research Library of Los Alamos National Laboratory in New Mexico are focusing on Google's PageRank (PR) algorithm. The algorithm provides a kind of peer assessment of the value of a web page, by counting not just the number of pages linking to it, but also the number of pages pointing to those links, and so on. So a link from a popular page is given a higher weighting than one from an unpopular page.

The algorithm can be applied to research publications by analysing how many times those who cite a paper are themselves cited. Whereas the IF measures crude 'popularity', PR is a measure of prestige, says Bollen. He predicts that metrics such as the PR ranking may come to be more influential in the perception of a journal's status than the traditional IF. "Web searchers have collectively decided that PageRank helps them separate the wheat from the chaff," he says.
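If you already have a table of how often each journal cites each other journal, turning Bollen's idea into code is short. Here's a sketch using the networkx library; the journal names and citation counts are invented, and weighting edges by citation count is my reading of the approach, not necessarily the exact recipe the Los Alamos group used:

```python
# Sketch: PageRank over a journal-to-journal citation graph using networkx.
# An edge (A -> B, weight=w) means articles in journal A cited journal B
# w times. All journal names and counts here are invented for illustration.
import networkx as nx

citations = [
    ("J. Niche Results", "Big Flashy Journal", 120),
    ("J. Niche Results", "Solid Society Journal", 40),
    ("Big Flashy Journal", "Solid Society Journal", 300),
    ("Solid Society Journal", "Big Flashy Journal", 250),
    ("Big Flashy Journal", "J. Niche Results", 15),
]

G = nx.DiGraph()
for citing, cited, count in citations:
    G.add_edge(citing, cited, weight=count)

# weight="weight" makes heavily-used citation links count for more,
# and alpha=0.85 is the usual damping factor.
pr = nx.pagerank(G, alpha=0.85, weight="weight")
for journal, score in sorted(pr.items(), key=lambda x: -x[1]):
    print(f"{journal}: {score:.3f}")
```

The weighting is where the "prestige" part comes in: a citation from a journal that is itself heavily cited counts for more than one from a journal nobody cites.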

So, why do we even need to measure the relative importance of scientific journals at all? For bragging rights, of course!

Well, maybe there’s a little more to it than that:
Ranking journals and publications is not just an academic exercise. Such schemes are increasingly used by funding agencies to assess the research of individuals and departments. They also serve as a guide for librarians choosing which journals to subscribe to. All this puts pressure both on researchers to publish in journals with high rankings and on journal editors to attract papers that will boost their journal's profile.

People in the scientific community already have a basic idea of the relative importance of different journals. For example, a scientist knows that The Journal of Biological Chemistry is more prestigious than Protein and Peptide Letters, just like you probably know that nytimes.com is a more influential website than, well, scientificactivist.blogspot.com. Scientists can already agree on the basic rankings, at least at the top. The vast majority of scientists would consider Science and Nature to be the “best” journals to publish in. There would probably be a pretty strong consensus on the next set of journals as well, which would include Proceedings of the National Academy of Sciences and Cell. The further down you go, though, the messier it gets. That’s where the Impact Factor comes into play.

Near the top, though, the Impact Factor does not line up with this general consensus, but the PageRank does... sort of.

[Table: the Nature article's top journals as ranked by Impact Factor, PageRank, and Y-factor]

Where the Impact Factor and PageRank each fall short, a different measure called the Y-Factor (the product of the Impact Factor and the PageRank) seems to really get the job done. Of course, this all depends on what exactly we want to measure, but if we're just interested in attaching numbers to what we already know, then the Y-Factor puts the other systems to shame. Apparently, Bollen agrees:
Bollen, however, proposes combining the two metrics. "One can more completely evaluate the status of a journal by comparing and aggregating the different ways it has acquired that status," he says. Some journals, he points out, can have high IFs but low PRs (perhaps indicating a popular but less prestigious journal), and vice versa (for a high-quality but niche publication). Using information from different metrics would also make the rankings harder to manipulate, he adds. So Bollen and his colleagues propose ranking journals according to the product of the IF and PR, a measure they call the Y-factor….

…But for Bollen, ranking journals more effectively by combining different ranking systems could help protect the integrity of science. He warns that scientists and funding agencies have used the ranking system well beyond its intended purpose. "We've heard horror stories from colleagues who have been subjected to evaluation by their departments or national funding agencies which they felt were strongly influenced by their personal IF," he says. "Many fear this may eventually reduce the healthy diversity of viewpoints and research subjects that we would normally hope to find in the scholarly community."
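For what it's worth, the Y-factor itself involves almost no machinery, as the article says: just multiply the two numbers. A quick sketch with invented values (the real figures are in the Nature piece):

```python
# The Y-factor is just the product of a journal's Impact Factor and its
# PageRank. Every journal and value below is invented for illustration.
journals = {
    # name: (impact_factor, pagerank)
    "Big Flashy Journal":    (30.0, 0.050),
    "Solid Society Journal": (6.0,  0.080),
    "Popular Review Rag":    (25.0, 0.010),
}

for name, (impact_factor, pagerank) in journals.items():
    y_factor = impact_factor * pagerank
    print(f"{name}: IF={impact_factor}, PR={pagerank}, Y={y_factor:.2f}")
```

Notice how the hypothetical review journal with the inflated Impact Factor but low PageRank drops below the society journal once you multiply; that's exactly the "popular but less prestigious" case Bollen describes.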

So, the message here is that if people are going to abuse or overuse these rankings, we might as well make them as accurate as possible. I could buy that. If we really want to be scientific about this, though, we need to see detailed studies showing that the Y-Factor works best across the whole range of rankings, not just at the top. That could be based on surveys of scientists' opinions, or on other, less subjective measures. If the Y-Factor really does prove to be a better measure of a journal's impact, then the scientific community should embrace the improvement.


Update (19 February 15:35 GMT): Interestingly, The Scientific Activist suddenly has a Google PageRank of 5 (out of 10). Coincidence?

Update (19 February 20:25 GMT): This is weird: now my PageRank is back to zero. WTF?

4 Comments:

  • Interesting to see the different top 10 journals, but in order to compare research across disciplines the impact factors should be normalised with respect to the number of people working in the field. For example, the Y factor does not seem to have any physics journals.

    Of course, this would then raise the issue of determining the boundaries of different disciplines...

    By Anonymous Anonymous, at Fri Feb 17, 03:12:00 PM  

  • This is all interesting, but it still leaves a question in my mind, which is how do you stop companies or researchers from gaming the system? For example, Google Page Ranks were (and probably still are) gamed by creating a script that creates a bunch of fake blogs that just pull content from other blogs, but that have links to pages that the scammers want to have show up at the top. Even though one unknown blog doesn't add a lot to the page rank, when you can create thousands of them in minutes, it becomes easier to get your page ranked higher and higher. I could easily see an unscrupulous person or company trying tactics like that to get their research or journal noticed when it shouldn't be.

    By Anonymous Anonymous, at Fri Feb 17, 06:10:00 PM  

  • Page rank does that sometimes. For that matter a 5/10 isn't worth that much - that's what my personal web page gets. It does seem to matter how long your web page has been up, though. *shrug*

    By Blogger Papa Bear, at Mon Feb 20, 07:35:00 PM  

  • Do librarians actually use Page Rank to make decisions about purchasing journals? It seems a plausible assertion, yet the two major library purchasing decision makers here at Mich. that gave a detailed presentation last semester about how they make choices did not even mention it.

    The current line generally (at least for reference librarians) is the less you base on google (or wikipedia for that matter) the better. Popularity is not accuracy. Though this might be a case of a theoretical argument against an already present practice, but that's another story.

    By Anonymous Anonymous, at Tue Feb 21, 02:11:00 PM  
