    Who Is Today's Einstein? An Exercise In Ranking Scientists
    By Johannes Koelman | February 5th 2011
    Who cites whom? Science funding, tenure-track appointments, everything that matters to young scientists is increasingly dominated by citation analysis. This is certainly true in physics. Physics is very much a cumulative endeavor: each physicist builds on earlier work, and each new physics publication cites the papers it builds upon. It is therefore not unreasonable to link the impact of a paper to the number of citations it attracts.

    With large citation databases such as Google Scholar at everybody's fingertips, a citation impact analysis of an individual takes no more than a few mouse clicks. As a result, citation statistics are increasingly seen as a practical means to determine the scientific impact of physicists. The citation statistics of an individual researcher are commonly summarized in the form of a citation index: a single number that is transparent, objective, and supposed to describe the total impact of the individual's published work.

    Dozens of different indices have been defined, yet they all have their flaws. In fact, they are flawed to an extent that really makes me wonder. Much better indices can easily be defined, so why has no one come up with improved, yet simple and elegant, citation indices?


    Flawed Indices

    Before I describe such new indices, I need to explain what is wrong with existing citation indices. I will focus on the Hirsch or h-index, as this is by far the most widely used citation index. 

    A scientist can claim an h-index of N if (s)he has authored or co-authored N papers that have each been cited at least N times. Defined this way, the h-index attempts to measure both the productivity and the impact of a scientist's published work. However, the h-index is really geared towards scientists with an established publishing career. It is largely meaningless for the researchers who are most affected by it: scientists early in their careers. The generic example to explain this is to imagine a new Einstein: a young scientist who went through a 'miraculous period' and published three groundbreaking papers. This person is applying for an academic position. The committee checks the h-index of all candidates, and appoints the candidate with the highest h-index: 'Mr. Mediocre', a young scientist who co-authored five useless papers that each got cited five times, mainly by his co-authors and a few other impact-less scientists in his immediate network.

    What went wrong? 

    The young Einstein published only three papers, and then focused on a challenging new problem that she hopes to crack in a few years' time. In the meantime, the three papers attracted several hundred citations each, but the h-index is blind to that statistic. By definition, the h-index of the young Einstein, with a total of three publications, is capped at three. Her competitor, Mr. Mediocre, has attracted far fewer citations, but these were spread over more publications, and therefore resulted in a higher h-index.

    Later in their careers, provided she secures an academic position and is able to continue publishing, it is virtually certain that the young Einstein will surpass Mr. Mediocre in terms of the h-index as well. In other words, the h-index does a decent job of measuring total lifetime achievement. However, that doesn't help the young Einstein right now. As useful as it might be for measuring full-career achievement, for scientists early in their careers the h-index measures mostly the quantity rather than the quality of their publications.

    Can we define a citation index that is more useful for ranking the impact of researchers who are still early in their careers? The answer is yes, and it is really easy to come up with such indices. I will give you two examples: first the simple Einstein or E-index, and second the more sophisticated Pythagorean or P-index.


    The Einstein Index

    The E- (Einstein) index is geared to correct the young-Einstein problem. This index is defined as the sum of the citations to the three most cited publications of the person whose citation impact is being measured. It should be clear that this index does a much better job of comparing the young Einstein with Mr. Mediocre. Mr. Mediocre, with five publications that are each cited five times, reaches an E-index of 15. The young Einstein, however, amasses an E-index score of many hundreds. In contrast to the h-index, the E-index does measure the quality rather than the quantity of publications.
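    To make the definition concrete, here is a minimal sketch in Python (the citation counts are made-up illustration values, not real data):

    ```python
    def einstein_index(citations):
        """E-index: the sum of the citation counts of the three most cited papers."""
        return sum(sorted(citations, reverse=True)[:3])

    # Mr. Mediocre: five papers, five citations each.
    print(einstein_index([5, 5, 5, 5, 5]))   # 15

    # A young Einstein: three heavily cited papers.
    print(einstein_index([450, 400, 350]))   # 1200
    ```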

    Now you might reason: "OK, this E-index seems to have advantages for measuring early-career scientific impact, but does it not in general lead to unrealistic rankings? Would a scientist with one 'lucky article' not end up on top?"

    This is not the case.

    Using Google Scholar, I calculated the Einstein index of a number of well-known theoretical physicists.* I have not attempted to generate a complete overview, but the list does include many famous theoreticians, including a bunch of Nobel laureates. The chart below shows the result.

    [Figure: Einstein indices for well-known theoretical physicists and cosmologists. The Einstein index is plotted vertically, and each scientist is labeled by the search term used in Google Scholar. Nobel laureates are shown in amber, the others in blue.]


    There might be a few surprises in the picture, but the overall ranking looks more than reasonable. The bottom level of the chart is set at an Einstein index of 1,500, corresponding to three papers attracting 500 citations each: a more than respectable tally. Erik Verlinde, who featured prominently in this blog last year, weighs in at a level of 2,000. Nobel laureates typically have an Einstein index of around 3,000 or higher. Physics and cosmology blogger Sean Carroll has entered the bottom end of this level, which, without any doubt, makes him the physics blogger with the highest Einstein index. At levels above 6,000 we see the true giants emerging. Well-known names like Stephen Hawking reside at this level. Lay people often expect him to head the list, but most physicists will not be surprised to see that folks like Ed Witten are well ahead of Hawking. Also ahead of Hawking, and the highest-scoring woman on the list, is Lisa Randall, at a score approaching the 10,000 mark. Just ahead of Randall, and the highest-scoring Nobel laureate, is Steven Weinberg. Weinberg is accompanied at the 10,500 level by Ed Witten. Flying high above these two giants and heading the list is Juan Maldacena, with an astonishing Einstein index of 12,000. It is reassuring to see him come out in the top spot: if there is any contemporary theoretical physicist who deserves the title 'Einstein of today', it's him.** As many of you will know, Maldacena is the guy who demonstrated how nature can be holographic, en passant proving Hawking wrong on the fundamental nature of Hawking radiation. For a popular account of his amazing work on the AdS/CFT correspondence, you should consult his Scientific American article.


    Zipf And The Pythagorean Index

    For the people listed, the most highly cited paper constitutes 40-70% of the Einstein index. In other words, the feared 'single highly cited outlier' effect is absent: a person who has published one highly cited paper will publish more of these during his/her career. This is in line with what one may expect based on the observed Zipf distribution of citations. This Zipf distribution states that when all publications of a scientist are ranked from most cited to least cited, the citations per paper are inversely proportional to the paper's rank (1 to n) on the list. Averaged over the whole group, this Zipf distribution indeed nicely emerges from the data used to construct the plot above. It turns out that, on average, the second (third) ranking paper for a given person attracts 2.2 (3.1) times fewer citations than his/her top-ranking paper.
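    As a sketch of that averaging step (with made-up citation lists standing in for the real Google Scholar data):

    ```python
    def rank_ratios(citations):
        """Return c1/c2 and c1/c3 for a citation list, sorted in descending order."""
        c = sorted(citations, reverse=True)
        return c[0] / c[1], c[0] / c[2]

    # Hypothetical per-author citation lists, roughly Zipf-shaped.
    authors = [
        [900, 420, 310, 120, 40],
        [610, 280, 190, 90],
        [1500, 680, 470, 200, 150, 60],
    ]

    r2 = sum(rank_ratios(a)[0] for a in authors) / len(authors)
    r3 = sum(rank_ratios(a)[1] for a in authors) / len(authors)
    print(f"avg c1/c2 = {r2:.1f}, avg c1/c3 = {r3:.1f}")  # Zipf predicts 2.0 and 3.0
    ```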

    This brings us to a more advanced citation index: the P- or Pythagorean index. This index results from fitting a given person's citation-count distribution to a Zipf distribution. I will not go into any of the math here, but for those interested: the Pythagorean index follows from performing a least-squares analysis on citation counts that are close to a Zipf distribution. Note that there is no 'wiggle room' and no possibility for tweaking. The Pythagorean index follows from a rigorous analysis; it is what it is.*** The resulting index takes the shape of an equation that lends it its name: for a given person, the square of his/her Pythagorean index equals the sum of the squares of the numbers of citations to each of his/her papers. So a person with two publications that attracted 4 and 3 citations respectively will have a squared P-index of 16 + 9 = 25, or a P-index of 5.****
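    In code the definition is a one-liner; this sketch reproduces the 4-and-3-citations example from the text:

    ```python
    from math import sqrt

    def pythagorean_index(citations):
        """P-index: the Euclidean length of the citation-count vector,
        i.e. sqrt(c1^2 + c2^2 + ... + cn^2)."""
        return sqrt(sum(c * c for c in citations))

    print(pythagorean_index([4, 3]))  # 5.0
    ```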

    The Pythagorean index can be seen as the distance in 'citation space' between the scientist being checked and a layperson who has not published any scientific work. This index nicely eliminates the various issues plaguing the h-index and related indices. Most importantly, just like the Einstein index, the Pythagorean index places the emphasis on a person's few most highly cited publications. It is therefore an indicator that can be applied to measure early-career achievements.


    Tiny Indices, Overhyped Messages

    When you have some time to spare, you might want to cross-plot the Einstein and Pythagorean indices for a group of scientists of your choice. The result will be a narrow cloud of data points. It follows that for most practical purposes the simple, straightforward Einstein index suffices.
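    The narrow cloud is what the Zipf picture predicts. Here is a sketch of the reasoning, under the idealized assumption (an idealization, not a measured fact) that the r-th ranked paper of an author with citation scale C attracts c_r = C/r citations:

    ```latex
    E = C\left(1 + \tfrac{1}{2} + \tfrac{1}{3}\right) = \tfrac{11}{6}\,C \approx 1.83\,C ,
    \qquad
    P = C \sqrt{\sum_{r=1}^{\infty} \frac{1}{r^{2}}} = \frac{\pi}{\sqrt{6}}\,C \approx 1.28\,C .
    ```

    Both indices are then fixed multiples of the single scale parameter C, so E/P is roughly 1.43 for any author who follows the Zipf pattern exactly, and the cross-plot collapses onto a straight line.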

    The upshot is that anyone can check the scientific impact of each and every scientist: look up the name of the person you are interested in using Google Scholar, and add up the numbers of citations to his/her three highest-scoring papers. This gives you a huge advantage when confronted with over-hyped messages. An example to illustrate the point: left, right and center you witness blogs and articles appearing about a 'surfer dude' who has constructed an 'exceptionally simple theory of everything'. You take a look at his paper, and at first sight you are not particularly impressed. Are you wrong in your judgment? Are you somehow overlooking the brilliance of this new theory? Should you invest in studying this paper in much more detail? You decide to check the Einstein index of its author, A. G. Lisi. You arrive at an Einstein index of 63. You decide to spend your valuable time on something that is likely more rewarding.

    "Wait a second!" I hear you say, "a low Einstein index of the author does not mean this work is crap. It could as well be that the whole physics community is ignoring this brilliant new kid on the block."

    Well, let me tell you a little secret: the global physics community consists of many thousands of folks who have built their careers on opening new avenues and proving others wrong. If there is some merit in a new idea, you can be sure that hordes of these folks will jump on it and try to extend it and apply it to new areas. If this doesn't happen, even after more than three years, chances are the idea has little merit.


    Tiny Indices, Huge Egos

    Scientists like to measure and are fond of numbers. So they must have embraced citation analysis as the best thing since sliced bread.

    Forget it.

    As unbelievable as it may seem to outsiders, scientists are prone to human emotions. More particularly, the egos of scientists are as inflatable as anyone else's, and probably a wee bit more. That means that the vast majority of scientists must be convinced their work is better than that of the average scientist. Picture such a scientist vanity-checking his/her Einstein index, and ending up with some miserable double-digit figure. Would it be possible that this person falls victim to cognitive-dissonance-reduction symptoms? Well, I guarantee you this person will use all his/her polemic skills in trying to prove citation analysis wrong. For this person I have a little advice at the bottom of this blog post.

    Now, I am not saying here that citation analysis is a magic bullet. It should be clear that it can be nothing more than a starting point in a judgment process. An objective and practical starting point, but only a starting point. Hirsch, in his paper on the h-index (which has emerged as his own top-cited paper), remarks:

    "Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual. This and the fact that there can always be exceptions to rules should be kept in mind especially in life-changing decision such as the granting or denying of tenure."

     
    Wise words, which also apply to the indices discussed here. One last thing: whether you like it or not, citation analysis is not going to disappear. The thing is: the people who decide on grants love it. It creates a level playing field, and gives them an objective measure. So accept the wide availability of citation statistics as a given, and focus on delivering top-notch research. Once in a while you will strike gold. If not, be honest with yourself: are you really the creative genius you think you are? Is science really your calling?


    Notes

    * You can do this yourself in Google Scholar. Just enter the name of the scientist whose citation impact you want to determine, and hit the return key. You will see a list of publications by this scientist, together with the number of citations each publication has attracted. The top-cited papers appear on top. Now just select the three papers with the highest citation counts (make sure you include only research papers and no books) and add up the three numbers. This gives you the E-index.
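    For those who prefer not to click and count by hand, a rough automation sketch follows. It assumes the third-party Python package `scholarly` (an assumption: its interface has changed across versions, and it cannot tell books and reviews apart from research papers, so the manual filtering step above still applies):

    ```python
    from scholarly import scholarly  # third-party package: pip install scholarly

    # Look up the author and pull the citation count of each publication.
    author = next(scholarly.search_author("Juan Maldacena"))
    author = scholarly.fill(author, sections=["publications"])
    counts = sorted((pub.get("num_citations", 0) for pub in author["publications"]),
                    reverse=True)

    # Unfiltered E-index: books and reviews still need to be removed by hand.
    print("E-index (unfiltered):", sum(counts[:3]))
    ```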

    ** It is not only his top ranking in terms of the Einstein index that makes Maldacena the right candidate for the title 'Einstein of today'. Like Einstein, Maldacena is an immigrant to the US, and like Einstein he works at the famous Institute for Advanced Study in Princeton. If not literally, then at least figuratively speaking, Maldacena is sitting at Einstein's desk.

    *** This is not the case for the h-index, which leaves a lot of room for 'tuning'. For instance, the h-index can be modified into a 10h-index that is claimed to strike a better balance between measuring quality and quantity.

    **** A useful generalization is to correct for co-authorship: divide the square of the citation count of each paper by that paper's number of authors.
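    A minimal sketch of that correction (the paper data are made-up illustration values):

    ```python
    from math import sqrt

    def corrected_p_index(papers):
        """Co-authorship-corrected P-index: each squared citation count is
        divided by the paper's number of authors.
        `papers` is a list of (citations, n_authors) pairs."""
        return sqrt(sum(c * c / n for c, n in papers))

    # Two papers with 4 and 3 citations; the first was written by two authors.
    print(corrected_p_index([(4, 2), (3, 1)]))  # sqrt(16/2 + 9) ~ 4.12
    ```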

    ----------------------
    The Hammock Physicist on: What's Wrong With E=m.c2?, What's Wrong With 'Relativity'? Entropic Gravity, Entropic Force, Shut Down LHC?, Game Theory, Metric Vs Imperial, Big Bang, Dark Energy, Chaos And Time's Arrow, The Grand Arena, Square Root Of The Universe, Physics In A Nutshell, The Longest Path, Hotel Boltzmann, Quantum Telepathy, Quantum Viruses, QHD, Fibonacci Chaos, Counting A Black Hole, Entropic Everything, God, Godel, Gravity, Holographic Automata

    Comments

    Stefano Borini
    Just my 2c. It is important to keep in mind two additional points. One is trivial, the other is more subtle.

    The trivial one is that, of course, none of these indices are portable across different scientific communities, because they depend on the size of the scientific community involved in a given topic. Normalizing these data may however be very important when evaluating proposals from very different fields, but in that case high-level strategic directions are an important factor.

    The second, more subtle one is that citations do not necessarily refer to positive vouching. I've seen many papers where citations were made to say "differently from [ref] we do this instead, because his method may not be well balanced for our problem: [discussion follows]". These cases may be in the minority, but they may skew data for young researchers with a low citation count who produced a method that is fundamentally flawed, but it technically works enough to pass peer review (assuming peer review does its job properly). The impact of the published result may be "useless" in advancement, although it may clearly show a wrong way. This says nothing about the validity of the young researcher, but it may say something about the research environment he grew in. The point is that a lot of citations may just point to a flawed methodology, not to an exceptional researcher.
    @Stefano Borini
    "The second, more subtle one is that citations do not necessarily refer to positive vouching."

    I generally ignore bad papers. I think most people do. Only when a bad paper has been referenced a lot will it be debunked. This "write a really bad paper, collect a lot of references" strategy seems like a kind of urban myth.

    Where does somebody like Philip Anderson fit in?

    Johannes Koelman
    Anderson ranks amongst the giants, at the level of Polyakov and Randall: PW Anderson on Google Scholar.
    Oliver Knevitt
    Reminds me of a very observant PhD comic:

    Phd Comics
    i am the real albert einstein. i have come back from the dead. beware.... --___--

    Has anyone used historical data on citations to compare indexes based on how well they predict future performance?

    Today, Einstein does not do very well on the Einstein measure: Scholar turns up fewer than 10,000 citations. Isaac Newton scores far less than that. Francis Crick, too, remains stuck well below 10,000.

    To get things in perspective:
    Donald Knuth 10,000
    Alan Turing 12,000
    Edward O Wilson 20,000 (includes a book)
    Charles Darwin 23,000 (three most popular books)
    Claude Shannon 44,000

    Johannes Koelman
    It is normal practice not to include books in citation scores (I have excluded citations to books when compiling the table above), simply because these summarize the state of the art in a field but don't add to it. Secondly, you can't compare the citation score of a scientist who published his main articles in 1905 with one who did so in 2005. If you check Einstein (again ignoring books), you will find that the score he achieves is due to his EPR paper, one of his latest articles. Earlier articles are not well represented in citation databases. That is exactly the reason I left Einstein (and other famous physicists from days long gone) out. A last warning: comparing across fields of research is asking for trouble. I deliberately restricted myself to a narrow field of research: theoretical physics and cosmology.
    "It is normal practice not to include books in citation scores (I have excluded citations to books when compiling above table) simply because these summarize the state-of-the-art in a field, but don't add to it."

    I agree completely with your post.

    However, I wanted to make some different points.

    First, you call it the Einstein measure, but it does not work over long time spans. No one will contest that Newton and Einstein have been among the most important physicists. But that is something you do not see in a citation count (due to the growth in the number of papers over time).

    Second, Darwin did not write original articles, but original books. And his high Einstein index even after 150 years tells us something about his influence.

    Third, we cannot compare across fields, but the scores for Edward Wilson and Claude Shannon show rightly how important these people are. So they simply underscore the value of your Einstein index.

    In general, I think your Einstein index models the scale of the Zipf distribution of citations. A Zipf distribution has only a single scale parameter, which can easily be obtained from the three highest-ranked papers. It is very unlikely that the Pythagoras measure will show something different.
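    A minimal sketch of that estimate (assuming an ideal Zipf law c_r = C/r and made-up citation counts): under that law, each product of rank and citations equals C, so averaging it over the top three papers recovers the scale.

    ```python
    def zipf_scale(citations):
        """Estimate the Zipf scale C from the three most cited papers,
        assuming the r-th ranked paper attracts C / r citations."""
        top = sorted(citations, reverse=True)[:3]
        return sum(rank * c for rank, c in enumerate(top, start=1)) / len(top)

    print(zipf_scale([900, 420, 310]))  # (900 + 840 + 930) / 3 = 890.0
    ```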

    I tried Paul Erdos. I think this is a case where the Einstein index fails to show his importance. And I would not like to compute his Pythagoras index; he is rumored to have published 1,200 papers in his life.

    Johannes Koelman
    And I fully agree with everything you say... :) I call it the Einstein index to make a link with his three* annus mirabilis publications, as this stresses what I want to achieve: a measure that is fairer to scientists in their early careers. Without corrections, this index (or any other index) is not applicable over large time spans. One simple reason is that you are unlikely to find a citation of Einstein's 1905 relativity paper in any of today's publications: a citation concerning some aspect of relativity theory would more likely point to a textbook. Indeed, the Einstein index and the Pythagorean index measure one and the same thing as long as citations follow a Zipf distribution. Paul Erdos is an interesting case. He seems to reside somewhere around an Einstein index of 2,500. I haven't investigated other mathematicians, so I can't compare him to others (and we should not compare him against physicists). Has he really been as influential as many believe? I see some (superficial?) analogies with the case of Hawking. Obviously we have to put aside all the cult (Erdos numbers) surrounding him.
    * Yes I know, it is really four...
    Any ranking that shows Maldacena is a better physicist than Weinberg, Wilczek, Glashow, Wilson, etc. clearly fails the basic empirical test of whether it accords with common sense.

    Johannes Koelman
    Bob -- change 'better' into 'more influential'. Following this replacement, do you still stand by your remark?
    You're right that "better" is perhaps not the best word choice.

    But how do you eliminate the 'crowd mentality' factor, i.e. a well-known person from one of the top universities publishes some hallucination he had the other night and everyone jumps on it, fearing that the train will leave without them, while the same hallucination published by someone else would not attract even a single citation? There are more than enough examples in the recent literature.

    Johannes Koelman
    If 'crowd mentality' were as dominant as you suggest, why would you wish to eliminate it from the ranking? If it exists, and one person gathers a crowd while others don't, even when they make the same remark, it would be a perfect example of 'being influential' in science.
    Yes, but that is exactly what you want to avoid in a measure that has a real scientific meaning, rather than one measuring the social component in science.

    Johannes Koelman
    I don't know what you mean by 'a measure with real scientific meaning'. All we can ask is that the measure is objective and correlates strongly with other measures of 'scientific impact'. So let me ask you: what do you think causes this 'social component' in science? We have to face the fact that any such 'social component' is based on scientific achievements. An article with Maldacena's name as author would probably draw a few more references than the same article with my name on it. We shouldn't expect this to be in any way different.
    Seriously?

    The reason scientists don't like citation indexes (okay, the reason I don't like them) is:

    * They get gamed

    We all know about crap papers that we ourselves or our colleagues have published. We all know about citing papers we haven't read because they are perceived as influential, or to pad our bibliographies. We all joke about the minimum publishable unit and the politics involved in who gets to be a co-author or who gets a cite.

    Furthermore, we all know that the more citation rankings drive hiring and funding, the more petty bullshit and gaming the system replaces actual science.

    You can argue that it's better than having a five-minute conversation with someone and deciding whether they're intelligent or not based on that, but you should at least acknowledge the legitimate reasons people don't like them, instead of blithely accusing other scientists of measuring their e-peen and whining when it comes up short.

    @thomas
    "They get gamed"

    Yes, they do. Every single system gets gamed.

    But do you really think Shannon or Witten gamed the system?

    Say you want to game the system and get an Einstein index of over 1,000. What does it take to get over 1,000 citations for three papers combined? Even if every citing paper cites all three of yours, that requires over 330 citing publications (1,000/3 ≈ 334) to begin with.

    If you can get your work cited in over 330 published papers, you are not gaming the system. You are fully part of it.

    That's nice. If widely adopted, this will be used to determine whether a Mr. Mediocre with a score of 12 or a Mr. Mediocre with a score of 18 gets funding or positions. People with indices of 1,000 don't need an index for anything but e-peen.

    Unless you are in a very early career (pre-PhD) phase, an index in the range you quote does not give much hope for the future. A measure like the Einstein index could actually help to turn down both Mr. Mediocres. I can imagine a selection committee for a tenure-track position stating that they will not consider candidates who have not reached at least a three-digit index. That doesn't mean that the position automatically needs to go to the person with the highest index.
    Even within homogeneous groups, the dynamic range of the indices is much bigger than you might suspect. Based on the remark about Sean Carroll, I checked several other physics bloggers. They differ in scientific impact by an astonishing two orders of magnitude.

    "There might be a few surprises in the picture, but the total ranking looks more than reasonable"
    So we don't really need any citation index h or other, being reasonable just does the job, as you just proofed!

    Johannes Koelman
    "So we don't really need any citation index h or other, being reasonable just does the job, as you just proofed!" You miss my main point. My aim is to derive an index that does a better job than the h-index in ranking scientists in their early careers, whilst being at least as good as the h-index in assessing longer career impact (which is exemplified in the figure).
    You miss my main point: if your index doesn't do anything other than reproduce what you found reasonable from the very beginning, you don't need it! Just tell us what you find reasonable!

    You miss the point of indices. Their reason for existence is that they capture some laborious gold standard in a simple number.

    It requires a lot of insider knowledge to be able to judge who is an influential scientist and who is not. Some of those in the know will check whether the index confirms this gold standard. Then you and I, who have no clue about who is who, can use the index instead of having to rely on other people's judgments.

    As long as performance can be captured by a single number, your index is the best I've seen proposed. Congratulations.

    Gauging the cumulative impact of the three biggest original hits makes some amount of sense, but your ranking is flawed because you sometimes include review papers/books and sometimes you don't. Maldacena's second-ranked paper is a review, so it should be excluded IMHO; the same applies to Gubser. After all, you did not include Weinberg's books.
    Another piece of advice: for particle physics, the Google Scholar citation counts are much less reliable than those from Spires/Inspire.
    I recommend you redo your list using Inspire, excluding the papers which are obviously reviews.

    Johannes Koelman
    You are right: I indeed excluded books and included all articles (including review articles). I deliberately used Google Scholar as it is a database that is more easily accessible to all (I think it helps lay people to appreciate the huge difference in scientific authority between, e.g., some of the pop-science authors). But I fully agree with you: when using this index (or any index) for grant/career decisions, the highest-quality database should be used, and review articles should probably be excluded (along with books).
    I don't agree with this dissing of review papers and books. Your stated aim is to reflect the significance of young scholars. If a young scholar writes a review that is cited a lot, that means a *lot*. It means that they have managed to construct a way of looking at their subject that people in the field find compelling enough to cite their paper instead of that of some long-established researcher. There's no benefit to citing a new book or review unless, in some way that other people find compelling, it does something new. That is quite possibly as valuable as introducing a new way to perform a particular detailed calculation, say.

    If someone can write a review or a book that is one of their three most highly cited research pieces in their early careers, they should get the credit for it. From the point of view of appointments to jobs, someone who has written a highly cited book or review is likely to be influential in many ways, including being attractive to new students. Finally, if someone in their early career has gone to the trouble of producing a research monograph, that's one of the most unusual and risky ways to go about getting a job in Physics, so it seems horrible to then turn round and tell them that their highly cited book doesn't count. If it's not as highly cited as their best three papers, then it does them no good, anyway.

    It might possibly be OK to discount books by someone like Stephen Weinberg, but constructing an overall view seems to me to be significantly creative work even for someone like him.

    Altarelli >> Kobayashi >> Nambu ???
    Gell-Mann = Carroll ???

    I wonder how it ranks old people like Pauli and Dirac with respect to newer people. Do they get a boost due to their papers having been around so long? What about Newton - I bet he doesn't have many citations!

    Johannes Koelman
    "I wonder how it ranks old people like Pauli and Dirac wrt newer people. Do they get a boost due to the papers being there so long. What about Newton - I bet he doesn't have many citations!" I can't stress often enough: no index is suited to compare 'old' vs 'new'. Citation habits have changed with time, and citation databases are incomplete when it comes to old publications.
    These indices are funny.

    After some point, people stop quoting papers. Einstein's relativity is so well accepted that nobody quotes his papers.

    Of course that doesn't hurt the h-index but the "Einstein index" fails for Einstein himself - people simply don't cite Einstein's papers.

    Johannes Koelman
    "After some point, people stop quoting papers. Einstein's relativity is so well accepted that nobody quotes his papers. Of course that doesn't hurt the h-index but the "Einstein index" fails for Einstein himself - people simply don't cite Einstein's papers." People stop quoting stuff that is well-covered in textbooks. That applies to all papers (not just to those authored by Einstein), and doesn't make an index fail. It just means that citations to a paper don't keep growing over time. This makes it increasingly difficult to reach indices above - say - the 10,000 mark. Both the h-index and the Einstein-index (any index for that matter) 'fail' when applied to scientists long gone, as current citation databases don't cover earlier periods very well.
    oh my, maldacena above 't Hooft. and yeah, maldacena invented the holographic principle you say?

    what a cr.. index.

    Johannes Koelman
    "maldacena invented the holographic principle you say?" Nope. As I said, Maldacena did something more remarkable, he generated an instance of the holographic principle in action. And I can add to that: in doing so he not only proved Hawking wrong, but also moved the holographic principle firmly into mainstream physics. what a cr.. index. The index is creative you say? Thanks. I was just thinking the same about your reading!
    The top three papers for Lisa Randall are written with Raman Sundrum, so both of them get the same index.

    Johannes Koelman
    Indeed. Sundrum not being listed next to Randall doesn't mean he has a different index, or that he falls off the scale. As stated in the blog post: "I have not attempted to generate a complete overview".
    I noticed a recent trend toward larger groups of people publishing together on the same article. Do you think the number of authors adds credibility or increases the number of citations?