
Two Utah Pharmaceutics Researchers (S.W. Kim, Y-H. Bae) Among the Top 1% Most Cited Researchers in Thomson Reuters "Pharmacology/Toxicology" Category

Jun 24, 2014



Thomson Reuters has generated a new list of Highly Cited Researchers in the sciences and social sciences to update and complement a previously published list presented on its website.

The old list, first issued in 2001, identified more than 7,000 researchers who were the most cited in one or more of 21 broad fields of the sciences and social sciences, fields similar to those used in the Essential Science Indicators database. This analysis considered articles and reviews published in Web of Science-indexed journals from 1981 through 1999. Approximately 250 researchers in each field were selected based on total citations to their papers published during this period. An update in 2004 took into account papers published from 1984 to 2003 and cited during the same period, and additional names were added to supplement the original list.

You Han Bae and Sung Wan Kim

A selection of influential researchers based on total citations gives preference to well-established scientists and social sciences researchers who have produced many publications. Logically, the more papers an author generates, the more citations those papers tend to receive, especially if the papers have had many years to accumulate citations. Thus, this method of selection favors senior researchers with extensive publication records, and it sometimes identifies authors who have relatively few individual papers cited at high frequency. Nonetheless, total citations is a measure of gross influence that often correlates well with community perceptions of research leaders within a field. Such was the nature of the prior lists of highly cited researchers.

Thomson Reuters decided to take a different approach -- and use a different method -- to identify influential researchers, field-by-field, to update the previously published list. First, to focus on more contemporary research achievement, only articles and reviews in science and social sciences journals indexed in the Web of Science Core Collection during the 11-year period 2002-2012 were surveyed. Second, rather than using total citations as a measure of influence or ‘impact,’ only Highly Cited Papers were considered. Highly Cited Papers are defined as those that rank in the top 1% by citations for field and year indexed in the Web of Science, which is generally but not always year of publication. These data derive from Essential Science Indicators℠ (ESI). The fields are also those employed in ESI – 21 broad fields defined by sets of journals and exceptionally, in the case of multidisciplinary journals such as Nature and Science, by a paper-by-paper assignment to a field. This percentile-based selection method removes the citation disadvantage of recently published papers relative to older ones, since papers are weighed against others in the same annual cohort.
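The cohort-based definition described above can be sketched in code. This is a minimal illustration, not the ESI implementation: the field names, paper records, and the exact tie-handling are invented for the example, and the real system assigns fields by journal (or paper-by-paper for multidisciplinary journals).

```python
from collections import defaultdict

def flag_highly_cited(papers, top_fraction=0.01):
    """Flag papers in the top 1% by citations within their (field, year) cohort.

    papers: list of dicts with keys 'id', 'field', 'year', 'citations'.
    Returns the set of ids of Highly Cited Papers.
    """
    cohorts = defaultdict(list)
    for p in papers:
        cohorts[(p["field"], p["year"])].append(p)

    highly_cited = set()
    for cohort in cohorts.values():
        cohort.sort(key=lambda p: p["citations"], reverse=True)
        # Keep at least one paper per cohort so small cohorts are not empty;
        # how ESI handles ties and tiny cohorts is not specified here.
        k = max(1, int(len(cohort) * top_fraction))
        for p in cohort[:k]:
            highly_cited.add(p["id"])
    return highly_cited
```

Because every paper is ranked only against papers of the same field and year, a 2011 paper competes with other 2011 papers rather than with heavily cited papers from 2002, which is the point of the percentile approach.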

Those researchers who, within an ESI-defined field, published Highly Cited Papers were judged to be influential, so the production of multiple top 1% papers was interpreted as a mark of exceptional impact. Relatively younger researchers are more apt to emerge in such an analysis than in one dependent on total citations over many years. To be able to recognize early and mid-career as well as senior researchers was one goal for generating the new list. The determination of how many researchers to include in the list for each field was based on the population of each field, as represented by the number of author names appearing on all Highly Cited Papers in that field, 2002-2012. The ESI fields vary greatly in size, with Clinical Medicine being the largest and Space Science (Astronomy and Astrophysics) the smallest. The square root of the number of author names indicated how many individuals should be selected.
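The square-root rule for sizing each field's list is simple arithmetic; a sketch, with an invented function name:

```python
import math

def list_size(num_author_names):
    """Number of researchers to select in a field: the square root of the
    number of author names appearing on all Highly Cited Papers in that
    field, 2002-2012 (rounded; the article does not specify the rounding)."""
    return round(math.sqrt(num_author_names))
```

So a field whose Highly Cited Papers carry 10,000 author names would yield a list of about 100 researchers, while a field with 62,500 author names would yield about 250.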

The first criterion for selection was that the researcher needed enough citations to his or her Highly Cited Papers to rank in the top 1% by total citations in the ESI field in which they were considered. Authors of Highly Cited Papers who met the first criterion in a field were ranked by number of such papers, and the threshold for inclusion was determined using the number derived from the square root of the population. All who published Highly Cited Papers at the threshold level were admitted to the list, even if the final list then exceeded the number given by the square-root calculation. In addition, as a concession to the somewhat arbitrary cut-off, any researcher with one fewer Highly Cited Paper than the threshold number was also admitted to the list if total citations to his or her Highly Cited Papers were sufficient to rank that individual in the top 50% by total citations of those at the threshold level or higher. The justification for this adjustment at the margin is that, in the judgment of Thomson Reuters citation analysts, it worked well in identifying influential researchers.
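The two-step selection just described — a paper-count threshold plus a marginal admission rule — can be sketched as follows. All names, fields, and the exact way the top-50% citation cutoff is computed are assumptions for illustration; the input is taken to be already filtered to authors meeting the first (top 1% by total citations) criterion.

```python
def select_researchers(authors, target_size):
    """Sketch of the threshold-plus-margin selection.

    authors: list of dicts with 'name', 'hcp_count' (number of Highly Cited
    Papers) and 'hcp_citations' (total citations to those papers), already
    restricted to authors meeting the first criterion.
    """
    ranked = sorted(authors, key=lambda a: a["hcp_count"], reverse=True)
    # Threshold: the paper count of the author at the target_size position.
    threshold = ranked[min(target_size, len(ranked)) - 1]["hcp_count"]
    # Everyone at or above the threshold is admitted, even past target_size.
    selected = [a for a in ranked if a["hcp_count"] >= threshold]

    # Marginal rule: one paper below threshold still qualifies if total
    # citations rank in the top 50% of those at/above the threshold.
    cites = sorted((a["hcp_citations"] for a in selected), reverse=True)
    cutoff = cites[len(cites) // 2]  # rough top-50% cutoff (an assumption)
    marginal = [a for a in ranked
                if a["hcp_count"] == threshold - 1
                and a["hcp_citations"] >= cutoff]
    return selected + marginal
```

The marginal rule is what softens the hard cut: a researcher with slightly fewer top-1% papers but an unusually strong citation record for those papers still makes the list.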

Of course, there are many highly accomplished and influential researchers who are not recognized by the method described above and whose names do not appear in the new list. This outcome would hold no matter what specific method was chosen for selection. Each measure or set of indicators, whether total citations, h-index, relative citation impact, mean percentile score, etc., accentuates different types of performance and achievement. Here we arrive at what many expect from such lists but what is really unobtainable: that there is some optimal or ultimate method of measuring performance. The only reasonable approach to interpreting a list of top researchers such as ours is to fully understand the method behind the data and results, and why the method was used. With that knowledge, in the end, the results may be judged by users as relevant or irrelevant to their needs or interests.