
Training Scientists in Low and Middle Income Countries

The evidence is thin, but we think it probably works!

Published on Nov 22, 2024

Suppose you wanted to build up the scientific capacity of a country that is far from the scientific frontier. There are good reasons you might want to do that, rather than rely on the scientific efforts of countries on the frontier: where researchers are based affects what they choose to work on, and not all research is relevant everywhere. One part of building capacity is training scientists. In this post, we want to look at the evidence on the effects of training programs for scientists in low- and middle-income countries (LMICs). Training comes in two main flavors: domestic training, whereby activities take place in the LMIC itself, and training that leverages the international community, for example through fellowships to support study abroad or the allocation of foreign mentors. The literature on the causal impact of training for LMIC scientists is thin, and predominantly focuses on programs that leverage the international community rather than domestic training programs, so that's what we'll focus on today.

The STAARS program 

The Structural Transformation of Agriculture and Rural Spaces (STARS) program aims to improve the productivity and impact of early-career economists in low- and lower-income countries. For its first several years, the program focused on African nationals holding a PhD, with an emphasis on those in Africa or planning to return there, and was named the STAARS program (there's an extra "A" in there; the "AA" stood for "African Agricultural"). It is predominantly a mentorship program: fellows propose a research project and are matched with mentors from Cornell University and other research organizations in high-income countries. Senior and peer mentors guide fellows on their research projects, while the program provides accompanying technical and soft-skill training and networking opportunities. The program typically lasts 9-15 months.

Schreiber et al. (2022) describes and evaluates the impact of participation in the program on the research output of participating fellows. One persistent challenge in estimating the impact of training programs is that you can't just compare people who get training to those who don't, because training isn't randomly assigned. The people who apply might differ from those who don't: perhaps they're more ambitious, or better connected (and so more likely to hear about the opportunity). Or maybe they apply for training precisely because they are struggling to succeed in their field. Meanwhile, among applicants, admission is not randomly assigned either. The STAARS program, for example, has two peer reviewers rate the potential of each proposal and selects fellows from among those scored highly.

So, to assess how well the program works, Schreiber and coauthors focus on finalists for the program, comparing finalists who participated to those who didn't. Participants were finalists who were additionally matched with a mentor; non-participants were finalists who were not. Schreiber and coauthors argue this matching process has a lot of randomness baked into it: two applicants might be equally qualified, but one happens to work in a sub-field where a mentor is available that year and the other does not. To ensure that successful fellows and non-participating finalists are comparable, they also employ propensity score matching on pre-fellowship publication output, gender, PhD institution, location, and so on.
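To make the matching step concrete, here is a minimal sketch of propensity score matching in Python. Everything here (the data, the column names, the covariates) is a hypothetical illustration of the general technique, not the authors' actual code or data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical finalist-level data: participated = 1 means the finalist
# was matched with a mentor and entered the program.
df = pd.DataFrame({
    "participated":     [1, 1, 1, 0, 0, 0],
    "pre_pubs":         [2, 0, 1, 3, 1, 0],    # pre-fellowship publications
    "female":           [0, 1, 1, 0, 1, 0],
    "phd_in_africa":    [1, 1, 0, 1, 0, 1],
    "citations_per_yr": [9.0, 11.5, 10.0, 5.5, 4.0, 4.5],  # post-program outcome
})
covariates = ["pre_pubs", "female", "phd_in_africa"]

# Step 1: model the probability of participating, given pre-program traits.
model = LogisticRegression().fit(df[covariates], df["participated"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# Step 2: match each participant to the non-participant finalist with the
# closest propensity score (nearest neighbor, with replacement, for simplicity).
treated = df[df["participated"] == 1]
control = df[df["participated"] == 0]
matches = treated["pscore"].apply(lambda p: (control["pscore"] - p).abs().idxmin())

# Step 3: average the outcome gap between participants and their matches.
effect = (treated["citations_per_yr"].values
          - control.loc[matches, "citations_per_yr"].values).mean()
print(f"Estimated effect: {effect:.1f} extra citations per year")
```

The idea is that, after conditioning on the covariates that predict who gets in, the remaining variation in participation looks close to random, so the matched outcome gap is more credible than a raw comparison of participants and non-participants.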

The matched sample is small (31 participants and 31 matched non-participants), but the results show that, on average, participants received an extra 6 citations per year after the program, compared to matched non-participants. That's a big effect: for comparison, matched non-participating finalists average only 4.5 citations per year. Participants also publish more papers, but in this case the increase is too small to be confident it isn't due to random variation. The paper also gathers some qualitative data, which highlights the value participants place on the professional development aspects of the program, including lessons on research ethics and peer review.

The European and Developing Countries Clinical Trials Partnership (EDCTP)

Training programs can also affect the research focus of participants. Let's turn now to the impact of the European and Developing Countries Clinical Trials Partnership (EDCTP), whose stated mission is to improve drug development for key diseases affecting African countries (HIV, malaria, and TB). The program organizes project-based funding and fellowships for African scientists to participate in cutting-edge clinical trials with international teams. Teams apply for funding for a project and then work side by side for a number of years on clinical trials. Some of these trials ended up being quite high impact; for example, several contributed to the development of the first malaria vaccines.

Fry and Blomfield (2023) look at how participation in an EDCTP-funded trial affected the direction of African scientists' subsequent research. To start, they identify 1,190 African scientists involved in an EDCTP trial between 2005 and 2014. They then merge these scientists' details with their publication records and clinical trial involvement. To evaluate the impact of trial participation on scientists' subsequent research trajectories, they compare participants' publications before and after the trial with those of a group of scientists who are also based at African institutions but did not participate in an EDCTP trial. The goal is to see whether scientists start working more on research related to clinical trials after participating in EDCTP, compared both to their own research before EDCTP and to otherwise similar scientists who never got the chance to participate.

Of course, they face the same problem mentioned above: participation in an EDCTP research project isn't decided at random. In particular, scientists interested in transitioning to work on clinical trials may seek to join EDCTP while those who aren't interested do not. If so, then even if EDCTP itself had no impact on research trajectories, we would still see the people who joined working more on clinical trials after it's done. Fry and Blomfield follow a couple of strategies to deal with this. In their primary analysis, they match participants to scientists who don't participate but are at similar points in their careers, work on similar diseases, have similar levels of prior clinical trial experience, and so on. The following figure compares involvement in clinical trials before and after participation in an EDCTP grant. About five years after a grant starts, the careers of participants and matched non-participants begin to diverge.

From Fry and Blomfield (2023)
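To see the logic of that matched before-and-after comparison, here is a minimal difference-in-differences sketch in Python. The numbers and column names are invented for illustration and are not taken from the paper.

```python
import pandas as pd

# Hypothetical data: one row per scientist, with the share of their
# publications tied to clinical trials before and after the (matched)
# EDCTP grant start year. edctp = 1 means they participated in a trial.
df = pd.DataFrame({
    "edctp":       [1, 1, 1, 0, 0, 0],
    "trials_pre":  [0.10, 0.05, 0.15, 0.10, 0.05, 0.15],
    "trials_post": [0.35, 0.25, 0.40, 0.15, 0.10, 0.20],
})

# Within-scientist change in trial-related work.
df["change"] = df["trials_post"] - df["trials_pre"]

# Difference-in-differences: participants' average change minus the
# matched non-participants' average change.
change_by_group = df.groupby("edctp")["change"].mean()
did = change_by_group[1] - change_by_group[0]
print(f"Extra growth in trial-related work for participants: {did:.2f}")
```

Subtracting the non-participants' change nets out trends that affect everyone, such as clinical trial research becoming generally more common in African institutions over the period.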

This is informative to the extent that the matched non-participating scientists are actually very similar to the participants. One way to assess that is to identify types of scientists who are more likely to already be interested in clinical trials, and see if the results differ for those sub-populations. To take two examples, Fry and Blomfield check whether participation in an EDCTP grant has a larger impact for researchers who do more applied research, or who already work on HIV, malaria, or TB. It doesn't. Finally, in interviews, both foreign and African scientists on these project teams emphasized how important this collaborative, project-based learning was in advancing their careers. One might still worry that EDCTP simply selects people with a high propensity to excel in clinical research, which would leave open the question of whether the program was effective in its training capacity. Recall, however, that the primary goal of the program is increasing drug development rather than training scientists; it's possible a program primarily focused on training would have bigger effects.

Spillovers

So far we’ve looked at one paper (with a small sample) that showed participants who participated in training saw some increases in quantitative research outcomes, plus another that indicated participation in a program affected research interests. Next, we widen our scope to look for impacts on people who did not receive training themselves, but work with, or are exposed to, people who did. If training programs help individuals build better networks and skillsets, perhaps they can share those with their peers and build research capacity more broadly. Indeed, some other papers, such as those discussed in the post Local learning, suggest working physically together is an important channel for the dissemination of new ideas. On the other hand, as discussed in An example of successful innovation by distributed teams: academia, a number of papers have also looked at whether it helps a researcher’s productivity when they have other great researchers in their academic department. In the era of the internet, it’s not clear this makes much of a difference. 

In LMICs in particular, there are additional reasons why spillovers to other people might be higher or lower than in high-income countries. On the one hand, spillover effects might be unusually large, as the marginal impact of new ideas and networks in a resource-poor and constrained environment might simply be bigger. On the other hand, spillover effects might be unusually small, given constraints that make it difficult to leverage the potential benefits of these programs. For example, scientists might have limited capacity to take advantage of new connections or knowledge coming from colleagues who participated in advanced training programs.

That said, two studies of foreign training programs imply that spillovers on peers at home are large. 

The NIH FIC AITRP 

The National Institutes of Health Fogarty International Center's flagship program was the AIDS International Training and Research Program (AITRP), a training program for scientists from LMICs. Started in 1988, the program aimed to help control the AIDS epidemic by training the next generation of research and development leaders on the ground where the epidemic was most devastating. The training offered to LMIC scientists was varied, but the focal program offered short- and long-term training (in the form of master's degrees and PhDs) at public health departments and medical schools at top US institutions. This gave participating scientists the opportunity to train with some of the leading HIV researchers in the US, with explicit incentives to return home after training (like re-entry funding and visa restrictions).

Fry and Ganguli (2023) study the impact of this program on HIV/AIDS research capacity in Africa. The basic idea is to look at what happens, after a trainee returns to an African institution, to faculty working on diseases related to the trainee's research. They evaluate the impact of a trainee's return across a variety of metrics: publications, grants, clinical trials, and contributions to policy documents. To assess the impact of having a peer who participated in the training, they set up two comparison groups: peers working at the same institution but in fields unrelated to the trainee's, and peers working at other institutions but in fields closely related to the trainee's (neglected tropical diseases, of which HIV is one). Comparing peers in related fields at the same institution to these two groups, they document that the impact of a trainee's return is large: peers publish more, particularly on HIV (with some peers who were not already publishing on the disease moving into HIV research), and more often in journals with higher impact factors. They also show an increase in grants obtained, HIV clinical trials, and contributions to policy documents.

Most of the increase in publishing involves papers with international collaborators, not papers with the returning trainee. That suggests networking is an important channel through which these programs generate spillovers. Indeed, another paper, Fry (2023), studies the same program using a similar method, albeit exploring alternative outcomes. That paper documents that returnees affected their peers specifically by connecting them to US-based researchers at the training institution; networking seemed to have the largest impact for previously isolated peers in Africa. If networking is in fact the primary spillover of training programs, that has implications for their optimal design.

The Fulbright Foreign Student Program

Tying together the direct and indirect (spillover) effects of foreign training programs, a series of papers by Kahn and MacGarvie studies the impact of a well-known US international training program: the Fulbright Foreign Student Program. Established in 1946, this program provides scholarships to students from other countries to pursue graduate study in the United States, but requires them to return to their home country upon completing their studies.

The first paper in the series, Kahn and MacGarvie (2016), documents that Fulbright students were less productive upon returning to their home country than comparable peers who were able to remain in the USA after graduation, especially if their home country had a lower GDP per capita (this paper is discussed more in the post Innovators who immigrate). But another 2016 paper by Kahn and MacGarvie explores the impact of these trainees' return on the diffusion of knowledge in their home countries. They find that articles by Fulbright fellows who return home are cited more frequently in their home countries than articles by similar scientists who train in the USA but do not return. This effect is especially strong among scientists from countries with a weak science base. It implies that the potential spillover benefits from fellows, who may produce cutting-edge research during and after their US studies, are stronger if they return home. This is in contrast to the private research productivity benefits of remaining in the US, documented in Kahn and MacGarvie's first paper.

Summing Up

While these latter papers focus on spillovers rather than the impact of training on the trainee, it seems likely that if people who did not attend a training benefit from being around someone who did, then the training is probably doing something for the participant as well. More broadly, another reason to be less skeptical that training programs really do change participants' research trajectories comes from evidence on training programs in other contexts. The post Teachers and the transmission of excellence surveys a few studies indicating that students of great researchers tend to have more successful research careers than otherwise similar students who don't train under them. Another post, Students get interested in what their mentors are interested in, looks at studies exploring how a researcher's mentors affect the topics they choose to study. And Teaching innovative entrepreneurship looks at the (mixed) evidence that you can train science and engineering students to be good entrepreneurs. Viewed in that light, the hypothesis that training programs for LMIC scientists simply do what they seem to do (help participants improve their skills and networks) seems reasonable!

Still, there’s a lot more we don’t know. We’ve focused on training organized by foreign groups, but “training” is a bucket for a lot of different kinds of programs: mentorship, project-based learning, short-term workshops, and so on. Most of the studies we’ve examined focus on a bundle of programmatic elements, rather than isolating the impact of their specific elements, though we have seen some suggestive evidence that networking might be an especially important channel for impact. Moreover, while the outcomes of many of the empirical exercises in the above studies center on publication output in the years after training, Schreiber et al (2022) (which studied the STAARS program) notes that the goals of training programs can vary widely, and it would be valuable to have more careful evaluations of some of these other goals.

New articles and updates to existing articles are typically added to this site every three weeks. To learn what’s new on New Things Under the Sun, subscribe to the newsletter.


Cited in the Above

Geography and what gets researched

When research over there isn’t helpful here

Local learning

An example of successful innovation by distributed teams: academia

Innovators who immigrate

Teachers and the transmission of excellence

Students get interested in what their mentors are interested in

Teaching innovative entrepreneurship


Articles cited

Schreiber, Kelsey L., Christopher B. Barrett, Elizabeth R. Bageant, Abebe Shimeles, Joanna B. Upton, and Maria DiGiovanni. 2022. Building research capacity in an under-represented group: The STAARS program experience. Applied Economic Perspectives and Policy 44(4):1925-1941. https://doi.org/10.1002/aepp.13310

Fry, Caroline V., and Michael Blomfield. 2023. If you build it, they will come: The impact of clinical trial experience on African science. SSRN Working Paper. http://dx.doi.org/10.2139/ssrn.4629654

Fry, Caroline, and Ina Ganguli. 2023. Return on returns: Building scientific capacity in AIDS endemic countries. NBER Working Paper 31374. https://doi.org/10.3386/w31374

Fry, Caroline Viola. 2023. Bridging the gap: Evidence from the return migration of African scientists. Organization Science 34(1). https://doi.org/10.1287/orsc.2022.1580

Kahn, Shulamit, and Megan J. MacGarvie. 2016. How Important is U.S. Location for Research in Science? The Review of Economics and Statistics 98(2): 397-414. https://doi.org/10.1162/REST_a_00490

Kahn, Shulamit, and Megan MacGarvie. 2016. Do return requirements increase international knowledge diffusion? Evidence from the Fulbright program. Research Policy 45(6):1304-1322. https://doi.org/10.1016/j.respol.2016.02.002
