This chapter explores how politics shape numerical indicators and the data used to measure and monitor progress in global health governance. In recent years, data-driven decision-making has been on the rise in development agencies, including the use of modeling and recommendations generated by algorithms. Digitization will further accelerate this trend. Numerical indicators have normative power in part because they appear objective and value-neutral. But in fact, they are embedded in social and political contexts, and their production, interpretation, and use are fluid. As signs with immense communicative and normative powers, indicators can mobilize diverse social actors who debate and contest both the definition of an indicator and the data reported against it. Through examining three cases—the Sustainable Development Goal (SDG) 5.2 on violence against women, data used to prioritize interventions in the HIV response, and indicators and data used to determine the eligibility of countries for development assistance for health—this chapter explores the power of health metrics and data, and how they can be shaped by larger power relations.
Numerical indicators and data have become increasingly central to decision-making in global health governance. They appear in numerous places: on websites and in annual reports, as backdrops to speakers at donor-pledging conferences, and on slides in conference rooms and webinars where diverse global and national health actors meet to align on targets, allocate funds, and assess progress (or lack thereof) toward shared objectives. Global health indicators and data, while appearing objective, are also signs that can communicate success or failure, hope or despair, blame or praise. Thus, they are also sites of debate and contestation among diverse social actors, from United Nations (UN) agencies to bilateral donors, civil society groups, private enterprises, and many other stakeholders, who frequently gather to negotiate over priorities and funding in global health.
As the global experience of Covid-19 has shown, indicators and data relating to health, disease, and death can be politically explosive, and have significant power to shape political futures. For example, we have seen infectious disease modeling move from academic journals and think tank papers to the mass media, and predictive models used to justify lockdowns or their easing, spurring debates over rises in incidence, what they mean, and the appropriate response to them. There has even been pressure on health officials to reclassify mortality data in order to minimize the impact of Covid-19 on countries (Dyer 2020).
Public health has always been a field dominated by quantitative data (for example, epidemiology, demography, economic analysis, and so on). But over the past several decades, data used for purposes of accountability and communications has grown rapidly in prominence, and mathematical models have been elevated to serve as decision tools at the center of strategic planning processes in the management offices of global health agencies. The growing demand for indicators, modeling, projections, and data represents a rapid expansion of the mechanisms of audit, transparency, and measurement (Riles 2013). Global health is also increasingly digital health; this transformation was underway before Covid-19 and has further accelerated in the pandemic. Digital technologies used in global health are said to offer the hope of better data and the ability to better target resources and interventions with military precision, even down to the level of clinics. The reality, of course, is that no matter how much data in global health grows, there never seems to be enough.
One perspective on this, as Sally Engle Merry aptly noted, is that “the process of translating the buzzing confusion of social life into neat categories that can be tabulated risks distorting the complexity of social phenomena” (2016: 1). In other words, global health data, while being used to convey authority, objectivity, and precision, is in reality plagued with gaps and risks.
Medical anthropologists and human rights scholars have tended to view the rise of quantitative measurement with a critical eye, pointing out that metrics are abstractions from complex social contexts that oversimplify these settings in order to create the illusion of commensurability (Davis and Kruse 2007; Merry 2011, 2016; Rosga and Satterthwaite 2009). They reduce social phenomena to make them visible and commensurable across diverse contexts (Bartl, Papilloud, and Terracher-Lipinski 2019; Merry and Wood 2015; Winkler, Satterthwaite, and De Albuquerque 2014). To do this, indicators selectively include, exclude, and aggregate data. They can be constructed in ways that are often distortive, and be used to create oversimplified rank-orderings of complex phenomena (Davis et al. 2012: 73–75).
These critics have shown how numerical indicators can take on the status of law in their operations, though the indicators may be ill-suited to the diverse contexts in which they impose these norms: “The production of indicators is itself a political process, shaped by the power to categorize, count, analyze, and promote a system of knowledge that has effects beyond the producers” (Merry, Davis and Kingsbury 2015: 2). Similarly, big data—in fact, any data—is produced by and embedded in institutional, social, political, and economic contexts (Mosco 2015). Vincanne Adams argues that the trend toward metrics in global health has devalued local health specificities and other ways of knowing (Adams 2016). Part of the problematic work done by indicators is the false sense of transparency they create: the information used to shape them and measure progress may be comprehensible to a small group of experts, while remaining incomprehensible to the public (Bradley 2015).
Critics have further pointed out that indicators and data used in global governance reinforce unequal power relationships within and between countries, especially aid donors and recipients (Escobar 1995). Sally Engle Merry and Summer Wood (2015) show how human rights indicators are selected and defined by officials in high-income countries, such as Switzerland, who may fail to grasp how poorly those indicators work in low- or middle-income countries, such as Tanzania, which are nonetheless expected to report on their progress on these inapt metrics to the UN.
This chapter builds on this critique to argue that indicators used in global health governance function as communicative signs, which, while they do contain the somewhat Orwellian traits outlined above, are not always fixed: there can sometimes be a continuing process of contest and critique. This is because indicators used in global health are a form of signification that is constantly emergent through a process of debate and contest among diverse actors, such as scientists, UN agencies, government officials, civil society activists, and private enterprises. Ferdinand de Saussure argued that linguistic signs (such as words), which link concepts and sound-images, are inherently arbitrary; there is no natural link between the concept of “sister” and the sound of the word itself (1959: 67). Moreover, forces are constantly shifting the relationship between signified and signifier (1959: 75); signs are inherently mutable and language shifts over time. For Jacques Derrida, shifts in signification are inherently a form of play: discourse is “a system in which the central signified, the original or transcendental signified, is never absolutely present outside a system of differences” (1978: 280). Indicators used for global health governance are similarly arbitrary abstractions of complex phenomena, interpreted through diverse lenses.
In practice, most of the work done by indicators in governance relies more literally on signs—in the sense of visual displays within meetings, workshops, or webinars where diverse actors collaborate or contend over power, funding, and accountability. Both indicators and charts used to report on progress against targets are often displayed as part of collective speech events, such as presentations, and are embedded in verbal narratives as a form of suasion (Bauman 1977). In medieval times, Buddhist monks traveled in Asia, using scrolls and murals to narrate the life of the Buddha and persuade the masses to follow their edicts and principles (Mair 1988). Today, global health officials travel the world with slide presentations which they use to convince others, in particular health officials in low- and middle-income countries, of the merits of aligning with the current global health strategy and of meeting its related targets.
The question then becomes who is in the meeting and who feels empowered to challenge the speaker: whether the presentation is simply an authoritarian handing-down of norms and standards to a passive audience, or a conversation grounded in multivocality that enables different perspectives, even different epistemologies, to enter the meeting and challenge or contribute to the definition of indicators. Either way, through speech events such as presentations and meetings, indicators as signs are further embedded in histories of expression that are constantly remade, in an infinite chain of communication (Bakhtin 1981).
These contests include debates over what quantitative metrics capture and what (or who) they leave out. Given a global landscape of inequalities that shapes access to health services, some data gaps reflect these inequalities; and, in turn, biased data sets in health can amplify inequalities and have other political effects. When priority-setting or rationing decisions are grounded in quantitative analysis, and fail to adequately consider equity, ethics, or human rights dimensions, those decisions may reinforce existing inequalities.
The following three case examples serve to illustrate and elucidate the complex and sensitive power negotiations that can shape indicators, data, and the work of interpretation. These examples include debates linked to the Sustainable Development Goal (SDG) 5.2 on violence against women; the problems linked to data about the needs of marginalized groups in the HIV response, particularly in setting priorities for resource allocations; and finally, the challenge of eligibility criteria that determine which countries can receive global health funding.
In global health, indicators are frequently used to set targets that help facilitate coordination of work among diverse actors. Indicators are also important signs for public communications, signaling a level of commitment to addressing a given problem, especially for one that requires intensified efforts to make progress. In every case, an objective, linked to an indicator, helps to shine light on a problem. But this can inadvertently obscure adjacent problems that are not included in the indicator; at the same time, the light shone by the indicator may attract critics who disagree with the priorities it highlights, who interpret it in varying ways, or who may differ on the target and reporting methodology.
An example of this is the indicator-related challenges raised by the SDG on violence against women. Establishing a clear target to catalyze greater action on this issue has helped to give this urgent problem greater visibility. But in the process, the indicator creates new questions related to data-gathering and analysis.
Sexual and gender-based violence (SGBV) is widespread across the world and has a profound impact on the physical and mental health of individuals, families, and communities. Nevertheless, despite being widely recognized as a problem, this stigmatized issue is still not included in the healthcare policies or health agendas of many countries (WHO 2013: 10). For many years, gender-based violence has been denied and ignored, minimized, and deprioritized, swallowed up in the “silences in international discourses” (Anholt 2016: 1). Women’s rights advocates pushed for years to break this silence, to put the issue on the global agenda, and to catalyze action by the UN agencies and member states.
The advocates’ efforts to get a relevant indicator included in the Millennium Development Goals for 2000–2015 were stymied in part because critics said that gender-based violence was too difficult to measure (Anderson 2013; Ellsberg 2006). They questioned the conceptual coherence of gender-based violence and the plausibility of metrics that track the concept. Despite such objections, advocates continued to push for including gender-based violence in the MDGs by organizing high-level summits and lobbying senior diplomats. And when the UN member states approved the SDGs for 2015–2030, they did include an indicator, SDG 5.2, which committed to “eliminate all forms of violence against all women and girls in the public and private spheres, including trafficking and sexual and other types of exploitation” (UN General Assembly 2015).
As Yadlapalli Kusuma and Bontha Babu (2017) note, the setting of an indicator such as SDG 5.2 is a tacit acknowledgment both that violence against girls and women exists, and that it can be prevented through concerted efforts such as mobilizing resources, designing and implementing interventions, and scaling them up. The global indicator thus offers a tool for national-level advocates to use in advocacy and in accountability work: they could use its normative power to push for programming, resourcing, and to demand updates on progress by states.
However, the problems of reporting on and analyzing the violence against women indicator are shaped significantly by politics: especially, by widespread denialism and gender inequalities. As the World Health Organization (WHO) has noted, the available data on cases of SGBV is almost always a small percentage of the actual incidence. The incomplete picture is “an inevitable result of survivors’ well-founded anxiety about the potentially harmful social, physical, psychological and/or legal consequences of disclosing their experience of sexual violence” (WHO 2007: 1). To address fears of reporting, WHO recommends health services and others take a series of steps to protect those who disclose SGBV from harm, and to establish the trust needed to elicit disclosure, in order to link survivors to essential health services.
However, this insight into the dynamics that can contribute to hiding a stigmatized group and also result in low reported incidence of SGBV is not widely shared in the general public. Those less familiar with the complex negotiations and trust-earning required in order to elicit disclosure could simply perceive officially reported statistics of gender-based violence not as the tip of the iceberg, but as the full iceberg. Where reports are few, officials who would prefer to minimize, negate, or deny that violence against women is widespread within their domain of responsibility or authority may use those low reports to justify their position.
This points to a second problem created by many indicators like the one created for violence against women: in the absence of contextual information, data reflected in a single indicator can be so abstracted from situational reality that its meaning becomes open to contesting interpretations of the phenomenon being measured.
This is best exemplified by considering the problem of how to understand a change in reported incidence (Davis, Schopper and Epps 2018; Merry 2016): Should we interpret an increase in reported incidence as an indication of an increase in actual violence? Or, more encouragingly, does an increase in incidence suggest that reporting mechanisms are working better, because more girls and women feel more trusting and empowered to report their experience of violence?
Conversely, does a decrease in reported incidence mean that efforts to reduce violence are failing, because fewer women feel safe to report? Or does a drop in cases mean that there is actually less violence? The purpose of an indicator is to abstract away from the contextual information while retaining essential facts about the relevant phenomenon, and to signal the importance of that phenomenon to the public. However, contradictory conclusions can be drawn from a single indicator without contextual information.
When decontextualized, an indicator—especially one based on reported incidence of a stigmatized problem—becomes a communicative sign subject to political pressures within the larger discourse. That is, those with a stake in reporting positive progress interpret reported data in ways that reinforce their optimism, perhaps even using it as propaganda to promote their progress. Meanwhile, those with reasons to be critical can present the same data to argue that efforts are failing.
For example, in March 2018, the UN tweeted a link to a report showing that the measures it implemented to end sexual exploitation and abuse by UN staff were having a positive effect and resulting in a reduced number of allegations against UN staff, from 165 reports to 148. The responses to this on Twitter included an acerbic critique from a women’s rights campaigner, Danielle L. Spencer: “Unfortunately, all this shows is a drop in reporting, not a drop in incidents—they know this and this is a PR stunt” (@daniellewas, March 14, 2018).
In other words, a lone indicator may not tell the whole story. The idea of measuring a complex and politically sensitive phenomenon such as gendered violence through a lone indicator has been widely critiqued by social scientists and legal scholars.
Furthermore, by shining a light on one problem, an indicator may inadvertently reinforce the invisibility of related problems. Through exclusion from the indicator, other related issues are rendered less important than the particular phenomena it measures. SDG 5.2 offers a good example. The SDG 5.2 on violence against women, the related public summits, and other high-level efforts have “brought partial sight to some of the previously gender blind, and generated some political discussion and action aimed at preventing such violence,” argues Chris Dolan (2014: 486). Nonetheless, Dolan continues: “The range of victims and survivors that are not just recognized but also addressed needs to be more inclusive—most urgently male and lesbian, gay, bisexual, transgender and intersex (LGBTI) victims and survivors.” Dolan is among those who have worked to draw attention to the lived experiences as well as lack of services for male survivors of conflict-related sexual violence (Edström et al. 2016).
As Dolan implies, the invisibility or exclusion of male and LGBTI survivors in the SDG indicator on violence against women in effect makes the indicator a sign that discursively constructs cisgender women as a priority. This can reinforce denialism, invisibility, and lack of services for non-female victims of SGBV. Because of stigma and homophobia, including laws criminalizing same-sex sexual behavior in many countries, gathering data on the extent of sexual violence against men and LGBTIQ+ people is even more challenging than doing so on violence against women; these victims are the uncounted among the uncounted.
Thus, while the SDG 5.2 on violence against women is an important step forward in greater recognition of the problem, and toward reducing the stigma and promoting action on this issue, a single indicator monitoring progress on a stigmatized and political problem can open up conceptual challenges and contested interpretations. At the same time, it may unintentionally, even in good faith efforts, contribute to rendering adjacent problems invisible, or suggest that they are somehow less urgent or important. The indicator does important work by setting up a flag that gathers diverse actors who can work together to address the problem it signals; at the same time, it draws the attention of critics, who may critique the indicator or its methodology, or use the reported data to press for greater progress.
While the actors engaged in addressing SGBV are relatively few, the global HIV response has mobilized many different stakeholders to set goals and collaborate to address them, including through redistributing resources among countries. Here, the political forces shaping health indicators and data are more complex, and are frequently critiqued and contested, as discussed in the next section.
Over the past 40 years, the global HIV response has excelled in coordinating diverse actors globally to provide health services and engage in policy advocacy. A critical part of this mobilization has been the development every few years of political declarations on HIV and AIDS by the UN General Assembly. Work following on from these declarations with targets is led by the Joint UN Programme on HIV and AIDS (UNAIDS), which also gathers data from countries to monitor their progress.
The work of priority-setting done by national health officials is closely tied to these targets and to national HIV strategic plans. Health officials collaborate with UNAIDS, WHO, and donors of Development Assistance for Health, such as the Global Fund to Fight AIDS, TB and Malaria (“the Global Fund”), and the US President’s Emergency Plan for AIDS Relief (PEPFAR). This work of national health resource allocation is increasingly based on cost-effectiveness principles: given a limited pool of funds for health in most countries, health policymakers and planners must prioritize investments in those interventions that deliver the greatest impact for the largest number of people.
However, this approach can inadvertently disfavor small or marginalized groups who may otherwise need to be prioritized for both ethical and efficacy reasons. This challenge is especially clear when considering the problem of reaching key populations who are most at risk of HIV: gay men and other men who have sex with men; sex workers; people who use drugs; and transgender people (WHO 2016). As of 2020, over 60% of new HIV infections globally occurred among these populations (UNAIDS 2020). However, in contrast to global estimates, many countries lack adequate or current data on the needs of key populations within their borders, and also lack services geared to reach them (Sabin et al. 2016).
A significant body of human rights research has shown that key populations tend to avoid HIV studies and clinics in countries where their behaviors are also criminalized; they fear being publicly exposed or reported to police (Amon and Kasambala 2009; Booth et al. 2013; Global Commission on HIV and the Law 2012, 2018; Schwartz et al. 2015). Like survivors of SGBV, and for similar reasons, criminalized and stigmatized key populations often avoid participating in research and data-collection efforts that might identify them and expose them to families, friends, employers, or the police.
This avoidance leads to what Stef Baral and Matt Greenall (2013) call the “data paradox” for key populations: “Decision-makers deny that most affected populations exist […] so no research gets done on these populations; the lack of data feeds the denial; and so on.” The data paradox for key populations is a vicious cycle of invisibility, in which, once again, absence of data is taken as evidence of absence of the issue by officials who may not be willing to acknowledge the existence of key populations, such as men who have sex with men (Narrain and Vance 2018).
This data paradox becomes clear in reviewing size estimates reported by countries to UNAIDS as part of the agency’s routine data-gathering process to measure progress toward shared global targets: criminalization of same-sex sexual behavior is statistically associated with reports of implausibly low (or indeed, entirely missing) population size estimates for men who have sex with men. Especially low population size estimates are found in countries that impose the death penalty (Davis et al. 2017). As a result of these low denominators of key population size, some countries then significantly overestimate their rate of success in reaching men who have sex with men, such as with HIV tests. In reality, they are missing an unknown number of uncounted people.
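The arithmetic behind this overestimation is straightforward: coverage is reported as the number of people reached divided by the official population size estimate, so an implausibly low denominator mechanically inflates the reported rate. The following sketch, using hypothetical numbers (the estimates, counts, and function are illustrative, not drawn from any country's actual data), shows the effect:

```python
# Illustrative sketch with hypothetical numbers: how an implausibly low
# official population size estimate inflates reported coverage of a
# service (here, HIV testing) among a key population.

def reported_coverage(people_reached: int, size_estimate: int) -> float:
    """Coverage rate as reported against a given population denominator."""
    return people_reached / size_estimate

people_tested = 8_000
official_estimate = 10_000   # implausibly low denominator reported by officials
plausible_estimate = 50_000  # what independent surveys might instead suggest

print(f"Coverage against official estimate: "
      f"{reported_coverage(people_tested, official_estimate):.0%}")
print(f"Coverage against plausible estimate: "
      f"{reported_coverage(people_tested, plausible_estimate):.0%}")
# The same programme output (8,000 tests) appears as 80% success against
# the official denominator, but only 16% against the plausible one.
```

The programme's actual work is identical in both calculations; only the uncounted people in the denominator change the story the indicator tells.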
When these biased data sets are fed into algorithms, the results may magnify existing inequalities that created the biases in the first place. To understand how biased data sets that omit uncounted people are used by and shape decision-making, it may be helpful to briefly examine the use of cost-effectiveness analysis software for decision-making processes, particularly in priority-setting or rationing resources.
Since the first UN Political Declaration on HIV, cost-effectiveness as a discourse and frame for prioritization in financing the HIV response has quickly risen to prominence. This has meant that anyone engaged in developing HIV finance plans, whether at the global or the country level, increasingly needs cost-effectiveness data in order to make the case for financing any intervention.
In one type of cost-effectiveness analysis, a health official inputs data about the costs and the typical health outcomes of each service into mathematical models. These models then generate and compare future potential scenarios of disease transmission. This may include an analysis of allocative efficiency: which populations or interventions a health official might invest in, in order to obtain the highest impact for the lowest expense. By inputting available data on HIV transmission among different populations (for instance, men who have sex with men, sex workers and their partners, or the general population), and the cost of programs that meet the specific needs of each group, as well as national targets for the HIV response, health officials can then receive and review projections produced by the software and select the scenario that delivers maximum health impacts within a fixed budget.
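The core allocative-efficiency logic described above can be caricatured in a few lines of code. This is a deliberately simplified sketch with hypothetical numbers and a hypothetical greedy rule (real tools couple allocations to epidemic transmission models rather than fixed per-person effects); its purpose is only to show how the official size estimate enters the calculation as a hard ceiling:

```python
# Simplified, hypothetical sketch of allocative-efficiency reasoning.
# Each programme has a fixed unit cost, a fixed number of infections
# averted per person reached, and a ceiling set by the official
# population size estimate. Funds are allocated greedily by impact
# per dollar until the budget is exhausted.

from dataclasses import dataclass

@dataclass
class Programme:
    name: str
    unit_cost: float              # cost per person reached
    infections_averted_pp: float  # infections averted per person reached
    max_people: int               # ceiling from the population size estimate

def allocate(budget: float, programmes: list[Programme]) -> dict[str, float]:
    """Greedy allocation: fund programmes in order of impact per dollar."""
    plan: dict[str, float] = {}
    ranked = sorted(programmes,
                    key=lambda p: p.infections_averted_pp / p.unit_cost,
                    reverse=True)
    for p in ranked:
        spend = min(budget, p.unit_cost * p.max_people)
        plan[p.name] = spend
        budget -= spend
    return plan

programmes = [
    Programme("general population outreach",
              unit_cost=5.0, infections_averted_pp=0.001, max_people=2_000_000),
    Programme("services for men who have sex with men",
              unit_cost=40.0, infections_averted_pp=0.02, max_people=10_000),
]
print(allocate(1_000_000, programmes))
```

Even in this toy version, `max_people`—the official size estimate—caps spending on the key-population programme: an undercounted denominator mechanically shrinks that group's share of the budget, regardless of how cost-effective its services are per person.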
These software tools thus offer users an evidence-based visualization to support a frank discussion about how best to allocate or ration limited resources. However, the software tools create an “epistemic object” representing the future by selectively focusing on specific aspects of that future to display visually. That is, the tools edit out the political, legal, and social contexts in which HIV flourishes: contexts in which denial, laws criminalizing HIV transmission, and other factors create a data paradox that warps data sets and leaves key populations uncounted.
While the decision is framed as an evidence-based choice between health interventions, in effect, the final charts produced by cost-effectiveness software visually pit (interventions for) key populations against one another in a competition for limited resources. Interventions for smaller populations are generally more expensive per person, and thus less likely to be cost-effective. For key populations, as shown above, official size estimates are often implausibly low or missing altogether. Because cost-effectiveness software does not explicitly address the ways in which criminalization may distort the data, the scenarios the software generates may lead health planners to inadvertently deprioritize criminalized groups that appear smaller or do not appear at all. Worse, the software in effect disfavors populations for whom there is no data, reinforcing the historical discrimination that rendered them invisible (Davis 2020).
In a context of flatlining development assistance budgets, these forms of cost-effectiveness analysis do offer important benefits: they support the reduction of waste and maximizing of resources for health services. Failure to use economic evaluation in decision-making could itself be considered unethical in conditions of scarcity (Dudley, Silove, and Gale 2012). For example, to uphold the human right to health, states are obligated to dedicate maximum available resources to progressive fulfillment of the right to health (UN CESCR 2000). Cost-effectiveness analysis can help states to fulfill this human rights obligation.
But at the same time, cost-effectiveness analysis always requires “some form of rationing”: “Any way you cut it […] developing a benefits package will produce winners and losers, especially in poor countries with large populations and small budgets for health. Losers in this context are the group of people that inevitably will get less, in terms of benefits or services, than others” (Yazbeck 2002: 7–8).
Considering historical discrimination against some smaller populations, however, those who lose out on health services may be those who have always lost out in the past. The processes in which these tools are used must be informed by the structural factors that shape the patchy and limited data on marginalized groups and should ensure the value of cost-effectiveness is balanced against other values, such as equity and the human right to non-discrimination. This becomes especially crucial in the digital age, as health institutions shift toward machine learning and algorithmic decision-making. Algorithms such as those used in cost-effectiveness analysis may reflect and shape broader patterns of meaning (Seaver 2017) and even reinforce mistaken assumptions and amplify biases (Duclos 2019).
Criminalization, stigma, and discrimination may distort data about marginalized groups; and when these biased forms of data are crucial to decision-making, they can wreak multiple kinds of havoc. Countries may fail to prioritize resources for uncounted or undercounted populations, and may overestimate their levels of success in coverage of key services, which in turn creates data (incidence rates, in this case) that further misrepresents reality (Davis et al. 2017). When absence of evidence is used as evidence of absence, uncounted populations may find themselves caught in a cascade of bad decisions about their health.
However, because the HIV response has incorporated civil society and community representation in many levels of decision-making, in some cases, community activists have been empowered to question the data used to make these decisions. African key populations advocates surveyed about their experiences participating in global health governance meetings described challenging the lack of data on transgender people and refusing to accept size estimates that they found implausibly small (Esom et al. 2016). For example, Peter Njane, a gay rights activist in Kenya, described arguing with PEPFAR over a size estimate of 10,000 men who have sex with men in Kenya: “We had disputes over how the data was collected. We questioned where they got their information. Donors were there, we’re shown the final product, the money has been used, and the community didn’t accept it” (in Esom et al. 2016: 18).
It is fortunate that in some cases activists feel empowered to challenge global health donors and national officials about the biased data sets. Such dialogic processes of defining indicators and data used to make decisions, when they incorporate diverse voices, offer the potential for indicators to be debated, critiqued, and revised.
The stakes for these contests can be high: as discussed in the third example, below, these local gaps in data can shape high-level decision-making that has life and death effects.
In 2016, a group of people living with HIV in Venezuela wrote a letter to the Global Fund headquarters in Geneva, Switzerland, to request urgent help. This letter, and the subsequent responses and debates regarding the contents of the letter, revealed cracks in the global health financing architecture—cracks created by a set of apparently objective numerical indicators, and the data that is used to report on progress against them. The letter led to high-level debates over what the indicators failed to capture, and ultimately, some small but significant changes in use of these indicators.
The Venezuelan activists wrote to request urgent shipments of antiretroviral treatment for people living with HIV in a context of rapid economic collapse. As they explained in their letter, Venezuela’s national currency had depreciated by 900%, inflation was 700%, and Venezuelans faced long supermarket lines for basics such as rice or milk: “Literally, we are not only suffering hunger, we are also dying, because our health system is totally collapsed” (RVG+ 2016: 2).
At this time, Venezuela’s economic collapse was still being widely denied by its political leaders. Venezuela had long been seen as an oil-rich country and one with a robust and flourishing public health system, which had largely eliminated malaria (Griffing, Villegas and Udhayakumar 2014). Venezuela also had never received Global Fund assistance before. The leadership of the Global Fund delayed their reply to the letter for some time, and finally responded negatively, explaining: “The Board is guided by its approved Eligibility Policy, which annually determines the countries eligible for Global Fund funding. Eligibility is determined by a country’s income level, measured by an appropriate economic indicator of the World Bank, and official disease burden data” (Hauser and Dybul 2017).
Indeed, this was strictly accurate. After many years of debating how best to allocate resources, the Global Fund Board, a vast parliament made up of hundreds of government and civil society representatives, had finally developed and approved an Eligibility Policy that outlined that its resources would be disbursed to low-income countries and countries with a high burden of HIV, TB, and malaria (Global Fund 2016). The Eligibility Policy aligned with existing overseas development aid criteria used by many bilateral donors that also contribute to the Global Fund and sit on its Board. However, these criteria had not been developed to respond to a situation that was occurring in a country like Venezuela: one in the midst of rapid economic collapse, and in which government leaders were covering up a failing health system.
First and foremost, guided by the World Bank (which has a seat on the Global Fund Board), the Board uses Gross National Income per capita (GNIpc) as the first sorting indicator to identify the countries in greatest economic need. The World Bank classifies all countries as either low-income, lower-middle-income, upper-middle-income, or high-income.
However, the World Bank never intended GNIpc to be used as an indicator to determine eligibility for health financing. GNIpc is calculated once a year and does not capture income and other social inequalities. It only shows the average income that existed in the country the previous year; not how it is taxed, whether it is subject to debt, or how it is allocated (or otherwise) to health services. As such, GNIpc is at best a crude indicator of national economic capacity for health. This was reflected in the Venezuela case.
In 2016, at the time of the initial urgent appeal from Venezuelan activists to the Global Fund, Venezuela was still classified as “high-income” by the World Bank, thus making it ineligible for aid from the Global Fund. By the time the Global Fund leadership finally responded to the appeal with their own letter pointing to their policy, hyperinflation and plunging oil prices were sending Venezuela’s national income into a tailspin. Shortly thereafter, Venezuela was reclassified by the World Bank as an “upper-middle-income country.”
The 2016 Global Fund Eligibility Policy did permit the agency to fund HIV, TB, and malaria programs in some “middle-income” countries, provided those countries met other criteria. One of these was “disease burden”: the percentage of people living with HIV in the general population, or the percentage living with HIV among key populations.
However, when it came to data on health, Venezuelans had a second problem: the political leadership of the country actively censored health data. Venezuela’s president adamantly denied that there was any crisis for people living with HIV and refused overseas assistance of any kind. He forbade publication of any official information that might paint a different picture: when the Ministry of Health published government bulletins showing increases in infant and maternal mortality, the health minister was promptly fired (BBC 2017).
As an international organization with close UN ties, the Global Fund relies on official health data reported by countries to the UN in order to determine eligibility in compliance with its policy. UNAIDS gathers data on HIV from countries on a regular basis, verifies it where it can, and normally shares this official data with the Global Fund to use in decision-making. But the most recent data UNAIDS had for Venezuela at the time of the letter was already several years out of date. Thus, without data on disease burden, this second pathway to eligibility was foreclosed, and Venezuela was again deemed ineligible for the Global Fund.
While these debates continued in Geneva and elsewhere, Venezuelan physicians formed an informal network to gather and share health data from clinics and hospitals, passing it on to international allies. Journalists managed to record the stark horrors of hospitals lacking basic equipment and supplies (Faiola 2017). Human Rights Watch (2016) drew on these sources to publish a report on the dire health crisis unfolding in Venezuelan hospitals.
The Global Fund had one final indicator that could make Venezuela eligible: disease burden in key populations. In cases where a country had low HIV prevalence in the general population but a concentrated epidemic, with HIV prevalence of 5% or higher among key populations, the country could be eligible for funding.
Hoping to assist the Venezuelans, experts at UNAIDS found articles by anthropologists and others showing shockingly high rates of HIV and malaria transmission among the indigenous Warao people and shared these with the Global Fund. But Global Fund managers pointed out that indigenous people were not among the key populations affected by HIV that were officially recognized by the WHO and UNAIDS (ICASO and ACCSI 2018: 24).
So, was there data on the officially recognized key populations? Certainly, neighboring countries had high rates of HIV among men who have sex with men and transgender women. If Venezuela had similar data, that would have made Venezuela eligible for the Global Fund. However, due to homophobia and denialism, official national data on men who have sex with men did not exist in Venezuela. When countries had no official data on key populations, the Global Fund’s policy in 2016 was to treat that lack of data as a zero.
In sum, a country in the midst of economic and social collapse, in desperate need of HIV and malaria prevention, treatment, and care, could not receive aid from a fund set up specifically for that purpose. The reasons were a set of narrowly defined global indicators and the country’s own politically created gaps in data, including data about stigmatized and marginalized groups. In effect, people living with HIV in Venezuela were being swallowed by a crack in the global health architecture—a structure built on what appeared to be robust and rational systems of quantification. But the structure failed to account for political realities shaping data, and for how simple indicators designed in an international organization for use in governance might omit or obscure complex realities in another country.
Venezuelan activists, and their allies in international NGOs, UN agencies and on the Board of the Global Fund persisted in their advocacy. In May 2017, consistent with its Eligibility Policy, the Board voted down a proposal to send emergency funding to the country. However, it also created a working group to find another solution. In May 2018, nearly two years after the urgent appeal for lifesaving medications was sent to them, the Board approved a new policy on countries in crisis, which then allowed a small amount of emergency funding to go to Venezuela, channeled by UN agencies and civil society groups (The Global Fund 2018b).
At the same time, the Board went through its periodic review and update of the Eligibility Policy, and approved a small change to a footnote in the policy. 2 As a result of proposals put forward by civil society constituencies that have permanent seats and votes on the Global Fund Board, the revised 2018 Eligibility Policy clarified that in cases where there was no official government HIV-prevalence data for key populations, or if a change in data led to a change in country eligibility, the Secretariat of the Global Fund was authorized to seek other data from UNAIDS to inform their eligibility determination (The Global Fund 2018a). In practice, this data could include data from peer-reviewed studies, civil society reports, or other data that UNAIDS experts felt was sufficiently credible to inform an eligibility determination.
That these debates happened and unfolded the way they did (including this highly contentious but potentially game-changing footnote) shows how diverse actors—government officials, activists, physicians, and UN agencies—can come together to debate and negotiate over numerical indicators and the data reported against them, as well as over the implications of decisions made with this data. As imperfect as this process of responding was, it was only able to happen at all because in the Global Fund’s governance structure, civil society activists have a seat at the table as equal peers with government officials, including voting powers. Without their active lobbying for Venezuela’s eligibility, it is likely that the appeal would have ended with the original refusal by the Global Fund leadership. Their advocacy further enabled a revision to the footnote that could open up access to aid for more countries, and establish space for more diverse and credible forms of data to be used in decision-making.
Thus, indicators and data can be valuable as sites of contest that sometimes lead to progressive change when they are part of processes that incorporate transparency, accountability, and debate by diverse actors.
All forms of knowledge are partial and emergent. Health indicators and data are signs abstracted from complex realities, and have far-reaching system effects; but as this chapter has attempted to show, they can also be made open to contest. All communicative signs should be understood as abstractions. One indicator, or one text, cannot capture the prismatic and complex contexts in which ill-health flourishes, or do the complex work needed to inform or function as a sophisticated decision-making or accountability tool.
As critics have observed, indicators used in global health governance exercise normative power that shapes decision-making, priority-setting, and financing in ways that may distort local realities and override local expertise. Moreover, biases in data used to set targets or priorities can be produced by social and political inequalities, such as stigma, discrimination, marginalization, and criminalization. If these biases are not explicitly recognized, they may lead to biased decision-making that amplifies inequalities instead of promoting equity.
At the same time, global health indicators create a rallying point for diverse actors, and can open up space for engagement, contestation, and social accountability work that can make them powerful tools in the hands of activists. It is for this reason that advocates for those most marginalized continue to demand that indicators be used to establish commitments and enable independent monitoring of progress. For example, in the aftermath of a 2019 UN High-Level Meeting on Universal Health Coverage, the Global Network of People Living with HIV (GNP+) issued a statement criticizing the gathering for its failure to set measurable indicators, arguing that, without indicators, the commitments made at the meeting were purely rhetorical (GNP+ 2019). The statement warned that failure to set indicators “will lead to minimal progress and maximum self-congratulation.”
The political authority of indicators and data is significant, and so in contexts where diverse actors have the space, the expertise, and the right to challenge authorities, indicators in global health governance do offer some hope for measurable progress.
This chapter draws from an earlier published book (Davis 2020).
As a consultant working for the three civil society delegations on the Global Fund Board, the author contributed to developing this footnote.