In 2010, I wrote that “exposure leads to the dose that makes the poison,” to augment Paracelsus’ famous statement.1,2 The concept has, however, been at the heart of the field since Wayne Ott3 noted that exposure requires “contact” with a chemical, physical or biological agent. Unfortunately, exposure science today still contributes only a minor increment to ongoing environmental health research in comparison to toxicology, and it is only beginning to make real inroads into the primary prevention approaches used by engineers and architects to reduce toxicant exposures in our foods, consumer products, the cars we drive, and the homes we live in.

I maintain that the continued lack of sufficient appreciation of exposure science is due to the overemphasis on the hazard component of risk assessment/risk management. The imbalance between exposure and hazard in the evaluation of potential toxicant risk was quite evident in the first NRC risk assessment report, published ~31 years ago (The Red Book4). It was still evident in the Silver Book.5 Fortunately, the Silver Book did emphasize the simultaneous evaluation of exposure and effects in problem formulation. We need to revisit the Integrated Risk Information System process to take greater advantage of our current capabilities in exposure assessment (science).6 We must always keep in mind that if there is little or no exposure, then there is no meaningful dose!

Mention was made in the Silver Book of the Food Quality Protection Act of 1996, which directed the examination of cumulative and aggregate risk and led to modeling and measurement studies on aggregate and cumulative exposures.7 The EPA did actually conduct such studies for pesticides, but that work was abruptly interrupted by the misplaced controversy over the design of the Children’s Environmental Exposure Research Study. To date, EPA exposure research has not been able to recover effectively from the fallout, though well-articulated criteria and administrative safeguards for human studies have since been published by the Agency.

Some say that there has, in fact, been an uptick in exposure characterization through the more common application of biomonitoring.8 The National Health and Nutrition Examination Survey program,9 started by the CDC and now mimicked by others, has provided substantial amounts of data and population-based analyses on the internal exposures of the US population to many toxicants. They, and others, deserve applause for these efforts. Unfortunately, there are still many gaps in our ability to accurately interpret these data, since there are few, if any, companion measurements of external exposure made by the CDC to accompany the biomonitoring measurements. One must understand that a single biomarker measurement can only indicate that a person has been exposed; conversely, a negative result does not preclude exposure, owing to clearance and metabolic considerations. Moreover, biomarker measurements that are not complemented by pertinent environmental, microenvironmental, and receptor activity information do not readily lead to accurate risk assessment or management of a toxicant.10 Thus, a valuable resource has a major deficiency: the lack of personal exposure monitoring and detailed activity/behavior information, especially for the individuals at highest risk of exposure and of adverse health effects, because biomarker measurements cannot readily be matched with actual exposure intensity, frequency, duration, or route. Unfortunately, no other agency has been able to fill these research gaps because of financial constraints.

The “exposome” concept took the idea of biomonitoring to a potentially new level by proposing that “-omic” tools could be used to assess internal exposures and early markers of disease. At the same time, it inadvertently relegated external markers of exposure to secondary status by emphasizing the use of biomarkers.11 The concept was eventually modified to include external markers in order to address the need for intervention in situations where exposure can be mitigated.12,13 The recently started Human Early-Life Exposome Project will use both approaches to evaluate the early-life “exposome”.14

The neglect of exposure science must not persist, especially in light of the fact that the public wants more realistic and better risk information. Thus, there is a need to collect data on both external and internal markers of exposure, based upon sound principles and using both old and new tools, for example, sensors and biomonitoring that quantify short- and long-term exposures. In each case, the intensity of exposure still cannot be accurately characterized without understanding human behavior and activity patterns, a major requirement that differentiates exposure measurement/modeling from other approaches to characterizing environmental toxicant risks.
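To make this point concrete, a minimal formalization (a standard textbook sketch, not drawn from any of the reports cited here) expresses a person's exposure as the concentration encountered during periods of contact, integrated over time, and approximated by a sum over microenvironments:

\[
E \;=\; \int_{t_1}^{t_2} C(t)\,dt \;\approx\; \sum_{j=1}^{J} \bar{C}_j\, t_j ,
\]

where \(C(t)\) is the concentration in contact with the person at time \(t\), \(\bar{C}_j\) is the average concentration in microenvironment \(j\) (home, car, workplace, and so on), and \(t_j\) is the time spent there. The \(t_j\) come only from activity/behavior data, which is precisely why exposure cannot be characterized from environmental concentrations alone.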

The 2012 NRC report, Exposure Science in the 21st Century (The Gold Book),15 built upon the process continuum that the environmental health science field has used to define its basic principles, and it placed exposure measurement and modeling science as a core discipline that bridges sources with health outcomes. The Gold Book made a number of recommendations about the need for exposure science research and its use in policy decision making and implementation.15 In addition, it made recommendations about the need for simple sensors and for the engagement of communities at risk.

The federal agencies are beginning to coordinate efforts on exposure science at the national level, a welcome step forward. I attended an EPA summit on exposure in April 2014, and it was good to hear and see that EPA wants to implement many of the ideas presented in Exposure Science in the 21st Century and to continue a dialog with 18 other federal agencies on exposure science. However, the financial investment in exposure science research still lags.

One critical problem remains: exposure science is still not sufficiently recognized as a necessary first step toward addressing a suspected environmental health issue. Further, it seems that reviewers for health research granting agencies continue to focus first on a suspected or actual health outcome. While that is a suitable approach for an old or known problem area of concern, for others it is like looking for a dime under the lamp post when the dime is more than 6 feet away from the lighted area. In science you do not always get the right answer, or the most important answer, with the first hypothesis. Exposure science, when implemented properly, can ensure that a health outcome hypothesis is well grounded in reality. The process outlined by the continuum must be used as a guide for health science research and applications.1,15 Thus, one must first determine the “contaminant(s) and sources of greatest concern,” and then determine which routes of contact would be of greatest concern to health. However, one cannot stop there. The next step, which is often weakly addressed through “guesses,” is the examination of the “contacts” that lead to actual acute or chronic exposures of concern for health. The completion of exposure science measurement and modeling projects can then guide the development of hypotheses for health effects studies (epidemiology) and for the risk characterizations that support regulations and strategies to reduce or eliminate contacts and exposures that increase the risk of disease.

In the late 1990s, the environmental health sciences actually embraced the idea of first characterizing exposure to address an issue, within the recommendations made by the NRC committee on PM and their implementation.16,17 One of the important first steps in obtaining a better understanding of exposure to the newly regulated pollutant was to collect data on human contact with ambient fine particulate matter (PM2.5). The committee recommended, and various organizations (e.g., EPA) committed, the funds for research needed to quantify “contacts of concern” before completing new epidemiological studies (see Table 5.1 in NRC 1998,16 The Committee’s Research Investment Portfolio: Timing and Estimated Costs ($ million/year in 1998 dollars) for Recommended Research on Particulate Matter). This was a logical and scientifically defensible plan. It was stated on page 101 of the report that:

…the committee's research plans for human exposure assessment appear first, front-loaded into the early years of the portfolio, because there is an urgent need to characterize actual exposures of potentially susceptible persons to particulate matter and to characterize the biological consequences of those exposures. Methods are already in hand to assess personal exposures to particulate matter and their outcomes, but little investigation has been done or is yet planned by EPA to investigate the particulate-matter exposures of susceptible persons, …Further, there is a serious lack of understanding of the relationship of concentrations measured at fixed outdoor monitoring sites with the actual personal exposures of such individuals. Such gaps in knowledge should be addressed immediately in a 3-year program beginning in 1998.

The results from the recommended studies led to better epidemiological and toxicological investigations and to a tightening of the PM2.5 standard.17 The type of structure and process used to recommend and conduct research on a specific environmental health issue, such as that for particulate matter, does not exist today, but it should.

A current example of this need involves so-called “Unconventional Natural Gas Development” (UNG), or hydrofracking, commonly referred to as “fracking.” A review of the situation was recently published by Adgate et al. in 2014.18 In addition, in December 2013 the NRDC conducted a workshop on monitoring needs.19 To be transparent, UNG activities have been around for years, but not on the scale being pursued today, or projected for the years ahead. As with PM, after UNG contaminant identification has been completed and on- and off-site transport determined, one of the first orders of business should be to conduct well-defined exposure studies in communities and workplaces. There was discussion at the workshop about monitoring of air, water, etc., but very little on human exposure monitoring. The recommendations from both documents tried to address exposure, but primarily as it relates to health effects, which remain ill defined except for some worker populations. I suggest that all stakeholders read the PM committee charge, or at least the summaries of all four reports, to better understand how to systematically address the issues on exposure and health for UNG presented by Adgate et al.18

We see similar compartmentalization and fragmented research on other issues. An important example involves consumer products containing engineered nanomaterials. The first questions that need to be addressed are: Is there any exposure to those nanomaterials and, if so, which components, amounts, and forms, and for what uses?20 There are many other examples of situations and conditions that demand better exposure characterization. Another example is phthalates. I have been on a Consumer Product Safety Commission (CPSC) panel since about 2010 that is addressing the impact of phthalates in toys and/or personal care products on children and women of child-bearing age. The major finding is that, although we understand the toxicity of many phthalates, a major gap in knowledge remains: quantitative information on the dominant routes of exposure and the personal activities that drive them (e.g., food ingestion). These data are needed to place phthalate exposures from toys, etc., into a proper context.

I could provide more examples, but it is clear that the “ready, shoot, aim” approach is all too common and inadequate. We in the exposure science community know that merely combining environmental concentration measurements with hazard information does not provide estimates of actual or even potential human exposures. We must find effective ways to share this knowledge with our colleagues in engineering, biomedicine and toxicology in order to get a better handle on “contact” with chemicals and then exposure. Thus, it is incumbent upon agencies, academia and industry to conduct meaningful exposure studies to fill information gaps. I hope these issues are addressed by the government and other stakeholders, including the Interagency Task Force on Exposure Science.