Monitoring the efficacy of treatment strategies to remove pathogens in decentralized systems remains a challenge. Evaluating log reduction targets by measuring pathogen levels is hampered by their sporadic and low occurrence rates. Fecal indicator bacteria are used in centralized systems to indicate the presence of fecal pathogens, but they are ineffective as treatment process indicators in decentralized systems because they generally occur at levels too low to assess log reduction targets. System challenge testing by spiking with high loads of fecal indicator organisms, such as MS2 coliphage, has limitations, especially for large systems. Microbes that are endogenous to the decentralized system, occur in high abundance, and mimic the removal rates of bacterial, viral, and/or parasitic protozoan pathogens during treatment could serve as alternative treatment process indicators to verify log reduction targets. To identify abundant microbes in wastewater, the bacterial and viral communities were examined using deep sequencing. Building infrastructure-associated bacteria, such as Zoogloea, were observed as dominant members of the bacterial community in graywater. In blackwater, bacteriophage of the order Caudovirales constituted the majority of contiguous sequences from the viral community. This study identifies candidate treatment process indicators in decentralized systems that could be used to verify log removal during treatment. The association of treatment process indicator occurrence with real-time, continuous measurements from on-line physical and chemical sensors will be discussed as a framework for monitoring treatment integrity in decentralized systems.
Traditional toxicity testing provides insight into the mechanisms underlying toxicological responses but requires a substantial investment of time and resources. The new paradigm of testing approaches involves rapid screening studies that can evaluate thousands of chemicals across hundreds of biological targets through the use of in vitro assays. Endocrine disrupting chemicals (EDCs) are of concern due to their ability to alter neurodevelopment, behavior, and reproductive success in humans and other species. A recent integrated computational model examined results across 18 ER-related assays in the ToxCast in vitro screening program to eliminate chemicals that produce a false signal by interfering with the technological attributes of an individual assay. However, in vitro assays can also yield false negatives when the complex metabolic processes that render a chemical bioactive in a living system cannot be replicated in an in vitro environment. In the current study, the influence of metabolism was examined for over 1,400 chemicals considered inactive by the integrated computational model. Over 2,000 first-generation and over 4,000 second-generation metabolites were generated for the inactive chemicals using in silico techniques. Next, a consensus model comprising individual structure-activity relationship (SAR) models was used to predict ER-binding activity for each of the metabolites. Binding activity was predicted for 8-10% of the metabolites within each generation. Additionally, approximately 20% of the inactive parents were found to have at least one potentially active metabolite. The approaches presented here can be used to identify parent chemicals that are inactive under in vitro conditions but might become metabolically active in a living organism.
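The consensus step described above can be sketched as a simple majority vote over individual model calls. The function, vote values, and metabolite names below are hypothetical illustrations, not the actual consensus implementation used in the study.

```python
# Minimal sketch of a consensus SAR classifier: each individual model votes
# 1 (predicted ER binder) or 0 (predicted non-binder), and the consensus
# call is the majority. All values here are invented for illustration.

def consensus_call(model_votes, threshold=0.5):
    """Return 1 (predicted binder) if the fraction of positive votes
    exceeds the threshold, else 0."""
    return int(sum(model_votes) / len(model_votes) > threshold)

metabolite_votes = {
    "metabolite_A": [1, 1, 0, 1, 1],  # 4 of 5 models predict binding
    "metabolite_B": [0, 0, 1, 0, 0],  # 1 of 5 models predicts binding
}
calls = {m: consensus_call(v) for m, v in metabolite_votes.items()}
# calls == {"metabolite_A": 1, "metabolite_B": 0}
```

In practice the threshold and weighting of individual models would be tuned against known actives; the majority vote is only the simplest variant.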
Increasing awareness about endocrine disrupting chemicals (EDCs) in the environment has driven concern about their potential impact on human health and wildlife. Tens of thousands of natural and synthetic xenobiotics are presently in commerce with little to no toxicity data, creating uncertainty about their impact on estrogen receptor (ER) signaling pathways and other toxicity endpoints. As such, there is a need for strategies that make use of available data to prioritize chemicals for testing. One of the major achievements within the EPA’s Endocrine Disruptor Screening Program (EDSP) was the network model combining 18 ER in vitro assays from ToxCast to predict in vivo estrogenic activity. This model overcomes the limitations of single in vitro assays at different steps of the ER pathway. However, it lacks many relevant features required to estimate safe exposure levels, and the composite assays do not consider the complex metabolic processes that might produce bioactive entities in a living system. This problem is typically addressed using in vivo assays. The aim of this work is to design a computational and in vitro approach to prioritize compounds and perform a quantitative safety assessment. To this end, we pursue a tiered approach taking into account the bioactivity and bioavailability of chemicals and their metabolites using a human uterine epithelial cell (Ishikawa)-based assay. This biologically relevant, fit-for-purpose assay was designed to quantitatively recapitulate in vivo human response and establish a margin of exposure. To manage the overwhelming number of metabolites to test, a prioritization workflow was developed based on ToxCast chemicals (1,677) and their predicted metabolites (15,406).
A scoring function was used to rank the metabolic trees of the considered chemicals, combining in vitro data from ToxCast and the literature with in silico data from the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) consensus and five of its individual QSAR models. The bioavailability of the parent chemicals and metabolites, as well as the metabolite structures, was predicted using ChemAxon Metabolizer software. The designed workflow categorized the metabolic trees into true positives, true negatives, false positives, and false negatives. The final output was a top-priority list of 345 ranked chemicals and related metabolites from the ToxCast library, as well as an additional list of 593 purchasable chemicals with known CASRNs. We are currently moving forward to test the highest-priority metabolic trees in the Ishikawa assay and are using a liver bioreactor to confirm important metabolites.
The goal of this presentation is to explore how health impact assessment (HIA) can help inform hazardous waste permitting regulations and how community vulnerability and cumulative impacts on potential health risks can be incorporated into permitting decisions made by the California Department of Toxic Substances Control.
The National Science Foundation (NSF) today realized the initial phase of its $30 million investment to upgrade the nation’s computational research infrastructure through the dedication of Stampede2, one of the most powerful supercomputing systems in the world. Based at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, this strategic national resource will …
This is an NSF News item.
The behavior and fate of pharmaceutical ingredients in coastal marine ecosystems are not well understood. To address this, the spatial and temporal distributions of 15 high-volume pharmaceuticals were measured over a 1-yr period in Narragansett Bay (RI, USA) to elucidate the factors and processes regulating their concentration and distribution. Dissolved concentrations ranged from below detection to 313 ng/L, with 4 pharmaceuticals present at all sites and sampling periods. Eight pharmaceuticals were present in suspended particulate material, ranging in concentration from below detection to 44 ng/g. Partitioning coefficients were determined for some pharmaceuticals, with their range and variability remaining relatively constant throughout the study. Normalization to organic carbon content provided no benefit, indicating other factors played a greater role in regulating partitioning behavior. Within the upper bay, the continuous influx of wastewater treatment plant effluents resulted in sustained, elevated levels of pharmaceuticals. A pharmaceutical concentration gradient was apparent from this zone to the mouth of the bay. For most of the pharmaceuticals, there was a strong relationship with salinity, indicating conservative behavior within the estuary. Short flushing times in Narragansett Bay, coupled with the pharmaceuticals’ presence overwhelmingly in the dissolved phase, indicate that most pharmaceuticals will be diluted and transported out of the estuary, with only trace amounts of several compounds sequestered in sediments. The present study identifies factors controlling the temporal and spatial dynamics of dissolved and particulate pharmaceuticals; their partitioning behavior provides an increased understanding of their fate, including bioavailability, in an urban estuary. Environ Toxicol Chem 2017;36:1846–1855. Published 2016 Wiley Periodicals Inc. on behalf of SETAC.
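The partitioning coefficients described above follow a standard calculation: the ratio of the particle-bound concentration to the dissolved concentration, optionally normalized to the organic carbon fraction. The sketch below uses invented inputs (only the 44 ng/g and 313 ng/L bounds come from the abstract; the 5% organic carbon is an assumption), not the study's measured values.

```python
# Standard particle-water partitioning calculations; example inputs are
# illustrative, not measured values from the Narragansett Bay study.

def kd(c_particulate_ng_g, c_dissolved_ng_l):
    """Particle-water partition coefficient Kd (L/kg).

    c_particulate_ng_g : concentration on suspended particles (ng/g dry wt)
    c_dissolved_ng_l   : dissolved concentration (ng/L)
    """
    # ng/g -> ng/kg (x1000), then divide by ng/L to get L/kg
    return c_particulate_ng_g * 1000.0 / c_dissolved_ng_l

def koc(kd_l_kg, f_oc):
    """Organic-carbon-normalized coefficient: Koc = Kd / f_oc."""
    return kd_l_kg / f_oc

example_kd = kd(44.0, 313.0)          # using the upper-bound concentrations
example_koc = koc(example_kd, 0.05)   # assuming 5% organic carbon (hypothetical)
```

If Koc values vary widely across samples after normalization, as reported above, organic carbon content is not the dominant control on partitioning.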
This article is a US government work and, as such, is in the public domain in the United States of America.
Exposure to environmental contaminants is well documented to adversely impact the development of the nervous system. However, the time-, animal-, and resource-intensive EPA and OECD testing guideline methods for developmental neurotoxicity (DNT) are not a viable solution for characterizing the potential hazards of the thousands of untested chemicals currently in commerce. Thus, research efforts over the past decade have endeavored to develop cost-effective alternative DNT testing methods. These efforts have begun to generate data that can inform regulatory decisions, yet there are major challenges to both the acceptance and the use of these data. Major scientific challenges for DNT include the development of new methods and models that are “fit for purpose,” the development of a decision-use framework, and regulatory acceptance of the methods. It is critical to understand that use of data from these methods will be driven mainly by the regulatory problems being addressed. Some problems may be addressed with limited datasets, while others may require data for large numbers of chemicals or the development and use of new biological and computational models. For example, mechanistic information derived from in vitro DNT assays can be used to inform weight-of-evidence (WoE) or integrated approaches to testing and assessment (IATA) for chemical-specific assessments. Alternatively, in vitro data can be used to prioritize (for further testing) the thousands of chemicals used in commerce for which there are no data at all on their potential to cause DNT. The focus of this problem-dictated strategy is that testing is driven by decision-making needs, and resource utilization is adjusted to provide efficient and timely data to address those needs. As the health and environmental impacts of a decision increase, data needs increase, resource use increases, and the need for reduced scientific uncertainty in estimates of risk increases.
Recent advances in testing methods and models hold great promise for the development and use of efficient testing strategies for DNT that are capable of initial prioritization and screening, hazard characterization, and hazard prediction. This abstract does not necessarily reflect U.S. EPA policy.
NTIA submits this report pursuant to Section 207 of the Commercial Spectrum Enhancement Act (CSEA), Title II of Pub. L. 108-494, which requires annual reporting on federal agencies’ progress to relocate radio communications systems from spectrum, or share spectrum, that has been reallocated to commercial use. This report provides details on two separate spectrum auctions conducted by the Federal Communications Commission (FCC) that included: 1) the 1710-1755 megahertz (MHz) band, and 2) the 1695-1710 MHz and 1755-1780 MHz bands.
Increased precipitation from a changing climate could pollute U.S. waterways with excess nitrogen, increasing the likelihood of severe water quality impairment from coast to coast, according to a new study by scientists Eva Sinha and Anna Michalak of the Carnegie Institution for Science and Venkatramani Balaji of Princeton University.
The results are published in this week’s issue of the journal Science.
The effects will be especially strong in the Midwest and …
This is an NSF News item.
Urban populations continue to increase globally and cities have become the dominant human habitat. However, the growth of cities is not universal. Shrinking cities face decreased income, reduced property values, and decreased tax revenue. Fewer people per unit area creates inefficiencies and higher costs for infrastructure maintenance and the provision of public amenities. However, population losses and economic distress are not equal in all neighborhoods, and in fact are quite heterogeneously distributed across the landscape. Broader statements about the trajectory of a shrinking city may mask underlying differences in economic, cultural, and environmental impacts, as well as differences in the ability of some neighborhoods to be resilient and adaptive to economic changes, climate change, and other environmental stressors. This paper examines the recent impact of population loss in neighborhoods in the Río Piedras watershed in San Juan, Puerto Rico, on the provision of ecosystem services, material and energy flows, and ecological impacts, using public data and data collected previously in two household surveys. Using scenarios, we estimate future population changes and their potential positive and negative impacts on the environment and human well-being in these neighborhoods.
Mangrove systems are known carbon (C) and greenhouse gas (GHG) sinks, but this function may be affected by global change drivers that include (but are not limited to) eutrophication, climate change, species composition shifts, and hydrological changes. In Puerto Rico’s San Juan Bay Estuary, mangrove wetlands are characterized by anthropogenic impacts, particularly tidal restriction due to infilling of the Martin Pena Canal and eutrophication. The objective of our research is to measure carbon sequestration and carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) fluxes in the San Juan Bay Estuary to understand the sustainability and role in global climate of this urban mangrove ecosystem. Cores for C sequestration measurements were collected and GHG fluxes were measured during rainy and dry seasons at 5 sites along a gradient of development and nitrogen loading in the San Juan Bay Estuary. At each site, paired GHG flux measurements were performed for mangrove wetland soil and estuarine water using static and floating chambers. Our results suggest a positive relationship between urban development and CH4 and N2O emissions, and demonstrate that in this system, estuarine waters are a major methane source. In addition to providing characterization of GHG fluxes in an urban subtropical estuary, these data provide a baseline against which future states of the estuary (after planned hydrological restoration has been implemented) may be compared.
A database of embryo-fetal developmental toxicity (EFDT) studies of 379 pharmaceutical compounds in rat and rabbit was analyzed for species differences based on the toxicokinetic parameters of area under the curve (AUC) and maximum concentration (Cmax) at the developmental adverse effect level (dLOAEL). For the vast majority of cases (83% based on AUC, n=283), dLOAELs in rats and rabbits were within the same order of magnitude (less than 10-fold different) when compared based on available data on AUC and Cmax exposures. For 13.5% of the compounds the rabbit was more sensitive and for 3.5% of compounds the rat was more sensitive when compared based on AUC exposures. For 12% of the compounds the rabbit was more sensitive and for 1.3% of compounds the rat was more sensitive based on Cmax exposures. When evaluated based on human equivalent dose (HED) conversion using standard factors, the rat and rabbit were equally sensitive. The relative extent of embryo-fetal toxicity in the presence of maternal toxicity was not different between species. Overall effect severity incidences were distributed similarly in rat and rabbit studies. Individual rat and rabbit strains did not show a different general distribution of systemic exposure LOAELs as compared to all strains combined for each species. There were no apparent species differences in the occurrence of embryo-fetal variations. Based on power of detection and given the differences in the nature of developmental effects between rat and rabbit study outcomes for individual compounds, EFDT studies in two species have added value over single-species studies.
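The "standard factors" for HED conversion mentioned above are typically the FDA body-surface-area Km factors (approximately 6 for rat, 12 for rabbit, 37 for an adult human). The sketch below applies that standard conversion to hypothetical doses; it is an illustration of the method, not a calculation from the database described in the abstract.

```python
# Standard body-surface-area HED conversion (FDA Km factors); the 10 mg/kg
# example doses are invented for illustration.
KM = {"rat": 6.0, "rabbit": 12.0, "human": 37.0}

def hed_mg_per_kg(animal_dose_mg_kg, species):
    """Human equivalent dose: animal dose scaled by the ratio of Km factors."""
    return animal_dose_mg_kg * KM[species] / KM["human"]

hed_rat = hed_mg_per_kg(10.0, "rat")        # 10 * 6/37  ~ 1.62 mg/kg
hed_rabbit = hed_mg_per_kg(10.0, "rabbit")  # 10 * 12/37 ~ 3.24 mg/kg
```

Because the rabbit Km is twice the rat Km, equal mg/kg doses in the two species map to different HEDs, which is why HED normalization can equalize apparent species sensitivity.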
Urban water systems convey complex environmental and man-made flows. The relationships among water flows and networked storages remain difficult to evaluate comprehensively. Such evaluation is important, however, as interventions (e.g., conservation measures, green infrastructure) are designed to modify specific flows of urban water (e.g., drinking water, stormwater) that may have systemic effects. We have developed a general model that specifies the relationships among urban water system components, and a set of tools for evaluating the model for any city as the R package CityWaterBalance. CityWaterBalance provides a reproducible workflow for assessing urban water systems by facilitating the retrieval of open data, largely via web services, and the analysis of these data using open-source R functions. It allows the user to 1) quickly assemble a quantitative, unified picture of flows through an urban area, and 2) easily change the spatial and temporal boundaries of analysis to match scales relevant to local decision-making. We used CityWaterBalance to evaluate the water system of the Chicago metropolitan area on a monthly basis for water years 2001-2010. The results, including the relative magnitudes and temporal variability of the major water flows in greater Chicago, are used to consider 1) trade-offs associated with management alternatives for stormwater and combined sewer overflows and 2) the significance of future changes in precipitation, which is the largest term in the Chicago water balance.
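The core bookkeeping behind an urban water balance is a conservation-of-mass statement: inflows minus outflows equals the change in storage for each time step. The Python sketch below illustrates that idea only; CityWaterBalance itself is an R package, and the flow names and magnitudes here are invented.

```python
# Minimal mass-balance sketch of the bookkeeping a tool like CityWaterBalance
# performs each month. Flow categories and values are illustrative only.

def water_balance(flows_in, flows_out):
    """Net change in storage for one time step (same units as inputs)."""
    return sum(flows_in.values()) - sum(flows_out.values())

month = {
    "in":  {"precipitation": 80.0,            # mm over the study area
            "imported_drinking_water": 15.0},
    "out": {"evapotranspiration": 45.0,
            "stormwater_runoff": 30.0,
            "wastewater_effluent": 18.0},
}
delta_storage = water_balance(month["in"], month["out"])  # 95 - 93 = 2.0 mm
```

Repeating this over every month of a water year yields the relative magnitudes and temporal variability of flows referred to above, and a persistent nonzero residual flags flows that are unmeasured or misestimated.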
In vivo studies have long been considered the gold standard for toxicology screening. Models developed in silico and/or from in vitro data to estimate points of departure (PODs) are often benchmarked against in vivo data to evaluate their quality and goodness of fit. However, it is not clear how well such a model can be expected to perform before further improvement amounts to overfitting. Here we estimate the amount of variance that can be expected within systemic in vivo data. The present study used the US EPA’s Toxicity Reference Database (ToxRefDB). The database incorporates over 5,000 in vivo toxicity studies from the Office of Pesticide Programs (registrant-submitted studies), the National Toxicology Program, pharmaceutical industries, and publicly available literature, covering over 1,000 chemicals. Using multilinear regression to calculate the residual sum of squares, we accounted for known variability in study conditions and quantified the unexplained variance of the log10(POD) at about 0.35. The leave-one-out method was used to assess the amount of variance explained by each study condition, and chemicals were found to be the biggest contributor. Stratifying the dataset by species and administration method showed similar results, indicating stability of the unexplained variance. Considering and quantifying the unexplained variance provides a benchmark and a lower bound on the mean-square error for predictive toxicity model development.
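The variance decomposition described above can be sketched as: regress log10(POD) on dummy-coded study conditions, then take the residual mean square as the unexplained variance. The data below are synthetic (a known noise level is injected so the estimate can be checked), not ToxRefDB values.

```python
import numpy as np

# Sketch of quantifying unexplained variance in log10(POD) after regressing
# out known study conditions. Synthetic data; true residual variance ~0.36.
rng = np.random.default_rng(0)
n = 200

# Design matrix: intercept plus two dummy-coded study conditions
# (stand-ins for, e.g., species and administration route).
X = np.column_stack([
    np.ones(n),
    rng.integers(0, 2, n),   # condition 1 indicator
    rng.integers(0, 2, n),   # condition 2 indicator
])
beta_true = np.array([1.0, 0.4, -0.2])
y = X @ beta_true + rng.normal(0.0, 0.6, n)   # log10(POD) with noise sd 0.6

# Ordinary least squares fit and residual sum of squares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
rss = residuals @ residuals
unexplained_var = rss / (n - X.shape[1])      # residual mean square
```

In the actual analysis the residual variance (~0.35 in log10 units) caps how closely any predictive model can be expected to reproduce individual in vivo PODs; the leave-one-out attribution of variance to each condition is a further step not shown here.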
Global climate change is likely to affect both temperature and resource availability in aquatic ecosystems. While higher temperatures may result in increased food consumption and increased mercury accumulation, they may also lead to increased growth and reduced mercury accumulation through somatic dilution. Dynamic energy budget (DEB) theory provides a broad and generalizable framework based on first principles of energy metabolism that is well suited to understand these interactions, allowing joint acquisition and interpretation of chemical exposure and stressor effect information to be translated into demographic rate changes. In the current study, we conducted growth and bioaccumulation experiments to examine the interaction of temperature and resource availability on mercury accumulation and effects in the estuarine fish Fundulus heteroclitus (mummichog). In the first experiment, juvenile mummichog were fed tuna naturally contaminated with Hg at either 3.3% or 10% of their dry body weight/day and held at either 15 or 27 °C for 28 days. Growth was low in most treatments, except in fish fed 10% body weight held at 27 °C (40% weight and 12% length increase). However, methylmercury bioaccumulation was similar across feeding conditions but increased with temperature (~17-fold increase in methylmercury concentration at 27 °C and ~7-fold increase at 15 °C, regardless of feeding rate). In the second experiment, mummichogs from two wild populations with differing native mercury exposures were fed either a high or low methylmercury diet. Fish were strip-spawned every two weeks during the feeding period. Adults were sampled for total mercury concentration at the start and end of the experiment, and egg methylmercury concentration was measured in unfertilized eggs from each spawning event. Danioscope software was used to assess the heart rate of developing embryos at 10 days post fertilization. 
A dark:light movement assay determined differences in behavior of larvae between treatments at three and 10 days post hatch using Ethovision software. Tissue analysis indicated successful maternal transfer of mercury to eggs in the high mercury feed treatment. Heart rate and movement assays indicated potential population level differences in baseline behavior. The use of these data in a DEB model may greatly aid in understanding how temperature and resource availability affect mercury bioaccumulation. Overall, this work contributes to the ongoing development of an ecological modeling framework in a fish with an extensive toxicological and genomic background. Ultimately, we are working to connect molecular mechanistic, physiological, reproductive, and behavioral responses to population level fitness.
Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis, and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are a less frequently used data gap filling technique, applied to estimate relative potencies for mixtures of chemicals that contribute to an adverse outcome through a common biological target. For example, the TEF approach has been used for dioxin-like effects, comparing each chemical's activity to that of the most toxic dioxin, 2,3,7,8-tetrachlorodibenzo-p-dioxin. The aim of this case study was to determine whether integrating two data gap filling techniques, QSARs and TEFs, improved the predictive outcome for the assessment of a set of polychlorinated biphenyl (PCB) congeners and their mixtures. PCBs are associated with many different adverse effects, including neurotoxicity, the endpoint of interest in this study. The dataset comprised 209 PCB congeners, of which 87 altered in vitro Ca2+ homeostasis, from which neurotoxic equivalency values (NEQs) were derived. The preliminary objective of this case study was to develop a QSAR model to predict NEQ values for the 122 untested PCB congeners. A decision tree model was developed using the number of position-specific chlorine substitutions on the biphenyl scaffold as a fingerprint descriptor. Three different positional combinations were explored on the basis of equivalence among the ortho, meta, and para positions. Five different decision trees were developed on the basis of restrictions on tree growth. The training dataset of 87 tested PCBs was evaluated using 5-fold cross validation and leave-one-out (LOO) internal validation to ultimately predict NEQ values for the 122 untested PCBs. The evaluation statistics of the “best” decision tree model were LOOCV RMSE: 0.29, 5-fold CV test RMSE: 0.34, and R2: 0.79.
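The positional fingerprint described above counts chlorine substitutions at the ortho, meta, and para positions of the biphenyl scaffold. The encoding below (substituted ring positions with primes dropped, both rings pooled) is our assumption for illustration; the study's exact descriptor construction may differ.

```python
# Sketch of the position-specific chlorine-count descriptor. Biphenyl ring
# positions 2/6 are ortho, 3/5 are meta, and 4 is para; primed positions on
# the second ring are mapped to the same classes.
ORTHO, META, PARA = {2, 6}, {3, 5}, {4}

def fingerprint(positions):
    """Map a congener's substituted positions (both rings pooled, primes
    dropped) to (n_ortho, n_meta, n_para) chlorine counts."""
    n_o = sum(1 for p in positions if p in ORTHO)
    n_m = sum(1 for p in positions if p in META)
    n_p = sum(1 for p in positions if p in PARA)
    return (n_o, n_m, n_p)

# PCB-153 is 2,2',4,4',5,5'-hexachlorobiphenyl; duplicates stand in for the
# primed positions on the second ring.
pcb153 = [2, 2, 4, 4, 5, 5]
print(fingerprint(pcb153))  # (2, 2, 2): two ortho, two meta, two para chlorines
```

A descriptor like this, computed for all 209 congeners, is the input on which a decision tree can split (e.g., on the ortho-chlorine count) to predict NEQ values.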
The results demonstrate the utility of using the TEF approach as an alternative data gap filling technique.
Ecological models of mummichogs (Fundulus heteroclitus) provide valuable tools to link controlled laboratory experiments to field observations. Mummichogs are useful study organisms due to their amenability to laboratory conditions, the availability of well-developed molecular tools, and their rich history in ecology and evolution. To understand the effects of chemicals and other stressors on population persistence, we are developing and testing a mathematical model of fecundity and dynamic energy budget, which will ultimately be integrated into an individual-based model of population persistence. Utilizing data from laboratory-based pair and small-group spawning, an oocyte growth and spawning model developed for fathead minnow (Pimephales promelas) was modified for mummichog. To determine the comparability of laboratory experiments using small numbers of individuals to natural populations, we assessed the reproductive status and energetic reserves of mummichogs held in the lab and those collected in Succotash Marsh, Jerusalem, RI, during the breeding season (July 2016). Fish were collected by seine at the same tidal stage on multiple days throughout the lunar cycle, and the abundance of individuals in four size classes (<40, 41-60, 61-80, and >80 mm) was recorded. Consistent with a resource-rich environment, the laboratory-bred fish were more likely to have mature oocytes than the field-caught 41-60 mm fish, and they had a much greater mass of body fat and greater hepatosomatic indices. These data, along with ongoing laboratory-based growth and reproduction experiments, are being used to refine the fecundity and bioenergetics models. This combination of laboratory experiments, field study, and ecological models provides a testable system to predict the effects of stressors on the tradeoffs throughout the mummichog lifecycle between energy storage, growth, and reproduction.
This multi-year pilot study evaluated a proposed field method for its effectiveness in collecting a benthic macroinvertebrate sample adequate for use in the condition assessment of streams and rivers in Neuquén Province, Argentina. A total of 13 sites, distributed across three rivers, were sampled. At each site, benthic macroinvertebrates were collected at 11 transects. Each sample was processed independently in the field and laboratory. Based on a literature review and resource considerations, the collection of at least 300 organisms at each site was determined to be necessary to support a robust condition assessment and was therefore selected as the criterion for judging the adequacy of the method. This target number of organisms was met at all sites when collections from all 11 transects were combined. A subsequent bootstrapping analysis of the data was used to estimate whether collecting at fewer transects would reach the minimum target number of organisms at all sites. At a subset of sites, the total number of organisms frequently fell below the target when collections from fewer than 11 transects were combined. Site conditions where <300 organisms might be collected are discussed. These preliminary results suggest that the proposed field method yields a sample adequate for robust condition assessment of the rivers and streams of interest. When data become available from a broader range of sites, the adequacy of the field method should be reassessed.
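The bootstrapping step described above can be sketched as resampling per-transect organism counts and asking how often a reduced number of transects still sums to the 300-organism target. The counts below are invented for illustration, not data from the Argentine sites.

```python
import numpy as np

# Bootstrap sketch: probability that k resampled transects reach the
# 300-organism target at a site. Per-transect counts are hypothetical.
rng = np.random.default_rng(42)
transect_counts = np.array([35, 28, 41, 30, 22, 38, 27, 33, 29, 36, 31])  # 11 transects

def prob_reaching_target(counts, k, target=300, n_boot=10_000, rng=rng):
    """Fraction of bootstrap draws of k transects whose summed count
    meets or exceeds the target."""
    draws = rng.choice(counts, size=(n_boot, k), replace=True)
    return float((draws.sum(axis=1) >= target).mean())

p11 = prob_reaching_target(transect_counts, 11)  # all 11 transects
p7 = prob_reaching_target(transect_counts, 7)    # a reduced effort of 7
```

For a site like this one (mean of roughly 32 organisms per transect), 11 transects almost always clear the target while 7 rarely do, which is the kind of site-level contrast the analysis above quantifies.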