Why are pregnant women in Nepal gaining more weight?

A study of the factors driving a rise in weight gain among pregnant women in Nepal rules out poor diet quality in the first trimester as one of the major causes, the researchers say.

Historically, one of the greatest challenges facing pregnant women in Nepal and other low-income countries was undernourishment, a result of poverty. While that continues to be a concern, doctors are seeing some of the same issues confronting women in western nations: excessive weight gain and the health risks that come with it, such as high blood pressure and gestational diabetes.

Obstacles to addressing the problem included a lack of data, prompting a pilot study on gestational weight gain among pregnant women in Nepal by Shristi Rawal, an assistant professor of nutritional sciences at the Rutgers School of Health Professions; Kelly Martin, a 2021 graduate of the doctor of clinical nutrition program and an assistant professor at the State University of New York College at Oneonta; and other faculty members.

Their findings appear in the journal BMC Nutrition.

Rawal, who is from Nepal, says the impact of diet quality has been studied in wealthier countries but has not been investigated in the context of many low-income countries, including Nepal.

“Studies on perinatal complications have largely been based on Caucasian samples from high income countries, and there has been a lack of diversity in general in terms of women represented in these studies,” she says. “Pregnancy complications are increasing in Nepal, and no one was doing this work there. This is a first step.”

The study tracked 101 pregnant women receiving prenatal care at Dhulikhel Hospital at Kathmandu University. Rawal and her colleagues administered a 21-item questionnaire to the participants to assess their intake of foods from groups categorized as either healthy (such as whole grains, fruits, and vegetables) or unhealthy (such as desserts, refined grains, and red meats).

The study looked at diet quality in the first trimester and the rate of gestational weight gain from the second to the third trimester but found no link between diet quality in early pregnancy and the rate of gestational weight gain. It did find that a high intake of red meat could be a potential factor in driving up weight.

“The most striking result is that so many had an excessive rate of gestational weight gain,” says Rawal. “If diet quality is not it, it could be daily caloric intake, physical activity, or sleep that could be associated with gestational weight gain. It could be other diet, lifestyle, or clinical factors. The next step is collecting more data in a larger sample.”

The pilot study established the need to conduct a larger birth cohort study with hundreds to thousands of women seeking antenatal care at Dhulikhel Hospital.

A key part of the pilot study also was to evaluate the efficacy of a novel dietary screening tool in capturing valid dietary data in the target population of Nepalese pregnant women.

In a paper published in the Maternal and Child Health Journal, the researchers conclude that the 21-question dietary screening tool, modified for use by pregnant Nepalese women, is a valid and reliable instrument for assessing the dietary intake of pregnant women in Nepal.

“This adds credence to the tool, and we now know that it has cultural applicability to the setting and that it measures what it is intended to measure,” says Martin, who was the first author of both papers. “This is important for conducting further studies on diet quality in this population.”

Rawal is in the midst of a study testing a mobile app that supports Nepalese women with gestational diabetes by providing them with information and tools to adopt diet and lifestyle modifications needed to self-manage their condition.

Source: Rutgers University

Work-from-home parents watched kids more in COVID’s first year

A dramatic shift toward remote work during the COVID-19 pandemic caused telecommuting parents in the United States to spend significantly more time “parenting” their children in the first year of the pandemic than they did before, according to a new study.

In the study in the Journal of Marriage and Family, the researchers found that parents working remotely, particularly mothers, significantly increased the amount of time they spent on supervisory parenting—or “watching” their children as they did other activities, such as their job-related duties, not focused on childcare.

Mothers, both those working remotely and on-site, also altered their schedules more often during the pandemic to extend the paid workday.

However, the findings show no overall increase in the amount of time working parents spent on primary childcare duties—feeding, bathing, and other basic care—during the pandemic, regardless of whether they commuted to their jobs or worked remotely.

“The lack of increase in time devoted to basic childcare activities is much less surprising given the spike in telecommuting parents working while in their children’s presence or supervising them,” says coauthor Emma Zang, an assistant professor of sociology, biostatistics, and global affairs at Yale University.

“Our study demonstrates that parenting during the pandemic’s first year, particularly for moms working from home, often required multi-tasking and adjusting work schedules. This suggests that while remote work provides parents greater flexibility, there are potential negative effects on work quality and stress that are disproportionately faced by mothers.”

The study is the first to utilize time-diary data in the United States—records of individuals’ daily activity—to examine the association between parents’ work arrangements during the pandemic and how they use their time. Specifically, Zang and her coauthors—Thomas Lyttelton of Copenhagen Business School and Kelly Musick of Cornell University—analyzed nationally representative data from the 2017–2020 American Time Use Survey to estimate changes in paid work, childcare, and housework among parents working remotely and on site from before the pandemic and after its onset.

Time parents spent with their children present, but not directly supervising them, increased by more than an hour per day among telecommuting mothers and fathers during the pandemic, and supervisory parenting increased over the same period by 4.5 hours among mothers and 2.5 hours among fathers, on average. (A 104% increase over pre-pandemic levels for moms, and an 87% increase for dads.) The much steeper increase in the amount of time spent by mothers on supervisory duties suggests they have disproportionate responsibility for childcare relative to fathers, the researchers say.

The study also revealed that most of the time telecommuting parents spent in their children’s presence or supervising them on workdays during the pandemic in 2020 occurred while they were simultaneously engaged in job-related activities. Moms and dads spent just under an additional hour of work time with children present; mothers spent four additional hours of work time supervising children, compared to two more among fathers.

Parents who commuted to work did not see a statistically significant increase in these areas, suggesting that they were constrained in how they could respond to rising childcare demands during the pandemic, the researchers note.

“Remote work allowed parents to triage during the disruptions of daycare closures and online schooling, even if the burden fell disproportionately on mothers,” says Lyttelton. “Commuting parents had even less leeway in their schedules.”

There is evidence of a reduction in the gender gap concerning household labor between telecommuting mothers and fathers during the pandemic. The study found that parents, particularly fathers, working from home increased the amount of time they spent on household chores, such as laundry and cleaning, during the pandemic. Fathers spent an additional 30 minutes per day on housework—up from 44 minutes per day pre-pandemic—while mothers logged an extra 16 minutes of chores.

The researchers also found a disparity between telecommuting mothers and fathers in the amount of time they spent playing with their children, as opposed to time spent with children that didn’t involve play. Moms working from home spent an additional 16 minutes per day playing with their kids while dads across both work arrangements played with their children an extra six minutes per day. Mothers working on-site saw no increase during the pandemic, according to the study.

The findings on housework and time spent playing with children differ from evidence collected prior to the pandemic, which showed that remote work is associated with large gender disparities in housework and smaller disparities in childcare, the researchers note.

Mothers working remotely and on-site both reported altering their schedules during the pandemic, working during non-standard hours presumably to meet the increased demands of parenting, the researchers say.

“Our work provides insights into important dimensions of inequality during the pandemic between mothers and fathers and between parents who work from home and those who work on-site,” Zang says. “The pandemic underscored that our work culture is unaccommodating toward the demands parents face and that our policy infrastructure is ill-suited to support working parents.

“We need change at the public and private levels to better serve the wellbeing of working families.”

Source: Yale University

Fertilizer could be made much more sustainably

Researchers have shown how nitrogen fertilizer could be produced more sustainably.

This is necessary not only to protect the climate, but also to reduce dependence on imported natural gas and to increase food security.

Intensive agriculture is possible only if the soil is fertilized with nitrogen, phosphorus, and potassium. While phosphorus and potassium can be mined as salts, nitrogen fertilizer has to be produced laboriously from nitrogen in the air and from hydrogen. And, the production of hydrogen is extremely energy-intensive, currently requiring large quantities of natural gas or—as in China—coal.

Besides having a correspondingly large carbon footprint, nitrogen fertilizer production is vulnerable to price shocks on the fossil fuels markets.

Paolo Gabrielli, senior scientist at the Laboratory of Reliability and Risk Engineering at ETH Zurich, has collaborated with Lorenzo Rosa, principal investigator at Carnegie Institution for Science at Stanford University, to investigate various carbon-neutral production methods for nitrogen fertilizer.

In their study, the two researchers conclude that a transition in nitrogen production is possible and that such a transition may also increase food security. However, alternative production methods have advantages and disadvantages. Specifically, the two researchers examined three alternatives:

  • Producing the necessary hydrogen using fossil fuels as in the business-as-usual approach, but instead of emitting the greenhouse gas CO2 into the atmosphere, capturing it at the production plants and permanently storing it underground (carbon capture and storage, CCS). This requires not only an infrastructure for capturing, transporting, and storing the CO2 but also correspondingly more energy. Despite this, it is a comparatively efficient production method. However, it does nothing to reduce dependence on fossil fuels.
  • Electrifying fertilizer production by using water electrolysis to produce the hydrogen. This requires, on average, about 25 times as much energy as today’s production method using natural gas, so it would take huge amounts of electricity from carbon-neutral sources. For countries with an abundance of solar or wind energy, this might be an appealing approach. However, given plans to electrify other sectors of the economy in the name of climate action, it might lead to competition for sustainable electricity.
  • Synthesizing the hydrogen for fertilizer production from biomass. Since it requires a lot of arable land and water, ironically this production method competes with food production. But the study’s authors point out that it makes sense if the feedstock is waste biomass—for example, crop residues.

The researchers say that the key to success is likely to be a combination of all these approaches depending on the country and on specific local conditions and available resources.

In any case, it is imperative that agriculture make more efficient use of nitrogen fertilizers, as Rosa stresses: “Addressing problems like over-fertilization and food waste is also a way to reduce the need for fertilizer.”

In the study, the researchers also sought to identify the countries of the world in which food security is currently particularly at risk owing to their dependence on imports of nitrogen or natural gas. The following countries are particularly vulnerable to price shocks in the natural gas and nitrogen markets: India, Brazil, China, France, Turkey, and Germany.

Decarbonizing fertilizer production would in many cases reduce this vulnerability and increase food security. At the very least, electrification via renewables or the use of biomass would reduce the dependence on natural gas imports. However, the researchers put this point into perspective: all carbon-neutral methods of producing nitrogen fertilizer are more energy intensive than the current method of using fossil fuels. In other words, they are still vulnerable to certain price shocks—not on natural gas markets directly, but perhaps on electricity markets.

Decarbonization is likely to change the line-up of countries that produce nitrogen fertilizer, the scientists point out in their study. As things stand, the largest nitrogen exporting nations are Russia, China, Egypt, Qatar, and Saudi Arabia. Except for China, which has to import natural gas, all these countries can draw on their own natural gas reserves. In the future, the countries that are likely to benefit from decarbonization are those that generate a lot of solar and wind power and also have sufficient reserves of land and water, such as Canada and the United States.

“There’s no getting around the fact that we need to make agricultural demand for nitrogen more sustainable in the future, both for meeting climate targets and for food security reasons,” Gabrielli says.

The war in Ukraine is affecting the global food market not only because the country normally exports a lot of grain, but also because the conflict has driven natural gas prices higher. This in turn has caused prices for nitrogen fertilizers to rise. Even so, some fertilizer producers are known to have ceased production, at least temporarily, because the exorbitant cost of gas makes production uneconomical for them.

The research appears in Environmental Research Letters.

Source: ETH Zurich

Moms’ lack of fiber can boost obesity risk in baby mice

The offspring of lactating mice whose diets lack fiber may be highly prone to developing obesity, a new study shows.

Those offspring lack microbial diversity in their gut and have low-grade inflammation, the researchers report.

The findings in the journal Cell Host & Microbe could help explain why obesity is increasing, especially in children. However, because the experiment was conducted in mice, the researchers can only speculate how much the results translate to humans.

“As long as young mice were maintained on a standard diet, there was no difference in their weight or other metabolic parameters, regardless of whether or not their mother ate fiber,” says senior author Andrew Gewirtz, professor in the Institute for Biomedical Sciences at Georgia State University.

“But striking differences occurred when they were exposed to a Western style diet. The mice from the fiber-deprived mothers gained striking amounts of weight. The mice from the mothers who had the fiber diet gained only small amounts of weight on this diet.”

A Western style diet, also known as a fast food diet or obesogenic diet, is high in fat and low in fiber. The standard diet that young mice are raised on is a relatively healthy, mostly plant-based diet with a small amount of animal products.

If these results translate to humans, they could help explain why, among adolescents with equally easy access to fast food, some exhibit large increases in adiposity while others remain fit and lean.

The study also found that if mothers were not consuming fiber, the offspring didn’t get particular bacteria. And if the offspring don’t have those bacteria, then unless the bacteria are deliberately administered, fiber by itself doesn’t provide a health benefit. The fiber is only beneficial if bacteria are there to metabolize it, Gewirtz explains.

The researchers studied the offspring’s fecal matter to determine the bacteria they were missing.

“They’re missing beneficial bacteria that help keep out inflammatory bacteria,” says lead author Jun Zou, a research assistant professor in the Institute for Biomedical Sciences. “The beneficial bacteria do two particular things. They can metabolize the fibers to produce beneficial products such as short-chain fatty acids and exclude bacteria that are pro-inflammatory.”

One limitation of the study was the way the experiments were performed. The mice were kept in cages in a research facility, so they didn’t have other ways of acquiring these beneficial bacteria, unless they were deliberately administered to them. This differs from human experience. Even if a child’s mother didn’t eat fiber, that child might be able to play with other children at daycare and acquire these bacteria.

“That’s one reason that our findings might not apply to humans, but we just don’t know,” Gewirtz says.

Next, the researchers want to understand the mechanism behind why some mice are so prone to gain weight when exposed to obesogenic diets and then develop simple approaches to prevent passing along an unhealthy microbiome. For instance, perhaps a pregnant woman could be given dietary supplements of fiber, probiotics, or a combination of the two.

Additional authors are from the Center for Inflammation, Immunity and Infection at the Institute for Biomedical Sciences. The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) of the National Institutes of Health and American Diabetes Association funded the work.

Source: Georgia State University

Blood test detects marker of Alzheimer’s neurodegeneration

A new test detects a novel marker of Alzheimer’s disease neurodegeneration in a blood sample, researchers report.

The biomarker, called “brain-derived tau,” or BD-tau, outperforms current blood diagnostic tests used to detect Alzheimer’s-related neurodegeneration clinically. It is specific to Alzheimer’s disease and correlates well with Alzheimer’s neurodegeneration biomarkers in the cerebrospinal fluid (CSF).

“At present, diagnosing Alzheimer’s disease requires neuroimaging,” says Thomas Karikari, assistant professor of psychiatry at the University of Pittsburgh and senior author of the study in the journal Brain.

“Those tests are expensive and take a long time to schedule, and a lot of patients, even in the US, don’t have access to MRI and PET scanners. Accessibility is a major issue.”

Currently, to diagnose Alzheimer’s disease, clinicians use guidelines set in 2011 by the National Institute on Aging and the Alzheimer’s Association. The guidelines, called the AT(N) Framework, require detection of three distinct components of Alzheimer’s pathology—the presence of amyloid plaques, tau tangles, and neurodegeneration in the brain—either by imaging or by analyzing CSF samples.

Unfortunately, both approaches suffer from economic and practical limitations, dictating the need for the development of convenient and reliable AT(N) biomarkers in blood samples, collection of which is minimally invasive and requires fewer resources.

The development of simple tools detecting signs of Alzheimer’s in the blood without compromising on quality is an important step toward improved accessibility, says Karikari.

“The most important utility of blood biomarkers is to make people’s lives better and to improve clinical confidence and risk prediction in Alzheimer’s disease diagnosis,” Karikari says.

Current blood diagnostic methods can accurately detect abnormalities in plasma amyloid beta and the phosphorylated form of tau, hitting two of the three necessary checkmarks to confidently diagnose Alzheimer’s.

But the biggest hurdle in applying the AT(N) Framework to blood samples lies in the difficulty of detecting markers of neurodegeneration that are specific to the brain and aren’t influenced by potentially misleading contaminants produced elsewhere in the body.

For example, blood levels of neurofilament light, a protein marker of nerve cell damage, become elevated in Alzheimer’s disease, Parkinson’s and other dementias, rendering it less useful when trying to differentiate Alzheimer’s disease from other neurodegenerative conditions. On the other hand, detecting total tau in the blood proved to be less informative than monitoring its levels in CSF.

By applying their knowledge of molecular biology and biochemistry of tau proteins in different tissues, such as the brain, Karikari and colleagues developed a technique to selectively detect BD-tau while avoiding free-floating “big tau” proteins produced by cells outside the brain.

To do that, they designed a special antibody that selectively binds to BD-tau, making it easily detectable in the blood. They validated their assay across over 600 patient samples from five independent cohorts, including those from patients whose Alzheimer’s disease diagnosis was confirmed after their deaths, as well as from patients with memory deficiencies indicative of early-stage Alzheimer’s.

The tests showed that levels of BD-tau detected in blood samples of Alzheimer’s disease patients using the new assay matched with levels of tau in the CSF and reliably distinguished Alzheimer’s from other neurodegenerative diseases. Levels of BD-tau also correlated with the severity of amyloid plaques and tau tangles in the brain tissue confirmed via brain autopsy analyses.

Scientists hope that monitoring blood levels of BD-tau could improve clinical trial design and facilitate screening and enrollment of patients from populations that historically haven’t been included in research cohorts.

“There is a huge need for diversity in clinical research, not just by skin color but also by socioeconomic background,” says Karikari. “To develop better drugs, trials need to enroll people from varied backgrounds and not just those who live close to academic medical centers.

“A blood test is cheaper, safer, and easier to administer, and it can improve clinical confidence in diagnosing Alzheimer’s and selecting participants for clinical trials and disease monitoring.”

Karikari and his team are planning to conduct large-scale clinical validation of blood BD-tau in a wide range of research groups, including those that recruit participants from diverse racial and ethnic backgrounds, from memory clinics, and from the community.

Additionally, these studies will include older adults with no biological evidence of Alzheimer’s disease as well as those at different stages of the disease. These projects are crucial to ensure that the biomarker results are generalizable to people from all backgrounds, and will pave the way to making BD-tau commercially available for widespread clinical and prognostic use.

Additional coauthors are from the University of Gothenburg; Bioventix Plc in the UK; the University of California, San Diego; the University of Brescia in Italy; and IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli in Brescia, Italy.

The research had support in part from the Alzheimer’s Association and the Swedish Research Council.

Source: University of Pittsburgh

COVID infection messes up healthy gut bacteria balance

In an intensive look at the effects of the virus causing COVID-19 on patients’ microbiome, researchers found that acute infection disrupts a healthy balance between good and bad microbes in the gut, especially with antibiotic treatment.

The microbiome is the collection of microorganisms that live in and on the human body.

The new work may lead to the development of probiotic supplements to redress any gut imbalances in future patients, the researchers say.

Reporting in the scientific journal Molecular Biomedicine, the researchers described the first results of an ongoing study examining the microbiome of patients and volunteers at Robert Wood Johnson University Hospital in New Brunswick.

The study, which began in May 2020, the early days of the pandemic, was designed to zero in on the microbiome because many COVID-19 patients complained of gastrointestinal issues—both during the acute phases of their illness and while recuperating.

“We wanted to gain a deeper understanding by looking at specimens that would give us an indication about the state of the gut microbiome in people,” says Martin Blaser, chair of the human microbiome at Rutgers University, director of the Center for Advanced Biotechnology and Medicine (CABM) at Rutgers, and an author on the study.

“What we found was that, while there were differences between people who had COVID-19 and those who were not ill, the biggest difference from others was seen in those who had been administered antibiotics,” Blaser says.

Early in the pandemic, before the introduction of vaccines and other antiviral remedies, it was a common practice to treat COVID-19 patients with a round of antibiotics to attempt to target possible secondary infections, says Blaser, who also is a professor of medicine and pathology and laboratory medicine at Rutgers Robert Wood Johnson Medical School.

Humans carry large and diverse populations of microbes, Blaser says. These microorganisms live in the gastrointestinal tract, on the skin and in other organs, with the largest population in the colon. Scientists such as Blaser have shown over recent decades that the microbiome plays a pivotal role in human health, interacting with metabolism, the immune system and the central nervous system.

The microbiome has many different functions. “One is to protect the human body against invading pathogens, whether they’re bacteria or viruses or fungi,” Blaser says. “That goes deep into evolution, maybe a billion years of evolution.”

Medical problems often arise when the balance between beneficial and pathogenic microbes in a person’s microbiome is thrown off, a condition known as dysbiosis.

The scientists studied microbiomes by measuring populations of microorganisms in stool samples taken from 60 subjects. The study group consisted of 20 COVID-19 patients, 20 healthy donors, and 20 COVID-19-recovered subjects. They found major differences in the population numbers of 55 different species of bacteria when comparing the microbiomes of infected patients with those of the healthy and recovered subjects.

The researchers plan to continue to test and track the microbiomes of patients in the study to ascertain the long-term effect on individual microbiomes from COVID-19.

“Further investigation of patients will enhance understanding of the role of the gut microbiome in COVID-19 disease progression and recovery,” Blaser says. “These findings may help identify microbial targets and probiotic supplements for improving COVID-19 treatment.”

Support for the study came from Danone and by the National Institutes of Health (National Institute of Allergy and Infectious Diseases).

Source: Rutgers University

Some guts get more energy from the same food

New findings are a step towards understanding why some people gain more weight than others, even when they eat the same diet.

The research indicates that some Danes have a composition of gut microbes that, on average, extracts more energy from food than do the microbes in the guts of their fellow Danes, and that this difference could be part of the explanation for why some people gain more weight than others.

Researchers at the University of Copenhagen’s department of nutrition, exercise, and sports studied the residual energy in the feces of 85 Danes to estimate how effective their gut microbes are at extracting energy from food. At the same time, they mapped the composition of gut microbes for each participant.

The results show that roughly 40% of the participants belong to a group that, on average, extracts more energy from food compared to the other 60%. The researchers also observed that those who extracted the most energy from food also weighed 10% more on average, amounting to an extra nine kilograms (about 20 pounds).

“We may have found a key to understanding why some people gain more weight than others, even when they don’t eat more or any differently. But this needs to be investigated further,” says associate professor Henrik Roager.

The results indicate that being overweight might not be related only to how healthily a person eats or how much exercise they get. It may also have something to do with the composition of their gut microbes.

As reported in the journal Microbiome, participants were divided into three groups, based on the composition of their gut microbes. The so-called B-type composition (dominated by Bacteroides bacteria) is more effective at extracting nutrients from food and was observed in 40% of the participants.

Following the study, the researchers suspect that having gut bacteria that are more effective at extracting energy may result in more calories being available for the human host from the same amount of food.

“The fact that our gut bacteria are great at extracting energy from food is basically a good thing, as the bacteria’s metabolism of food provides extra energy in the form of, for example, short-chain fatty acids, which are molecules that our body can use as energy-supplying fuel. But if we consume more than we burn, the extra energy provided by the intestinal bacteria may increase the risk of obesity over time,” says Roager.

From the mouth to the esophagus, stomach, duodenum, small intestine, and large intestine, and finally to the rectum, the food we eat takes a 12-to-36-hour journey, passing several stations along the way, before the body has extracted all of its nutrients.

The researchers also studied the length of this journey for each participant, all of whom had similar dietary patterns. Here, the researchers hypothesized that those with long digestive travel times would be the ones who harvested the most nutrition from their food. But the study found the exact opposite.

“We thought that a long digestive travel time would allow more energy to be extracted. But here we see that the participants with the B-type gut bacteria that extract the most energy also have the fastest passage through the gastrointestinal system, which has given us something to think about,” says Roager.

The new study in humans confirms earlier studies in mice. In these studies, researchers found that germ-free mice that received gut microbes from obese donors gained more weight compared to mice that received gut microbes from lean donors, despite being fed the same diet.

Even then, the researchers proposed that the differences in weight gain could be attributable to the fact that the gut bacteria from obese people were more efficient at extracting energy from food. The new research confirms this theory.

“It is very interesting that the group of people who have less energy left in their stool also weigh more on average. However, this study doesn’t provide proof that the two factors are directly related. We hope to explore this more in the future,” says Roager.

Source: University of Copenhagen

Can machine learning predict the next big disaster?

A new study shows how machine learning could predict rare disastrous events, like earthquakes or pandemics.

The research suggests that scientists can circumvent the need for massive data sets and forecast extreme events by combining an advanced machine learning system with sequential sampling techniques.

When it comes to predicting disasters brought on by extreme events (think earthquakes, pandemics, or “rogue waves” that could destroy coastal structures), computational modeling faces an almost insurmountable challenge: Statistically speaking, these events are so rare that there’s just not enough data on them to use predictive models to accurately forecast when they’ll happen next.

But the new research indicates it doesn’t have to be that way.

In the study in Nature Computational Science, the researchers describe how they combined statistical algorithms—which need less data to make accurate, efficient predictions—with a powerful machine learning technique and trained it to predict scenarios, probabilities, and sometimes even the timeline of rare events despite the lack of historical record on them.

In doing so, the researchers found that this new framework can circumvent the massive amounts of data traditionally needed for these kinds of computations, essentially boiling down the grand challenge of predicting rare events to a matter of quality over quantity.

“You have to realize that these are stochastic events,” says study author George Karniadakis, a professor of applied mathematics and engineering at Brown University. “An outburst of pandemic like COVID-19, environmental disaster in the Gulf of Mexico, an earthquake, huge wildfires in California, a 30-meter wave that capsizes a ship—these are rare events and because they are rare, we don’t have a lot of historical data.

“We don’t have enough samples from the past to predict them further into the future. The question that we tackle in the paper is: What is the best possible data that we can use to minimize the number of data points we need?”

The researchers found the answer in a sequential sampling technique called active learning. These types of statistical algorithms are not only able to analyze data input into them, but more importantly, they can learn from the information to label new relevant data points that are equally or even more important to the outcome that’s being calculated. At the most basic level, they allow more to be done with less.
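
To make the idea concrete, here is a minimal sketch of that kind of sequential, active-learning loop, not the authors’ implementation: a cheap surrogate with an uncertainty estimate is refit after each query, and an acquisition rule picks the next point to simulate by favoring candidates that are both uncertain and predicted to be extreme. The toy expensive_simulation function, the bootstrap ensemble of polynomial fits, and the acquisition weighting are all illustrative assumptions (the study pairs active learning with DeepOnet surrogates).

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a costly solver whose output has a rare, extreme burst near x = 0.6.
    return np.sin(3 * x) + 2.5 * np.exp(-200 * (x - 0.6) ** 2)

def fit_ensemble(X, y, n_models=10, degree=4):
    # Bootstrap an ensemble of polynomial fits: a cheap surrogate with an uncertainty estimate.
    return [np.polyfit(X[idx], y[idx], degree)
            for idx in (rng.integers(0, len(X), len(X)) for _ in range(n_models))]

def predict(models, X):
    preds = np.stack([np.polyval(m, X) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

# Start from a handful of labeled points, then query one new simulation at a time.
X_train = rng.uniform(0, 1, 8)
y_train = expensive_simulation(X_train)
candidates = np.linspace(0, 1, 501)

for step in range(20):
    models = fit_ensemble(X_train, y_train)
    mean, std = predict(models, candidates)
    # Acquisition rule: prefer candidates that are uncertain AND predicted to be extreme.
    score = std * (1.0 + np.abs(mean - mean.mean()))
    x_next = candidates[np.argmax(score)]
    X_train = np.append(X_train, x_next)
    y_train = np.append(y_train, expensive_simulation(x_next))

print(f"Largest response found: {y_train.max():.3f} at x = {X_train[y_train.argmax()]:.3f}")
```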

That’s critical to the machine learning model the researchers used in the study. Called DeepOnet, the model is a type of artificial neural network, which uses interconnected nodes in successive layers that roughly mimic the connections made by neurons in the human brain.

DeepOnet is known as a deep neural operator. It’s more advanced and powerful than typical artificial neural networks because it’s actually two neural networks in one, processing data in two parallel networks. This allows it to analyze giant sets of data and scenarios at breakneck speed to spit out equally massive sets of probabilities once it learns what it’s looking for.
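
That “two neural networks in one” structure can be sketched in a few lines: a branch network embeds the input function (sampled at a fixed set of sensor points), a trunk network embeds the location where the output is queried, and the operator’s prediction is the dot product of the two embeddings. The layer sizes, sensor count, and PyTorch framing below are illustrative assumptions rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class TinyDeepONet(nn.Module):
    """Minimal branch/trunk operator network (illustrative sizes only)."""
    def __init__(self, n_sensors=100, width=64, p=32):
        super().__init__()
        # Branch net: embeds the input function u, sampled at n_sensors fixed points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(), nn.Linear(width, p)
        )
        # Trunk net: embeds the coordinate y at which the output is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, p)
        )

    def forward(self, u_sensors, y):
        # u_sensors: (batch, n_sensors), y: (batch, 1)
        b = self.branch(u_sensors)                 # (batch, p)
        t = self.trunk(y)                          # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True)   # G(u)(y), shape (batch, 1)

# Forward pass on random data, just to show the shapes.
model = TinyDeepONet()
u = torch.randn(8, 100)   # 8 input functions, each sampled at 100 sensor points
y = torch.rand(8, 1)      # one query location per function
print(model(u, y).shape)  # torch.Size([8, 1])
```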

The bottleneck with this powerful tool, especially as it relates to rare events, is that deep neural operators need tons of data to be trained to make calculations that are effective and accurate.

In the paper, the research team shows that combined with active learning techniques, the DeepOnet model can get trained on what parameters or precursors to look for that lead up to the disastrous event someone is analyzing, even when there are not many data points.

“The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”

In the paper, the researchers apply the approach to pinpointing parameters and different ranges of probabilities for dangerous spikes during a pandemic, finding and predicting rogue waves, and estimating when a ship will crack in half due to stress. For example, with rogue waves—ones that are greater than twice the size of surrounding waves—the researchers found they could discover and quantify when rogue waves will form by looking at probable wave conditions that nonlinearly interact over time, leading to waves sometimes three times their original size.

The researchers found their new method outperformed more traditional modeling efforts, and they believe it presents a framework that can efficiently discover and predict all kinds of rare events.

In the paper, the research team outlines how scientists should design future experiments so that they can minimize costs and increase the forecasting accuracy. Karniadakis, for example, is already working with environmental scientists to use the novel method to forecast climate events, such as hurricanes.

Ethan Pickering and Themistoklis Sapsis from the Massachusetts Institute of Technology led the study. Karniadakis and other Brown researchers introduced DeepOnet in 2019. They are currently seeking a patent for the technology.

Support for the study came from the Defense Advanced Research Projects Agency, the Air Force Research Laboratory, and the Office of Naval Research.

Source: Juan Siliezar for Brown University

Is it raining? Turn off the automatic sprinklers

People who don’t habitually turn off their automatic sprinklers are wasting water, say researchers.

In Florida, where the population of 22 million is projected to hit 27.8 million by 2050, residents need to conserve this precious resource.

Preserving water boils down to good habits. It can be as simple as whether you intend to save water or not, say researchers, who would like to change the behavior of those who leave their sprinklers on when it’s raining.

“A lot of people do not think about how rain plays into the total amount of water a yard receives,” says Laura Warner, an associate professor of agricultural education and communication at the University of Florida Institute of Food and Agricultural Sciences (UF/IFAS).

“So, if someone wants to apply a half-inch of water to their lawn, they may set their irrigation system to do so regularly, and then if it rains, that is just ‘extra.’ It would be advantageous to shift people’s mindsets, so they consider rain first and irrigation supplemental.”

Warner and John Diaz, associate professor of agricultural education and communication, are coauthors of a new study that examines whether homeowners intend to turn off their irrigation when it rains—the so-called “intenders.”

The researchers conducted an online survey of 331 Florida residents who identified themselves as users of automated sprinkler systems.

They wanted to know whether homeowners intended to turn off their water, based on recent or current rain.

To find their answer, the researchers asked three questions, such as, “How likely are you to use local weather data to turn off your irrigation when recent rainfall is adequate for your yard in the next month?” Respondents could select from “very unlikely” to “very likely” for each question.

Researchers labeled respondents with the highest scores as “intenders.” In other words, these respondents plan to turn off the irrigation system when it’s raining, and they also intend to turn off their water if it has rained a lot recently and they see no reason to water their lawn.

Warner and Diaz want to change the habits of “non-intenders.”

“Rather than just providing information on how to conserve water or why—which is not terribly effective—we want to connect with people and have them connect with water resources on a deeper psychological level,” Warner says. In other words, homeowners must feel a sense of obligation to conserve water.

To move toward wiser lawn-irrigation habits, UF/IFAS Extension agents, governments, homeowners’ associations, and neighbors can nudge residents to stop or reduce irrigating when it’s raining. One way to do that is by pointing out that other homeowners are turning off their sprinklers, Warner says.

Sometimes, reminders such as signs at the entrance to a subdivision help. They tell residents how much rain fell the week before and ask residents if they need to irrigate, she says.

“Considering irrigation water is potable—meaning it is the same limited source of drinking water we share—the stakes are pretty significant,” she says. “Not to mention the incredible quantity of water that can be saved and the related monetary savings on utility bills.”

The study appears in Urban Water Journal.

Source: University of Florida

Breast cancer drugs face a ‘whack-a-mole’ problem

Researchers have discovered for the first time how deadly hard-to-treat breast cancers persist after chemotherapy.

The findings reveal why patients with these cancers don’t respond well to immunotherapies designed to clear out remaining tumor cells by revving up the immune system.

Thanks to advances in cancer therapies, most forms of breast cancer are highly treatable, especially when caught early.

But the last frontier cases—those that can’t be treated with hormone or targeted therapies and don’t respond to chemotherapy—remain the deadliest and hardest to treat.

The process of surviving chemotherapy triggers a program of immune checkpoints that shield breast cancer cells from different lines of attack by the immune system. It creates a “whack-a-mole” problem for immunotherapy drugs called checkpoint inhibitors, which may kill tumor cells expressing one checkpoint but miss others that express multiple checkpoints, according to a new study published in the journal Nature Cancer.

“Breast cancers don’t respond well to immune checkpoint inhibitors, but it has never really been understood why,” says corresponding author James Jackson, associate professor of biochemistry and molecular biology at Tulane University School of Medicine.

“We found that they avoid immune clearance by expressing a complex, redundant program of checkpoint genes and immune modulatory genes. The tumor completely changes after chemotherapy treatment into this thing that is essentially built to block the immune system.”

Researchers studied the process in mouse and human breast tumors and identified 16 immune checkpoint genes that encode proteins designed to inactivate cancer-killing T-cells.

“We’re among the first to actually study the tumor that survives post-chemotherapy, which is called the residual disease, to see what kind of immunotherapy targets are expressed,” says first author Ashkan Shahbandi, an MD/PhD student in Jackson’s lab.

The tumors that respond the worst to chemotherapy enter a state of dormancy—called cellular senescence—instead of dying after treatment. Researchers found two major populations of senescent tumor cells, each expressing different immune checkpoints activated by specific signaling pathways. They showed the expression of immune evasion programs in tumor cells required both chemotherapy to induce a senescent state and signals from non-tumor cells.

They tested a combination of drugs aimed at these different immune checkpoints. While response could be improved, these strategies failed to fully eradicate the majority of tumors.

“Our findings reveal the challenge of eliminating residual disease populated by senescent cells that activate complex immune inhibitory programs,” Jackson says.

“Breast cancer patients will need rational, personalized strategies that target the specific checkpoints induced by the chemotherapy treatment.”

Source: Tulane University
