Dry eye changes how injured cornea heals itself

A new study with mice finds that proteins made by stem cells that regenerate the cornea may be new targets for treating and preventing injuries.

People with a condition known as dry eye disease are more likely than those with healthy eyes to suffer injuries to their corneas.

Dry eye disease occurs when the eye can’t provide adequate lubrication with natural tears. People with the common disorder use various types of drops to replace missing natural tears and keep the eyes lubricated, but when eyes are dry, the cornea is more susceptible to injury.

“We have drugs, but they only work well in about 10% to 15% of patients,” says senior investigator Rajendra S. Apte, professor in the department of ophthalmology and visual sciences at Washington University in St. Louis.

“In this study involving genes that are key to eye health, we identified potential targets for treatment that appear different in dry eyes than in healthy eyes.

“Tens of millions of people around the world—with an estimated 15 million in the United States alone—endure eye pain and blurred vision as a result of complications and injury associated with dry eye disease, and by targeting these proteins, we may be able to more successfully treat or even prevent those injuries.”

For the study in the Proceedings of the National Academy of Sciences, the researchers analyzed genes expressed by the cornea in several mouse models—not only of dry eye disease, but also of diabetes and other conditions. They found that in mice with dry eye disease, the cornea activated expression of the gene SPARC. They also found that higher levels of SPARC protein were associated with better healing.

“We conducted single-cell RNA sequencing to identify genes important to maintaining the health of the cornea, and we believe that a few of them, particularly SPARC, may provide potential therapeutic targets for treating dry eye disease and corneal injury,” says first author Joseph B. Lin, an MD/PhD student in Apte’s lab.

“These stem cells are important and resilient and a key reason corneal transplantation works so well,” Apte explains. “If the proteins we’ve identified don’t pan out as therapies to activate these cells in people with dry eye syndrome, we may even be able to transplant engineered limbal stem cells to prevent corneal injury in patients with dry eyes.”

The National Eye Institute, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute of General Medical Sciences of the National Institutes of Health supported the work. Additional funding came from the Jeffrey T. Fort Innovation Fund, a Centene Corp. contract for the Washington University-Centene ARCH Personalized Medicine Initiative, and Research to Prevent Blindness.

Source: Washington University in St. Louis

Study may explain why too much of a good smell can stink

New research reveals an added layer of nuance in our sense of smell.

The delicate fragrance of jasmine is a delight to the senses. The sweet scent is popular in teas, perfumes and potpourri. But take a whiff of the concentrated essential oil, and the pleasant aroma becomes almost cloying.

Part of the flower’s smell actually comes from the compound skatole, a prominent component of fecal odor.

“Consider for instance the smell of a ripe banana from a distance (sweet and fruity) versus up-close (overpowering and artificial).”

Our sense of smell is clearly a complex process; it involves hundreds of different odorant receptors working in concert. The more an odor stimulates a particular neuron, the more electrical signals that neuron sends to the brain.

But the new research reveals that these neurons actually fall silent when an odor's concentration rises above a certain threshold. Remarkably, this silencing appears integral to how the brain recognizes each smell.

“It’s a feature; it’s not a bug,” says Matthieu Louis, an associate professor in the department of molecular, cellular, and developmental biology at the University of California, Santa Barbara.

The paradoxical finding, published in Science Advances, shakes up our understanding of olfaction.

“The same odor can be represented by very different patterns of active olfactory sensory neurons at different concentrations,” Louis says. “This might explain why some odors can be perceived as very different to us at low, medium, and very high concentrations. Consider for instance the smell of a ripe banana from a distance (sweet and fruity) versus up-close (overpowering and artificial).”

Humans have several million sensory neurons in our noses, and each of these has one type of odorant receptor. Altogether, we have about 400 different types of receptors with overlapping sensitivity. Each chemical compound is like a different shoe that the receptor is trying on. Some shoes fit snugly, some fit well, while others don’t fit at all. A better fit produces a stronger response from the receptor. Increasing an odor’s concentration recruits neurons with receptors that are less sensitive to that substance. Our brain uses the combination of activated neurons to distinguish between odors.

Scientists thought that neurons would effectively max out above certain odor concentrations, at which point their activity would plateau. But the team led by Louis’ graduate student, David Tadres, found the exact opposite: Neurons actually fall silent above a certain level, with the most sensitive ones dropping off first.
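To make the coding scheme concrete, here is a minimal Python sketch (not from the study; the receptor names, sensitivities, and thresholds are invented) of a combinatorial code in which each receptor’s response grows with concentration until a receptor-specific silencing threshold, with the most sensitive receptor silencing first:

```python
import numpy as np

# Hypothetical illustration (not the study's model): each receptor type
# responds more strongly as concentration rises, but cuts out entirely
# above a receptor-specific silencing threshold, and the most sensitive
# receptor silences first.

def receptor_response(log_c, sensitivity, silence_at):
    """Firing of one receptor type at log-concentration log_c."""
    if log_c > silence_at:                  # depolarization block: silent
        return 0.0
    drive = log_c + sensitivity             # sensitive types respond at lower log_c
    return float(max(0.0, np.tanh(drive)))  # saturating, non-negative response

# Invented receptor types: more sensitive ones also silence earlier.
receptors = [
    {"name": "high", "sensitivity": 3.0,  "silence_at": 1.0},
    {"name": "med",  "sensitivity": 1.0,  "silence_at": 3.0},
    {"name": "low",  "sensitivity": -1.0, "silence_at": 5.0},
]

for log_c in [-2, 0, 2, 4]:
    pattern = {r["name"]: round(receptor_response(log_c, r["sensitivity"],
                                                  r["silence_at"]), 2)
               for r in receptors}
    print(f"log concentration {log_c:+d}: {pattern}")
```

Running it shows the active set shifting rather than simply growing: at low concentrations only the most sensitive receptor fires, while at high concentrations that receptor has dropped out and only the least sensitive remains, so each concentration produces a distinct pattern.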

Looking at flies

Fruit fly larvae make an ideal model for studying olfaction. They have just 21 sensory neurons and the same number of odorant-receptor types—one receptor type per neuron. This one-to-one correspondence makes it simple to test what each neuron is doing.

For the study, Tadres examined larvae with a mutation that entirely eliminated their sense of smell. He then selectively turned that sense back on in a single sensory neuron, enabling the larvae to detect only odors that activated that specific receptor. He placed them next to an odor source and watched.

Even with a single functioning olfactory channel, the larvae could still move toward the stronger smell. But remarkably, they stopped a certain distance away from the source, and just circled it in a fixed orbit. Tadres repeated the experiment with a neuron slightly less sensitive to the odor he was testing, and found that the larvae got closer to the source before stopping.

Puzzled by this behavior, Tadres used electrodes to measure the activity of the sensory neuron. As expected, signaling increased as the odor became more concentrated. But rather than plateau above a certain level, the activity crashed to zero. That’s why the mutant larvae circled the odor source; above a certain concentration, the smell simply disappeared.

“The silencing of the olfactory sensory neuron could easily explain the circling behavior, which was mysterious before,” Tadres says. “From there it wasn’t hard to extrapolate that the current view of how odors are encoded at different concentrations needed to be updated.”

Researchers knew that excessive stimulation can cause nerves to go silent, an effect called “depolarization block.” However, the consensus was that this sort of overload doesn’t occur under natural, healthy conditions. Indeed, this response is associated with issues like epilepsy when it occurs in the central brain. But when Tadres observed it affecting the larvae’s behavior, he suspected that it wasn’t merely an artifact of the experiment.

Digging deeper

Tadres and Louis began investigating the cause of the depolarization block. For assistance, they reached out to Professor Jeff Moehlis, chair of the mechanical engineering department, and Louis’ doctoral student Philip Wong (co-advised by Moehlis), who started constructing a mathematical model of the system.

The voltage across a neuron’s membrane can be described by a system of equations. This model was a breakthrough finding in 1952, and earned a Nobel Prize for its discoverers, Alan Hodgkin and Andrew Huxley. For this case study, Wong added a mathematical representation of the odorant receptor, the “trigger” that initiates the rest of the model. He also included a modification from the field of epilepsy research wherein high stimulation turns off certain ion channels in the cell membrane, preventing a neuron from firing.
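For reference, the classic Hodgkin–Huxley system takes the following textbook form (this is the standard model, not the paper’s exact extension):

```latex
% Membrane voltage: capacitive current balances the input and ionic currents
C_m \frac{dV}{dt} = I_{\text{input}}
  - \bar{g}_{\text{Na}}\, m^{3} h \,(V - E_{\text{Na}})
  - \bar{g}_{\text{K}}\, n^{4} (V - E_{\text{K}})
  - \bar{g}_{\text{L}}\, (V - E_{\text{L}})

% Gating variables x \in \{m, h, n\} open and close with voltage
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x
```

In the team’s setup, as described above, the input current would be supplied by the added odorant-receptor “trigger,” and the epilepsy-inspired modification shuts down certain ion channels under sustained strong drive, so spiking collapses rather than saturating.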

Wong’s model was able to fit and predict Tadres’ measurements of the neuron’s electrical activity. “This was quite useful because the electrophysiology data was difficult to collect and very time consuming to analyze,” Wong says.

In addition to corroborating the experimental results, the model is guiding the team as they continue investigating this effect. “This model may tell us exactly how each neuron is responding to different odors,” Wong says.

The model’s success points to a possible source of the depolarization block: a specific ion channel present in neurons across the animal kingdom. If true, this suggests that most sensory neurons might fall silent following strong and sustained stimulation. The team hopes to validate this hypothesis in an upcoming study.

What’s more, the model predicted that the system would behave differently going up from low odor concentrations versus coming down from high concentrations. Measuring the voltage of the larvae’s neurons confirmed this. When going down, the neuron did not reactivate below the threshold where it had fallen silent. In fact, it largely remained silent until the odor concentration came back down to zero before returning to normal activity.

Our complex sense of smell

This study demonstrated that high odor concentrations can silence the most sensitive receptors. This counterintuitive result marks a fundamental shift in our understanding of smell.

“As you increase the concentration of an odor, you’ll start recruiting more and more odorant receptors that aren’t as sensitive to that compound,” Louis explains. “And so, the common view until our work was that you just kept adding active odorant receptors to the picture.”

This makes sense, until you consider the system as a whole. If this were the case, then a compound should activate pretty much all of the receptors above a certain level. “So it would be impossible for you to distinguish between two different odors at very high concentrations,” Tadres says. “And that’s clearly not the case.”

Having certain sensory neurons drop out as others join in might help preserve the distinction between odors at high concentrations. And this could prove important for survival. It might prevent poisons and nutrients that share certain compounds from smelling alike when you take a big whiff of them.

It could also have consequences for how we perceive odors. “We speculate that removing successive high-sensitivity olfactory sensory neurons is like removing the root of a musical chord,” Louis says. “This omission of the root is going to alter the way your brain perceives the chord associated with a set of notes. It’s going to give it a different meaning.”

A subtle floral note suggests an orchard may be in bloom nearby, useful information for a hungry animal. Meanwhile, the same compounds in higher concentrations could produce the pungent ripeness of decaying fruit or even sewage: something best avoided. Studies like this reveal ever more complexities to our sense of smell, which evolved to help us navigate an equally complex chemical landscape.

Source: UC Santa Barbara

Why are pregnant women in Nepal gaining more weight?

A study on the factors driving a rise in weight gain among pregnant women in Nepal rules out poor diet quality in the first trimester as one of the major causes, the researchers say.

Historically, one of the greatest challenges facing pregnant women in Nepal and other low-income countries was undernourishment, a result of poverty. While that continues to be a concern, doctors are seeing some of the same issues confronting women in western nations: excessive weight gain and the health risks that come with it, such as high blood pressure and gestational diabetes.

Obstacles to addressing the problem included a lack of data, prompting a pilot study on gestational weight gain among pregnant women in Nepal by Shristi Rawal, an assistant professor of nutritional sciences at the Rutgers School of Health Professions; Kelly Martin, a 2021 graduate of the doctor of clinical nutrition program and an assistant professor at the State University of New York College at Oneonta; and other faculty members.

Their findings appear in the journal BMC Nutrition.

Rawal, who is from Nepal, says the impact of diet quality has been studied in wealthier countries, but had not been investigated in the context of many low-income countries, including Nepal.

“Studies on perinatal complications have largely been based on Caucasian samples from high income countries, and there has been a lack of diversity in general in terms of women represented in these studies,” she says. “Pregnancy complications are increasing in Nepal, and no one was doing this work there. This is a first step.”

The study tracked 101 pregnant women receiving prenatal care at Dhulikhel Hospital at Kathmandu University. Rawal and her colleagues administered to the participants a 21-item questionnaire assessing intake of foods from groups categorized either as healthy (such as whole grains, fruits, and vegetables) or unhealthy (such as desserts, refined grains, and red meats).
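The paper’s exact scoring rules aren’t reproduced here, but a screener of this kind is typically tallied along these lines (a hypothetical sketch; the item names and the simple plus/minus weighting are assumptions, not the study’s instrument):

```python
# Hypothetical sketch of scoring a short dietary screener.
# Item names, groupings, and the +1/-1 weighting are illustrative
# assumptions, not the tool validated in the study.

HEALTHY = {"whole_grains", "fruits", "vegetables"}
UNHEALTHY = {"desserts", "refined_grains", "red_meat"}

def diet_quality_score(weekly_servings):
    """Higher score = healthier reported intake."""
    score = 0
    for item, servings in weekly_servings.items():
        if item in HEALTHY:
            score += servings
        elif item in UNHEALTHY:
            score -= servings
    return score

print(diet_quality_score({"fruits": 10, "vegetables": 7,
                          "desserts": 4, "red_meat": 3}))  # -> 10
```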

The study looked at diet quality in the first trimester and the rate of gestational weight gain from the second to the third trimester, but found no link between diet quality in early pregnancy and the rate of gestational weight gain. It did find that a high intake of red meat could be a potential factor in driving up weight.

“The most striking result is that so many had an excessive rate of gestational weight gain,” says Rawal. “If diet quality is not it, it could be daily caloric intake, physical activity, or sleep that could be associated with gestational weight gain. It could be other diet, lifestyle, or clinical factors. The next step is collecting more data in a larger sample.”

The pilot study established the need to conduct a larger birth cohort study with hundreds to thousands of women seeking antenatal care at Dhulikhel Hospital.

A key part of the pilot study also was to evaluate the efficacy of a novel dietary screening tool in capturing valid dietary data in the target population of Nepalese pregnant women.

In a paper published in the Maternal and Child Health Journal, the researchers conclude that the 21-question dietary screening tool modified for use by pregnant Nepalese women is a valid and reliable instrument for assessing the dietary intake of pregnant women in Nepal.

“This adds credence to the tool, and we now know that it has cultural applicability to the setting and that it measures what it is intended to measure,” says Martin, who was the first author of both papers. “This is important for conducting further studies on diet quality in this population.”

Rawal is in the midst of a study testing a mobile app that supports Nepalese women with gestational diabetes by providing them with information and tools to adopt diet and lifestyle modifications needed to self-manage their condition.

Source: Rutgers University

Work-from-home parents watched kids more in COVID’s first year

A dramatic shift toward remote work during the COVID-19 pandemic caused telecommuting parents in the United States to spend significantly more time “parenting” their children in the first year of the pandemic than they did before, according to a new study.

In the study in the Journal of Marriage and Family, the researchers found that parents working remotely, particularly mothers, significantly increased the amount of time they spent on supervisory parenting—“watching” their children while doing other activities not focused on childcare, such as job-related duties.

Mothers, both those working remotely and on-site, also altered their schedules more often during the pandemic to extend the paid workday.

However, the findings show no overall increase in the amount of time working parents spent on primary childcare duties—feeding, bathing, and other basic care—during the pandemic, regardless of whether they commuted to their jobs or worked remotely.

“The lack of increase in time devoted to basic childcare activities is much less surprising given the spike in telecommuting parents working while in their children’s presence or supervising them,” says coauthor Emma Zang, an assistant professor of sociology, biostatistics, and global affairs at Yale University.

“Our study demonstrates that parenting during the pandemic’s first year, particularly for moms working from home, often required multi-tasking and adjusting work schedules. This suggests that while remote work provides parents greater flexibility, there are potential negative effects on work quality and stress that are disproportionately faced by mothers.”

The study is the first to utilize time-diary data in the United States—records of individuals’ daily activity—to examine the association between parents’ work arrangements during the pandemic and how they use their time. Specifically, Zang and her coauthors—Thomas Lyttelton of Copenhagen Business School and Kelly Musick of Cornell University—analyzed nationally representative data from the 2017–2020 American Time Use Survey to estimate how paid work, childcare, and housework changed from before the pandemic to after its onset among parents working remotely and on-site.
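In spirit, the comparison resembles the following toy computation (the column names and numbers are invented; the actual analysis of the American Time Use Survey microdata is far more involved):

```python
import pandas as pd

# Toy illustration of comparing time use before vs. after the pandemic's
# onset, split by work arrangement. Values are made up solely to show
# the shape of the pre/post comparison described in the article.

diaries = pd.DataFrame({
    "period":      ["pre", "pre", "post", "post", "pre", "post"],
    "arrangement": ["remote", "onsite", "remote", "onsite", "remote", "remote"],
    "supervisory_hrs": [4.2, 3.1, 8.9, 3.3, 4.4, 8.5],
})

# Mean supervisory time by arrangement and period, plus the pre-to-post change.
means = (diaries.groupby(["arrangement", "period"])["supervisory_hrs"]
                .mean().unstack("period"))
means["change"] = means["post"] - means["pre"]
print(means)
```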

Time parents spent with their children present, but not directly supervising them, increased by more than an hour per day among telecommuting mothers and fathers during the pandemic, and supervisory parenting increased over the same period by an average of 4.5 hours among mothers and 2.5 hours among fathers. (A 104% increase over pre-pandemic levels for moms, and an 87% increase for dads.) The much steeper increase in the amount of time spent by mothers on supervisory duties suggests they have disproportionate responsibility for childcare relative to fathers, the researchers say.

The study also revealed that most of the time telecommuting parents spent in their children’s presence or supervising them on workdays during the pandemic in 2020 occurred while they were simultaneously engaged in job-related activities. Moms and dads spent just under an additional hour of work time with children present; mothers spent four additional hours of work time supervising children, compared to two more among fathers.

Parents who commuted to work did not see a statistically significant increase in these areas, suggesting that they were constrained in how they could respond to rising childcare demands during the pandemic, the researchers note.

“Remote work allowed parents to triage during the disruptions of daycare closures and online schooling, even if the burden fell disproportionately on mothers,” says Lyttelton. “Commuting parents had even less leeway in their schedules.”

There is evidence of a reduction in the gender gap concerning household labor between telecommuting mothers and fathers during the pandemic. The study found that parents, particularly fathers, working from home increased the amount of time they spent on household chores, such as laundry and cleaning, during the pandemic. Fathers spent an additional 30 minutes per day on housework—up from 44 minutes per day pre-pandemic—while mothers logged an extra 16 minutes of chores.

The researchers also found a disparity between telecommuting mothers and fathers in the amount of time they spent playing with their children, as opposed to time spent with children that didn’t involve play. Moms working from home spent an additional 16 minutes per day playing with their kids while dads across both work arrangements played with their children an extra six minutes per day. Mothers working on-site saw no increase during the pandemic, according to the study.

The findings on housework and time spent playing with children differ from evidence collected prior to the pandemic, which had shown that remote work is associated with large gender disparities in housework and smaller disparities in childcare, the researchers note.

Mothers working remotely and on-site both reported altering their schedules during the pandemic, working during non-standard hours presumably to meet the increased demands of parenting, the researchers say.

“Our work provides insights into important dimensions of inequality during the pandemic—between mothers and fathers, and between parents who work from home and those who work on-site,” Zang says. “The pandemic underscored that our work culture is unaccommodating toward the demands parents face and that our policy infrastructure is ill-suited to supporting working parents.

“We need change at the public and private levels to better serve the wellbeing of working families.”

Source: Yale University

Fertilizer could be made much more sustainably

Researchers have shown how nitrogen fertilizer could be produced more sustainably.

This is necessary not only to protect the climate, but also to reduce dependence on imported natural gas and to increase food security.

Intensive agriculture is possible only if the soil is fertilized with nitrogen, phosphorus, and potassium. While phosphorus and potassium can be mined as salts, nitrogen fertilizer has to be produced laboriously from nitrogen in the air and from hydrogen. And the production of hydrogen is extremely energy-intensive, currently requiring large quantities of natural gas or—as in China—coal.

Besides having a correspondingly large carbon footprint, nitrogen fertilizer production is vulnerable to price shocks on the fossil fuels markets.

Paolo Gabrielli, senior scientist at the Laboratory of Reliability and Risk Engineering at ETH Zurich, has collaborated with Lorenzo Rosa, principal investigator at Carnegie Institution for Science at Stanford University, to investigate various carbon-neutral production methods for nitrogen fertilizer.

In their study, the two researchers conclude that a transition in nitrogen production is possible and that such a transition may also increase food security. However, alternative production methods have advantages and disadvantages. Specifically, the two researchers examined three alternatives:

  • Producing the necessary hydrogen using fossil fuels as in the business-as-usual approach, except that instead of emitting the greenhouse gas CO2 into the atmosphere, it is captured in the production plants and permanently stored underground (carbon capture and storage, CCS). This requires not only an infrastructure for capturing, transporting, and storing the CO2 but also correspondingly more energy. Despite this, it is a comparatively efficient production method. However, it does nothing to reduce dependence on fossil fuels.
  • Electrifying fertilizer production by using water electrolysis to produce the hydrogen. This requires, on average, 25 times as much energy as today’s production method using natural gas, so it would take huge amounts of electricity from carbon-neutral sources. For countries with an abundance of solar or wind energy, this might be an appealing approach. However, given plans to electrify other sectors of the economy in the name of climate action, it might lead to competition for sustainable electricity.
  • Synthesizing the hydrogen for fertilizer production from biomass. Since it requires a lot of arable land and water, ironically this production method competes with food production. But the study’s authors point out that it makes sense if the feedstock is waste biomass—for example, crop residues.

The researchers say that the key to success is likely to be a combination of all these approaches depending on the country and on specific local conditions and available resources.
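The trade-offs above can be read as a simple decision rule; the sketch below makes that explicit (the fields and logic are hypothetical illustrations of the qualitative points in the list, not the study’s model):

```python
# Illustrative decision sketch based only on the qualitative trade-offs
# described above; fields and logic are hypothetical, not from the study.

def suggest_routes(country):
    """Return plausible low-carbon hydrogen routes for fertilizer production."""
    routes = []
    if country.get("co2_storage"):
        routes.append("fossil gas + CCS (efficient, but keeps fossil dependence)")
    if country.get("surplus_renewables"):
        routes.append("electrolysis (roughly 25x the energy of the gas route)")
    if country.get("waste_biomass"):
        routes.append("biomass (best limited to residues such as crop waste)")
    return routes or ["no obvious domestic route; rely on imports"]

print(suggest_routes({"surplus_renewables": True, "waste_biomass": True}))
```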

In any case, it is imperative that agriculture make a more efficient use of nitrogen fertilizers, as Rosa stresses: “Addressing problems like over-fertilization and food waste is also a way to reduce the need for fertilizer.”

In the study, the researchers also sought to identify the countries of the world in which food security is currently particularly at risk owing to their dependence on imports of nitrogen or natural gas. The following countries are particularly vulnerable to price shocks in the natural gas and nitrogen markets: India, Brazil, China, France, Turkey, and Germany.

Decarbonizing fertilizer production would in many cases reduce this vulnerability and increase food security. At the very least, electrification via renewables or the use of biomass would reduce the dependence on natural gas imports. However, the researchers put this point into perspective: all carbon-neutral methods of producing nitrogen fertilizer are more energy intensive than the current method of using fossil fuels. In other words, they are still vulnerable to certain price shocks—not on natural gas markets directly, but perhaps on electricity markets.

Decarbonization is likely to change the line-up of countries that produce nitrogen fertilizer, the scientists point out in their study. As things stand, the largest nitrogen exporting nations are Russia, China, Egypt, Qatar, and Saudi Arabia. Except for China, which has to import natural gas, all these countries can draw on their own natural gas reserves. In the future, the countries that are likely to benefit from decarbonization are those that generate a lot of solar and wind power and also have sufficient reserves of land and water, such as Canada and the United States.

“There’s no getting around the fact that we need to make agricultural demand for nitrogen more sustainable in the future, both for meeting climate targets and for food security reasons,” Gabrielli says.

The war in Ukraine is affecting the global food market not only because the country normally exports a lot of grain, but also because the conflict has driven natural gas prices higher. This in turn has caused prices for nitrogen fertilizers to rise. Even so, some fertilizer producers are known to have ceased production, at least temporarily, because the exorbitant cost of gas makes production uneconomical for them.

The research appears in Environmental Research Letters.

Source: ETH Zurich

Moms’ lack of fiber can boost obesity risk in baby mice

The offspring of lactating mice missing fiber in their diet may be highly prone to developing obesity, a new study shows.

Those offspring lack microbial diversity in their gut and have low-grade inflammation, the researchers report.

The findings in the journal Cell Host & Microbe could help explain why obesity is increasing, especially in children. However, because the experiment was conducted in mice, the researchers can only speculate how much the results translate to humans.

“As long as young mice were maintained on a standard diet, there was no difference in their weight or other metabolic parameters, regardless of whether or not their mother ate fiber,” says senior author Andrew Gewirtz, professor in the Institute for Biomedical Sciences at Georgia State University.

“But striking differences occurred when they were exposed to a Western style diet. The mice from the fiber-deprived mothers gained striking amounts of weight. The mice from the mothers who had the fiber diet gained only small amounts of weight on this diet.”

A Western style diet, also known as a fast food diet or obesogenic diet, is high in fat and low in fiber. The standard diet that young mice are raised on is a relatively healthy, mostly plant-based diet with a small amount of animal products.

If these results translate to humans, they could help explain why, among adolescents with easy access to fast food, some exhibit large increases in adiposity while others remain fit and lean.

The study also found that if mothers were not consuming fiber, their offspring didn’t acquire particular bacteria. If offspring lack those bacteria, and the bacteria aren’t deliberately administered, fiber by itself doesn’t provide a health benefit: the fiber is only beneficial if bacteria are there to metabolize it, Gewirtz explains.

The researchers studied the offspring’s fecal matter to determine the bacteria they were missing.

“They’re missing beneficial bacteria that help keep out inflammatory bacteria,” says lead author Jun Zou, a research assistant professor in the Institute for Biomedical Sciences. “The beneficial bacteria do two particular things. They can metabolize the fibers to produce beneficial products such as short-chain fatty acids and exclude bacteria that are pro-inflammatory.”

One limitation of the study was the way the experiments were performed. The mice were kept in cages in a research facility, so they didn’t have other ways of acquiring these beneficial bacteria, unless they were deliberately administered to them. This differs from human experience. Even if a child’s mother didn’t eat fiber, that child might be able to play with other children at daycare and acquire these bacteria.

“That’s one reason that our findings might not apply to humans, but we just don’t know,” Gewirtz says.

Next, the researchers want to understand the mechanism behind why some mice are so prone to gain weight when exposed to obesogenic diets and then develop simple approaches to prevent passing along an unhealthy microbiome. For instance, perhaps a pregnant woman could be given dietary supplements of fiber, probiotics, or a combination of the two.

Additional authors are from the Center for Inflammation, Immunity and Infection at the Institute for Biomedical Sciences. The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) of the National Institutes of Health and American Diabetes Association funded the work.

Source: Georgia State University

Blood test detects marker of Alzheimer’s neurodegeneration

A new test detects a novel marker of Alzheimer’s disease neurodegeneration in a blood sample, researchers report.

The biomarker, called “brain-derived tau,” or BD-tau, outperforms current blood diagnostic tests used to detect Alzheimer’s-related neurodegeneration clinically. It is specific to Alzheimer’s disease and correlates well with Alzheimer’s neurodegeneration biomarkers in the cerebrospinal fluid (CSF).

“At present, diagnosing Alzheimer’s disease requires neuroimaging,” says Thomas Karikari, assistant professor of psychiatry at the University of Pittsburgh and senior author of the study in the journal Brain.

“Those tests are expensive and take a long time to schedule, and a lot of patients, even in the US, don’t have access to MRI and PET scanners. Accessibility is a major issue.”

Currently, to diagnose Alzheimer’s disease, clinicians use guidelines set in 2011 by the National Institute on Aging and the Alzheimer’s Association. The guidelines, called the AT(N) Framework, require detection of three distinct components of Alzheimer’s pathology—the presence of amyloid plaques, tau tangles, and neurodegeneration in the brain—either by imaging or by analyzing CSF samples.

Unfortunately, both approaches suffer from economic and practical limitations, dictating the need for convenient and reliable AT(N) biomarkers in blood samples, whose collection is minimally invasive and requires fewer resources.

The development of simple tools detecting signs of Alzheimer’s in the blood without compromising on quality is an important step toward improved accessibility, says Karikari.

“The most important utility of blood biomarkers is to make people’s lives better and to improve clinical confidence and risk prediction in Alzheimer’s disease diagnosis,” Karikari says.

Current blood diagnostic methods can accurately detect abnormalities in plasma amyloid beta and the phosphorylated form of tau, hitting two of the three necessary checkmarks to confidently diagnose Alzheimer’s.

But the biggest hurdle in applying the AT(N) Framework to blood samples lies in the difficulty of detecting markers of neurodegeneration that are specific to the brain and aren’t influenced by potentially misleading contaminants produced elsewhere in the body.

For example, blood levels of neurofilament light, a protein marker of nerve cell damage, become elevated in Alzheimer’s disease, Parkinson’s and other dementias, rendering it less useful when trying to differentiate Alzheimer’s disease from other neurodegenerative conditions. On the other hand, detecting total tau in the blood proved to be less informative than monitoring its levels in CSF.

By applying their knowledge of molecular biology and biochemistry of tau proteins in different tissues, such as the brain, Karikari and colleagues developed a technique to selectively detect BD-tau while avoiding free-floating “big tau” proteins produced by cells outside the brain.

To do that, they designed a special antibody that selectively binds to BD-tau, making it easily detectable in the blood. They validated their assay across over 600 patient samples from five independent cohorts, including those from patients whose Alzheimer’s disease diagnosis was confirmed after their deaths, as well as from patients with memory deficiencies indicative of early-stage Alzheimer’s.

The tests showed that levels of BD-tau detected in blood samples of Alzheimer’s disease patients using the new assay matched with levels of tau in the CSF and reliably distinguished Alzheimer’s from other neurodegenerative diseases. Levels of BD-tau also correlated with the severity of amyloid plaques and tau tangles in the brain tissue confirmed via brain autopsy analyses.

Scientists hope that monitoring blood levels of BD-tau could improve clinical trial design and facilitate screening and enrollment of patients from populations that historically haven’t been included in research cohorts.

“There is a huge need for diversity in clinical research, not just by skin color but also by socioeconomic background,” says Karikari. “To develop better drugs, trials need to enroll people from varied backgrounds and not just those who live close to academic medical centers.

“A blood test is cheaper, safer, and easier to administer, and it can improve clinical confidence in diagnosing Alzheimer’s and selecting participants for clinical trials and disease monitoring.”

Karikari and his team are planning to conduct large-scale clinical validation of blood BD-tau in a wide range of research groups, including those that recruit participants from diverse racial and ethnic backgrounds, from memory clinics, and from the community.

Additionally, these studies will include older adults with no biological evidence of Alzheimer’s disease as well as those at different stages of the disease. These projects are crucial to ensure that the biomarker results are generalizable to people from all backgrounds, and will pave the way to making BD-tau commercially available for widespread clinical and prognostic use.

Additional coauthors are from the University of Gothenburg; Bioventix Plc, in the UK; the University of California, San Diego; the University of Brescia in Italy; and IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli in Brescia, Italy.

The research had support in part from the Alzheimer’s Association and the Swedish Research Council.

Source: University of Pittsburgh

COVID infection messes up healthy gut bacteria balance

In an intensive look at the effects of the virus causing COVID-19 on patients’ microbiome, researchers found that acute infection disrupts a healthy balance between good and bad microbes in the gut, especially with antibiotic treatment.

The microbiome is the collection of microorganisms that live in and on the human body.

The new work may lead to the development of probiotic supplements to redress any gut imbalances in future patients, the researchers say.

Reporting in the scientific journal Molecular Biomedicine, the researchers described the first results of an ongoing study examining the microbiome of patients and volunteers at Robert Wood Johnson University Hospital in New Brunswick.

“These findings may help identify microbial targets and probiotic supplements for improving COVID-19 treatment.”

The study, which began in May 2020, the early days of the pandemic, was designed to zero in on the microbiome because many COVID-19 patients complained of gastrointestinal issues—both during the acute phases of their illness and while recuperating.

“We wanted to gain a deeper understanding by looking at specimens that would give us an indication about the state of the gut microbiome in people,” says Martin Blaser, chair of the human microbiome at Rutgers University, director of the Center for Advanced Biotechnology and Medicine (CABM) at Rutgers, and an author on the study.

“What we found was that, while there were differences between people who had COVID-19 and those who were not ill, the biggest difference from others was seen in those who had been administered antibiotics,” Blaser says.

Early in the pandemic, before the introduction of vaccines and other antiviral remedies, it was a common practice to treat COVID-19 patients with a round of antibiotics to attempt to target possible secondary infections, says Blaser, who also is a professor of medicine and pathology and laboratory medicine at Rutgers Robert Wood Johnson Medical School.

Humans carry large and diverse populations of microbes, Blaser says. These microorganisms live in the gastrointestinal tract, on the skin and in other organs, with the largest population in the colon. Scientists such as Blaser have shown over recent decades that the microbiome plays a pivotal role in human health, interacting with metabolism, the immune system and the central nervous system.

The microbiome has many different functions. “One is to protect the human body against invading pathogens, whether they’re bacteria or viruses or fungi,” Blaser says. “That goes deep into evolution, maybe a billion years of evolution.”

Medical problems often arise when the balance between beneficial and pathogenic microbes in a person’s microbiome is thrown off, a condition known as dysbiosis.

The scientists studied microbiomes by measuring populations of microorganisms in stool samples taken from 60 subjects. The study group consisted of 20 COVID-19 patients, 20 healthy donors, and 20 COVID-19-recovered subjects. They found major differences in the population numbers of 55 bacterial species when comparing the microbiomes of the infected patients with those of the healthy and recovered subjects.

The researchers plan to continue to test and track the microbiomes of patients in the study to ascertain the long-term effect on individual microbiomes from COVID-19.

“Further investigation of patients will enhance understanding of the role of the gut microbiome in COVID-19 disease progression and recovery,” Blaser says. “These findings may help identify microbial targets and probiotic supplements for improving COVID-19 treatment.”

Support for the study came from Danone and from the National Institutes of Health (National Institute of Allergy and Infectious Diseases).

Source: Rutgers University

Some guts get more energy from the same food

New findings are a step towards understanding why some people gain more weight than others, even when they eat the same diet.

The research indicates that some Danes have gut microbes that, on average, extract more energy from food than the microbes of their fellow Danes. This difference in gut microbial composition could be part of the explanation for why some people gain more weight than others.

Researchers at the University of Copenhagen’s department of nutrition, exercise, and sports studied the residual energy in the feces of 85 Danes to estimate how effective their gut microbes are at extracting energy from food. At the same time, they mapped the composition of gut microbes for each participant.

The results show that roughly 40% of the participants belong to a group that, on average, extracts more energy from food compared to the other 60%. The researchers also observed that those who extracted the most energy from food also weighed 10% more on average, amounting to an extra nine kilograms (about 20 pounds).

“We may have found a key to understanding why some people gain more weight than others, even when they don’t eat more or any differently. But this needs to be investigated further,” says associate professor Henrik Roager.

The results indicate that being overweight might not be related solely to how healthily a person eats or the amount of exercise they get. It may also have something to do with the composition of a person’s gut microbes.

As reported in the journal Microbiome, participants were divided into three groups, based on the composition of their gut microbes. The so-called B-type composition (dominated by Bacteroides bacteria) is more effective at extracting nutrients from food and was observed in 40% of the participants.

Following the study, the researchers suspect that having gut bacteria that are more effective at extracting energy may result in more calories being available for the human host from the same amount of food.

“The fact that our gut bacteria are great at extracting energy from food is basically a good thing, as the bacteria’s metabolism of food provides extra energy in the form of, for example, short-chain fatty acids, which are molecules that our body can use as energy-supplying fuel. But if we consume more than we burn, the extra energy provided by the intestinal bacteria may increase the risk of obesity over time,” says Roager.

From the mouth to the esophagus, stomach, duodenum, small intestine, and large intestine, and finally to the rectum, the food we eat takes a 12-to-36-hour journey, passing several stations along the way, before the body has extracted all of its nutrients.

The researchers also studied the length of this journey for each participant, all of whom had similar dietary patterns. Here, the researchers hypothesized that those with long digestive travel times would be the ones who harvested the most nutrition from their food. But the study found the exact opposite.

“We thought that a long digestive travel time would allow more energy to be extracted. But here we see that the participants with the B-type gut bacteria—those who extract the most energy—also have the fastest passage through the gastrointestinal system, which has given us something to think about,” says Roager.

The new study in humans confirms earlier studies in mice. In these studies, researchers found that germ-free mice that received gut microbes from obese donors gained more weight compared to mice that received gut microbes from lean donors, despite being fed the same diet.

Even then, the researchers proposed that the differences in weight gain could be attributable to the fact that the gut bacteria from obese people were more efficient at extracting energy from food. The new research confirms this theory.

“It is very interesting that the group of people who have less energy left in their stool also weigh more on average. However, this study doesn’t provide proof that the two factors are directly related. We hope to explore this more in the future,” says Roager.

Source: University of Copenhagen

Can machine learning predict the next big disaster?

A new study shows how machine learning could predict rare disastrous events, like earthquakes or pandemics.

The research suggests how scientists can circumvent the need for massive data sets and still forecast extreme events by combining an advanced machine learning system with sequential sampling techniques.

When it comes to predicting disasters brought on by extreme events (think earthquakes, pandemics, or “rogue waves” that could destroy coastal structures), computational modeling faces an almost insurmountable challenge: Statistically speaking, these events are so rare that there’s just not enough data on them to use predictive models to accurately forecast when they’ll happen next.

But the new research indicates it doesn’t have to be that way.

In the study in Nature Computational Science, the researchers describe how they combined statistical algorithms—which need less data to make accurate, efficient predictions—with a powerful machine learning technique and trained it to predict scenarios, probabilities, and sometimes even the timeline of rare events despite the lack of historical record on them.

Doing so, the researchers found that this new framework can provide a way to circumvent the need for massive amounts of data that are traditionally needed for these kinds of computations, instead essentially boiling down the grand challenge of predicting rare events to a matter of quality over quantity.

“You have to realize that these are stochastic events,” says study author George Karniadakis, a professor of applied mathematics and engineering at Brown University. “An outbreak of a pandemic like COVID-19, an environmental disaster in the Gulf of Mexico, an earthquake, huge wildfires in California, a 30-meter wave that capsizes a ship—these are rare events, and because they are rare, we don’t have a lot of historical data.

“We don’t have enough samples from the past to predict them further into the future. The question that we tackle in the paper is: What is the best possible data that we can use to minimize the number of data points we need?”

The researchers found the answer in a sequential sampling technique called active learning. These types of statistical algorithms are not only able to analyze data input into them, but more importantly, they can learn from the information to label new relevant data points that are equally or even more important to the outcome that’s being calculated. At the most basic level, they allow more to be done with less.

That’s critical to the machine learning model the researchers used in the study. Called DeepOnet, the model is a type of artificial neural network, which uses interconnected nodes in successive layers that roughly mimic the connections made by neurons in the human brain.

DeepOnet is known as a deep neural operator. It’s more advanced and powerful than typical artificial neural networks because it’s actually two neural networks in one, processing data in parallel. This allows it to analyze giant sets of data and scenarios at breakneck speed to spit out equally massive sets of probabilities once it learns what it’s looking for.

The bottleneck with this powerful tool, especially as it relates to rare events, is that deep neural operators need tons of data to be trained to make calculations that are effective and accurate.

In the paper, the research team shows that combined with active learning techniques, the DeepOnet model can get trained on what parameters or precursors to look for that lead up to the disastrous event someone is analyzing, even when there are not many data points.

“The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”
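As an illustration of the general idea, here is a minimal active-learning loop. This is a sketch, not the paper’s code: it substitutes a scikit-learn Gaussian process for the DeepOnet surrogate, and the “expensive experiment,” acquisition rule, and all parameters are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Minimal active-learning sketch: a surrogate model proposes which input
# to label next based on predictive uncertainty. A Gaussian process
# stands in for the neural operator used in the paper, and the
# "expensive experiment" is a toy function with one rare, sharp spike.

rng = np.random.default_rng(0)

def expensive_experiment(x):
    """Toy system with a rare extreme response near x = 0.8."""
    return np.sin(6 * x) + 4.0 * np.exp(-200 * (x - 0.8) ** 2)

# Start from a few labeled points, then let the acquisition rule pick more.
X = rng.uniform(0, 1, size=(4, 1))
y = expensive_experiment(X).ravel()
candidates = np.linspace(0, 1, 201).reshape(-1, 1)

for step in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Acquisition: favor points that are both uncertain and predicted to
    # be large -- a crude proxy for "likely precursor of an extreme."
    score = std * np.abs(mean)
    x_next = candidates[np.argmax(score)]
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, expensive_experiment(x_next))

print("largest response found:", y.max().round(2))
```

The acquisition score here crudely steers the limited labeling budget toward inputs that look like precursors of the rare spike, rather than sampling uniformly; the published framework couples more principled acquisition criteria with the DeepOnet operator.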

In the paper, the researchers apply the approach to pinpointing parameters and different ranges of probabilities for dangerous spikes during a pandemic, finding and predicting rogue waves, and estimating when a ship will crack in half due to stress. For example, with rogue waves—ones that are greater than twice the size of surrounding waves—the researchers found they could discover and quantify when rogue waves will form by looking at probable wave conditions that nonlinearly interact over time, leading to waves sometimes three times their original size.

The researchers found their new method outperformed more traditional modeling efforts, and they believe it presents a framework that can efficiently discover and predict all kinds of rare events.

In the paper, the research team outlines how scientists should design future experiments to minimize costs and improve forecasting accuracy. Karniadakis, for example, is already working with environmental scientists to use the novel method to forecast climate events, such as hurricanes.

Ethan Pickering and Themistoklis Sapsis from the Massachusetts Institute of Technology led the study. Karniadakis and other Brown researchers introduced DeepOnet in 2019. They are currently seeking a patent for the technology.

Support for the study came from the Defense Advanced Research Projects Agency, the Air Force Research Laboratory, and the Office of Naval Research.

Source: Juan Siliezar for Brown University
